[{"content":"ByteBattles with Ather Energy: 30 Hours, One EV \u0026amp; a Lot of Coffee “When was the last time you coded on a real electric vehicle?”\n– Probably never. Until ByteBattles.\nWhat is ByteBattles? ByteBattles is a national-level firmware hackathon hosted by Ather Energy, inviting students and professionals from across India to tackle real-world challenges in the EV (Electric Vehicle) space.\nParticipants were challenged to develop firmware solutions for the Ather 450x, a flagship electric scooter—and yes, we were actually given the scooter to code on.\nPune to Bangalore: The Journey Begins 3:00 AM: Flight from Pune 8:00 AM: Land in Bangalore and reach Ather HQ 8:30 AM: Check-in, breakfast, orientation Swag secured: T-shirts + ID cards Office tour: Our base for the next 30 hours We were five caffeine-powered engineers, barely awake but buzzing with excitement. The Problem Statement At the orientation, we were introduced to the real challenge:\nImplement 7 firmware features on the Ather 450x over the weekend.\nWith access to development boards, firmware libraries, and the Ather 450x, we were off to a flying start. Kind of.\nHack Mode: ON 11:00 AM: Coding begins Problem 1 solved in 10 mins—or so we thought Turns out\u0026hellip; it was a faulty USB cable While the volunteers helped debug the hardware, we brainstormed and coded two more features—with no working hardware in hand. Grit.\nDebug, Debug, Repeat Two team members: Locked into debugging mode Rest of us: Shipping code like our lives depended on it Result: 4 features implemented by the end of the round But we missed the early submission points—by just a few minutes. 
That sealed our fate: we didn’t make it to Phase 2.\nPhase 2: Watching from the Sidelines While the top 15 teams moved upstairs to tackle advanced problem statements, we made the most of our time:\nNetworked with students \u0026amp; working pros Interactive Q\u0026amp;As with Ather engineers Explored stripped-down Ather 450x models Activities: Dumb charades, music, DJ, even a mystery guitar jam session Night Mode: Floor is the New Bed By 3 AM, the office had turned into a co-living space:\nBean bags, sleeping bags, and even just floors Brain fog, but hearts still racing with excitement The Plot Twist The next morning, we got another shot at accessing our EV.\n30 minutes later, we solved the bug that had blocked us the entire day before.\nIn one shot, 3 additional features started working.\nTalk about tragic timing.\nFinal Takeaways Despite not making it to the final round, ByteBattles was:\nA rare opportunity to work with real EV hardware Our first hands-on experience with firmware development A brilliant space to network with passionate engineers A steep learning curve that pushed all our boundaries ","date":"2025-04-08T13:23:44+05:30","permalink":"https://jhaakansha.github.io/p/bytebattles-2.0/","title":"ByteBattles 2.0"},{"content":"Salt Typhoon: China\u0026rsquo;s Covert Cyber Espionage Campaign In the shadows of global cyberspace, a formidable threat actor known as Salt Typhoon has emerged, orchestrating sophisticated cyber espionage operations with alarming precision. This Chinese state-sponsored group, also referred to as GhostEmperor, FamousSparrow, and UNC2286, has been linked to China\u0026rsquo;s Ministry of State Security (MSS) (home.treasury.gov).\nOrigins and Evolution Salt Typhoon\u0026rsquo;s activities date back to at least August 2019, with early attempts to infiltrate high-profile targets, including former President Donald Trump (home.treasury.gov). By late 2024, the group had escalated its operations, breaching major U.S. 
telecommunications companies such as Verizon, AT\u0026amp;T, and T-Mobile (bleepingcomputer.com). These attacks compromised sensitive data, including call metadata and, in some instances, audio recordings of high-profile individuals (bleepingcomputer.com).\nTactics and Tools Salt Typhoon employs a diverse arsenal of tools to infiltrate and maintain access to targeted networks. Notably, they utilize Demodex, a Windows kernel-mode rootkit, to gain remote control over servers (home.treasury.gov). Their toolkit includes:\nBITSAdmin and CertUtil: For downloading and executing malicious payloads. PowerShell scripts: For reconnaissance and lateral movement. SparrowDoor: A custom backdoor facilitating persistent access. Malleable C2: For command and control communication. These tools enable Salt Typhoon to operate stealthily, exfiltrating vast amounts of sensitive information over extended periods (home.treasury.gov).\nGlobal Impact Salt Typhoon\u0026rsquo;s operations are not confined to the United States. The group has targeted telecommunications companies across dozens of countries, exploiting vulnerabilities in core network components, including routers manufactured by Cisco (bleepingcomputer.com). Their activities have raised significant concerns about the security of global communication infrastructures.\nStrategic Objectives The group\u0026rsquo;s primary focus appears to be counterintelligence, aiming to monitor and intercept communications of government officials and high-profile individuals. By compromising telecommunications infrastructure, Salt Typhoon gains access to a wealth of sensitive information, which can be leveraged for strategic advantage in geopolitical contexts (home.treasury.gov).\nInternational Response In response to these cyber intrusions, the U.S. Department of the Treasury sanctioned Sichuan Juxinhe Network Technology, a Sichuan-based cybersecurity firm alleged to be directly involved with Salt Typhoon (home.treasury.gov). 
Additionally, the White House has issued advisories to assist system administrators in hardening network security to mitigate potential threats from such advanced persistent threats (bleepingcomputer.com).\nConclusion Salt Typhoon exemplifies the evolving nature of cyber espionage, where state-sponsored actors employ sophisticated tactics to infiltrate critical infrastructure and exfiltrate sensitive data. As cyber threats become increasingly complex and pervasive, it is imperative for organizations worldwide to bolster their cybersecurity measures and remain vigilant against such advanced persistent threats.\n","date":"2025-03-17T13:23:49+05:30","permalink":"https://jhaakansha.github.io/p/salt-typhoon/","title":"Salt Typhoon"},{"content":"Fueling Victory: How Formula One Uses Big Data to Win Races Formula One might look like a sport where victory comes down to the fastest car or most daring driver — but behind every blistering lap is an enormous engine of data. Welcome to the world where machine learning meets motorsport, and terabytes of real-time information help shape every twist, turn, and pit stop decision.\nIn this post, we’ll walk you through how Formula One teams use the big data lifecycle to gain a competitive edge — from sensor-packed cars and telemetry streams to predictive modeling and post-race analysis.\n1. Data Collection: The Stream Begins F1 cars are, quite literally, data machines on wheels.\nSensors Everywhere: Each car carries around 300 sensors, constantly measuring temperature, pressure, tire wear, fuel consumption, g-forces, and more — generating over 1.1 million data points per second. Telemetry: All of this data is transmitted in real time to the pit wall and team HQ. Teams monitor engine health, lap times, braking patterns, and tire degradation live as the race unfolds. Track and Weather Data: Sensors around the circuit track ambient and surface temperature, wind speed, and grip levels. 
Historical Performance: Teams also dig deep into archives of race history — weather patterns, past pit strategies, driver behaviors — to inform current decisions. \u0026ldquo;Every lap is a lesson — and the data is the teacher.\u0026rdquo;\n2. Data Storage: Where All That Info Lives Cloud Data Lakes: With terabytes of data per race weekend, teams rely on cloud infrastructure and data lakes for scalable storage and quick access. Relational Databases: Structured data like lap times, split timings, and car setup parameters are stored in SQL-based databases. Time-Series Databases: For telemetry and sensor data streaming second-by-second, time-series databases like InfluxDB are key. This combination ensures structured and unstructured data are both accessible for real-time and post-race analysis.\n3. Data Cleaning \u0026amp; Preprocessing: Removing the Noise Before the magic happens, data needs polishing.\nNoise Reduction: Filtering out sensor glitches or temporary drops in telemetry due to connection issues. Missing Data Handling: Using techniques like interpolation to fill in gaps when sensors fail. Data Normalization: Aligning units and formats so that telemetry, weather, and driver data can be compared or combined. Clean data ensures accurate insights and decisions — especially when the wrong move can cost a race.\n4. Data Analysis: Understanding the Race Once clean, the data goes through layers of analytics:\nDescriptive Analytics: What happened? Analyzing trends in lap times, tire wear, or driver performance. Predictive Analytics: What might happen? Machine learning models forecast tire degradation, fuel usage, and even opponent strategies. Prescriptive Analytics: What should we do? Data-driven suggestions for pit strategy, tire choice, or engine tuning. Real-Time Analysis: Engineers monitor live dashboards to react instantly — like changing pit strategy when a rival undercuts. This is where the data starts driving decisions.\n5. 
Data Visualization: Making It All Visible In the heat of a race, engineers don’t have time to read logs.\nDashboards: Real-time visualizations help the pit crew monitor performance metrics like tire temp, brake health, and lap deltas. Graphs \u0026amp; Heatmaps: Show trends like rising tire wear or engine stress over time. 3D Simulations: Entire race strategies can be modeled visually, showing how a pit stop now could play out 20 laps later. Tools like F1 Tempo give teams a way to translate raw data into race-ready insights.\n6. Machine Learning \u0026amp; Model Refinement F1 teams are now fully embracing AI.\nModel Training: As new data flows in every race, models are retrained to better predict outcomes like undercut potential or optimal tire stint lengths. AI Simulation: Simulated races using neural networks help optimize strategies across hundreds of “what-if” scenarios. Continuous Learning: Models improve with every season, learning from what worked — and what didn’t. 7. Decision-Making: Turning Data into Action Strategic Adjustments: Mid-race decisions like undercutting an opponent or stretching tire life are made based on predictive analysis. Driver Feedback: Data is sent directly to the driver’s cockpit, offering guidance on tire performance, optimal lines, and gaps to rivals. Risk Mitigation: If a part is close to failure, predictive alerts can warn the crew before disaster strikes. Car Setup Optimization: In practice sessions, teams tweak setups — suspension, aero balance, etc. — based on how the data aligns with driver feel. 8. Feedback Loop: Post-Race Analysis After the checkered flag, the data work is far from over.\nPost-Race Review: Comparing expected vs. actual performance, pit strategy efficiency, and any anomalies. Driver Insights: Their feedback — combined with telemetry — helps fine-tune future car setups. 
Continuous Improvement: Learnings feed into the next race, improving everything from brake cooling systems to fuel maps. 9. What’s Next: Advanced Analytics in F1 Formula One is only accelerating its use of advanced data techniques:\nAI-Driven Strategy Engines: Predict outcomes based on live conditions and adjust strategies automatically. Predictive Maintenance: AI can now predict part failures before they happen, saving races (and millions in repairs). Biometric Analysis: Some teams are exploring heart rate, hydration levels, and fatigue in drivers to optimize performance and safety. Final Thoughts Formula One is no longer just a test of speed — it’s a test of data mastery. Every lap, every pit stop, every millisecond is backed by mountains of analysis and predictive modeling.\nIn this race, the car is fast. The driver is skilled.\nBut data is the difference.\nWant to dive deeper? Explore tools like F1 Tempo or learn how AI is reshaping motorsport.\n","date":"2025-02-14T13:23:53+05:30","permalink":"https://jhaakansha.github.io/p/understanding-big-data-analytics-with-formula-one/","title":"Understanding Big Data Analytics with Formula One"},{"content":"DeepSeek: The Future of Artificial Intelligence Artificial Intelligence (AI) is progressing at an incredible pace, with new advancements happening almost every day. Among the most exciting innovations is DeepSeek, a breakthrough technology that promises to revolutionize the way we interact with AI. In this post, we\u0026rsquo;ll explore what DeepSeek is, how it could impact the future of AI, and compare it with existing technologies like ChatGPT.\nWhat is DeepSeek? DeepSeek is a cutting-edge AI framework that combines advanced machine learning algorithms with deep neural networks to enhance the depth and accuracy of AI\u0026rsquo;s ability to understand and process human-like interactions. 
Unlike traditional models, which often rely on pre-defined rules or pattern recognition, DeepSeek aims to push the boundaries of artificial intelligence by enabling machines to \u0026ldquo;seek\u0026rdquo; meaning and context from vast amounts of unstructured data.\nAt its core, DeepSeek focuses on:\nContextual understanding: AI can better interpret nuanced human language, including sarcasm, ambiguity, and emotion. Advanced reasoning: It uses sophisticated algorithms to solve problems that require multi-step logic and abstract thinking. Cross-disciplinary knowledge: DeepSeek is designed to access and integrate information from various domains, making it a versatile AI tool for a wide range of industries. Why is DeepSeek a Game-Changer? DeepSeek isn\u0026rsquo;t just another incremental improvement in AI. It represents a shift towards true intelligence, where AI can do more than just perform predefined tasks or provide information based on patterns. Instead, DeepSeek’s goal is to enable machines to understand and reason at a higher level, moving us closer to AI systems that can think, adapt, and evolve with human-like sophistication.\nThe potential applications for DeepSeek are vast, ranging from advanced natural language processing (NLP) and personalized AI assistants to complex decision-making in fields like healthcare, law, and finance.\nThe Next Steps for Artificial Intelligence DeepSeek is a key stepping stone in the evolution of AI. It lays the groundwork for several exciting developments in the AI field:\n1. More Human-Like Interactions One of the biggest challenges in AI development has been creating machines that can understand and respond to human emotions and subtle cues. DeepSeek aims to create AI systems that are far more empathetic and intuitive, making interactions feel natural and engaging.\n2. Cross-Industry Intelligence DeepSeek’s ability to integrate knowledge from different sectors means that AI can become a more valuable tool across industries. 
Whether it’s legal research, medical diagnosis, or financial forecasting, DeepSeek could help provide deeper insights and more accurate predictions.\n3. Ethical AI Decision-Making As AI becomes more integrated into society, the importance of ethical decision-making grows. DeepSeek’s advanced reasoning capabilities can help ensure that AI systems make decisions that are aligned with ethical standards and societal values.\nChatGPT vs. DeepSeek: A Comparison While ChatGPT is a highly advanced language model that has revolutionized human-computer interactions, DeepSeek takes things a step further. Let’s compare these two technologies to understand their respective strengths and limitations.\n1. Understanding Context ChatGPT: ChatGPT is excellent at answering questions and providing information based on the context of the conversation. However, its understanding is limited to the data it’s been trained on, and it may struggle with ambiguous or highly contextual language. DeepSeek: DeepSeek improves on this by using a more dynamic and nuanced approach to context. It can better interpret complex sentences, grasp the meaning behind metaphors, and understand tone and emotion, offering a more sophisticated interaction. 2. Reasoning and Problem Solving ChatGPT: ChatGPT can generate creative content, help with brainstorming, or solve simple problems based on patterns. However, it may struggle with tasks that require multi-step reasoning or highly abstract thinking. DeepSeek: DeepSeek’s advanced reasoning capabilities give it an edge over ChatGPT when it comes to complex problem-solving. It can analyze and connect disparate pieces of information to make more informed, multi-faceted decisions. 3. Knowledge Integration ChatGPT: ChatGPT is trained on a large dataset of text and can recall information from various domains. However, it may not always accurately integrate information from different fields, and its knowledge is limited to its training cut-off. 
DeepSeek: DeepSeek integrates knowledge from a wider array of sources, including structured and unstructured data across domains, allowing it to make more comprehensive connections and provide more accurate answers. 4. Real-World Applications ChatGPT: Currently, ChatGPT excels in customer service, creative writing, coding assistance, and general information queries. It is well-suited for tasks that require general language understanding. DeepSeek: DeepSeek’s applications are broader and deeper. It’s ideal for industries that demand complex decision-making and cross-disciplinary insights, such as healthcare, law, and finance. It could also be used for personal AI assistants that anticipate and respond to the user’s needs in a more context-aware and sophisticated manner. Looking Ahead: The Future of AI with DeepSeek DeepSeek represents a new era in artificial intelligence, one where machines don’t just perform tasks—they understand, reason, and adapt. This could lead to breakthroughs in AI-driven solutions for everything from healthcare diagnosis to personalized education and beyond.\nWhile we’re still in the early stages of its development, the potential for DeepSeek is immense. As AI continues to evolve, it’s likely that we’ll see an increasing overlap between human and machine intelligence, allowing for more intuitive, efficient, and ethical AI systems.\nFor now, the journey is just beginning. But one thing is clear: DeepSeek is setting the stage for the next major leap in AI.\nConclusion DeepSeek is not just a new tool; it\u0026rsquo;s a glimpse into the future of artificial intelligence. 
By enhancing contextual understanding, reasoning, and knowledge integration, DeepSeek is primed to take AI beyond its current limitations, making it more human-like, intelligent, and adaptable.\nIn comparison to ChatGPT, DeepSeek offers more sophisticated capabilities, especially in terms of understanding complex language, solving intricate problems, and integrating diverse knowledge. While ChatGPT remains a powerful tool for many applications, DeepSeek represents the next frontier in AI\u0026rsquo;s evolution.\nAs AI continues to advance, technologies like DeepSeek will play a pivotal role in shaping the future of how we interact with machines, solve problems, and innovate across industries.\n","date":"2025-01-12T13:23:53+05:30","permalink":"https://jhaakansha.github.io/p/deepseek/","title":"DeepSeek"},{"content":" 1 2 3 4 5 6 7 8 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/\u0026#34;) def root(): return {\u0026#34;message\u0026#34;: \u0026#34;Hello World\u0026#34;} As FastAPI supports ASGI (Asynchronous Server Gateway Interface), the function root can also be an async function. Declare the function with async def when it awaits other coroutines inside; a plain def also works, and FastAPI will run it in an external threadpool.\nRun the live server: uvicorn main:app --reload\nmain: the file main.py (the Python \u0026ldquo;module\u0026rdquo;). app: the object created inside of main.py with the line app = FastAPI(). --reload: make the server restart after code changes. Only use for development. This command will also output where the app is being served:\nINFO: Uvicorn running on [http://127.0.0.1:8000](http://127.0.0.1:8000/) (Press CTRL+C to quit)\nOpen this (http://127.0.0.1:8000) in your browser to view the app.\nInteractive API Docs Automatic documentation created using Swagger UI will be generated and served at http://127.0.0.1:8000/docs . 
Alternate documentation is provided by ReDoc and served at http://127.0.0.1:8000/redoc\nFastAPI will generate the OpenAPI schema of all the APIs at http://127.0.0.1:8000/openapi.json\nPath Parameters 1 2 3 4 5 6 7 8 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_item(item_id): return {\u0026#34;item_id\u0026#34;: item_id} Path Parameters with Types 1 2 3 4 5 6 7 8 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_item(item_id: int): return {\u0026#34;item_id\u0026#34;: item_id} In addition to the typing support in the editor, this does automatic data validation.\nAll the data validation is performed under the hood by the Pydantic library.\nFor example, http://127.0.0.1:8000/items/3 will return {\u0026quot;item_id\u0026quot;:3} .\nCalling http://127.0.0.1:8000/items/foo will return\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 { \u0026#34;detail\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;int_parsing\u0026#34;, \u0026#34;loc\u0026#34;: [ \u0026#34;path\u0026#34;, \u0026#34;item_id\u0026#34; ], \u0026#34;msg\u0026#34;: \u0026#34;Input should be a valid integer, unable to parse string as an integer\u0026#34;, \u0026#34;input\u0026#34;: \u0026#34;foo\u0026#34;, \u0026#34;url\u0026#34;: \u0026#34;https://errors.pydantic.dev/2.1/v/int_parsing\u0026#34; } ] } Order of path matters Because path operations are evaluated in order, make sure that the path for /users/me is declared before the one for /users/{user_id}.\n1 2 3 4 5 6 7 8 9 10 11 12 13 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/users/me\u0026#34;) async def read_user_me(): return {\u0026#34;user_id\u0026#34;: \u0026#34;the current user\u0026#34;} @app.get(\u0026#34;/users/{user_id}\u0026#34;) async def read_user(user_id: str): return {\u0026#34;user_id\u0026#34;: user_id} Similarly, a path operation cannot be redefined. 
So, if there are two functions for the same path, the first one will be used.\nPredefined values If you have a path operation that receives a path parameter, but you want the possible valid path parameter values to be predefined, you can use a standard Python Enum.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 from enum import Enum from fastapi import FastAPI class ModelName(str, Enum): alexnet = \u0026#34;alexnet\u0026#34; resnet = \u0026#34;resnet\u0026#34; lenet = \u0026#34;lenet\u0026#34; app = FastAPI() @app.get(\u0026#34;/models/{model_name}\u0026#34;) async def get_model(model_name: ModelName): if model_name is ModelName.alexnet: return {\u0026#34;model_name\u0026#34;: model_name, \u0026#34;message\u0026#34;: \u0026#34;Deep Learning FTW!\u0026#34;} if model_name.value == \u0026#34;lenet\u0026#34;: return {\u0026#34;model_name\u0026#34;: model_name, \u0026#34;message\u0026#34;: \u0026#34;LeCNN all the images\u0026#34;} return {\u0026#34;model_name\u0026#34;: model_name, \u0026#34;message\u0026#34;: \u0026#34;Have some residuals\u0026#34;} Validations Before reading this section, please go through Query parameters and its validations: Query Parameters Validations Other Details\nIn the same way that you can declare more validations and metadata for query parameters with Query, you can declare the same type of validations and metadata for path parameters with Path.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 from typing import Annotated from fastapi import FastAPI, Path, Query app = FastAPI() @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_items( item_id: Annotated[int, Path(gt=0, le=1000, title=\u0026#34;The ID of the item to get\u0026#34;)], q: Annotated[str | None, Query(alias=\u0026#34;item-query\u0026#34;)] = None, ): results = {\u0026#34;item_id\u0026#34;: item_id} if q: results.update({\u0026#34;q\u0026#34;: q}) return results And you can also declare numeric validations:\ngt: greater than ge: greater than or equal lt: less than le: 
less than or equal Other Details Declare more metadata\nYou can declare all the same parameters as for Query.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 from typing import Annotated from fastapi import FastAPI, Path, Query app = FastAPI() @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_items( item_id: Annotated[int, Path(title=\u0026#34;The ID of the item to get\u0026#34;)], q: Annotated[str | None, Query(alias=\u0026#34;item-query\u0026#34;)] = None, ): results = {\u0026#34;item_id\u0026#34;: item_id} if q: results.update({\u0026#34;q\u0026#34;: q}) return results Query Parameters The query is the set of key-value pairs that go after the ? in a URL, separated by \u0026amp; characters. For example, in the URL: http://127.0.0.1:8000/items/?skip=0\u0026amp;limit=10\nWhen you declare other function parameters that are not part of the path parameters, they are automatically interpreted as \u0026ldquo;query\u0026rdquo; parameters. As query parameters are not a fixed part of a path, they can be optional and can have default values.\n1 2 3 4 5 6 7 8 9 from fastapi import FastAPI app = FastAPI() fake_items_db = [{\u0026#34;item_name\u0026#34;: \u0026#34;Foo\u0026#34;}, {\u0026#34;item_name\u0026#34;: \u0026#34;Bar\u0026#34;}, {\u0026#34;item_name\u0026#34;: \u0026#34;Baz\u0026#34;}] @app.get(\u0026#34;/items/\u0026#34;) async def read_item(skip: int = 0, limit: int = 10): return fake_items_db[skip : skip + limit] All the same process that applied for path parameters also applies for query parameters:\nEditor support (obviously) Data \u0026ldquo;parsing\u0026rdquo; Data validation Automatic documentation Multiple path and query parameters You can declare multiple path parameters and query parameters at the same time; FastAPI knows which is which. And you don\u0026rsquo;t have to declare them in any specific order. 
They will be detected by name:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/users/{user_id}/items/{item_id}\u0026#34;) async def read_user_item( user_id: int, item_id: str, q: str | None = None, short: bool = False ): item = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;owner_id\u0026#34;: user_id} if q: item.update({\u0026#34;q\u0026#34;: q}) if not short: item.update( {\u0026#34;description\u0026#34;: \u0026#34;This is an amazing item that has a long description\u0026#34;} ) return item Required query parameters When you declare a default value for non-path parameters (for now, we have only seen query parameters), then it is not required.\nIf you don\u0026rsquo;t want to add a specific value but just make it optional, set the default as None.\nBut when you want to make a query parameter required, you can just not declare any default value:\n1 2 3 4 5 6 7 8 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_user_item(item_id: str, needy: str): item = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;needy\u0026#34;: needy} return item On opening: http://127.0.0.1:8000/items/foo-item without adding the required parameter needy, you will see an error like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 { \u0026#34;detail\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;missing\u0026#34;, \u0026#34;loc\u0026#34;: [ \u0026#34;query\u0026#34;, \u0026#34;needy\u0026#34; ], \u0026#34;msg\u0026#34;: \u0026#34;Field required\u0026#34;, \u0026#34;input\u0026#34;: null, \u0026#34;url\u0026#34;: \u0026#34;https://errors.pydantic.dev/2.1/v/missing\u0026#34; } ] } And of course, you can define some parameters as required, some as having a default value, and some entirely optional. In this case, there are 3 query parameters:\nneedy, a required str. skip, an int with a default value of 0. limit, an optional int. 
1 2 3 4 5 6 7 8 9 10 11 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_user_item( item_id: str, needy: str, skip: int = 0, limit: int | None = None ): item = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;needy\u0026#34;: needy, \u0026#34;skip\u0026#34;: skip, \u0026#34;limit\u0026#34;: limit} return item Validations 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 from typing import Annotated from fastapi import FastAPI, Query app = FastAPI() @app.get(\u0026#34;/items/\u0026#34;) async def read_items( q: Annotated[ str | None, Query(min_length=3, max_length=50, pattern=\u0026#34;^fixedquery$\u0026#34;) ] = None, ): results = {\u0026#34;items\u0026#34;: [{\u0026#34;item_id\u0026#34;: \u0026#34;Foo\u0026#34;}, {\u0026#34;item_id\u0026#34;: \u0026#34;Bar\u0026#34;}]} if q: results.update({\u0026#34;q\u0026#34;: q}) return results The query parameter q is of type str | None, that means that it\u0026rsquo;s of type str but could also be None, and indeed, the default value is None, so FastAPI will know it\u0026rsquo;s not required. Annotated can be used to add metadata to your parameters\nFastAPI will now:\nValidate the data making sure that the max length is 50 characters Validate the data making sure that the min length is 3 characters Validate this specific regular expression pattern checks that the received parameter value: ^: starts with the following characters, doesn\u0026rsquo;t have characters before. fixedquery: has the exact value fixedquery. $: ends there, doesn\u0026rsquo;t have any more characters after fixedquery. 
Show a clear error for the client when the data is not valid Document the parameter in the OpenAPI schema path operation (so it will show up in the automatic docs UI) If you want the q query parameter to have a default value of fixedquery, the function declaration in the above example will become async def read_items(q: Annotated[str, Query(min_length=3)] = \u0026quot;fixedquery\u0026quot;):\n💡 Having a default value of any type, including `None`, makes the parameter optional (not required).\rWe can make a query parameter required just by not declaring a default value. To explicitly declare that a value is required, you can set the default to the literal value … (Ellipsis); the function declaration in the above example will become async def read_items(q: Annotated[str, Query(min_length=3)] = ...):\nOther Details Query parameter list / multiple values\nWhen you define a query parameter explicitly with Query you can also declare it to receive a list of values, or said another way, to receive multiple values.\nFor example, to declare a query parameter q that can appear multiple times in the URL, you can write:\n1 2 3 4 @app.get(\u0026#34;/items/\u0026#34;) async def read_items(q: Annotated[list[str] | None, Query()] = None): query_items = {\u0026#34;q\u0026#34;: q} return query_items Then, with a URL like: http://localhost:8000/items/?q=foo\u0026amp;q=bar\n💡 To declare a query parameter with a type of `list`, like in the example above, you need to explicitly use `Query`, otherwise it would be interpreted as a request body.\rWith defaults, the above function declaration would be: async def read_items(q: Annotated[list[str], Query()] = [\u0026quot;foo\u0026quot;, \u0026quot;bar\u0026quot;]):\nDeclare more metadata\nYou can add more information about the parameter. 
This information will be included in the generated OpenAPI and used by the documentation user interfaces and external tools.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 @app.get(\u0026#34;/items/\u0026#34;) async def read_items( q: Annotated[ str | None, Query( alias=\u0026#34;item-query\u0026#34;, title=\u0026#34;Query string\u0026#34;, description=\u0026#34;Query string for the items to search in the database that have a good match\u0026#34;, min_length=3, max_length=50, pattern=\u0026#34;^fixedquery$\u0026#34;, deprecated=True, ), ] = None, ): results = {\u0026#34;items\u0026#34;: [{\u0026#34;item_id\u0026#34;: \u0026#34;Foo\u0026#34;}, {\u0026#34;item_id\u0026#34;: \u0026#34;Bar\u0026#34;}]} if q: results.update({\u0026#34;q\u0026#34;: q}) return results 💡 For URL [`http://127.0.0.1:8000/items/?item-query=foobaritems`](http://127.0.0.1:8000/items/?item-query=foobaritems), query parameter name is `item_query`. This is not a valid Python variable name. Alias is what will be used to find the parameter value. So, `q` will have value `foobaritems`.\r💡\rTo exclude a query parameter from the generated OpenAPI schema, use Query(include_in_schema=False)\nRequest Body A request body is data sent by the client to your API. A response body is the data your API sends to the client. To send data, you should use one of: POST, PUT, DELETE or PATCH.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 from fastapi import FastAPI from pydantic import BaseModel class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None app = FastAPI() @app.post(\u0026#34;/items/\u0026#34;) async def create_item(item: Item): item_dict = item.dict() if item.tax: price_with_tax = item.price + item.tax item_dict.update({\u0026#34;price_with_tax\u0026#34;: price_with_tax}) return item_dict The same as when declaring query parameters, when a model attribute has a default value, it is not required. Otherwise, it is required. Use None to make it just optional. 
So, both the examples below are valid:\n1 2 3 4 5 6 { \u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;An optional description\u0026#34;, \u0026#34;price\u0026#34;: 45.2, \u0026#34;tax\u0026#34;: 3.5 } 1 2 3 4 { \u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;price\u0026#34;: 45.2 } With just that Python type declaration, FastAPI will:\nRead the body of the request as JSON. Convert the corresponding types (if needed). Validate the data. If the data is invalid, it will return a nice and clear error, indicating exactly where and what was the incorrect data. Give you the received data in the parameter item. As you declared it in the function to be of type Item, you will also have all the editor support (completion, etc) for all of the attributes and their types. Generate JSON Schema definitions for your model, you can also use them anywhere else you like if it makes sense for your project. Those schemas will be part of the generated OpenAPI schema, and used by the automatic documentation UIs. Request body + path + query parameters 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 from fastapi import FastAPI from pydantic import BaseModel class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None app = FastAPI() @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Item, q: str | None = None): result = {\u0026#34;item_id\u0026#34;: item_id, **item.dict()} if q: result.update({\u0026#34;q\u0026#34;: q}) return result The function parameters will be recognized as follows:\nIf the parameter is also declared in the path, it will be used as a path parameter. If the parameter is of a singular type (like int, float, str, bool, etc) it will be interpreted as a query parameter. If the parameter is declared to be of the type of a Pydantic model, it will be interpreted as a request body.
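These recognition rules can be sketched in plain Python. This is a toy illustration only, not FastAPI's actual implementation; `classify_params` and `FakeModel` are hypothetical names standing in for FastAPI's internals and pydantic.BaseModel:

```python
import inspect


class FakeModel:
    """Stand-in for pydantic.BaseModel in this toy sketch."""


def classify_params(func, path: str) -> dict:
    # Hypothetical helper mimicking how FastAPI decides where each
    # function parameter comes from.
    kinds = {}
    for name, param in inspect.signature(func).parameters.items():
        ann = param.annotation
        if "{" + name + "}" in path:
            kinds[name] = "path"    # name appears in the path template
        elif isinstance(ann, type) and issubclass(ann, FakeModel):
            kinds[name] = "body"    # a Pydantic-model-like type
        else:
            kinds[name] = "query"   # singular types default to query
    return kinds


class Item(FakeModel):
    name: str
    price: float


def update_item(item_id: int, item: Item, q: str = None):
    pass


print(classify_params(update_item, "/items/{item_id}"))
# {'item_id': 'path', 'item': 'body', 'q': 'query'}
```

The real logic also accounts for defaults, Annotated metadata like Body() and Query(), and more, but the priority order shown (path, then model type, then query) matches the rules above.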
Singular values in body The same way there is a Query and Path to define extra data for query and path parameters, FastAPI provides an equivalent Body. Body also has all the same extra validation and metadata parameters as Query, Path and others. For example, extending the previous model, you could decide that you want to have another key importance in the same body, besides the item and user. If you declare it as is, because it is a singular value, FastAPI will assume that it is a query parameter. But you can instruct FastAPI to treat it as another body key using Body:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 from typing import Annotated from fastapi import Body, FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None class User(BaseModel): username: str full_name: str | None = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item( item_id: int, item: Item, user: User, importance: Annotated[int, Body()] ): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item, \u0026#34;user\u0026#34;: user, \u0026#34;importance\u0026#34;: importance} return results In this case, FastAPI will expect a body like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 { \u0026#34;item\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The pretender\u0026#34;, \u0026#34;price\u0026#34;: 42.0, \u0026#34;tax\u0026#34;: 3.2 }, \u0026#34;user\u0026#34;: { \u0026#34;username\u0026#34;: \u0026#34;dave\u0026#34;, \u0026#34;full_name\u0026#34;: \u0026#34;Dave Grohl\u0026#34; }, \u0026#34;importance\u0026#34;: 5 } Multiple body params and query Of course, you can also declare additional query parameters whenever you need, additional to any body parameters.
As, by default, singular values are interpreted as query parameters, you don\u0026rsquo;t have to explicitly add a Query, you can just do:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 from typing import Annotated from fastapi import Body, FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None class User(BaseModel): username: str full_name: str | None = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item( *, item_id: int, item: Item, user: User, importance: Annotated[int, Body(gt=0)], q: str | None = None, ): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item, \u0026#34;user\u0026#34;: user, \u0026#34;importance\u0026#34;: importance} if q: results.update({\u0026#34;q\u0026#34;: q}) return results Here, q is the query parameter while others are body keys.\n💡 Body keys and body parameters refer to the same thing. For example, in `{\"item_id\": 1, \"user_id\": 2}`, `item_id` is a body key/parameter.\rEmbed a single body parameter Let\u0026rsquo;s say you only have a single item body parameter from a Pydantic model Item. By default, FastAPI will then expect its body directly. 
But if you want it to expect a JSON with a key item and inside of it the model contents, as it does when you declare extra body parameters, you can use the special Body parameter embed:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 from typing import Union from fastapi import Body, FastAPI from pydantic import BaseModel from typing_extensions import Annotated app = FastAPI() class Item(BaseModel): name: str description: Union[str, None] = None price: float tax: Union[float, None] = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Annotated[Item, Body(embed=True)]): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item} return results In this case FastAPI will expect a body like:\n1 2 3 4 5 6 7 8 { \u0026#34;item\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The pretender\u0026#34;, \u0026#34;price\u0026#34;: 42.0, \u0026#34;tax\u0026#34;: 3.2 } } Fields The same way you can declare additional validation and metadata in path operation function parameters with Query, Path and Body, you can declare validation and metadata inside of Pydantic models using Pydantic\u0026rsquo;s Field.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 from typing import Annotated from fastapi import Body, FastAPI from pydantic import BaseModel, Field app = FastAPI() class Item(BaseModel): name: str description: str | None = Field( default=None, title=\u0026#34;The description of the item\u0026#34;, max_length=300 ) price: float = Field(gt=0, description=\u0026#34;The price must be greater than zero\u0026#34;) tax: float | None = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Annotated[Item, Body(embed=True)]): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item} return results Field works the same way as Query, Path and Body, it has all the same parameters, etc.\nNested Models List 
fields In Python 3.9 and above you can use the standard list to declare these type annotations as we\u0026rsquo;ll see below.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None tags: list[str] = [] @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Item): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item} return results Submodel 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Image(BaseModel): url: str name: str class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None tags: set[str] = set() image: Image | None = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Item): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item} return results This would mean that FastAPI would expect a body similar to:\n1 2 3 4 5 6 7 8 9 10 11 { \u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The pretender\u0026#34;, \u0026#34;price\u0026#34;: 42.0, \u0026#34;tax\u0026#34;: 3.2, \u0026#34;tags\u0026#34;: [\u0026#34;rock\u0026#34;, \u0026#34;metal\u0026#34;, \u0026#34;bar\u0026#34;], \u0026#34;image\u0026#34;: { \u0026#34;url\u0026#34;: \u0026#34;http://example.com/baz.jpg\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;The Foo live\u0026#34; } } Special types and validation Apart from normal singular types like str, int, float, etc., you can use more complex singular types that inherit from str. To see all the options you have, check out the docs for Pydantic\u0026rsquo;s exotic types.
For example, as in the Image model we have a url field, we can declare it to be an instance of Pydantic\u0026rsquo;s HttpUrl instead of a str:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 from fastapi import FastAPI from pydantic import BaseModel, HttpUrl app = FastAPI() class Image(BaseModel): url: HttpUrl name: str class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None tags: set[str] = set() image: Image | None = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Item): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item} return results You can also use Pydantic models as subtypes of list, set, etc.:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 from fastapi import FastAPI from pydantic import BaseModel, HttpUrl app = FastAPI() class Image(BaseModel): url: HttpUrl name: str class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None tags: set[str] = set() images: list[Image] | None = None @app.put(\u0026#34;/items/{item_id}\u0026#34;) async def update_item(item_id: int, item: Item): results = {\u0026#34;item_id\u0026#34;: item_id, \u0026#34;item\u0026#34;: item} return results This will expect (convert, validate, document, etc.) 
a JSON body like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 { \u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;The pretender\u0026#34;, \u0026#34;price\u0026#34;: 42.0, \u0026#34;tax\u0026#34;: 3.2, \u0026#34;tags\u0026#34;: [ \u0026#34;rock\u0026#34;, \u0026#34;metal\u0026#34;, \u0026#34;bar\u0026#34; ], \u0026#34;images\u0026#34;: [ { \u0026#34;url\u0026#34;: \u0026#34;http://example.com/baz.jpg\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;The Foo live\u0026#34; }, { \u0026#34;url\u0026#34;: \u0026#34;http://example.com/dave.jpg\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;The Baz\u0026#34; } ] } Bodies of arbitrary dict You can also declare a body as a dict with keys of some type and values of some other type. This way, you don\u0026rsquo;t have to know beforehand what the valid field/attribute names are (as would be the case with Pydantic models). This would be useful if you want to receive keys that you don\u0026rsquo;t already know.\n1 2 3 4 5 6 7 from fastapi import FastAPI app = FastAPI() @app.post(\u0026#34;/index-weights/\u0026#34;) async def create_index_weights(weights: dict[int, float]): return weights Extra Data Types Up to now, you have been using common data types, like:\nint float str bool But you can also use more complex data types.\nAnd you will still have the same features as seen up to now:\nGreat editor support. Data conversion from incoming requests. Data conversion for response data. Data validation. Automatic annotation and documentation. Here are some of the additional data types you can use:\nUUID: A standard \u0026ldquo;Universally Unique Identifier\u0026rdquo;, common as an ID in many databases and systems. In requests and responses will be represented as a str. datetime.datetime: A Python datetime.datetime. In requests and responses will be represented as a str in ISO 8601 format, like: 2008-09-15T15:53:00+05:00. datetime.date: Python datetime.date.
In requests and responses will be represented as a str in ISO 8601 format, like: 2008-09-15. datetime.time: A Python datetime.time. In requests and responses will be represented as a str in ISO 8601 format, like: 14:23:55.003. datetime.timedelta: A Python datetime.timedelta. In requests and responses will be represented as a float of total seconds. Pydantic also allows representing it as an \u0026ldquo;ISO 8601 time diff encoding\u0026rdquo;, see the docs for more info. frozenset: In requests and responses, treated the same as a set: In requests, a list will be read, eliminating duplicates and converting it to a set. In responses, the set will be converted to a list. The generated schema will specify that the set values are unique (using JSON Schema\u0026rsquo;s uniqueItems). bytes: Standard Python bytes. In requests and responses will be treated as str. The generated schema will specify that it\u0026rsquo;s a str with binary \u0026ldquo;format\u0026rdquo;. Decimal: Standard Python Decimal. In requests and responses, handled the same as a float. You can check all the valid pydantic data types here: Pydantic data types. Cookie Parameters You can define Cookie parameters the same way you define Query and Path parameters.\n1 2 3 4 5 6 7 8 9 from typing import Annotated from fastapi import Cookie, FastAPI app = FastAPI() @app.get(\u0026#34;/items/\u0026#34;) async def read_items(ads_id: Annotated[str | None, Cookie()] = None): return {\u0026#34;ads_id\u0026#34;: ads_id} Header Parameters You can define Header parameters the same way you define Query, Path and Cookie parameters.\n1 2 3 4 5 6 7 8 9 from typing import Annotated from fastapi import FastAPI, Header app = FastAPI() @app.get(\u0026#34;/items/\u0026#34;) async def read_items(user_agent: Annotated[str | None, Header()] = None): return {\u0026#34;User-Agent\u0026#34;: user_agent} Header has a little extra functionality on top of what Path, Query and Cookie provide.
Most of the standard headers are separated by a \u0026ldquo;hyphen\u0026rdquo; character, also known as the \u0026ldquo;minus symbol\u0026rdquo; (-). But a variable like user-agent is invalid in Python. So, by default, Header will convert the parameter name\u0026rsquo;s characters from underscore (_) to hyphen (-) to extract and document the headers.\nAlso, HTTP headers are case-insensitive, so, you can declare them with standard Python style (also known as \u0026ldquo;snake_case\u0026rdquo;). So, you can use user_agent as you normally would in Python code, instead of needing to capitalize the first letters as User_Agent or something similar.\nIf for some reason you need to disable automatic conversion of underscores to hyphens, set the parameter convert_underscores of Header to False: Header(convert_underscores=False)\nDuplicate headers It is possible to receive duplicate headers. That means, the same header with multiple values. You can define those cases using a list in the type declaration.\nYou will receive all the values from the duplicate header as a Python list.
For example, to declare a header of X-Token that can appear more than once, you can write:\n1 2 3 4 5 6 7 8 9 from typing import Annotated from fastapi import FastAPI, Header app = FastAPI() @app.get(\u0026#34;/items/\u0026#34;) async def read_items(x_token: Annotated[list[str] | None, Header()] = None): return {\u0026#34;X-Token values\u0026#34;: x_token} If you communicate with that path operation sending two HTTP headers like:\n1 2 X-Token: foo X-Token: bar The response would be like:\n1 2 3 4 5 6 { \u0026#34;X-Token values\u0026#34;: [ \u0026#34;bar\u0026#34;, \u0026#34;foo\u0026#34; ] } Response You can declare the type used for the response by annotating the path operation function return type.\nYou can use type annotations the same way you would for input data in function parameters, you can use Pydantic models, lists, dictionaries, scalar values like integers, booleans, etc.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None tags: list[str] = [] @app.post(\u0026#34;/items/\u0026#34;) async def create_item(item: Item) -\u0026gt; Item: return item @app.get(\u0026#34;/items/\u0026#34;) async def read_items() -\u0026gt; list[Item]: return [ Item(name=\u0026#34;Portal Gun\u0026#34;, price=42.0), Item(name=\u0026#34;Plumbus\u0026#34;, price=32.0), ] FastAPI will use this return type to:\nValidate the returned data. If the data is invalid (e.g. you are missing a field), it means that your app code is broken, not returning what it should, and it will return a server error instead of returning incorrect data. This way you and your clients can be certain that they will receive the data and the data shape expected. Add a JSON Schema for the response, in the OpenAPI path operation. This will be used by the automatic docs. It will also be used by automatic client code generation tools. 
But most importantly:\nIt will limit and filter the output data to what is defined in the return type. This is particularly important for security, we\u0026rsquo;ll see more of that below. response_model Parameter There are some cases where you need or want to return some data that is not exactly what the type declares.\nFor example, you could want to return a dictionary or a database object, but declare it as a Pydantic model. This way the Pydantic model would do all the data documentation, validation, etc. for the object that you returned (e.g. a dictionary or database object). If you added the return type annotation, tools and editors would complain with a (correct) error telling you that your function is returning a type (e.g. a dict) that is different from what you declared (e.g. a Pydantic model). In those cases, you can use the path operation decorator parameter response_model instead of the return type.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 from typing import Any from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str | None = None price: float tax: float | None = None tags: list[str] = [] @app.post(\u0026#34;/items/\u0026#34;, response_model=Item) async def create_item(item: Item) -\u0026gt; Any: return item @app.get(\u0026#34;/items/\u0026#34;, response_model=list[Item]) async def read_items() -\u0026gt; Any: return [ {\u0026#34;name\u0026#34;: \u0026#34;Portal Gun\u0026#34;, \u0026#34;price\u0026#34;: 42.0}, {\u0026#34;name\u0026#34;: \u0026#34;Plumbus\u0026#34;, \u0026#34;price\u0026#34;: 32.0}, ] If you declare both a return type and a response_model, the response_model will take priority and be used by FastAPI. Some other things response models can do:\nYou can disable the response model generation by setting response_model=None. 
This will make FastAPI skip the response model generation and that way you can have any return type annotations you need without it affecting your FastAPI application. You can omit optional attributes having default values from the response by setting path operation decorator parameter response_model_exclude_unset=True. For example, @app.post(\u0026quot;/items/{item_id}\u0026quot;, response_model=Item, response_model_exclude_unset=True) You can also use the path operation decorator parameters response_model_include and response_model_exclude. They take a set of str with the name of the attributes to include (omitting the rest) or to exclude (including the rest). This can be used as a quick shortcut if you have only one Pydantic model and want to remove some data from the output. For example: @app.post(\u0026quot;/items\u0026quot;, response_model=Item, response_model_include={\u0026quot;name\u0026quot;, \u0026quot;description\u0026quot;}, response_model_exclude={\u0026quot;tax\u0026quot;}) Data Filtering We want to annotate the function with one type but return something that includes more data. 
We want FastAPI to keep filtering the data using the response model.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 from fastapi import FastAPI from pydantic import BaseModel, EmailStr app = FastAPI() class BaseUser(BaseModel): username: str email: EmailStr full_name: str | None = None class UserIn(BaseUser): password: str @app.post(\u0026#34;/user/\u0026#34;) async def create_user(user: UserIn) -\u0026gt; BaseUser: return user Return a Response Directly 1 2 3 4 5 6 7 8 9 10 from fastapi import FastAPI, Response from fastapi.responses import JSONResponse, RedirectResponse app = FastAPI() @app.get(\u0026#34;/portal\u0026#34;) async def get_portal(teleport: bool = False) -\u0026gt; Response: if teleport: return RedirectResponse(url=\u0026#34;https://www.youtube.com/watch?v=dQw4w9WgXcQ\u0026#34;) return JSONResponse(content={\u0026#34;message\u0026#34;: \u0026#34;Here\u0026#39;s your interdimensional portal.\u0026#34;}) Extra Models It is common to have more than one related model. This is especially the case for user models, because:\nThe input model needs to be able to have a password. The output model should not have a password. The database model would probably need to have a hashed password. This is an example of how they are used:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 from fastapi import FastAPI from pydantic import BaseModel, EmailStr app = FastAPI() class UserBase(BaseModel): username: str email: EmailStr full_name: str | None = None class UserIn(UserBase): password: str class UserOut(UserBase): pass class UserInDB(UserBase): hashed_password: str def fake_password_hasher(raw_password: str): return \u0026#34;supersecret\u0026#34; + raw_password def fake_save_user(user_in: UserIn): hashed_password = fake_password_hasher(user_in.password) user_in_db = UserInDB(**user_in.dict(), hashed_password=hashed_password) print(\u0026#34;User saved!
..not really\u0026#34;) return user_in_db @app.post(\u0026#34;/user/\u0026#34;, response_model=UserOut) async def create_user(user_in: UserIn): user_saved = fake_save_user(user_in) return user_saved To note:\nTo create a dictionary from a Pydantic model, use the .dict() method that returns a dict with the model\u0026rsquo;s data. For example, user_dict = user_in.dict() To create a Pydantic model from a dict, call the constructor with the dict. For example: UserInDB(**user_dict). This is called unwrapping a dict. Convert from one Pydantic model to another using user_dict = user_in.dict(); UserInDB(**user_dict) UserInDB(**user_in.dict()) UserInDB(**user_in.dict(), hashed_password=hashed_password) (to add an extra attribute hashed_password) Other details Union or AnyOf You can declare a response to be the Union of two types, that means, that the response would be any of the two.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 from typing import Union from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class BaseItem(BaseModel): description: str type: str class CarItem(BaseItem): type: str = \u0026#34;car\u0026#34; class PlaneItem(BaseItem): type: str = \u0026#34;plane\u0026#34; size: int items = { \u0026#34;item1\u0026#34;: {\u0026#34;description\u0026#34;: \u0026#34;All my friends drive a low rider\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;car\u0026#34;}, \u0026#34;item2\u0026#34;: { \u0026#34;description\u0026#34;: \u0026#34;Music is my aeroplane, it\u0026#39;s my aeroplane\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;plane\u0026#34;, \u0026#34;size\u0026#34;: 5, }, } @app.get(\u0026#34;/items/{item_id}\u0026#34;, response_model=Union[PlaneItem, CarItem]) async def read_item(item_id: str): return items[item_id] In this example we pass Union[PlaneItem, CarItem] as the value of the argument response_model.
Because we are passing it as a value to an argument instead of putting it in a type annotation, we have to use Union even in Python 3.10.\nList of models 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() class Item(BaseModel): name: str description: str items = [ {\u0026#34;name\u0026#34;: \u0026#34;Foo\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;There comes my hero\u0026#34;}, {\u0026#34;name\u0026#34;: \u0026#34;Red\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;It\u0026#39;s my aeroplane\u0026#34;}, ] @app.get(\u0026#34;/items/\u0026#34;, response_model=list[Item]) async def read_items(): return items Response with arbitrary dict You can also declare a response using a plain arbitrary dict, declaring just the type of the keys and values, without using a Pydantic model. This is useful if you don\u0026rsquo;t know the valid field/attribute names (that would be needed for a Pydantic model) beforehand.\n1 2 3 4 5 6 7 from fastapi import FastAPI app = FastAPI() @app.get(\u0026#34;/keyword-weights/\u0026#34;, response_model=dict[str, float]) async def read_keyword_weights(): return {\u0026#34;foo\u0026#34;: 2.3, \u0026#34;bar\u0026#34;: 3.4} Response Codes In HTTP, you send a numeric status code of 3 digits as part of the response. These status codes have an associated name to recognize them, but the important part is the number.\n100 and above are for \u0026ldquo;Information\u0026rdquo;. You rarely use them directly. Responses with these status codes cannot have a body. 200 and above are for \u0026ldquo;Successful\u0026rdquo; responses. These are the ones you would use the most. 200 is the default status code, which means everything was \u0026ldquo;OK\u0026rdquo;. Another example would be 201, \u0026ldquo;Created\u0026rdquo;. It is commonly used after creating a new record in the database. A special case is 204, \u0026ldquo;No Content\u0026rdquo;.
This response is used when there is no content to return to the client, and so the response must not have a body. 300 and above are for \u0026ldquo;Redirection\u0026rdquo;. Responses with these status codes may or may not have a body, except for 304, \u0026ldquo;Not Modified\u0026rdquo;, which must not have one. 400 and above are for \u0026ldquo;Client error\u0026rdquo; responses. These are the second type you would probably use the most. An example is 404, for a \u0026ldquo;Not Found\u0026rdquo; response. For generic errors from the client, you can just use 400. 500 and above are for server errors. You almost never use them directly. When something goes wrong at some part in your application code, or server, it will automatically return one of these status codes. 1 2 3 4 5 6 7 from fastapi import FastAPI, status app = FastAPI() @app.post(\u0026#34;/items/\u0026#34;, status_code=status.HTTP_201_CREATED) async def create_item(name: str): return {\u0026#34;name\u0026#34;: name} Form Data When you need to receive form fields instead of JSON, you can use Form. With Form you can declare the same configurations as with Body (and Query, Path, Cookie), including validation, examples, an alias (e.g. user-name instead of username), etc. The way HTML forms (\u0026lt;form\u0026gt;\u0026lt;/form\u0026gt;) send the data to the server normally uses a \u0026ldquo;special\u0026rdquo; encoding for that data; it\u0026rsquo;s different from JSON.
FastAPI will make sure to read that data from the right place instead of JSON.\n1 2 3 4 5 6 7 8 9 from typing import Annotated from fastapi import FastAPI, Form app = FastAPI() @app.post(\u0026#34;/login/\u0026#34;) async def login(username: Annotated[str, Form()], password: Annotated[str, Form()]): return {\u0026#34;username\u0026#34;: username} Request Files You can define files to be uploaded by the client using File.\n💡 To receive uploaded files, first install [`python-multipart`](https://github.com/Kludex/python-multipart)\r1 2 3 4 5 6 7 8 9 10 11 12 13 from typing import Annotated from fastapi import FastAPI, File, UploadFile app = FastAPI() @app.post(\u0026#34;/files/\u0026#34;) async def create_file(file: Annotated[bytes, File()]): return {\u0026#34;file_size\u0026#34;: len(file)} @app.post(\u0026#34;/uploadfile/\u0026#34;) async def create_upload_file(file: UploadFile): return {\u0026#34;filename\u0026#34;: file.filename} The files will be uploaded as \u0026ldquo;form data\u0026rdquo;.\nIf you declare the type of your path operation function parameter as bytes, FastAPI will read the file for you and you will receive the contents as bytes.\nKeep in mind that this means that the whole contents will be stored in memory. This will work well for small files.\nUsing UploadFile has several advantages over bytes:\nYou don\u0026rsquo;t have to use File() in the default value of the parameter. It uses a \u0026ldquo;spooled\u0026rdquo; file: A file stored in memory up to a maximum size limit, and after passing this limit it will be stored on disk. This means that it will work well for large files like images, videos, large binaries, etc. without consuming all the memory. You can get metadata from the uploaded file. It has a file-like async interface. It exposes an actual Python SpooledTemporaryFile object that you can pass directly to other libraries that expect a file-like object.
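The spooled behavior comes from the standard library class SpooledTemporaryFile that UploadFile wraps, and can be observed directly. Note that `_rolled` is a CPython-internal attribute, peeked at here purely for illustration:

```python
from tempfile import SpooledTemporaryFile

# Data stays in memory until it grows past max_size, then it is
# transparently rolled over to a real temporary file on disk.
f = SpooledTemporaryFile(max_size=10)
f.write(b"tiny")
print(f._rolled)  # False: 4 bytes, still held in memory

f.write(b" plus a payload well past the ten-byte limit")
print(f._rolled)  # True: rolled over to an on-disk temp file

f.seek(0)
print(f.read()[:4])  # b'tiny': reads behave the same either way
f.close()
```

Because the rollover is transparent, code that consumes an UploadFile's underlying file object does not need to care whether the upload was small enough to stay in memory.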
UploadFile has the following attributes:\nfilename: A str with the original file name that was uploaded (e.g. myimage.jpg). content_type: A str with the content type (MIME type / media type) (e.g. image/jpeg). file: A SpooledTemporaryFile (a file-like object). This is the actual Python file that you can pass directly to other functions or libraries that expect a \u0026ldquo;file-like\u0026rdquo; object. UploadFile has the following async methods. They all call the corresponding file methods underneath (using the internal SpooledTemporaryFile).\nwrite(data): Writes data (str or bytes) to the file. read(size): Reads size (int) bytes/characters of the file. seek(offset): Goes to the byte position offset (int) in the file. E.g., await myfile.seek(0) would go to the start of the file. This is especially useful if you run await myfile.read() once and then need to read the contents again. close(): Closes the file. As all these methods are async methods, you need to \u0026ldquo;await\u0026rdquo; them.\nFor example, inside of an async path operation function you can get the contents with contents = await myfile.read()\nIf you are inside of a normal def path operation function, you can access the UploadFile.file directly, for example contents = myfile.file.read()\nTo note:\nYou can make a file optional by using standard type annotations and setting a default value of None. For example, the function definition from above will change to async def create_upload_file(file: UploadFile | None = None): You can also use File() with UploadFile, for example, to set additional metadata. For example: async def create_upload_file(\rfile: Annotated[UploadFile, File(description=\u0026quot;A file read as UploadFile\u0026quot;)]): It\u0026rsquo;s possible to upload several files at the same time. To use that, declare a list of bytes or UploadFile.
For example, async def create_upload_files(files: list[UploadFile]): Handling Errors There are many situations in which you need to notify a client that is using your API of an error. This client could be a browser with a frontend, code from someone else, an IoT device, etc. You could need to tell the client that:\nThe client doesn\u0026rsquo;t have enough privileges for that operation. The client doesn\u0026rsquo;t have access to that resource. The item the client was trying to access doesn\u0026rsquo;t exist, etc. HTTPException Raise an HTTPException in your code.\n1 2 3 4 5 6 7 8 9 10 11 from fastapi import FastAPI, HTTPException app = FastAPI() items = {\u0026#34;foo\u0026#34;: \u0026#34;The Foo Wrestlers\u0026#34;} @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_item(item_id: str): if item_id not in items: raise HTTPException(status_code=404, detail=\u0026#34;Item not found\u0026#34;) return {\u0026#34;item\u0026#34;: items[item_id]} If the client requests http://example.com/items/foo (an item_id \u0026quot;foo\u0026quot;), that client will receive an HTTP status code of 200, and a JSON response of:\n1 2 3 { \u0026#34;item\u0026#34;: \u0026#34;The Foo Wrestlers\u0026#34; } But if the client requests http://example.com/items/bar (a non-existent item_id \u0026quot;bar\u0026quot;), that client will receive an HTTP status code of 404 (the \u0026ldquo;not found\u0026rdquo; error), and a JSON response of:\n1 2 3 { \u0026#34;detail\u0026#34;: \u0026#34;Item not found\u0026#34; } 💡\rWhen raising an HTTPException, you can pass any value that can be converted to JSON as the parameter detail, not only str.
You could pass a dict, a list, etc.\nThey are handled automatically by FastAPI and converted to JSON.\nAdd custom headers 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 from fastapi import FastAPI, HTTPException app = FastAPI() items = {\u0026#34;foo\u0026#34;: \u0026#34;The Foo Wrestlers\u0026#34;} @app.get(\u0026#34;/items-header/{item_id}\u0026#34;) async def read_item_header(item_id: str): if item_id not in items: raise HTTPException( status_code=404, detail=\u0026#34;Item not found\u0026#34;, headers={\u0026#34;X-Error\u0026#34;: \u0026#34;There goes my error\u0026#34;}, ) return {\u0026#34;item\u0026#34;: items[item_id]} Install custom exception handlers Let\u0026rsquo;s say you have a custom exception UnicornException that you (or a library you use) might raise. And you want to handle this exception globally with FastAPI. You could add a custom exception handler with @app.exception_handler():\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 from fastapi import FastAPI, Request from fastapi.responses import JSONResponse class UnicornException(Exception): def __init__(self, name: str): self.name = name app = FastAPI() @app.exception_handler(UnicornException) async def unicorn_exception_handler(request: Request, exc: UnicornException): return JSONResponse( status_code=418, content={\u0026#34;message\u0026#34;: f\u0026#34;Oops! {exc.name} did something. There goes a rainbow...\u0026#34;}, ) @app.get(\u0026#34;/unicorns/{name}\u0026#34;) async def read_unicorn(name: str): if name == \u0026#34;yolo\u0026#34;: raise UnicornException(name=name) return {\u0026#34;unicorn_name\u0026#34;: name} Here, if you request /unicorns/yolo, the path operation will raise a UnicornException. But it will be handled by the unicorn_exception_handler. So, you will receive a clean error, with an HTTP status code of 418 and a JSON content of: {\u0026quot;message\u0026quot;: \u0026quot;Oops! yolo did something. 
There goes a rainbow...\u0026quot;}\nOverride the default exception handlers FastAPI has some default exception handlers. These handlers are in charge of returning the default JSON responses when you raise an HTTPException and when the request has invalid data.\nWhen a request contains invalid data, FastAPI internally raises a RequestValidationError. And it also includes a default exception handler for it. To override it, import the RequestValidationError and use it with @app.exception_handler(RequestValidationError) to decorate the exception handler. The exception handler will receive a Request and the exception.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 from fastapi import FastAPI, HTTPException from fastapi.exceptions import RequestValidationError from fastapi.responses import PlainTextResponse from starlette.exceptions import HTTPException as StarletteHTTPException app = FastAPI() @app.exception_handler(StarletteHTTPException) async def http_exception_handler(request, exc): return PlainTextResponse(str(exc.detail), status_code=exc.status_code) @app.exception_handler(RequestValidationError) async def validation_exception_handler(request, exc): return PlainTextResponse(str(exc), status_code=400) @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_item(item_id: int): if item_id == 3: raise HTTPException(status_code=418, detail=\u0026#34;Nope! 
I don\u0026#39;t like 3.\u0026#34;) return {\u0026#34;item_id\u0026#34;: item_id} Now, if you go to /items/foo, instead of getting the default JSON error with:\n1 2 3 4 5 6 7 8 9 10 11 12 { \u0026#34;detail\u0026#34;: [ { \u0026#34;loc\u0026#34;: [ \u0026#34;path\u0026#34;, \u0026#34;item_id\u0026#34; ], \u0026#34;msg\u0026#34;: \u0026#34;value is not a valid integer\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;type_error.integer\u0026#34; } ] } you will get a text version, with:\n1 2 3 1 validation error path -\u0026gt; item_id value is not a valid integer (type=type_error.integer) The same way, you can override the HTTPException handler. For example, you could want to return a plain text response instead of JSON for these errors:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 from fastapi import FastAPI, HTTPException from fastapi.exceptions import RequestValidationError from fastapi.responses import PlainTextResponse from starlette.exceptions import HTTPException as StarletteHTTPException app = FastAPI() @app.exception_handler(StarletteHTTPException) async def http_exception_handler(request, exc): return PlainTextResponse(str(exc.detail), status_code=exc.status_code) @app.exception_handler(RequestValidationError) async def validation_exception_handler(request, exc): return PlainTextResponse(str(exc), status_code=400) @app.get(\u0026#34;/items/{item_id}\u0026#34;) async def read_item(item_id: int): if item_id == 3: raise HTTPException(status_code=418, detail=\u0026#34;Nope! I don\u0026#39;t like 3.\u0026#34;) return {\u0026#34;item_id\u0026#34;: item_id} FastAPI\u0026rsquo;s HTTPException vs Starlette\u0026rsquo;s HTTPException And FastAPI\u0026rsquo;s HTTPException error class inherits from Starlette\u0026rsquo;s HTTPException error class. 
The only difference is that FastAPI\u0026rsquo;s HTTPException accepts any JSON-able data for the detail field, while Starlette\u0026rsquo;s HTTPException only accepts strings for it.\nSo, you can keep raising FastAPI\u0026rsquo;s HTTPException as normally in your code. But when you register an exception handler, you should register it for Starlette\u0026rsquo;s HTTPException. This way, if any part of Starlette\u0026rsquo;s internal code, or a Starlette extension or plug-in, raises a Starlette HTTPException, your handler will be able to catch and handle it.\n","date":"2024-11-04T13:23:57+05:30","permalink":"https://jhaakansha.github.io/p/fastapi-a-quick-guide/","title":"FastAPI: A Quick Guide"},{"content":"My Internship Experience at Adobe \u0026ldquo;A dream internship, real-world impact, and a journey into the heart of creativity and innovation.\u0026rdquo;\nLanding an internship at Adobe was nothing short of surreal. It was a thrilling and unexpected milestone, and I couldn’t wait to see what life beyond the classroom looked like—working alongside industry veterans and building features that millions might use.\nTeam Photoshop Express – Android Squad I joined the Photoshop Express team, where I primarily worked on the Android app. What amazed me from day one was the ownership I was given. 
I wasn\u0026rsquo;t just there to shadow someone—I was contributing to real features that impacted actual users.\nTackling Frustration with a Wait State Loader The Problem Long image processing times were leading to:\nUser frustration and app abandonment Confusion among new users trying to understand text-to-image generation Lost opportunities to highlight community creativity The Solution Introduce a wait state loader that:\nShows community-generated images Displays prompts used to create them Lets users copy prompts for inspiration while they wait What I Built An automated image slider that:\nPulls in community images from a remote server Displays them with clean, styled captions Allows quick one-tap prompt copying Demo Leading the Generative Expand Feature \u0026ldquo;From concept to production—owning one of the most powerful AI-driven features in the app.\u0026rdquo;\nWhat is Generative Expand? Generative Expand is an innovative feature that allows users to change the aspect ratio of an image by using AI to seamlessly generate and fill in the extended canvas based on the original content.\nImagine expanding a photo horizontally, and the app fills in the sides with realistic, context-aware details—automatically.\nMy Role I was entrusted with end-to-end ownership of this feature during my internship, and it quickly became the most exciting and challenging part of my journey.\nResponsibilities: Architected the workflow for canvas resizing + AI content generation Integrated Firefly model capabilities into the Android app Ensured a smooth and intuitive UX, minimizing user friction Collaborated with designers, QA, and product teams for polish Timeline \u0026amp; Execution 3 weeks of development Continuous feedback and iteration with mentors Meticulous testing across devices and screen sizes Final implementation went live in the production app Seeing my feature shipped and used by real users was one of the most fulfilling moments of the internship.\nHere\u0026rsquo;s 
a demo video Enhancing the Firefly Model through Prompt Engineering \u0026ldquo;The art of talking to AI—shaping how it sees, creates, and imagines.\u0026rdquo;\nOne of the most intellectually rewarding parts of my internship involved working on Prompt Engineering for Adobe’s Firefly model, which powers several upcoming features in the Photoshop Express iOS app.\nThe Challenge The Firefly model needed smarter, trend-aware inputs to produce cleaner, more realistic images. That meant:\nReducing visual noise Avoiding confusing or ambiguous prompts Keeping up with rapidly evolving creative trends What I Did As a Prompt Engineer, my responsibilities included:\nRefining existing prompts to improve clarity and output quality Creating new prompt categories based on trend analysis Cleaning up noisy or redundant text elements Studying competitor apps + user-generated galleries for inspiration This wasn’t just a technical task—it was part research, part design thinking, and part creative strategy.\nThe Results The model produced cleaner, more accurate, and visually appealing images\nNew prompt categories helped Firefly stay aligned with creative trends\nPrompt templates became more user-friendly and intuitive\nI wasn’t just teaching AI how to create—I was helping it create better.\nWrapping It All Up Conclusion My internship at Adobe was a transformative journey. I got to:\nBuild real features that shipped to production Work hands-on with AI-driven technologies Contribute directly to Photoshop Express on Android and iOS Collaborate with an incredibly talented and welcoming team It was the perfect blend of challenge, creativity, and real-world impact.\nI walked in as a student curious about how the industry works—and walked out as a developer who\u0026rsquo;s contributed to one of the most recognized creative platforms in the world.\nFinal Thanks A huge thank you to my mentors and the amazing team at Adobe Photoshop Express. 
You believed in me, challenged me, and gave me the space to grow. I’ll carry this experience with me for a long time.\n","date":"2024-08-12T13:23:39+05:30","permalink":"https://jhaakansha.github.io/p/more-than-just-photoshop-my-adobe-internship-experience/","title":"More Than Just Photoshop: My Adobe Internship Experience"},{"content":"Apache Kafka Apache Kafka is an open-source distributed event streaming platform. It provides a publish-subscribe messaging system that allows data to be exchanged between applications, servers, and processes. Kafka is written in Java and Scala. It is scalable and fault-tolerant, and organizes messages into topic-based partitioned FIFO queues.\nWhy Apache Kafka High Throughput: Deliver messages at network-limited throughput using a cluster of machines with latencies as low as 2ms. Scalable: Scale production clusters up to a thousand brokers, trillions of messages per day, petabytes of data, hundreds of thousands of partitions. Elastically expand and contract storage and processing. Permanent Storage: Store streams of data safely in a distributed, durable, fault-tolerant cluster. High Availability: Stretch clusters efficiently over availability zones or connect separate clusters across geographic regions. Built-in Stream Processing: Process streams of events with joins, aggregations, filters, transformations, and more, using event-time and exactly-once processing. Connect to almost anything: Kafka’s out-of-the-box Connect interface integrates with hundreds of event sources and event sinks including Postgres, JMS, Elasticsearch, AWS S3, and more. Resilient Architecture: A resilient design that copes with unusual complications in data sharing. What is event streaming? Event streaming is the process of capturing data in real time from event sources like databases, sensors, and mobile devices in the form of streams of events, and storing these events durably for later retrieval. 
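The topic/partition/offset model can be sketched as a toy in-memory commit log — purely an illustration of the concepts, not the Kafka client API (class and method names here are invented for the sketch):

```python
class Partition:
    """Toy append-only commit log: each message is identified by its offset."""
    def __init__(self):
        self.messages = []

    def append(self, message):
        self.messages.append(message)
        return len(self.messages) - 1  # offset of the newly appended message


class Topic:
    """Messages are grouped into topics; topics are split into partitions."""
    def __init__(self, name, num_partitions=2):
        self.name = name
        self.partitions = [Partition() for _ in range(num_partitions)]

    def produce(self, key, message):
        # Keyed messages hash to a fixed partition, so per-key order holds.
        part = hash(key) % len(self.partitions)
        return part, self.partitions[part].append(message)


def consume(topic, part, offset):
    """Read one message by (partition, offset); ordering is per-partition only."""
    return topic.partitions[part].messages[offset]


logs = Topic("logMessage")
p, off = logs.produce("sensor-1", "temp=21")
logs.produce("sensor-1", "temp=22")
```

Both messages share a key, land in the same partition, and are read back in FIFO order by offset — the same guarantee real Kafka gives within a partition.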
Kafka allows manipulating, processing, and reacting to the event streams in real time as well as retrospectively, routing the event streams to different destination technologies as needed. It ensures a continuous flow and interpretation of data so that the right information is at the right place at the right time. Examples include processing payments and financial transactions in real time; tracking and monitoring cars, trucks, fleets, and shipments; and continuously capturing and analyzing sensor data from IoT devices.\nWhat does it mean when it is said that Kafka is an event streaming platform? Kafka combines three key capabilities:\nTo publish (write) and subscribe (read) to streams of events, including continuous import/export of your data from other systems. To store streams of events durably and reliably for as long as you want. To process streams of events as they occur or retrospectively. Kafka Data Model The data model consists of messages and topics. Messages represent information such as lines in a log file, a row of stock market data, or an error message from the system. Messages are grouped into categories called topics, e.g., logMessage or stockMessage. Topics are divided into one or more partitions. A partition is equivalent to a commit log. Each partition contains an ordered set of messages. Each message is identified by its offset in the partition. Messages are added at one end of the partition and consumed at the other. Producers are processes that publish messages into a topic. Consumers are processes that receive the messages from a topic. Brokers are processes or servers within Kafka that process the messages. A Kafka cluster consists of a set of brokers that process the messages. Each machine in the cluster can run one broker. Brokers coordinate amongst each other using ZooKeeper. One broker acts as a leader for a partition and handles the delivery and persistence, while others act as followers. Partition Distribution Partitions can be distributed across Kafka clusters. 
Each Kafka server may handle one or more partitions. A partition can be replicated across several servers for fault tolerance. One server is marked as the leader for the partition and the others are marked as followers. The leader controls reads and writes for the partition, while the followers replicate the data. If a leader fails, one of the followers automatically becomes the leader. ZooKeeper is used for leader election. Three Major Components Kafka Core: A central hub to transport and store event streams in real time. Kafka Connect: A framework to import event streams from other source data systems into Kafka and export event streams from Kafka to destination data systems. Kafka Streams: A Java library to process event streams live as they occur. Architecture Apache Kafka follows a distributed, log-based architecture designed for scalability, durability, and fault tolerance.\nAt a high level, Kafka consists of Producers, Brokers, Topics with Partitions, and Consumers, all coordinated by ZooKeeper (legacy) or KRaft (modern Kafka).\nHigh-Level Kafka Architecture 1 2 3 4 5 6 7 8 9 10 11 12 graph LR Producer1 --\u0026gt;|Publish| Broker1 Producer2 --\u0026gt;|Publish| Broker2 Broker1 --\u0026gt; Topic Broker2 --\u0026gt; Topic Topic --\u0026gt; Broker1 Topic --\u0026gt; Broker2 Broker1 --\u0026gt;|Consume| Consumer1 Broker2 --\u0026gt;|Consume| Consumer2 Flow explanation: Producers publish messages to Kafka topics. Kafka brokers store these messages in partitions. Consumers fetch messages from brokers independently. Topic and Partition Architecture 1 2 3 4 5 6 7 8 9 graph TD Topic[\u0026#34;Kafka Topic\u0026#34;] Topic --\u0026gt; P0[\u0026#34;Partition 0\u0026#34;] Topic --\u0026gt; P1[\u0026#34;Partition 1\u0026#34;] Topic --\u0026gt; P2[\u0026#34;Partition 2\u0026#34;] P0 --\u0026gt; B1[\u0026#34;Broker 1\u0026#34;] P1 --\u0026gt; B2[\u0026#34;Broker 2\u0026#34;] P2 --\u0026gt; B3[\u0026#34;Broker 3\u0026#34;] Topics are split into partitions for parallelism. 
Each partition is an ordered, immutable log. Ordering is guaranteed only within a partition. Partition Replication and Fault Tolerance 1 2 3 graph LR Leader[\u0026#34;Leader Replica\u0026#34;] --\u0026gt; F1[\u0026#34;Follower Replica\u0026#34;] Leader --\u0026gt; F2[\u0026#34;Follower Replica\u0026#34;] Each partition has: One leader (handles all reads and writes) Multiple followers (replicate data) If the leader fails, a follower is automatically elected as the new leader. Consumer Groups Architecture 1 2 3 4 5 6 graph TD Topic --\u0026gt; P0[\u0026#34;Partition 0\u0026#34;] Topic --\u0026gt; P1[\u0026#34;Partition 1\u0026#34;] P0 --\u0026gt; C1[\u0026#34;Consumer 1\u0026#34;] P1 --\u0026gt; C2[\u0026#34;Consumer 2\u0026#34;] Consumers work together in consumer groups. Each partition is consumed by only one consumer per group. Enables horizontal scalability and parallel processing. Metadata and Coordination ZooKeeper (older Kafka versions) or KRaft (newer versions) is used for: Broker metadata management Leader election Cluster coordination Modern Kafka clusters use KRaft, removing the ZooKeeper dependency. Replication Uses the primary backup method of replication. One machine (one replica) is called a leader and is chosen as the primary. Remaining machines (replicas) are chosen as followers and act as backup. The leader propagates the writes to the followers. The leader waits until the writes are completed on all the replicas. If a replica is down, it is skipped for the write until it comes back. If a leader fails, one of the followers is chosen as new leader. This mechanism can handle n-1 failures where n is the replication factor. ","date":"2024-06-16T20:14:01+05:30","permalink":"https://jhaakansha.github.io/p/kafka/","title":"Kafka"},{"content":"Apache Spark Apache Spark is an open-source unified analytics engine designed for large-scale data processing. 
It provides an interface for programming entire clusters with built-in data parallelism and fault tolerance.\nBuilt on top of the Hadoop MapReduce framework, Spark enhances it by supporting a wider range of computations, including interactive queries and stream processing. It features its own cluster management system, but can also utilize Hadoop for storage and processing.\nSpark executes applications much faster—especially in-memory—by caching intermediate data, reducing disk I/O overhead. It offers built-in APIs for Java, Python, and Scala, and supports SQL queries, real-time streaming, machine learning, and graph computation.\nWhat is Spark Streaming? Spark Streaming is a component of Spark that enables scalable, high-throughput, fault-tolerant stream processing of live data streams.\nLeverages Spark Core’s fast scheduling capability. Ingests data in mini-batches. Applies RDD transformations to these mini-batches. Architecture of Spark Apache Spark follows a master-slave architecture, consisting of a single master node and multiple slave (worker) nodes. It is designed around two key abstractions:\nRDD (Resilient Distributed Dataset) — A distributed collection of data that can be processed in parallel. DAG (Directed Acyclic Graph) — Represents a logical execution plan composed of transformations. DAG (Directed Acyclic Graph) The DAG is a directed graph where:\nEach node is an RDD partition. Each edge represents a transformation on the data. The DAG execution model is composed of:\nDriver Program: The process that runs the main() function and creates the SparkContext object. Cluster Manager: Allocates resources across applications (e.g., YARN, Mesos, or Spark\u0026rsquo;s built-in manager). Worker Nodes: Execute application code assigned by the cluster manager. Executor: A process launched on a worker node to run tasks and store data in memory or disk. Each Spark application has its own executors. Task: A unit of work that is sent to one executor. 
RDD (Resilient Distributed Dataset) The RDD is the foundational data structure in Spark. It is:\nImmutable and distributed. A collection of records stored across multiple nodes. Able to contain any type of object: Python, Java, Scala, or even user-defined types. Key Characteristics Partitioned across the cluster. Read-only collections. Fault-tolerant using lineage (recomputation). Stored in-memory when possible; otherwise spilled to disk. Checkpointing is used to save RDDs to stable storage to prevent recomputation on node failure. Types of RDDs Parallelized Collections\nCreated by invoking SparkContext.parallelize() method.\nHadoop Datasets\nCreated from external storage systems like HDFS, S3, etc.\nRDD Operations RDDs support two kinds of operations:\nTransformations\nLazily evaluated. Create a new RDD from an existing one. Examples: map(), filter(), flatMap() Actions\nTrigger execution of RDD transformations. Return values to the driver or write to external storage. Examples: collect(), count(), saveAsTextFile() Apache Spark combines the flexibility of functional programming with the power of distributed computing, making it a go-to engine for big data analytics, real-time processing, and complex workflows.\n","date":"2024-06-12T13:13:17+05:30","permalink":"https://jhaakansha.github.io/p/spark/","title":"Spark"},{"content":"Understanding AI Personalities As artificial intelligence systems become increasingly integrated into daily life, their design goes beyond pure functionality. One of the most compelling and nuanced aspects of modern AI is personality—how an AI communicates, responds, and interacts with users in a way that feels human, empathetic, or distinctly machine-like.\nThis blog explores what AI personalities are, why they matter, and how they shape our interactions with technology.\nWhat is an AI Personality? An AI personality refers to the distinct tone, behavior, and communication style an AI system uses to interact with users. 
This can range from friendly and conversational to formal and task-oriented. Unlike static software, AI can adapt its tone based on context, making the interaction feel more natural or user-specific.\nKey Elements of AI Personality: Tone of Voice: Is the AI casual, professional, witty, or neutral? Response Style: Does it provide detailed explanations or brief answers? Empathy \u0026amp; Emotion: Does it acknowledge emotions or remain purely factual? Adaptability: Can it modify behavior based on user preferences or history? Why AI Personalities Matter AI personalities aren\u0026rsquo;t just aesthetic choices—they directly affect user experience, engagement, and trust.\n1. User Comfort \u0026amp; Trust A friendly and relatable tone can make users feel more at ease, especially in high-stress scenarios like mental health chatbots or customer service.\n2. Brand Identity Companies can embed their brand\u0026rsquo;s tone into AI, maintaining consistency across human and machine interactions.\n3. Clarity \u0026amp; Usability A well-designed personality can make technical responses more accessible by tailoring the communication to the user\u0026rsquo;s understanding.\n4. Engagement Users are more likely to continue interacting with an AI that feels responsive, helpful, and human-aware.\nTypes of AI Personalities While AI personalities vary widely, they often fall into a few recognizable categories:\nAssistant: Professional, neutral, focused on productivity (e.g., Siri, Google Assistant). Companion: Friendly, empathetic, designed for conversation or emotional support. Expert: Direct, factual, and technical—common in educational or enterprise tools. Playful: Humorous, creative, and sometimes quirky—used in entertainment or casual contexts. The Ethical Dimension Creating AI personalities also introduces ethical questions:\nShould an AI express emotions it doesn\u0026rsquo;t \u0026ldquo;feel\u0026rdquo;? Can over-humanized AI mislead users about its capabilities? 
How do we ensure inclusivity and avoid bias in AI tone or language? These are ongoing challenges in AI development that require careful consideration of psychology, design, and user diversity. The Future of AI Personalities As AI continues to evolve, future systems will likely support customizable personalities, allowing users to choose how their AI sounds and behaves. This personalization could extend from tone and gender to cultural context and even humor level.\nWe may also see AI personalities evolve dynamically over time—learning from interactions, adapting to individual preferences, and becoming more contextually aware.\nFinal Thoughts AI personalities represent a powerful blend of technology and human-centered design. By shaping how we interact with machines, they influence not only usability but also emotional connection and trust. As we move forward, designing AI with intentional, ethical, and inclusive personalities will be key to building systems that are not only intelligent—but genuinely engaging.\n","date":"2024-04-09T14:27:54+05:30","permalink":"https://jhaakansha.github.io/p/understanding-ai-personalities/","title":"Understanding AI Personalities"},{"content":"Understanding SQL Joins: A Quick Guide When working with relational databases, SQL joins allow you to combine rows from two or more tables based on a related column. Here\u0026rsquo;s a quick breakdown of the most common types of joins:\nINNER JOIN Returns: Only rows that have matching values in both tables.\n1 2 3 SELECT * FROM orders INNER JOIN customers ON orders.customer_id = customers.id; Use it when you want to retrieve only the data that exists in both tables.\nLEFT JOIN (or LEFT OUTER JOIN) Returns: All rows from the left table, and the matched rows from the right table. 
If no match, returns NULL on the right side.\n1 2 3 SELECT * FROM customers LEFT JOIN orders ON customers.id = orders.customer_id; Use it when you want all records from the left table, regardless of matches.\nRIGHT JOIN (or RIGHT OUTER JOIN) Returns: All rows from the right table, and the matched rows from the left table. If no match, returns NULL on the left side.\n1 2 3 SELECT * FROM orders RIGHT JOIN customers ON orders.customer_id = customers.id; Use it when you want all records from the right table, with or without matches.\nFULL JOIN (or FULL OUTER JOIN) Returns: All rows from both tables, whether or not there is a match in the other table. Non-matching rows will have NULL where appropriate.\n1 2 3 SELECT * FROM customers FULL OUTER JOIN orders ON customers.id = orders.customer_id; Use it when you want to see everything from both tables, matched or not.\nCROSS JOIN Returns: The Cartesian product of the two tables (every combination of rows).\n1 2 3 SELECT * FROM products CROSS JOIN categories; Use with caution - it can return a very large number of rows.\nSELF JOIN Returns: A join of a table to itself, useful for hierarchical or related data within one table.\n1 2 3 SELECT A.name AS Employee, B.name AS Manager FROM employees A JOIN employees B ON A.manager_id = B.id; Use it when you need to compare rows within the same table.\n","date":"2023-07-02T22:23:59+05:30","permalink":"https://jhaakansha.github.io/p/understanding-sql-joins/","title":"Understanding SQL Joins"},{"content":"Augmented Reality in Everyday Life: How AR Can Transform Various Industries and Personal Experiences Augmented Reality (AR) is no longer a speculative concept reserved for sci-fi movies and gaming niches. With the evolution of spatial computing, 5G connectivity, edge processing, and wearables like AR glasses, this technology is stepping into the mainstream. 
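The INNER and LEFT join behaviors from the SQL guide can be verified end-to-end with Python's built-in sqlite3 module (table, column, and row values here are illustrative; SQLite only added RIGHT/FULL JOIN in recent versions, so this sketch sticks to the two that work everywhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, item TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 'Keyboard');  -- only Ada has an order
""")

# INNER JOIN: only rows with matching values in both tables.
inner = conn.execute("""
    SELECT customers.name, orders.item FROM customers
    INNER JOIN orders ON customers.id = orders.customer_id
""").fetchall()

# LEFT JOIN: every customer; NULL (None in Python) where no order matches.
left = conn.execute("""
    SELECT customers.name, orders.item FROM customers
    LEFT JOIN orders ON customers.id = orders.customer_id
""").fetchall()
conn.close()
```

Ada appears in both results, while Grace (who has no order) appears only in the LEFT JOIN result, paired with `None`.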
From enterprise applications to personal productivity and entertainment, AR is poised to become a foundational layer of our daily digital interaction.\nIn this article, we’ll explore how AR is transforming various industries and enhancing personal experiences, supported by current use cases and the technologies making it possible.\nWhat is Augmented Reality? Augmented Reality superimposes digital content (images, sounds, text) onto the physical world in real time. Unlike Virtual Reality (VR), which creates a fully immersive digital environment, AR enhances the user\u0026rsquo;s perception of their actual surroundings using devices like smartphones, tablets, and AR headsets.\nAR in Industry: Sector by Sector Breakdown 1. Retail and E-Commerce Virtual Try-Ons: Using facial and body tracking, consumers can virtually try on clothes, eyewear, or makeup through apps like Sephora Virtual Artist and Warby Parker. AR Showrooms: IKEA Place and Wayfair use AR to allow customers to visualize furniture in their homes at scale. Impact: Increases conversion rates, reduces return rates, and enhances personalization through spatial analytics and AI-driven recommendations. Image Source: AuGray\n2. Healthcare Surgical Assistance: AR overlays critical data during procedures. Systems like Microsoft HoloLens in conjunction with Medivis’s SurgicalAR provide real-time anatomical guidance. Patient Education: AR can explain diagnoses visually, helping patients better understand procedures and treatments. Rehabilitation and Therapy: Gamified AR experiences are being used in cognitive and physical rehab (e.g., MindMaze, XRHealth). Image Source: Future Scape\n3. Education and Training Immersive Learning: Apps like JigSpace and Merge EDU bring interactive, 3D learning to classrooms, from biology dissections to mechanical engineering. Enterprise Training: Boeing and UPS use AR to train workers on complex equipment with step-by-step holographic instructions. 
AR + AI Integration: Smart learning environments adapt in real time to learner performance using computer vision and adaptive algorithms. Image Source: The Tech Edvocate\n4. Manufacturing and Logistics Remote Assistance: AR enables remote experts to provide real-time guidance to on-site workers via smart glasses (e.g., Vuzix, RealWear). Warehouse Navigation: AR glasses can direct pickers to products efficiently, reducing errors and increasing throughput. Digital Twins: AR overlays real-time IoT sensor data on physical machines to aid in predictive maintenance and operational monitoring. Image Source: Primeclick Media\n5. Real Estate and Architecture 3D Visualization: AR allows clients to walk through unbuilt properties using mobile devices or headsets like Magic Leap 2. Remote Collaboration: Teams can annotate physical environments during site inspections, enabling asynchronous feedback and faster iteration cycles. Geo-anchored Content: Real-world coordinates are used to anchor AR blueprints or interior design plans for in-situ visualization. Image Source: Nextech AR\nAR in Personal Life: Everyday Applications 1. Navigation and Travel AR Wayfinding: Apps like Google Maps AR provide heads-up, turn-by-turn directions using the phone’s camera feed. Cultural Overlay: Museums, landmarks, and tourist destinations use AR to enhance experiences with interactive, multilingual info layers. Image Source: Primeclick Media\n2. Home Improvement and DIY Smart Measurements: LiDAR-enabled AR apps (e.g., RoomScan, MeasureKit) allow for accurate room scanning and object sizing. AR Design Tools: Users can preview wall colors, tile layouts, or new fixtures in real time. Image Source: Real Simple\n3. Fitness and Wellness Form Correction: AR fitness mirrors and apps track body posture to offer real-time corrective feedback. Mindfulness and AR Art: Immersive AR experiences like TRIPP use visual effects overlaid in the real world to guide meditation. Image Source: The Tech Edvocate\n4. 
Social and Entertainment Snapchat and TikTok Lenses: AR filters go far beyond visual effects; they create new ways to interact, play, and even perform. AR Gaming: Games like Pokémon GO and the upcoming Niantic AR titles fuse digital gameplay with physical movement and exploration. ","date":"2023-06-27T14:40:37+05:30","permalink":"https://jhaakansha.github.io/p/augmented-reality-in-everyday-life/","title":"Augmented Reality in Everyday Life"},{"content":"The Quiet Revolution: How Edge Computing Is Reshaping the Internet In a world increasingly defined by data, speed, and connectivity, a subtle yet powerful shift is occurring in the tech landscape—edge computing is stepping out of the shadows and into the mainstream. While cloud computing has dominated the past decade, edge computing is positioning itself as the next frontier in building faster, smarter, and more responsive digital experiences.\nWhat Is Edge Computing? At its core, edge computing brings computation and data storage closer to the devices where data is generated—be it IoT sensors, smartphones, autonomous vehicles, or industrial machines. Instead of sending all that data to a centralized cloud for processing, edge computing handles much of the work locally, or at nearby \u0026ldquo;edge\u0026rdquo; servers.\nThis shift is all about latency, bandwidth, and real-time performance. By reducing the physical distance data must travel, edge computing enables faster decisions, less network congestion, and more reliable applications—critical for use cases like self-driving cars, AR/VR, and remote medical diagnostics.\nWhy Now? Several converging trends are accelerating the adoption of edge computing:\nIoT Explosion: Billions of connected devices are generating vast volumes of data that need instant analysis. 5G Rollout: The ultra-low latency of 5G makes edge computing even more potent, allowing real-time communication between devices and edge nodes. 
AI at the Edge: Advances in machine learning are allowing powerful inference models to run on smaller devices, enabling intelligent decision-making on the spot. Use Cases in Action Smart Cities: Edge devices in traffic systems can process data locally to optimize signals in real-time and reduce congestion. Healthcare: Wearable devices can monitor patients continuously and trigger alerts without needing to rely on cloud connectivity. Retail: Stores can use edge analytics to personalize customer experiences or detect theft without uploading hours of video footage to the cloud. Challenges Ahead Edge computing isn’t without hurdles. Security at the edge is more complex—there are more endpoints to protect. Data management, device interoperability, and maintaining consistent updates across dispersed hardware also present significant operational challenges.\nThe Road Ahead As the world becomes more connected and latency-sensitive, edge computing will become a cornerstone of digital infrastructure. It won’t replace the cloud but will complement it, enabling a hybrid architecture where the cloud remains the brain and the edge becomes the reflexes.\nIn a future dominated by AI, automation, and data-driven insights, edge computing is no longer a luxury—it’s a necessity.\n","date":"2023-06-19T10:59:34+05:30","permalink":"https://jhaakansha.github.io/p/the-quiet-revolution-how-edge-computing-is-reshaping-the-internet/","title":"The Quiet Revolution: How Edge Computing Is Reshaping the Internet"},{"content":"ChatGPT, a conversational AI developed by OpenAI, has captivated users with its ability to listen, learn, and respond with human-like conversations. Launched in November 2022, ChatGPT is built on the GPT-3.5 series of OpenAI’s Generative Pre-trained Transformer architecture, one of the largest and most advanced families of language models available. 
Fine-tuned using both supervised and reinforcement learning techniques, ChatGPT is designed to understand instructions and provide detailed, context-aware responses. It is also a sibling model to InstructGPT, which focuses specifically on following instructions to generate detailed and accurate outputs.\nTraining Method ChatGPT’s development involved a robust training process, leveraging Reinforcement Learning from Human Feedback (RLHF), a method also used to train InstructGPT, with some variations in the data collection process.\nInitially, the model was trained through supervised fine-tuning, where human AI trainers engaged in conversations, taking on both the role of the user and the AI assistant. Trainers had access to model-generated suggestions, which helped them refine their responses. This dialogue dataset was then merged with the InstructGPT dataset and transformed into a conversational format.\nTo fine-tune the model further, a reward model was created: AI trainers ranked different model responses by quality, and these comparisons were used to train the reward model. The language model was then optimized against this reward model using Proximal Policy Optimization (PPO) over multiple iterations, leading to the refined version of ChatGPT that we use today.\nIt\u0026rsquo;s important to note that ChatGPT is built on the GPT-3.5 model, which concluded training in early 2022, and it benefited from continued fine-tuning to improve its conversational abilities.\nLimitations While ChatGPT is an impressive AI, it’s not without its limitations. Here are some of the key challenges:\nInaccurate or Nonsensical Answers: ChatGPT sometimes produces answers that sound plausible but are either incorrect or nonsensical. This issue arises because, during the RL training, there is no definitive source of truth. 
Additionally, training the model to be more cautious may cause it to avoid answering questions it could otherwise answer correctly.\nSensitivity to Input Variations: The model can be overly sensitive to slight changes in phrasing. For example, a user might ask the same question in different ways, and while one phrasing might elicit an incorrect response, a minor rewording could yield the correct answer.\nExcessive Verbosity: ChatGPT can sometimes be excessively verbose, repeating phrases like “I’m a language model trained by OpenAI” in ways that can feel redundant. This is partly due to the training process, where longer, more detailed answers are often favored.\nAmbiguity Handling: Ideally, ChatGPT would ask clarifying questions when faced with ambiguous queries. Instead, the model often makes educated guesses about what the user intended, which can result in less accurate responses.\nBias and Harmful Instructions: Despite efforts to make the model refuse inappropriate requests, there are still instances where it may respond to harmful or biased instructions. OpenAI uses a Moderation API to flag or block unsafe content, but this system can still generate false positives and negatives.\nConclusion ChatGPT represents a remarkable leap in conversational AI technology, powered by advanced machine learning and reinforcement learning techniques. While it shows immense potential, there are still challenges that need to be addressed to improve its accuracy, reduce verbosity, and better handle ambiguity. 
As OpenAI continues to refine and evolve its models, we can expect even more powerful and reliable versions of ChatGPT to emerge, further pushing the boundaries of AI and its real-world applications.\n","date":"2023-03-20T04:49:14+05:30","permalink":"https://jhaakansha.github.io/p/exploring-gpt-4-the-next-step-in-conversational-ai/","title":"Exploring GPT-4: The Next Step in Conversational AI"},{"content":"The Digital Divide: Exploring the Gap in Technology Access and Its Societal Impacts Technology has revolutionized nearly every aspect of modern life—from communication and commerce to education and healthcare. Yet, not everyone shares in these advancements equally. This disparity, known as the digital divide, refers to the gap between individuals, households, businesses, and geographic areas at different socio-economic levels in terms of their access to information and communication technologies (ICTs) and their ability to use the internet effectively.\nAs we grow increasingly dependent on digital tools, the consequences of this divide become more pronounced and far-reaching.\nWhat Is the Digital Divide? The digital divide is typically defined across three main dimensions:\nAccess Divide – The lack of physical access to internet and devices. Use Divide – Differences in the ability to effectively use digital technologies. Quality Divide – Variations in connection speed, device quality, and technological support. It’s not just a matter of having Wi-Fi or not. The divide spans across income, education, age, geography, and ability.\nGlobal Perspective: A Snapshot High-income countries like the U.S., South Korea, and Germany boast internet penetration rates above 90%. Low-income countries such as those in Sub-Saharan Africa often have penetration rates below 30%, with rural areas even lower. Gender gap: In many regions, women are significantly less likely than men to have access to digital tools, especially in patriarchal societies. 
Urban vs Rural: Rural communities face higher costs and fewer infrastructure investments. Source: Our World In Data\nImpacts of the Digital Divide 1. Education Inequity During the COVID-19 pandemic, the shift to remote learning exposed deep digital disparities:\nStudents without home internet or suitable devices fell behind their peers. Rural schools struggled with connectivity infrastructure. Digital illiteracy limited engagement in online platforms. \u0026ldquo;Homework gap\u0026rdquo; is a term that emerged to describe students unable to complete assignments due to poor internet access.\n2. Economic Disparities Lack of digital access restricts job opportunities, especially in the remote and gig economy. Small businesses without e-commerce capabilities lose competitive ground. Government benefits, job applications, and essential services are increasingly online-only. 3. Healthcare Access Telemedicine, digital health records, and remote monitoring rely on robust internet. Those without access face delays or barriers to essential health services. 4. Civic and Social Participation E-government services, voter registration, and community engagement are increasingly digital. Social isolation worsens for those disconnected from communication tools like messaging apps and social media. Root Causes of the Divide Infrastructure: Lack of broadband and cellular infrastructure in rural/remote areas. Affordability: High costs of data plans, devices, and service subscriptions. Digital Literacy: Lack of skills to navigate and utilize digital platforms safely and effectively. Policy and Regulation: Lack of inclusive policies or incentives to expand access to underserved areas. Bridging the Gap: Solutions and Innovations 1. Public-Private Partnerships Companies like Starlink, Google (Loon), and Microsoft (Airband) are working to expand rural broadband via satellite, balloons, and TV white space technologies.\n2. 
Community Wi-Fi and Municipal Networks Local governments in cities like Chattanooga, TN and Barcelona, Spain have deployed public internet to underserved communities.\n3. Device Subsidy and Refurbishment Programs Initiatives like One Laptop per Child and nonprofit organizations such as PCs for People help provide low-cost or free devices.\n4. Digital Literacy Programs Public libraries, NGOs, and universities offer free courses on basic computer skills, cybersecurity, and digital job training.\nPolicy Recommendations National Broadband Strategies with clear targets for universal access. Subsidized Connectivity programs for low-income households. Support for Local Innovation, especially in rural areas. Inclusive Design Standards to address accessibility for disabled and elderly populations. Data Transparency from ISPs to track underserved areas. The Human Side of the Divide Behind every statistic is a story:\nA student doing homework in a fast-food parking lot for free Wi-Fi. An elderly person unable to schedule a vaccine due to a lack of internet. A refugee missing job opportunities because of limited device access. Closing the digital divide isn’t just a technological challenge—it’s a moral imperative.\nConclusion As the digital world continues to evolve, the divide between those who are connected and those who are not becomes a defining issue of social justice and economic development. Addressing the digital divide requires a multi-faceted approach—technological, educational, economic, and political.\nIt’s time to treat internet access not as a luxury, but as a fundamental right in the digital age.\nWant to explore more tech equity topics? 
Subscribe to the newsletter for updates on digital inclusion, policy, and innovation.\n","date":"2023-01-07T15:14:16+05:30","permalink":"https://jhaakansha.github.io/p/the-digital-divide/","title":"The Digital Divide"},{"content":"A queue is an ordered list in which insertions are done at one end (rear) and deletions are done at the other end (front). The first element to be inserted is the first element to be deleted, i.e. it follows the FIFO (First In First Out) principle. Queues are commonly implemented using circular arrays or linked lists.\nMain queue operations Enqueue: Inserts an element at the end of the queue. Dequeue: Removes and returns the element at the front of the queue. Auxiliary queue operations Front: Returns the element at the front without removing it. QueueSize: Returns the number of elements stored. IsEmptyQueue: Indicates whether no elements are stored. Simple Circular Array Implementation Elements are added circularly and two variables are used to keep track of the start element (front) and the element at the end (rear).\nThe array may become full and an enqueue operation will throw a full queue exception. 
Similarly, deleting an element will throw an empty queue exception.\nstruct arrayQueue { int front, rear; int capacity; int *array; }; struct arrayQueue *createQueue(int size) { struct arrayQueue *Q = malloc(sizeof(struct arrayQueue)); if (!Q) { return NULL; } Q-\u0026gt;capacity = size; Q-\u0026gt;front = Q-\u0026gt;rear = -1; Q-\u0026gt;array = malloc(Q-\u0026gt;capacity*sizeof(int)); if (!Q-\u0026gt;array) { free(Q); return NULL; } return Q; } int isEmptyQueue(struct arrayQueue *Q) { return (Q-\u0026gt;front == -1); } int isFullQueue(struct arrayQueue *Q) { return (((Q-\u0026gt;rear + 1) % Q-\u0026gt;capacity) == Q-\u0026gt;front); } int queueSize(struct arrayQueue *Q) { if (isEmptyQueue(Q)) { return 0; } if (isFullQueue(Q)) { return Q-\u0026gt;capacity; } return ((Q-\u0026gt;rear - Q-\u0026gt;front + Q-\u0026gt;capacity) % Q-\u0026gt;capacity) + 1; } void Enqueue(struct arrayQueue *Q, int data) { if (isFullQueue(Q)) { printf(\u0026#34;Queue Overflow\u0026#34;); } else { Q-\u0026gt;rear = (Q-\u0026gt;rear + 1) % Q-\u0026gt;capacity; Q-\u0026gt;array[Q-\u0026gt;rear] = data; if (Q-\u0026gt;front == -1) { Q-\u0026gt;front = Q-\u0026gt;rear; } } } int Dequeue(struct arrayQueue *Q) { int data = 0; if (isEmptyQueue(Q)) { printf(\u0026#34;Queue is empty\u0026#34;); return 0; } else { data = Q-\u0026gt;array[Q-\u0026gt;front]; if (Q-\u0026gt;front == Q-\u0026gt;rear) { Q-\u0026gt;front = Q-\u0026gt;rear = -1; } else { Q-\u0026gt;front = (Q-\u0026gt;front + 1) % Q-\u0026gt;capacity; } } return data; } void deleteQueue(struct arrayQueue *Q) { if (Q) { if (Q-\u0026gt;array) { free(Q-\u0026gt;array); } free(Q); } } Limitations The maximum size of the queue must be defined at the beginning and cannot be changed. 
Trying to enqueue a new element into a full queue causes an implementation-specific exception.\nDynamic Circular Array Implementation In the dynamic array implementation of a queue, the queue doubles in size each time an enqueue operation is called on a full queue.\nstruct DynArrayQueue { int front, rear; int capacity; int *array; }; struct DynArrayQueue *createQueue() { struct DynArrayQueue *Q = malloc(sizeof(struct DynArrayQueue)); if (!Q) { return NULL; } Q-\u0026gt;capacity = 1; Q-\u0026gt;front = Q-\u0026gt;rear = -1; Q-\u0026gt;array = malloc(Q-\u0026gt;capacity*sizeof(int)); if (!Q-\u0026gt;array) { free(Q); return NULL; } return Q; } int isEmptyQueue(struct DynArrayQueue *Q) { return (Q-\u0026gt;front == -1); } int isFullQueue(struct DynArrayQueue *Q) { return ((Q-\u0026gt;rear + 1) % Q-\u0026gt;capacity == Q-\u0026gt;front); } int queueSize(struct DynArrayQueue *Q) { if (isEmptyQueue(Q)) { return 0; } if (isFullQueue(Q)) { return Q-\u0026gt;capacity; } return ((Q-\u0026gt;rear - Q-\u0026gt;front + Q-\u0026gt;capacity) % Q-\u0026gt;capacity) + 1; } void resizeQueue(struct DynArrayQueue *Q) { int size = Q-\u0026gt;capacity; Q-\u0026gt;capacity = Q-\u0026gt;capacity*2; Q-\u0026gt;array = realloc(Q-\u0026gt;array, Q-\u0026gt;capacity*sizeof(int)); if (!Q-\u0026gt;array) { printf(\u0026#34;Memory error\u0026#34;); return; } if (Q-\u0026gt;front \u0026gt; Q-\u0026gt;rear) { for (int i = 0; i \u0026lt;= Q-\u0026gt;rear; i++) { Q-\u0026gt;array[i + size] = Q-\u0026gt;array[i]; } Q-\u0026gt;rear = Q-\u0026gt;rear + size; } } void Enqueue(struct DynArrayQueue *Q, int data) { if (isFullQueue(Q)) { resizeQueue(Q); } Q-\u0026gt;rear = (Q-\u0026gt;rear + 1) % Q-\u0026gt;capacity; Q-\u0026gt;array[Q-\u0026gt;rear] = data; if (Q-\u0026gt;front == -1) { Q-\u0026gt;front = Q-\u0026gt;rear; } } int Dequeue(struct DynArrayQueue *Q) { int data = 0; if (isEmptyQueue(Q)) { printf(\u0026#34;Queue is empty\u0026#34;); return 0; } else { data = Q-\u0026gt;array[Q-\u0026gt;front]; if (Q-\u0026gt;front == Q-\u0026gt;rear) { Q-\u0026gt;front = Q-\u0026gt;rear = -1; } else { 
Q-\u0026gt;front = (Q-\u0026gt;front + 1) % Q-\u0026gt;capacity; } } return data; } void deleteQueue(struct DynArrayQueue *Q) { if (Q) { if (Q-\u0026gt;array) { free(Q-\u0026gt;array); } free(Q); } } Linked List Implementation\nstruct ListNode { int data; struct ListNode *next; }; struct queue { struct ListNode *front; struct ListNode *rear; }; struct queue *createQueue() { struct queue *Q = malloc(sizeof(struct queue)); if (!Q) { return NULL; } Q-\u0026gt;front = Q-\u0026gt;rear = NULL; return Q; } int isEmptyQueue(struct queue *Q) { return (Q-\u0026gt;front == NULL); } void Enqueue(struct queue *Q, int data) { struct ListNode *newNode = malloc(sizeof(struct ListNode)); if (!newNode) { return; } newNode-\u0026gt;data = data; newNode-\u0026gt;next = NULL; if (Q-\u0026gt;rear != NULL) { Q-\u0026gt;rear-\u0026gt;next = newNode; } Q-\u0026gt;rear = newNode; if (Q-\u0026gt;front == NULL) { Q-\u0026gt;front = Q-\u0026gt;rear; } } int Dequeue(struct queue *Q) { int data = 0; struct ListNode *temp; if (isEmptyQueue(Q)) { printf(\u0026#34;Queue is empty\u0026#34;); return 0; } else { temp = Q-\u0026gt;front; data = Q-\u0026gt;front-\u0026gt;data; Q-\u0026gt;front = Q-\u0026gt;front-\u0026gt;next; if (Q-\u0026gt;front == NULL) { Q-\u0026gt;rear = NULL; } free(temp); } return data; } void deleteQueue(struct queue *Q) { struct ListNode *temp; if (!Q) { return; } while (Q-\u0026gt;front != NULL) { temp = Q-\u0026gt;front; Q-\u0026gt;front = Q-\u0026gt;front-\u0026gt;next; free(temp); } free(Q); } ","date":"2022-11-27T13:23:35+05:30","permalink":"https://jhaakansha.github.io/p/queue/","title":"Queue"},{"content":"CREATE REPOSITORIES New repositories can be created locally or by cloning a repository that already exists on GitHub.\n$ git init - Turn an existing directory into a Git repository. 
$ git clone [url] - Clone a repository that already exists on GitHub, including all the files, branches and commits. CONFIGURE Configure user information for all local repositories.\n$ git config --global user.name \u0026#34;[name]\u0026#34; - Sets the name you want attached to your commit transactions. $ git config --global user.email \u0026#34;[email-address]\u0026#34; - Sets the email you want attached to your commit transactions. $ git config --global color.ui auto - Enables helpful colorization of command line output. BRANCHES All commits are made to the branch you currently have checked out.\n$ git branch [branch-name] - Creates a new branch. $ git checkout [branch-name] - Switches to the specified branch and updates the working directory. $ git checkout -b [branch-name] - Creates a new branch, switches to it and updates the working directory. $ git merge [branch-name] - Combines the specified branch’s history into the current branch. This is usually done in pull requests. $ git branch -d [branch-name] - Deletes the specified branch. SYNCHRONIZE CHANGES Synchronize your local repository with the remote repository on GitHub.\n$ git push - Uploads all local branch commits to GitHub. $ git merge - Combines the remote tracking branch into the current local branch. $ git fetch - Downloads all history from the remote tracking branches. $ git pull - Updates your current local working branch with all new commits from the corresponding remote branch on GitHub. git pull is a combination of git fetch and git merge. MAKE CHANGES Browse and inspect the evolution of project files.\n$ git diff [first-branch]...[second-branch] - Shows content differences between two branches. $ git log --follow [file] - Lists version history for a file, including renames. $ git log - Lists version history for the current branch. 
REDO COMMITS Erase mistakes and craft replacement history.\n$ git reset [commit] - Undoes all commits after [commit], preserving changes locally. $ git reset --hard [commit] - Discards all history and changes back to the specified commit. ","date":"2022-11-11T13:23:31+05:30","permalink":"https://jhaakansha.github.io/p/git-cheatsheet/","title":"Git Cheatsheet"},{"content":"A stack is an ordered list in which insertion and deletion are done at one end called the \u0026ldquo;top\u0026rdquo;.\nMain Stack Operations Push: Inserts data onto the stack. Pop: Removes and returns the last inserted data from the stack. Auxiliary Stack Operations Top: Returns the last inserted element without removing it. Size: Returns the number of elements stored in the stack. IsEmptyStack: Checks whether the stack is empty or not. IsFullStack: Checks whether the stack is full or not. Simple Array Implementation This implementation of a stack uses a simple array. We add elements from left to right and use a variable to keep track of the index of the top element. If the array is full, a push operation will throw a full stack exception. 
Similarly, if we try to delete an element from an empty array, it will throw a stack empty exception.\nstruct StackArr { int top; int capacity; int *array; }; struct StackArr *CreateStack(int size) { struct StackArr *S = malloc(sizeof(struct StackArr)); if (!S) { return NULL; } S-\u0026gt;capacity = size; S-\u0026gt;top = -1; S-\u0026gt;array = malloc(S-\u0026gt;capacity*sizeof(int)); if (!S-\u0026gt;array) { free(S); return NULL; } return S; } int IsEmptyStack(struct StackArr *S) { return (S-\u0026gt;top == -1); } int IsFullStack(struct StackArr *S) { return (S-\u0026gt;top == S-\u0026gt;capacity - 1); } void Push(struct StackArr *S, int data) { if (IsFullStack(S)) { printf(\u0026#34;Stack Overflow\u0026#34;); } else { S-\u0026gt;top++; S-\u0026gt;array[S-\u0026gt;top] = data; } } int Pop(struct StackArr *S) { if (IsEmptyStack(S)) { printf(\u0026#34;Stack is empty\u0026#34;); return 0; } else { return (S-\u0026gt;array[S-\u0026gt;top--]); } } Performance Space Complexity (for n push operations): O(n). Time Complexity of Push(): O(1). Time Complexity of Pop(): O(1). Time Complexity of Size(): O(1). Time Complexity of IsEmptyStack(): O(1). Time Complexity of IsFullStack(): O(1). However, the maximum size of the array must be defined at the beginning and cannot be changed. Trying to push a new element into a full stack causes an implementation-specific exception. Therefore, a simple array implementation is not ideal and hence dynamic array implementation is preferred.\nDynamic Array Implementation In this approach, if the array is full, we create a new array of twice the size and copy the existing items. 
With this approach, pushing n items takes time proportional to n.\nstruct DynArr { int top; int capacity; int *array; }; struct DynArr *CreateStack() { struct DynArr *S = malloc(sizeof(struct DynArr)); if (!S) { return NULL; } S-\u0026gt;capacity = 1; S-\u0026gt;top = -1; S-\u0026gt;array = malloc(S-\u0026gt;capacity*sizeof(int)); if (!S-\u0026gt;array) { free(S); return NULL; } return S; } int IsFullStack(struct DynArr *S) { return (S-\u0026gt;top == S-\u0026gt;capacity - 1); } void DoubleStack(struct DynArr *S) { S-\u0026gt;capacity *= 2; S-\u0026gt;array = realloc(S-\u0026gt;array, S-\u0026gt;capacity*sizeof(int)); } void Push(struct DynArr *S, int x) { if (IsFullStack(S)) { DoubleStack(S); } S-\u0026gt;array[++S-\u0026gt;top] = x; } int IsEmptyStack(struct DynArr *S) { return S-\u0026gt;top == -1; } int Top(struct DynArr *S) { if (IsEmptyStack(S)) { return INT_MIN; } return S-\u0026gt;array[S-\u0026gt;top]; } int Pop(struct DynArr *S) { if (IsEmptyStack(S)) { return INT_MIN; } return S-\u0026gt;array[S-\u0026gt;top--]; } Performance Space Complexity (for n push operations): O(n). Time Complexity of Push(): O(1) (amortized). Time Complexity of Pop(): O(1). Time Complexity of Size(): O(1). Time Complexity of IsEmptyStack(): O(1). Time Complexity of IsFullStack(): O(1). The limitation of this implementation of stack is that too many doublings may cause a memory overflow exception.\nLinked List Implementation Another way of implementing stacks is by using linked lists. Push operation is implemented by inserting the incoming element at the beginning of the list. 
Pop operation is implemented by deleting the node from the beginning.\nstruct Stack { int data; struct Stack *next; }; struct Stack *CreateStack() { return NULL; } void Push(struct Stack **top, int data) { struct Stack *temp = malloc(sizeof(struct Stack)); if (!temp) { return; } temp-\u0026gt;data = data; temp-\u0026gt;next = *top; *top = temp; } int IsEmptyStack(struct Stack *top) { return top == NULL; } int Pop(struct Stack **top) { int data; struct Stack *temp; if (IsEmptyStack(*top)) { return INT_MIN; } temp = *top; *top = (*top)-\u0026gt;next; data = temp-\u0026gt;data; free(temp); return data; } int Top(struct Stack *top) { if (IsEmptyStack(top)) { return INT_MIN; } return top-\u0026gt;data; } Performance Space Complexity (for n push operations): O(n). Time Complexity of Push(): O(1). Time Complexity of Pop(): O(1). Time Complexity of Size(): O(1). Time Complexity of IsEmptyStack(): O(1). Time Complexity of IsFullStack(): O(1). Comparing Array Implementation and Linked List Implementation Array Implementation\nOperations take constant time. Expensive doubling operations every once in a while. Any sequence of n operations takes time proportional to n. Linked List Implementation\nGrows and shrinks gracefully. Every operation takes constant time O(1). Every operation uses extra time and space to deal with references. ","date":"2022-11-05T16:23:59+05:30","permalink":"https://jhaakansha.github.io/p/stack/","title":"Stack"},{"content":"A linked list is a list or chain of items where each item points to the next one in the list. Each item in a linked list is called a node. Each node contains the data and the location of the next item.\nPROPERTIES Successive elements are connected by pointers. Last element points to NULL. Can grow or shrink in size during execution of a program. Can be made just as long as required. 
It does not waste memory space (but takes some extra memory for pointers). Disadvantages Large access time to individual elements. An advantage of arrays is spatial locality in memory: arrays are defined as contiguous blocks of memory, so any array element is physically near its neighbours, which greatly benefits from modern CPU caching. Linked lists can be hard to manipulate. Use extra memory for reference pointers. SINGLY LINKED LIST Type declaration\nstruct listNode { int data; struct listNode *next; }; Traversing the list Time: O(n) Space: O(1)\nint listLength(struct listNode *head) { struct listNode *current = head; int count = 0; while (current != NULL) { count++; current = current-\u0026gt;next; } return count; } Inserting an element Time: O(n) Space: O(1)\nvoid insert(struct listNode **head, int data, int position) { int k; struct listNode *q, *newNode; newNode = (struct listNode*)malloc(sizeof(struct listNode)); if (!newNode) { printf(\u0026#34;Memory Error\u0026#34;); return; } newNode-\u0026gt;data = data; if (position == 1 || *head == NULL) { newNode-\u0026gt;next = *head; *head = newNode; return; } q = *head; for (k = 1; (k \u0026lt; position - 1) \u0026amp;\u0026amp; (q-\u0026gt;next != NULL); k++) { q = q-\u0026gt;next; } newNode-\u0026gt;next = q-\u0026gt;next; q-\u0026gt;next = newNode; } Deleting a node\nvoid deleteNode(struct listNode **head, int position) { int k = 1; struct listNode *p, *q; if (*head == NULL) { printf(\u0026#34;List Empty\u0026#34;); return; } p = *head; if (position == 1) { *head = (*head)-\u0026gt;next; free(p); return; } else { q = *head; while ((p != NULL) \u0026amp;\u0026amp; (k \u0026lt; position)) { k++; q = p; p = p-\u0026gt;next; } if (p == NULL) { 
printf(\u0026#34;Position does not exist\u0026#34;); } else { q-\u0026gt;next = p-\u0026gt;next; free(p); } } } DOUBLY LINKED LIST In a doubly linked list, given a node, we can navigate the list in both directions.\nA node in a singly linked list cannot be removed unless we have the pointer to its predecessor. In a doubly linked list, however, we can delete a node even without the address of its predecessor, since each node has a prev pointer and we can move backwards.\nDisadvantages Each node requires an extra pointer, requiring more space. The insertion or deletion of a node takes a little longer. Type declaration\nstruct DLLnode { int data; struct DLLnode *next; struct DLLnode *prev; }; Inserting an element\nvoid DLLInsert(struct DLLnode **head, int data, int position) { int k = 1; struct DLLnode *temp, *newNode; newNode = (struct DLLnode*)malloc(sizeof(struct DLLnode)); if (!newNode) { printf(\u0026#34;Memory error\u0026#34;); return; } newNode-\u0026gt;data = data; if (position == 1) { //insert at the beginning newNode-\u0026gt;next = *head; newNode-\u0026gt;prev = NULL; if (*head != NULL) { (*head)-\u0026gt;prev = newNode; } *head = newNode; return; } temp = *head; while ((k \u0026lt; position - 1) \u0026amp;\u0026amp; (temp-\u0026gt;next != NULL)) { temp = temp-\u0026gt;next; k++; } if (temp-\u0026gt;next == NULL) { //insert at the end newNode-\u0026gt;next = NULL; newNode-\u0026gt;prev = temp; temp-\u0026gt;next = newNode; } else { //insert in the middle newNode-\u0026gt;next = temp-\u0026gt;next; newNode-\u0026gt;prev = temp; temp-\u0026gt;next-\u0026gt;prev = newNode; temp-\u0026gt;next = newNode; } return; } Deleting a node\nvoid DLLDelete(struct DLLnode **head, int position) { struct DLLnode *temp, *temp2; int k = 1; temp = *head; if (*head == NULL) { 
printf(\u0026#34;List is empty\u0026#34;); return; } if (position == 1) { //delete at the beginning *head = (*head)-\u0026gt;next; if (*head != NULL) { (*head)-\u0026gt;prev = NULL; } free(temp); return; } while ((k \u0026lt; position) \u0026amp;\u0026amp; (temp-\u0026gt;next != NULL)) { temp = temp-\u0026gt;next; k++; } if (temp-\u0026gt;next == NULL) { //delete from the end temp2 = temp-\u0026gt;prev; temp2-\u0026gt;next = NULL; free(temp); } else { //delete in the middle temp2 = temp-\u0026gt;prev; temp2-\u0026gt;next = temp-\u0026gt;next; temp-\u0026gt;next-\u0026gt;prev = temp2; free(temp); } return; } CIRCULAR LINKED LIST Type declaration\nstruct CLLnode { int data; struct CLLnode *next; }; Inserting a node at the end\nvoid insertAtEnd(struct CLLnode **head, int data) { struct CLLnode *current = *head; struct CLLnode *newNode = (struct CLLnode*)malloc(sizeof(struct CLLnode)); if (!newNode) { printf(\u0026#34;Memory Error\u0026#34;); return; } newNode-\u0026gt;data = data; if (*head == NULL) { newNode-\u0026gt;next = newNode; *head = newNode; return; } while (current-\u0026gt;next != *head) { current = current-\u0026gt;next; } newNode-\u0026gt;next = *head; current-\u0026gt;next = newNode; } Inserting a node at the front\nvoid insertAtBegin(struct CLLnode **head, int data) { struct CLLnode *current = *head; struct CLLnode *newNode = (struct CLLnode*)malloc(sizeof(struct CLLnode)); if (!newNode) { printf(\u0026#34;Memory Error\u0026#34;); return; } newNode-\u0026gt;data = data; if (*head == NULL) { newNode-\u0026gt;next = newNode; *head = newNode; return; } while (current-\u0026gt;next != *head) { current = current-\u0026gt;next; } newNode-\u0026gt;next = *head; current-\u0026gt;next = newNode; *head = newNode; return; } Deleting a node at the front\nvoid deleteFront(struct CLLnode **head) { struct CLLnode *temp = *head; struct CLLnode *current = 
*head; if (*head == NULL) { printf(\u0026#34;List is empty\u0026#34;); return; } while (current-\u0026gt;next != *head) { current = current-\u0026gt;next; } current-\u0026gt;next = *head-\u0026gt;next; *head = *head-\u0026gt;next; free(temp); return; } Deleting a node from the end 1 2 3 4 5 6 7 8 9 10 11 12 13 14 void deleteLast (struct CLLnode **head) { struct CLLNode *temp = *head; struct CLLnode *current = *head; if (*head == NULL) { printf(\u0026#34;List is empty\u0026#34;); return; } while (current-\u0026gt;next != *head) { temp = current; current = current-\u0026gt;next; } free(current); return; } ","date":"2022-10-21T19:17:59+05:30","permalink":"https://jhaakansha.github.io/p/linked-list/","title":"Linked List"},{"content":"Runtime : O(log n)\nThe approximate middle item of the data set is located, and its key value is examined. If its value is too high, then the key of the middle element of the first half of the set is examined and procedure is repeated on the first half until the required item is found. If the value is too low, then the key of the middle entry of the second half of the data set is tried and the procedure is repeated on the second half. 
The process is continued until the desired key is found or the search interval becomes empty.

Implementation of Binary search

Template 1

int binarySearch(int a[], int n, int key) {
    int found = 0;
    int mid, low = 0, high = n - 1;
    while (low <= high) {
        mid = (low + high)/2;
        if (key < a[mid]) {
            high = mid - 1;
        } else if (key > a[mid]) {
            low = mid + 1;
        } else {
            found = 1;
            break;
        }
    }
    return found;
}

Template 2

int binary(int[] arr, int key) {
    int low = 0, high = arr.length - 1;
    while (low < high) {
        int mid = (low + high)/2;
        if (arr[mid] == key) {
            return mid;
        } else if (arr[mid] < key) {
            low = mid + 1;
        } else {
            high = mid;
        }
    }
    if (arr[low] == key) {
        return low;
    }
    return -1;
}

Template 3

int binary(int[] arr, int key) {
    int low = 0, high = arr.length - 1;
    while (low + 1 < high) {
        int mid = (low + high)/2;
        if (arr[mid] == key) {
            return mid;
        } else if (arr[mid] < key) {
            low = mid;
        } else {
            high = mid;
        }
    }
    if (arr[low] == key) {
        return low;
    }
    if (arr[high] == key) {
        return high;
    }
    return -1;
}

RECURSIVE BINARY SEARCH

Binary search can be implemented using recursion as well.

Implementation of recursive Binary search

void binary(int arr[], int key, int low, int high) {
    int mid;
    if (low > high) {
        printf("Element not found");
        return;
    }
    mid = (low + high)/2;
    if (key < arr[mid]) {
        binary(arr, key, low, mid - 1);
    } else if (key > arr[mid]) {
        binary(arr, key, mid + 1, high);
    } else {
        printf("Element found at index %d", mid);
    }
}
","date":"2022-10-07T04:35:32+05:30","permalink":"https://jhaakansha.github.io/p/binary-search/","title":"Binary Search"},{"content":"In this searching method, an index file is first created that contains some specific
group or division of the required records. Once the index is obtained, the search takes less time because the record is located within a specific group.

Characteristics of indexed sequential search

In indexed sequential search, a sorted index is set aside in addition to the array. Each element in the index points to a block of elements in the array or another expanded index. The index is searched first, and it then guides the search in the array.

Implementation of Indexed Sequential Search

int gn = 3;    // gn is the number of elements in a group
int groups = (n + gn - 1)/gn;    // one index entry per group
int elements[groups], indices[groups], i, set = 0;
int j = 0, ind = 0, start, end;
for (i = 0; i < n; i += gn) {
    elements[ind] = arr[i];
    indices[ind] = i;
    ind++;
}
if (k < elements[0]) {
    printf("Not found");
    exit(0);
} else {
    for (i = 1; i < ind; i++) {
        if (k <= elements[i]) {
            start = indices[i-1];
            end = indices[i];
            set = 1;
            break;
        }
    }
    if (set == 0) {    // k lies beyond the last index entry: search the final group
        start = indices[ind - 1];
        end = n - 1;
    }
    for (i = start; i <= end; i++) {
        if (k == arr[i]) {
            j = 1;
            break;
        }
    }
}
if (j == 1) {
    printf("Found at index %d", i);
} else {
    printf("element not found");
}
","date":"2022-10-02T04:55:43+05:30","permalink":"https://jhaakansha.github.io/p/indexed-sequential-search/","title":"Indexed Sequential Search"},{"content":"Runtime: O(n log n) Memory : O(1)
Heap Sort is a comparison-based sorting technique based on the binary heap data structure. It is an in-place algorithm. Its typical implementation is not stable, but it can be made stable. Typically, it is 2-3 times slower than a well-implemented quicksort due to its lack of locality of reference.

Advantages of heapsort

Efficiency : The time required to perform heap sort grows only as O(n log n), while other algorithms may slow down much faster as the number of items to sort increases.
Memory usage : Memory usage is minimal because, apart from what is necessary to hold the initial list of items to be sorted, it needs no additional memory space to work.
Simplicity : It is simpler to understand than other sorting algorithms because it does not use advanced computer science concepts.

Heapify

It is the process of creating a heap data structure from a binary tree represented using an array. It is used to create a Min-Heap or Max-Heap. Start from the last non-leaf node, whose index is given by n/2 - 1, and heapify each node using recursion.

Implementation of Heap Sort

void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2*i + 1;
    int right = 2*i + 2;
    if (left < n && arr[left] > arr[largest]) {
        largest = left;
    }
    if (right < n && arr[right] > arr[largest]) {
        largest = right;
    }
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    for (int i = n/2 - 1; i >= 0; i--) {
        heapify(arr, n, i);
    }
    for (int i = n - 1; i > 0; i--) {
        swap(&arr[0], &arr[i]);
        heapify(arr, i, 0);
    }
}

What are the two phases of heap sort? First, the array is converted into a max heap. Then, the highest element is removed and the remaining elements are used to re-create a new max heap.
Which is better: Heapsort or Mergesort? Mergesort is slightly faster than heapsort, but mergesort requires extra storage space. Depending on the requirement, one should be chosen.
Why is heapsort better than selection sort?
Heapsort is similar to selection sort, but with a better way to get the maximum element. It takes advantage of the heap data structure to get the maximum element in constant time.
","date":"2022-09-11T10:44:35+05:30","permalink":"https://jhaakansha.github.io/p/heap-sort/","title":"Heap Sort"},{"content":"Runtime: average : O(nlogn) worst : O(n²) Auxiliary Space : O(n) In quick sort, we pick a random element and partition the array, such that all numbers that are less than the partitioning element come before all elements greater than it.\nIf we repeatedly partition the array around an element, the array will eventually become sorted. However, as the partitioned element is not huaranteed to be the median, our sorting could be very slow.\nQuick Sort is not a stable algorithm. However, any sorting algorithm can be made stable by considering indexes as comparison parametres.\nImplementation of QuickSort 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 void quickSort(int[] arr, int left, int right) { int index = partition(arr, left, right); if (left\u0026lt;index-1) { quickSort(arr, left, index-1); } if (index \u0026lt; right) { quickSort(arr, index, right); } } int partition(int[] arr, int left, int right) { int pivot = arr[(left + right)/2]; while (left \u0026lt;= right) { while (arr[left] \u0026lt; pivot) { left++; } while (arr[right] \u0026gt; pivot) { right--; } if (left \u0026lt;= right) { swap(arr[left], arr[right]); left++; right--; } } return left; } Worst Case The worst case occurs when the partition process always picks the greatest or the smallest element as the pivot. If we consider the partition startegy where the last element is always picked as a pivot, the worst case obviously occurs when the array is already sorted.\nBest Case The best case occurs when the partition process always picks the middle element as the pivot.\n3 Way QuickSort Consider an array which has many redundant elements. For example {1,2,3,6,6,8,9,9,0,0,0,7}, If we pick 6 as the pivot, we fix only one 6 and recursively process remaining occurrences. 
The idea of 3 way quicksort is to process all occurrences of the pivot in one partition step; it is based on the Dutch National Flag algorithm.

Implementation of 3 way quicksort using Dutch National Flag algorithm

def partition(arr, first, last, start, mid):
    pivot = arr[last]
    end = last
    while mid[0] <= end:
        if arr[mid[0]] < pivot:
            arr[mid[0]], arr[start[0]] = arr[start[0]], arr[mid[0]]
            mid[0] = mid[0] + 1
            start[0] = start[0] + 1
        elif arr[mid[0]] > pivot:
            arr[mid[0]], arr[end] = arr[end], arr[mid[0]]
            end = end - 1
        else:
            mid[0] = mid[0] + 1

def quickSort(arr, first, last):
    if first >= last:
        return
    if last == first + 1:
        if arr[first] > arr[last]:
            arr[first], arr[last] = arr[last], arr[first]
        return
    start = [first]
    mid = [first]
    partition(arr, first, last, start, mid)
    quickSort(arr, first, start[0] - 1)
    quickSort(arr, mid[0], last)

The time complexity of this algorithm is O(n log n) and the space complexity is O(log n).

Why is Quick Sort preferred over Merge Sort for sorting arrays?

QuickSort is an in-place sort (i.e. it does not require any extra storage), whereas mergesort requires O(n) extra storage. Allocating and de-allocating the extra space used for mergesort increases the running time of the algorithm. Most practical implementations of QuickSort use a randomized version, which has an expected time complexity of O(n log n). The worst case is still possible in the randomized version, but it does not occur for a particular pattern (such as a sorted array), so randomized QuickSort works well in practice. QuickSort is also a cache-friendly sorting algorithm, as it has good locality of reference when used on arrays. It is also tail recursive, so tail call optimization can be applied.
","date":"2022-09-03T13:18:51+05:30","permalink":"https://jhaakansha.github.io/p/quick-sort/","title":"Quick Sort"},{"content":"Runtime: O(nlogn) Auxiliary Space : O(n)\nMerge Sort divides the array in half, sorts each of those halves and then merges them back together. Each of these halves has the same sorting algorithm applied to it. Eventually, its like merging two single-element arrays. The merge method operates by copying all the elements from target array segment into a helper array,keeping track of where the start of the left and right halves should be. Then, iterate through helper, copying the smaller element from each half into the array.\nIt is a stable algorithm.\nImplementation of MergeSort 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 void mergesort(int[] array) { int[] helper = new int[array.length]; mergesort(array, helper, 0, array.length - 1); } void mergesort(int[] array, int[] helper, int low, int high) { if (low \u0026lt; high) { int middle = (low+high)/2; mergesort(array, helper, low, middle); //sort left half mergesort(array, helper, middle+1, high); //sort right half mergesort(array, helper, low, middle, high); //merge them } } void merge(int[] array, int[] helper, int low, int middle, int high) { //copy both halves into a helper array for (int i = low; i\u0026lt;=high; i++) { helper[i] = array[i]; } int helperLeft = low; int helperRight = middle + 1; int current = low; while (helperLeft \u0026lt;= middle \u0026amp;\u0026amp; helperRight \u0026lt;= high) { if (helper[helperLeft] \u0026lt;= helper[helperRight]) { array[current] = helper[helperLeft]; helperLeft++; } else { array[current] = helper[helperRight]; helperRight++; } current++; } int remaining = middle - helperLeft; for (int = 0; i \u0026lt;= remaining; i++) { array[current + i] = helper[helperLeft + i]; } } Drawbacks It is slower compared to other sort algorithms for smaller tasks. It requires additional memory space of O(n). 
Merge sort goes through the whole process even if the array is already sorted.
","date":"2022-08-26T13:18:43+05:30","permalink":"https://jhaakansha.github.io/p/merge-sort/","title":"Merge Sort"},{"content":"Best Case Runtime: O(n) Worst Case Runtime : O(n²)
The method for implementation of insertion sort is similar to the one we use to arrange our cards. It is a stable sorting algorithm. The steps followed are:
Iterate from arr[1] to arr[n-1] over the array. Compare the current element (key) to its predecessor. If the key element is smaller than its predecessor, compare it to the elements before. Move the greater elements one position up to make space for the key.

Implementation of InsertionSort

for (i = 1; i < n; i++) {
    key = arr[i];
    j = i - 1;
    while (j >= 0 && arr[j] > key) {
        arr[j+1] = arr[j];
        j -= 1;
    }
    arr[j+1] = key;
}

Binary Insertion Sort

This method uses binary search to find the proper location to insert the selected item at each iteration. In normal insertion sort, finding the position takes O(i) time (at the i-th iteration) in the worst case; with binary search this can be reduced to O(log i). The algorithm as a whole still has a worst-case running time of O(n²) because of the element shifting.

Implementation of binary Insertion Sort

int binarySearch(int a[], int item, int low, int high) {
    if (high <= low) {
        return (item > a[low]) ? (low + 1) : low;
    }
    int mid = (low + high)/2;
    if (item == a[mid]) {
        return mid + 1;
    }
    if (item > a[mid]) {
        return binarySearch(a, item, mid + 1, high);
    }
    return binarySearch(a, item, low, mid - 1);    // item < a[mid]
}

void insertionSort(int a[], int n) {
    int i, loc, j, selected;
    for (i = 1; i < n; ++i) {
        j = i - 1;
        selected = a[i];
        loc = binarySearch(a, selected, 0, j);
        while (j >= loc) {
            a[j+1] = a[j];
            j--;
        }
        a[j+1] = selected;
    }
}
","date":"2022-08-21T02:03:01+05:30","permalink":"https://jhaakansha.github.io/p/insertion-sort/","title":"Insertion Sort"},{"content":"Runtime: O(n²) Memory : O(1)
Find the smallest element using a linear scan and move it to the front. Then, using a linear scan, traverse the array from the second element onwards and find the least element in this sub-array. Swap the second element with the least element in the sub-array (which will be the second-least element in the array). Continue doing this until all elements are in place.
The default implementation of selection sort is not stable.

Implementation of SelectionSort

for (i = 0; i < n - 1; i++) {
    min_idx = i;
    for (j = i + 1; j < n; j++) {
        if (arr[j] < arr[min_idx]) {
            min_idx = j;
        }
    }
    if (min_idx != i) {
        swap(&arr[i], &arr[min_idx]);
    }
}
","date":"2022-08-06T14:20:15+05:30","permalink":"https://jhaakansha.github.io/p/selection-sort/","title":"Selection Sort"},{"content":"Runtime: O(n²) Memory : O(1)
*In this article each n refers to the number of elements present in the array.
In Bubble sort, we start at the beginning of the array and swap the first two elements if the first element is greater than the second.
Then, we go to the next pair, and so on, continuously making sweeps of the array until it is sorted.
It is a stable algorithm, i.e., the relative positions of equivalent elements remain the same.

Simple BubbleSort

for (i = 0; i < n - 1; i++) {
    for (j = 0; j < n - 1; j++) {
        if (arr[j] > arr[j+1]) {
            swap(&arr[j], &arr[j+1]);
        }
    }
}

The above method is not optimized; it can be improved by stopping the algorithm when the inner loop does not cause any swap.
The optimized approach will run a little slower than the original one if all the passes are made. However, in the best case, its time complexity is O(n), as opposed to the original, which is O(n²) in all circumstances.
Worst Case : The worst case occurs when all the elements are arranged in descending order.
Total number of passes = n - 1
At pass 1: comparisons = n - 1, swaps = n - 1
At pass 2: comparisons = n - 2, swaps = n - 2
and so on, until the number of comparisons is 1. Therefore, the total number of comparisons required to sort the array
= (n - 1) + (n - 2) + (n - 3) + … + 2 + 1
= (n - 1)(n - 1 + 1)/2
= n(n - 1)/2
which is O(n²).
Best Case : The best case occurs when the array is already sorted. The time complexity in this case is O(n).

Optimized BubbleSort

for (i = 0; i < n - 1; i++) {
    swapped = false;
    for (j = 0; j < n - i - 1; j++) {
        if (arr[j] > arr[j+1]) {
            swap(&arr[j], &arr[j+1]);
            swapped = true;
        }
    }
    if (swapped == false) {
        break;
    }
}

Recursive BubbleSort

The following steps are followed to implement recursive bubblesort:
Place the largest element at its position; this operation makes sure that the largest element is placed at the end of the array. Recursively call the same operation for the remaining n - 1 elements and place the next greatest element at its position.
The base condition for this recursion would be when the number of elements in the array becomes 0 or 1; in that case, simply return.

void recursiveBubble(int arr[], int len) {
    int temp, i;
    if (len <= 1) {    // base case: 0 or 1 elements
        return;
    }
    for (i = 0; i < len - 1; i++) {
        if (arr[i] > arr[i+1]) {
            temp = arr[i];
            arr[i] = arr[i+1];
            arr[i+1] = temp;
        }
    }
    len = len - 1;
    recursiveBubble(arr, len);
}
","date":"2022-07-30T12:27:42+05:30","permalink":"https://jhaakansha.github.io/p/bubble-sort/","title":"Bubble Sort"},{"content":"Databending & Glitch Art: Turning Digital Errors into Art
Ever seen a photo glitch out—colors going wild, pixels melting into one another—and thought it actually looked kind of… cool? That’s basically what databending is all about. It’s the art of breaking digital stuff on purpose to make something unexpected and beautiful.
Think of it like this: if circuit bending is the act of rewiring a kid’s toy or old keyboard to make weird sounds, databending is the digital version—only instead of wires, we’re messing with raw data.

What Even Is Glitch Art?

Glitch art is exactly what it sounds like: art made from glitches. Sometimes they happen by accident (your computer crashes and corrupts an image), and sometimes they’re 100% intentional. Artists have started embracing these weird little moments when tech fails—because in those moments, something raw and chaotic shows up. And it’s kind of beautiful.
There’s even some debate: does it have to be accidental to count as glitch art? Or can you plan a glitch? Either way, databending sits right in the middle—intentional chaos, but with roots in real, spontaneous errors.

Tools of the Trade

To start databending, you don’t need fancy software. You just need the courage to open a file in the wrong program and see what happens.
Some go-to tools:
Hex editors like HxD or Hex Fiend let you mess with the literal ones and zeroes inside a file.
Audacity (yes, the audio editor) can open an image file as sound—which leads to some awesome, noisy results.
Even a plain text editor like Notepad can break things in interesting ways.

Types of Databending

There are a few classic techniques people use:
1. Incorrect Editing
This is the OG move: open a file (like a JPEG) in a program meant for something totally different—like a sound editor—and just go wild. Save it, and open it again as an image. Boom, glitch.
2. Reinterpretation
Here, you’re converting files between formats in weird ways. Treating a picture as audio, or vice versa. Sometimes even just renaming the file extension can trip up your system in fun ways.
3. Forced Errors
This is where you exploit bugs on purpose. Maybe you know a certain plugin crashes under specific conditions—or that a program saves corrupt files if you interrupt it mid-process. So… you interrupt it on purpose. Glitchy magic.

A Little Backstory

Glitch art didn’t come out of nowhere. It’s got roots in hacker culture, experimental music, and early digital art scenes. Artists like Rosa Menkman have helped shape the conversation around glitch art, even writing whole books about the aesthetic of digital error (The Glitch Moment(um) is a must-read).
You might’ve seen databending in music videos (like Kanye’s “Welcome to Heartbreak”), or even in the recent surge of “datamoshing” in TikTok edits and motion graphics.

Why Break Something on Purpose?

Because perfection gets boring.
Glitches remind us that technology isn’t flawless. It’s messy, fragile, and sometimes totally unpredictable. That unpredictability can actually be freeing for artists—it gives them a way to break out of the usual patterns and create something raw and unique.
In a world where everything’s edited, filtered, and polished, glitch art feels like a breath of chaotic fresh air.

Wanna Try It?

Here’s how to jump in:
Grab a simple image file—like a .jpg or .bmp.
Open it in a hex editor.
But be careful—don’t mess with the first few lines (the header), or your file won’t open again.
Make some small changes—swap numbers, delete a few bytes, paste in random stuff.
Save it with a new name and open it in your image viewer.
Bask in the glitch.
Or go wild with Audacity:
Open a .jpg or .png as raw audio.
Add distortion, reverse it, slow it down.
Export it as raw data.
Rename the file back to .jpg and open it again.
Weird, right?

Final Thoughts

Databending is like digital vandalism—but in a good way. It’s about twisting tools, breaking files, and celebrating the unexpected results. Whether you’re into tech, art, or just the joy of messing with things you “shouldn’t,” databending offers a whole new way to express yourself.
So go ahead—break something.
More inspiration: Rosa Menkman | Glitch Artist Collective on Reddit
","date":"2022-07-24T13:15:46+05:30","permalink":"https://jhaakansha.github.io/p/data-bending/","title":"Data Bending"},{"content":"Big O is used to describe the efficiency of algorithms. Academics use big O, big θ and big Ω to describe runtimes.
Big O : It describes an upper bound on time. An algorithm that requires O(n) time can also be described as requiring O(n²), O(n³), O(2ⁿ), etc.; the algorithm is at least as fast as any of these.
Big Ω : It describes a lower bound on time. If the runtime is described as Ω(n), it can also be described by Ω(log n) and Ω(1), but it won’t be faster than those runtimes.
Big θ : θ means both O and Ω. That is, an algorithm is θ(n) if it is both O(n) and Ω(n); θ gives a tight bound on runtimes.
In industry, the meanings of O and θ seem to have been merged: industry’s meaning of big O is closer to what academics mean by θ. The runtime for an algorithm is described in three different ways:
Best Case : Consider the case of sorting an array. If all the elements are equal, quick sort will traverse the array only once, giving the runtime O(n).
This is the least runtime possible, hence it is the best case scenario. Worst Case : If we get really unlucky and the pivot is repeatedly the biggest element in the array, then we have the largest possible runtime. This is the worst case scenario. Expected Case : Usually, the conditions considered above don’t happen. In the general case, the expected runtime for quick sort is O(n log n). ","date":"2022-07-14T12:09:29+05:30","permalink":"https://jhaakansha.github.io/p/big-o-notation/","title":"Big O Notation"}]