The State of Coding Agents Using Local LLMs — February 2026

Last update: February 1st, 2026

Coding agents are no longer a novelty – they’re everywhere. Over the past year, we’ve seen massive adoption across startups and enterprises, alongside real improvements in autonomy, reasoning depth, and multi-step code execution. Tools like Claude Code, Codex, Copilot, and Kiro are shipping updates at a relentless pace, and teams are increasingly comfortable letting agents refactor modules, write tests, and manage pull requests.

But there’s a catch: these tools are token eaters. Autonomous agents don’t just answer a prompt – they plan, reflect, re-read the codebase, call tools, retry, and iterate. At scale, that translates into serious API bills.

That’s why we’re seeing growing interest in a different deployment pattern: running coding agents against local or self-hosted models. Ollama recently announced “ollama launch”, a command that sets up and runs coding tools such as Claude Code, OpenCode, and Codex with local or cloud models. vLLM, LiteLLM, and OpenRouter provide similar integrations. These moves signal that this is no longer fringe experimentation: for many teams, local LLMs are emerging as a viable path to reduce cost, improve stability, and gain tighter control over privacy.
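To make that concrete, here is a minimal sketch of what “running an agent against a local model” boils down to in practice: local runtimes such as Ollama expose an OpenAI-compatible endpoint, so any OpenAI-style client (or an agent built on one) can simply be pointed at localhost. This is not the ollama launch flow itself; the port is Ollama’s default, and the model name and prompt are illustrative assumptions.

```python
# Minimal sketch: point an OpenAI-compatible client at a local Ollama server.
# Assumes Ollama is running on its default port and that a coding model
# (name is illustrative) has already been pulled with `ollama pull`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder; the local server ignores it
)

response = client.chat.completions.create(
    model="qwen2.5-coder",  # assumed model; use whatever you have pulled locally
    messages=[{"role": "user", "content": "Write a unit test for a slugify() helper."}],
)
print(response.choices[0].message.content)
```

The same agent-side code works against a hosted vLLM cluster or a managed API by swapping the base URL and model name, which is exactly the trade-off the deployment patterns below explore.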


Deployment models for coding agents

When teams talk about “running models locally,” they often mean different things. In practice, there are three distinct deployment patterns – and they differ meaningfully in cost structure, performance profile, and governance posture.

  1. Local (Developer Machine) – the model runs directly on a developer’s laptop or workstation (e.g., via Ollama).
  2. Hosted (Org-Managed Infrastructure / VPC) – the organization runs the model on its own infrastructure, either on-premises GPU servers or in a private cloud/VPC (e.g., via vLLM, Kubernetes, or managed GPU clusters).
  3. Managed LLM API (e.g., Anthropic, OpenAI) – the model is fully managed by a provider; the organization interacts via API.

| Dimension | Local (Dev Machine) | Hosted (Org VPC / On-Prem) | Managed LLM API |
| --- | --- | --- | --- |
| Cost Structure | No per-token fees. Hardware cost borne by the developer. Cheap at a small scale; uneven across the team. | No per-token fees. Significant infra + ops cost. Economical at scale if usage is high. | Usage-based (per token / per request). Predictable but can become very expensive with agent loops. |
| Cost at Scale (Agents) | Hard to standardize; limited by laptop GPU/CPU. | Strong cost efficiency at high volume. | Token costs compound quickly. Expensive in large org rollouts. |
| Performance (Latency) | Very low latency locally, but limited by hardware. Large models may be slow or impossible. | Good latency if well-provisioned GPU cluster. Can optimize with batching. | Typically excellent latency and throughput; globally distributed infra. |
| Model Size / Capability | Limited to smaller models (7B–34B typically; maybe 70B with strong GPUs). | Can run large open models (70B+), depending on infra budget. | Access to frontier SOTA models (often strongest reasoning & coding quality). |
| Quality (Coding Tasks) | Improving. “Good enough” for many workflows, especially with fine-tuned coding models. | Strong – can choose best open models and fine-tune internally. | Often highest raw reasoning quality and reliability on complex multi-file tasks. |
| Security / Privacy | Code never leaves device. Strong for IP protection. Risk: inconsistent security posture across developers. | Code stays inside org boundary. Strong centralized control. | Code leaves org boundary (even with enterprise contracts). Vendor trust required. |
| Compliance (GDPR, HIPAA, etc.) | Hard to audit across distributed machines. | Strong compliance posture if infra is controlled and logged centrally. | Enterprise compliance available via contract, but still external processing. |
| Governance & Observability | Weak – hard to monitor usage or enforce policies. | Strong – full logging, auditing, access controls, IAM integration. | Strong observability dashboards from vendor, but limited transparency into internals. |
| Stability / Availability | Works offline. Dependent on developer hardware reliability. | Controlled SLAs internally. Requires DevOps maturity. | Vendor-managed SLAs. Risk of outages outside your control. |
| Standardization Across Team | Low – “works on my machine” problem possible. | High – central model versions and infra. | Very high – single API endpoint for entire org. |
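To see why the cost rows dominate the conversation for agents, here is a rough back-of-envelope sketch. Every number below (team size, tokens per agent run, blended per-token price, GPU node cost) is a made-up assumption to show the shape of the math, not a benchmark or a vendor quote.

```python
# Back-of-envelope comparison of managed-API vs. self-hosted cost for agent workloads.
# All numbers are illustrative assumptions.

developers = 200
agent_runs_per_dev_per_day = 20
tokens_per_run = 150_000          # agents re-read code, plan, and retry, so runs are token-heavy
workdays_per_month = 21

monthly_tokens = developers * agent_runs_per_dev_per_day * tokens_per_run * workdays_per_month

# Managed API: assume a blended $3 per million tokens (input + output averaged).
api_cost = monthly_tokens / 1_000_000 * 3.0

# Self-hosted: assume 4 GPU nodes at $6,000/month each (amortized hardware + ops),
# roughly flat regardless of token volume once capacity is provisioned.
hosted_cost = 4 * 6_000

print(f"Monthly tokens:    {monthly_tokens / 1e9:.1f}B")
print(f"Managed API cost:  ${api_cost:,.0f}/month")
print(f"Self-hosted cost:  ${hosted_cost:,.0f}/month (flat)")
```

Under these assumptions the managed-API bill scales linearly with agent activity, while self-hosted cost is roughly flat once capacity exists; where the crossover sits is something each organization has to work out for its own usage profile.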

Tools overview

Coding Agents and Model Support

| Coding Agent | Local LLM Support | Hosted Support | Notes |
| --- | --- | --- | --- |
| Claude Code | ✅ via Ollama/vLLM integration | Native Anthropic | Run Claude Code with Local LLMs Using Ollama; LLM gateway configuration; LiteLLM Claude Code Quickstart; OpenRouter integration with Claude Code |
| GitHub Copilot (Agent mode) | ✅ via Ollama/vLLM integration | Cloud models (GPT-4o, Claude 3.5, Gemini, etc.) | Ollama in VSCode; GitHub Copilot with OpenRouter; GitHub Copilot LLM Gateway |
| Codex (OpenAI) | ✅ via Ollama integration | Cloud via OpenAI | Ollama Codex integration |
| Cursor AI | ✅ via Ollama integration | Cloud multi-model | Use Local LLM with Cursor and Ollama; OpenRouter with Cursor |
| AWS Kiro | ❌ local | AWS hosted | – |

Local LLM Frameworks

| Framework | Primary Role | Notes |
| --- | --- | --- |
| Ollama | Local LLM hosting & runtime | Lightweight CLI + API that serves models locally; integrates with multiple agents (Claude Code, Codex, Droid, OpenCode) and supports on-prem inferencing with moderate hardware. |
| vLLM (Serving) | High-performance LLM server | Optimized for scalable reasoning and long-context LLM inference; integrates with agents (e.g., Claude Code) via Anthropic Messages API compatibility. |
| OpenRouter | Unified LLM API broker | Central API layer for 400+ LLMs, including local and cloud endpoints; can route agents to preferred backends with cost/redundancy optimization. |
| LiteLLM | Unified LLM API | Enables developers to use many LLM APIs, such as OpenAI, Anthropic, Gemini, and Ollama, in a single, OpenAI-compatible format. |
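As a concrete illustration of the “unified API” idea, here is a minimal LiteLLM sketch: the same call shape targets a local Ollama model or a managed provider just by changing the model string. The model names are examples, and provider credentials are assumed to be set in the environment (e.g., ANTHROPIC_API_KEY).

```python
# Sketch of LiteLLM's unified interface: one call shape, different backends.
import litellm

messages = [{"role": "user", "content": "Refactor this function to be pure: ..."}]

# Local model served by Ollama (no API key, no per-token cost); model name is an example.
local = litellm.completion(model="ollama/qwen2.5-coder", messages=messages)

# Managed frontier model (per-token billing); model name is an assumed example.
frontier = litellm.completion(model="anthropic/claude-sonnet-4-20250514", messages=messages)

print(local.choices[0].message.content)
print(frontier.choices[0].message.content)
```

OpenRouter plays a similar role one level up, as a hosted broker that fronts many providers behind one endpoint.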

Notable models

| Model | Primary Use | Latest Release |
| --- | --- | --- |
| Qwen3-Coder | Alibaba’s 480B-parameter MoE coding model. SOTA results among open models on agentic coding tasks. | July 2025 |
| DeepSeek Coder | DeepSeek’s open-source code model series (1B–33B params), achieving top performance among open-source code models across major benchmarks. | June 2024 |
| Code Llama (7B/34B) | Meta’s open-source code-specialized LLMs, fine-tuned from Llama 2 in multiple sizes. | January 2024 |
| gpt-oss | OpenAI’s open-weight LLMs, available in 20B and 120B sizes under Apache 2.0; the 120B variant matches o4-mini on reasoning benchmarks. | August 2025 |
| kimi-k2.5 | Moonshot AI’s open-source, native multimodal agentic model. | January 2026 |

📈 Predictions Through 2026

1. Hybrid Routing Will Become the Standard

Cost is the most immediate driver. Autonomous coding agents are token-intensive by design. At enterprise scale, those token costs compound quickly.

Local inference eliminates per-token fees, which makes it attractive for high-volume, repetitive tasks. But frontier proprietary models still maintain an edge on complex, cross-repository reasoning and edge cases. The likely outcome is not full replacement, but intelligent routing:

  • Simpler or repetitive tasks → local or hosted open models
  • High-stakes, complex reasoning → managed frontier APIs

Tools like OpenRouter and LiteLLM are already enabling this pattern, and by the end of 2026, hybrid routing is likely to be the default deployment strategy for medium- to large-sized engineering organizations.
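A minimal sketch of what such routing can look like in code, using LiteLLM-style model strings. The escalation heuristic, thresholds, and model names are illustrative assumptions; production routers also weigh cost budgets, context size, and confidence signals.

```python
# Sketch of "intelligent routing": cheap local model for routine tasks,
# managed frontier API for high-stakes ones.
import litellm

LOCAL_MODEL = "ollama/qwen2.5-coder"                  # assumed local model
FRONTIER_MODEL = "anthropic/claude-sonnet-4-20250514"  # assumed managed model

def route(task_description: str, files_touched: int) -> str:
    """Naive policy: multi-file or migration work goes to the frontier model."""
    high_stakes = files_touched > 3 or "migration" in task_description.lower()
    return FRONTIER_MODEL if high_stakes else LOCAL_MODEL

def run_task(task_description: str, files_touched: int):
    model = route(task_description, files_touched)
    return litellm.completion(
        model=model,
        messages=[{"role": "user", "content": task_description}],
    )

# A routine task stays local; a cross-cutting migration is escalated.
run_task("Add a docstring to utils.parse_date", files_touched=1)
run_task("Plan the database migration across services", files_touched=12)
```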

2. Standardization Will Lower the Switching Cost

Hybrid only works if switching models is frictionless.

As coding agents like Claude Code, Codex, Copilot, and others converge around shared inference interfaces (Ollama, vLLM, OpenAI-compatible endpoints), swapping models in and out becomes operationally simple. This reduces lock-in and makes experimentation safer.
As interoperability improves, the barrier to trying local models drops dramatically – and adoption follows.
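This is easiest to see in code: with OpenAI-compatible endpoints, the agent-side logic stays identical and only an endpoint/model pair changes. The URLs, tokens, and model names below are assumptions (Ollama’s default port, a hypothetical internal vLLM host).

```python
# Sketch of why shared inference interfaces lower switching costs:
# the call site never changes, only the backend configuration does.
from openai import OpenAI

BACKENDS = {
    "local":   {"base_url": "http://localhost:11434/v1", "model": "qwen2.5-coder", "api_key": "ollama"},
    "hosted":  {"base_url": "http://vllm.internal:8000/v1", "model": "Qwen/Qwen3-Coder-480B-A35B-Instruct", "api_key": "internal-token"},
    "managed": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o", "api_key": "sk-..."},
}

def complete(backend: str, prompt: str) -> str:
    cfg = BACKENDS[backend]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching from local experimentation to the hosted cluster is a one-word change.
print(complete("local", "Write a regex that matches ISO 8601 dates."))
```

Moving between local experimentation, a hosted cluster, and a managed API becomes a configuration change rather than a rewrite.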

3. Open-Source Coding Models Will Close the Gap

Tool-use fine-tuning is maturing. Code reasoning benchmarks are becoming more rigorous.

By late 2026, open-weight coding models are likely to be “production-grade” for a substantial share of workflows – especially where cost control and data sovereignty matter more than absolute frontier performance.

4. Resilience Will Matter as Much as Cost

There’s also a structural pressure building: agent-driven workloads amplify the impact of API outages. When a coding agent is embedded into CI pipelines or developer workflows, downtime is no longer an inconvenience – it’s a blocker.

As usage scales, reliance on a single managed API becomes a risk vector. This will accelerate investment in redundancy:

  • Secondary API providers
  • Local fallback models
  • On-prem capacity for critical workflows
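A minimal sketch of the “local fallback” idea, again with LiteLLM-style model strings. The model names and the blunt try/except policy are illustrative assumptions; real setups add health checks, cooldowns, and quality gates.

```python
# Sketch: prefer the managed frontier API, degrade to a local model on failure,
# so agent-driven CI jobs and developer workflows stall gracefully instead of blocking.
import litellm

PRIMARY = "anthropic/claude-sonnet-4-20250514"  # managed API (assumed model name)
FALLBACK = "ollama/qwen2.5-coder"               # local / on-prem capacity (assumed)

def complete_with_fallback(messages):
    try:
        return litellm.completion(model=PRIMARY, messages=messages)
    except Exception:
        # Outage, rate limit, or timeout on the primary: fall back to local capacity.
        return litellm.completion(model=FALLBACK, messages=messages)

reply = complete_with_fallback(
    [{"role": "user", "content": "Fix the failing test in tests/test_parser.py"}]
)
```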

Summary

In 2026, hybrid won’t just be about cost optimization – it will be about operational resilience.

The future is not “local vs cloud.” It’s a composable, policy-driven model infrastructure.

Organizations that treat model routing, hosting strategy, and redundancy as part of their core engineering architecture – rather than as an afterthought – will have structural advantages in cost control, privacy, and reliability.

2026 won’t be the year enterprises abandon managed APIs. It will be the year they stop depending on them exclusively.

Learn In Public – week 04

A sense of humor is one of the most underestimated leadership skills

Following the recommendation in “The Great CEO Within,” I listened to “The One Minute Manager.” One of the chapters discusses humor as a leadership tool. It is not about becoming a comedian, but about showing up as human. Humor is a powerful and often overlooked tool.
Why it matters:

  • Humor helps build trust and rapport – people are more likely to engage and collaborate when they feel comfortable and connected.
  • It can reduce stress and tension, boosting well-being and performance.
  • Humor makes leaders more approachable and memorable, signaling confidence and emotional intelligence.
  • Shared laughter fosters psychological safety, helping teams voice ideas and take risks.

Of course, balance is key – humor should complement clarity and respect, not replace them. Too many jokes or poorly timed humor can actually backfire, so think of it as a strategic leadership tool, not a default setting. It’s about knowing when a light moment can lower defenses, reset the room, or simply remind everyone that work is done by humans, not robots.

Related resources –

Tokens as Currency

Half-baked thought: Tokens will become currency.

Right now, the direction is obvious – more money buys more tokens.

But what if, in the near future, tokens themselves become a medium of exchange?

Consider this:

  • Microsoft is allocating tokens to support the maintenance of open-source projects.
  • Companies granting tokens in exchange for using their tools or infrastructure.
  • Open-source maintainers receiving donations in tokens instead of (or alongside) cash.
  • Platforms enabling distributed token usage across multiple accounts, almost like a modern SETI@home.
  • Gift cards for Anthropic.

In other words, tokens are not just consumption units, but are tradable, transferable assets within an ecosystem.

Of course, this is far from trivial. Privacy, security, incentive alignment, and implementation complexity are all major hurdles.

But if I had to place one slightly outrageous bet for 2026, it would be movement in this direction.

Are we ready to start thinking of tokens as something you can “round up and donate”?

Learn in Public – week 03

I finished listening to “The Great CEO Within” by Matt Mochary and Alex MacCaw. A few thoughts:

1️⃣ A tactical cheat sheet
I view the book as a tactical cheat sheet: short, practical chapters you can skim for ideas. It’s great for quick exposure, and most chapters include references for deeper dives. For me, this book has excellent value for time.

2️⃣ Revisit in the LLM era
Two well-known ideas in the book are Getting Things Done and Inbox Zero.
Inbox-zero and productivity advice hit differently today. With LLMs helping triage emails, summarize threads, and highlight what actually matters, the principles remain the same – but the execution is far more automated.

3️⃣ Optimizing meetings
TL;DR: come prepared, use written communication in advance, and don’t deviate from the planned agenda.
The authors suggest holding all meetings, including 1:1s, on the same day. From my experience, for 1:1s that go beyond status updates and require real attention (e.g., feedback), stacking too many of them on the same day can be overwhelming for most people.

4️⃣ “The first thing to optimize is yourself”
One of my favorite quotes from the book. It emphasizes founders’ and leaders’ mental and physical health, something that has historically been overlooked. A good reminder that sustainable leadership starts with managing your own energy.

The book also mentions principles of conscious leadership: listening to feedback and acting on it, and not being afraid to make (and admit) mistakes.
This week, I also read a blog titled “Reflection is a Crucial Leadership Skill”, which made these ideas more actionable and down-to-earth.

What should I read next?

Learn in Public – week 02

I started this week with deeplearning.ai’s course on semantic caching, created in collaboration with Redis. That sent me down a rabbit hole, exploring different LLM caching strategies and the products that support them.

One such product is AWS Bedrock Prompt Caching. If large parts of your prompts are static (specifically, the prefixes), retokenizing the prefix on every request is a waste of time and money. Prompt or context caching lets you process the prefix once and store it, reducing costs and improving performance.

Sounds great, right? Let’s check the pricing model. If your requests are more than 5 minutes apart, your cache will be cleared. If your prompts are too short (below the minimum token threshold), caching won’t activate at all; and if the cache hit rate is low, you pay the cache-write premium without recouping it in savings. I highly recommend reading the “How Much Does Bedrock Prompt Caching Cost?” section in the article “Amazon Bedrock Prompt Caching”.
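For readers who want to see where the cache checkpoint actually goes, here is a minimal sketch using the Bedrock Converse API’s cache-point blocks. The model ID and region are assumptions, checkpoint support and minimum prefix sizes vary by model, and the static system text stands in for the long prompt prefix you would actually reuse.

```python
# Sketch: mark the static prompt prefix as a cache checkpoint so Bedrock can reuse it
# across requests (within the cache TTL) instead of reprocessing it every time.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

LONG_STATIC_CONTEXT = "You are a code-review assistant. <coding guidelines, API docs, ...>"

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",  # assumed model ID
    system=[
        {"text": LONG_STATIC_CONTEXT},
        {"cachePoint": {"type": "default"}},  # everything above this point is cacheable
    ],
    messages=[
        {"role": "user", "content": [{"text": "Review the diff in service/payments.py"}]},
    ],
)

# The usage block reports cache read/write token counts, which is the number to watch
# when estimating whether caching actually pays off for your request pattern.
print(response["usage"])
```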

AI Just Took a Big Step Into Digital Health

🚀 AI is moving deeper into digital health – over the past week, both OpenAI and Anthropic have introduced major features aimed at bringing powerful AI capabilities into healthcare and life sciences.

🔹 OpenAI: ChatGPT Health
OpenAI has launched ChatGPT Health, a dedicated health experience that lets users securely connect their medical records and wellness app data (e.g., Apple Health, Function, and MyFitnessPal) to get more informed insights about their health and wellness. The feature is designed to help people better interpret test results, prepare for doctor visits, and navigate everyday health questions — not replace clinicians. Enhanced privacy protections ensure that health chats and data remain isolated and encrypted, and that users retain full control over connections and data.

🔹 Anthropic: Claude for Healthcare & Life Sciences
Following an earlier announcement regarding Claude for Life Sciences, Anthropic introduced Claude for Healthcare alongside expanded life science capabilities, bringing its Claude AI into regulated medical and scientific use cases. This includes HIPAA-ready infrastructure and connectors to industry data sources (like CMS coverage rules, ICD-10 codes, and NPI registries) to support tasks such as prior authorizations, claims management, and clinical documentation. Claude can also summarize medical histories and explain test results in plain language. On the life sciences side, new integrations with clinical trial, preprint, and bioinformatics platforms aim to accelerate research workflows and regulatory documentation.

Both announcements show the AI industry racing into digital health with different focus areas. OpenAI’s move toward personalized health guidance for individuals complements Anthropic’s broader, enterprise-oriented tools for providers and researchers. Together, they raise exciting possibilities and important questions about regulatory standards, data privacy, and the role of AI in care delivery.

Bonus – GrantFlow – a grant management platform that automates discovery, planning, and application workflows for researchers and institutions.

4 AWS re:Invent announcements to check

AWS re:Invent 2025 took place this week, and as always, dozens of announcements were unveiled. At the macro level, the announcement of Amazon EC2 Trn3 UltraServers for faster, lower-cost generative AI training could make a significant difference in a market that is still heavily skewed toward NVIDIA GPUs. At the micro level, I chose four announcements that I find compelling and relevant for my day-to-day.

AWS Transform custom – AWS Transform enables organizations to automate the modernization of codebases at enterprise scale, including legacy frameworks, outdated runtimes, infrastructure-as-code, and even company-specific code patterns. The custom agent applies transformation rules – defined in documentation, natural-language descriptions, or code samples – consistently across the organization’s repositories.

Technical debt tends to accumulate quietly, damaging developer productivity and satisfaction. Transform custom promises to “crush tech debt” and free developers to focus on innovation instead. For organizations managing many microservices, legacy modules, or long-standing systems, this could dramatically reduce maintenance burden and risk, and increase employee satisfaction and retention over time.

https://aws.amazon.com/blogs/aws/introducing-aws-transform-custom-crush-tech-debt-with-ai-powered-code-modernization

Partly complementary to this, AWS also introduced two frontier agents, in addition to the existing Kiro agent.

AWS Lambda Durable Functions – Durable Functions enable building long-running, stateful, multi-step applications and workflows – directly within the serverless paradigm. Durable Functions support a checkpoint-and-replay model: your code can pause (e.g., wait for external events or timeouts) and resume up to a year later, without incurring idle compute costs during the pause.

Many real-world use cases, such as approval flows, background jobs, human-in-the-loop automation, and cross-service orchestration, require durable state, retries, and waiting. Previously, these often required dedicated infrastructure or complex orchestration logic. Durable Functions enable teams to build more robust and scalable workflows and reduce overhead.

https://aws.amazon.com/blogs/aws/build-multi-step-applications-and-ai-workflows-with-aws-lambda-durable-functions
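To make “checkpoint-and-replay” concrete, here is a framework-agnostic toy sketch of the idea – explicitly not the Lambda Durable Functions SDK. Each completed step is persisted, so a replay after a pause or crash skips finished work and resumes where it left off; the file-based store, step names, and approval flow are all illustrative.

```python
# Toy checkpoint-and-replay: re-running the workflow replays completed steps
# from the checkpoint store instead of executing them again.
import json
import pathlib

CHECKPOINTS = pathlib.Path("checkpoints.json")

def load_state():
    return json.loads(CHECKPOINTS.read_text()) if CHECKPOINTS.exists() else {}

def save_state(state):
    CHECKPOINTS.write_text(json.dumps(state))

def step(state, name, fn):
    """Run fn once; on replay, return the checkpointed result instead of re-running."""
    if name not in state:
        state[name] = fn()
        save_state(state)
    return state[name]

def request_approval(order):
    # Stand-in for "wait for an external event": a durable runtime would suspend here
    # without paying for idle compute, then replay the workflow when the event arrives.
    return True

def approval_workflow():
    state = load_state()
    order = step(state, "create_order", lambda: {"id": 42, "total": 99.0})
    approved = step(state, "manager_approval", lambda: request_approval(order))
    if approved:
        step(state, "charge_customer", lambda: f"charged order {order['id']}")

approval_workflow()
```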

AWS S3 Vectors (General Availability) – Amazon S3 Vectors was announced about 6 months ago and is now generally available. This adds native vector storage and querying capabilities to S3 buckets. That is, you can store embedding/vector data at scale, build vector indexes, and run similarity search via S3, without needing a separate vector database. The vectors can be enriched with metadata and integrated with other AWS services for retrieval-augmented generation (RAG) workflows. I think of it as “Athena” for embeddings.

This makes it much easier and more cost-effective for teams to integrate AI/ML features – even if they don’t want to manage a dedicated vector DB – and reduces the barrier to building AI-ready data backends.

https://aws.amazon.com/blogs/aws/amazon-s3-vectors-now-generally-available-with-increased-scale-and-performance
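As a reminder of what “similarity search over embeddings” actually computes, here is a tiny plain-NumPy sketch. It illustrates the operation a vector store performs, not the S3 Vectors API itself, and the vectors and document keys are made up.

```python
# Conceptual sketch of top-k similarity search over stored embeddings.
import numpy as np

# Assume documents were embedded offline (dimension and values are made up).
index = {
    "doc-a": np.array([0.1, 0.9, 0.0]),
    "doc-b": np.array([0.8, 0.1, 0.1]),
    "doc-c": np.array([0.2, 0.7, 0.1]),
}

def top_k(query_vec: np.ndarray, k: int = 2):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {key: cosine(query_vec, vec) for key, vec in index.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# A RAG pipeline would embed the user query, retrieve the top-k documents,
# and feed them to the model as context.
print(top_k(np.array([0.15, 0.8, 0.05])))
```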


Amazon SageMaker Serverless Customization – Fine-Tuning Models Without Infrastructure – AWS announced a new capability that accelerates model fine-tuning by eliminating the need for infrastructure management. Teams can upload a dataset and select a base model, and SageMaker handles the fine-tuning pipeline, scaling, and optimization automatically – all in a serverless, pay-per-use model. The customized model can then be deployed via Bedrock for serverless inference. This is a game-changer, as serving a customized model was previously very expensive, and it makes fine-tuning accessible to far more teams, especially those without dedicated ML engineers.

https://aws.amazon.com/blogs/aws/new-serverless-customization-in-amazon-sagemaker-ai-accelerates-model-fine-tuning

These are just a handful of the (many) announcements from re:Invent 2025, and they represent a small, opinionated slice of what AWS showcased. Collectively, they highlight a clear trend: Amazon is pushing hard into AI-driven infrastructure and developer automation – while challenging multiple categories of startups in the process.

While Trn3 UltraServers aim to chip away at NVIDIA’s dominance in AI training, the more immediate impact may come from the developer- and workflow-focused releases. Tools like Transform Custom, the new frontier agents, and Durable Functions promise to reduce engineering pain – if they can handle the real, messy complexity of enterprise systems. S3 Vectors and SageMaker Serverless Customization make it far easier to adopt vector search and fine-tuning without adding a new operational burden.

5 interesting things (02/11/2025)

Measuring Engineering Productivity – Measuring engineering productivity has been debated for as long as the field has existed. This post acknowledges the tension in measuring engineering work (where metrics can be easily manipulated) and proposes a pragmatic system of minimal-burden, high-visibility, context-sensitive metrics, rather than focusing on lines of code.

https://justoffbyone.com/posts/measuring-engineering-productivity/

Stop Avoiding Politics – Politics usually has a bad name, but the article argues that avoiding the “politics” of an organization doesn’t remove politics; it just removes your ability to influence outcomes and lets others decide for you. This is a helpful reminder that part of seniority is engaging in stakeholder dynamics, not just writing code.

https://terriblesoftware.org/2025/10/01/stop-avoiding-politics/

Team Dynamics after AI – This post critiques the rush to utilize AI to scale engineering artifact production and argues that what matters most remains the “illegible” human and team elements: context, feedback loops, diversity of roles, and the social glue that holds work together. I link it to the “Measuring engineering productivity” post in the sense that simply measuring throughput or artifacts might miss the hidden “team health” or context dimension.

https://mechanicalsurvival.com/blog/team-dynamics-after-ai/

Useful Engineering Management Artifacts – This is a practical collection of templates for various purposes, including team charters, career development plans, and decision briefs. It complements the productivity and team dynamics posts by providing actual artifacts you can use to operationalize some of the ideas.

https://bjorg.bjornroche.com/management/engineering-management-artifacts/

Stop Caring So Much About Your People – I find this post a bit weird. In the post-“Radical Candor” era, it feels obvious that giving feedback is both essential and meaningful. I agree with the author’s point that leaders sometimes over-prioritize team happiness at the expense of organizational health, and I’d extend that further: over-protecting people from discomfort also hurts their own growth. As leaders, giving feedback is often uncomfortable, but it’s one of the most valuable things we can do – to help our people, our teams, our company, and even ourselves evolve.

https://avivbenyosef.com/stop-caring-so-much-about-your-people/

From Demo Hell to Scale: Two Takes on Building Things That Last

I recently came across two blog posts that made me think, especially in light of a sobering statistic I’ve seen floating around: a recent MIT study reports that 95% of enterprise generative AI pilots fail to deliver real business impact or move beyond demo mode.

One post is a conversation with Werner Vogels, Amazon’s long-time CTO, who shares lessons from decades of building and operating systems at internet scale. The other, from Docker, outlines nine rules for making AI proof-of-concepts that don’t die in demo land.

Despite their different starting points, I was surprised by how much the posts resonated with one another. Here’s a short review of where they align and where they differ.

Where They Agree

  1. Solve real problems, not hype – Both warn against chasing the “cool demo.” Docker calls it “Solve Pain, Not Impress”, while Vogels is blunt: “Don’t build for hype.” This advice sounds obvious, but it’s easy to fall into the trap of chasing novelty. Whether you’re pitching to executives or building at AWS scale, both warn that if you’re not anchored in a real customer pain, the project is already off track.
  2. Build with the end in mind – Neither believes in disposable prototypes. Docker advises designing for production from day zero—add observability, guardrails, and testing, and think about scale early. Vogels echoes this with “What you build, you run”, highlighting that engineers must take ownership of operations, security, and long-term maintainability. Both perspectives converge on the same principle: if you don’t build like it’s going to live in production, it probably never will.
  3. Discipline over speed – Both posts emphasize discipline over blind speed. Docker urges teams to embed cost and risk awareness into PoCs, even tracking unit economics from day one. Vogels stresses that “cost isn’t boring—it’s survival” and frames decision-making around reversibility: move fast when you can reverse course, slow down when you can’t. Different wording, same idea: thoughtful choices early save pain later.

Where They Differ

  1. Scope: the lab vs. the long haul – Docker’s post is tightly focused on how to build POCs in the messy realities of AI prototyping and how to avoid “demo theater” and make something that survives first contact with production. Vogels’ advice is broader, aimed at general engineering, technology leadership, infrastructure, decision-making at scale, and organization-level priorities. Vogels speaks from decades of running Amazon-scale systems, where the horizon is years, not weeks.
  2. Tactics vs. culture – Docker’s advice is concrete and technical: use remocal workflows, benchmark early, add prompt testing to CI/CD. Vogels is less about specific tools and more about culture: engineers owning what they build, organizations learning to move fast on reversible decisions, and leaders setting clarity as a cultural value. Docker tells you what to do. Vogels tells you how to think.
  3. Organizational Context and Scale – Docker speaks to teams fighting to get from zero to one—making PoCs credible beyond the demo stage. Vogels speaks from AWS’s point of view, where the challenge is running infrastructure that millions rely on. Docker’s post is about survival; Vogels is about resilience at scale.

What strikes me about these two perspectives is how perfectly they complement each other. Docker’s advice isn’t really about AI – it’s about escaping demo hell by building prototypes with production DNA from day one. Vogels tackles what happens when you actually succeed: keeping systems reliable when thousands depend on them. They’re describing the same journey from different ends. Set up your prototypes with the right foundations, and you dramatically increase the odds that your product will one day face the kinds of scale and resilience questions Vogels addresses.

AI, Paradigm Shifts, and the Future of Building Companies

Over the past few months, I have constantly come across conversations about how Generative AI will reshape software engineering. On LinkedIn, Twitter, or in closed professional groups, engineers and product leaders debate how tools like Cursor, GitHub Copilot, or automated testing frameworks will impact the way software is built and teams are organized.

But the conversation goes beyond just engineering practices. If we zoom out, AI will not only transform the workflows of software teams but also the structure of companies and even the financial models on which they are built. This kind of change feels familiar – it echoes a deeper historical pattern in how science and technology evolve.

Kuhn’s Cycle of Scientific Revolutions

During my bachelor’s, I read Thomas Kuhn’s The Structure of Scientific Revolutions. Kuhn argued that science does not progress in a linear, step-by-step manner. Instead, it moves through cycles of stability and disruption. The Kuhn Cycle¹, as reframed by later scholars, breaks this process into several stages:

  1. Pre-science – A field without consensus; multiple competing ideas.
  2. Normal Science – A dominant paradigm sets the rules of the game, guiding how problems are solved.
  3. Model Drift – Anomalies accumulate, and cracks in the model appear.
  4. Model Crisis – The old framework fails; confidence collapses.
  5. Model Revolution – New models emerge, challenging the old order.
  6. Paradigm Change – A new model wins acceptance and becomes the new normal.

The Kuhn Cycle Applied to Software Development

Normal Science

For decades, software engineering has operated under a shared set of practices and beliefs:

  • Clean Code & Best Practices – DRY, SOLID, Unit Testing, Peer Reviews.
  • Agile & Scrum – Iterative sprints and ceremonies as the “right” way to build products.
  • DevOps & CI/CD – Automation of builds, deployments, and testing.
  • Organizational Structure – Specialized roles (frontend, backend, QA, DevOps, PM) and a belief that more engineers equals more output.

The underlying assumption is: hire more engineers + refine practices → better, faster software.


Model Drift

Over time, cracks began to show.

  • The talent gap – demand for software far outstrips available developers.
  • Velocity mismatch – Agile rituals can’t keep pace with market demands.
  • Complexity overload – Microservices and massive codebases create systems that are too complex for a single person to comprehend fully.
  • Knowledge silos – onboarding takes months, and institutional knowledge remains fragile.

These anomalies signaled that “hire more engineers and improve processes” was no longer a sustainable model.


Model Crisis

The strain became obvious:

  • Even tech giants with thousands of engineers struggle with code sprawl and coordination overhead.
  • Brooks’ Law bites – adding more people to a project often makes it slower.
  • Business pressure grows – leaders demand faster iteration, lower costs, and higher adaptability than human-only teams can deliver.
  • Early AI tools, such as GitHub Copilot and ChatGPT, reveal something provocative – machines can generate boilerplate, tests, and documentation in seconds – tasks once thought to be unavoidably human.

This is where many organizations sit today – patching the old paradigm with AI, but without a coherent new model.


Model Revolution

A new way of working begins to take shape. Here are some patterns, already visible in the experiments happening all around us –

  • AI-first engineering – using AI agents for scaffolding code, generating tests, or refactoring large systems. Humans act as curators, reviewers, and high-level designers.
  • Smaller, AI-augmented teams
  • New roles and workflows – QA shifts toward system-level validation; PMs focus less on ticket grooming and more on problem framing and prompting.
  • Org structures evolve – less siloing by specialization, more “AI-augmented full-stack builders.”
  • Economics shift – productivity is no longer headcount-driven but iteration-driven. Cost models change when iteration is nearly free.

Paradigm Change

In the coming years, some of the ideas above, and probably additional ones, could stabilize as the “normal science” of software development and organizational building. But we are not there yet. Once we are, today’s experiments will feel as obvious as Agile sprints or pull requests do now.


We are in the midst of model drift tipping into crisis, with glimpses of revolution already underway. Kuhn’s lesson is that revolutions are not just about better tools – they’re about shifts in worldview. For AI, the shift might be that companies will no longer be limited by headcount and manual processes but by their ability to ask the right questions, frame the correct problems, and adapt their models of value creation.

We are moving toward a future where the shape of companies, not just their software stacks, will look radically different, and that’s an exciting era to be a part of.

  1. https://www.thwink.org/sustain/glossary/KuhnCycle.htm ↩︎