Ryan Dahl's recent claim that "the era of humans writing code is over" has sparked a wave of reactions across the software industry. Coming from the creator of Node.js, this is not a throwaway take. It is a signal that something meaningful may be shifting in how software is built and what it means to be a developer in an AI-first world.
The article "The creator of Node.js says the era of writing code is over" explores this broader moment, drawing in perspectives from figures like Andrej Karpathy, Kent Beck, DHH, and Martin Fowler, each wrestling with the same underlying question: Is AI redefining software engineering, or are we overstating its impact?

At IntelliTect, this question is not theoretical. We see AI accelerating development workflows, reshaping team dynamics, and changing what clients expect from custom software. At the same time, we believe in correctness, maintainability, security, and long-term system health. The tension between speed and rigor is real.
To explore this moment, we asked two IntelliTect architects to share their perspectives on where AI fits into the future of coding. Kevin Bost, Senior Software Architect and Microsoft MVP, brings a fundamentals-first skepticism. Clayton Gravatt, Data and AI Architect, takes a more forward-looking view on how quickly AI capabilities are improving and what that could mean for developer productivity.
Rather than a settled conclusion, what follows reflects an active and evolving debate.
Kevin Bost: AI Accelerates Syntax, Not Responsibility
Kevin sees the current narrative around "the end of coding" as exaggerated and potentially misleading, especially for teams responsible for shipping production-grade software.
His view echoes a common theme in the industry: AI can generate code, but it cannot take responsibility for whether that code is correct.
"If you cannot evaluate AI-generated code, you are not a programmer. You are a prompt typist."
In Kevin's experience, AI can speed up implementation, especially in areas where developers are less familiar with a technology stack. He points to infrastructure work, such as Terraform code, where teams can reach functional results far faster than before. But he sees this as an incremental productivity gain rather than a fundamental shift in what engineering work entails.
"The dream that is being sold has not really manifested at the scale people claim. AI can write syntax, but the promise that humans will no longer write real code does not hold up when you are accountable for production systems."
Kevin grounds his estimates and planning in team velocity and observed performance rather than speculative AI-driven multipliers. While he acknowledges that AI has improved throughput in certain scenarios, he remains skeptical of claims that it fundamentally replaces engineering labor.
"I will always stick to estimates based on real team velocity. AI helps, but it does not eliminate architectural complexity, testing requirements, debugging effort, or long-term maintenance."
His deeper concern is about erosion of understanding. If developers rely too heavily on generated code without maintaining the ability to reason through it, they risk losing the very skills required to validate correctness, detect subtle defects, and make sound architectural decisions.
"If you cannot understand what the model produced, you lose control of quality. At that point, you are no longer engineering. You are delegating responsibility without oversight."
Clayton Gravatt: AI Is Already Changing the Competitive Landscape
Clayton largely agrees that AI cannot yet replace human judgment in serious software engineering. He also believes that every line of production code still requires careful review and understanding.
"Today, for production-grade software, humans still need to review essentially every line of AI-generated code. The models make enough mistakes that blind trust is not an option."
Where Clayton differs is in how he interprets the trajectory. He sees AI as an increasingly powerful multiplier for skilled developers, and he believes the pace of improvement is meaningful.
"In most cases, a technically proficient person is already faster writing code with AI than without it. The size of that speedup varies, but over time, people who use AI will outcompete those who do not."
He emphasizes that effective AI use still requires strong system-level understanding. Prompting well is not about replacing engineering skill, but about applying it at a higher level of abstraction.
"You still need to understand how a system should be designed in order to prompt AI effectively. The AI does not remove the need for expertise. It amplifies it."
Clayton has also observed tangible improvements in model capability over short timeframes. Tasks that once required deep language or framework expertise can now be completed more quickly, even when working outside a developer's primary domain.
"Today, I can build a small game in a language I barely know, and it mostly works. That was not true six months ago. It is not hard to imagine that in another year, medium-sized applications will mostly work out of the box."
He points to real examples where AI dramatically shifts the economics of building software. A small homeless-shelter-finder app that might once have taken weeks to build was created in days using AI-assisted development. That kind of speed makes certain projects viable that would previously have been too expensive to justify.
"We are not seeing ten-times speedups on large systems yet. But if model capabilities continue improving, that multiplier could move upstream into medium and larger applications."
Clayton also raises a more philosophical question about fallibility.
"Even if AI always has the potential to make mistakes, so do humans. What happens if AI is wrong less often than I am?"
Where the Debate Converges
Despite their differences, Kevin and Clayton share a meaningful point of alignment: AI does not remove the need for deep technical understanding. If anything, it increases the importance of being able to read, evaluate, test, and reason about code.
The industry voices cited in the original article reinforce this same conclusion. Whether optimistic or skeptical, experienced engineers consistently return to the idea that fundamentals still matter. The inability to debug what you do not understand remains a limiting factor. AI can generate code confidently, but it cannot reliably judge whether that code is correct in context.
From an IntelliTect perspective, the shift is less about whether humans write code and more about where human judgment is applied. Some effort moves away from manually producing syntax. More effort moves toward defining intent, validating outcomes, designing architectures, enforcing quality, and ensuring systems behave correctly under real-world constraints.
The act of typing code may become less central over time. The responsibility for building trustworthy, scalable, secure software does not.
The Conversation Is Far From Over
Whether Ryan Dahl's claim proves prophetic or overstated, one thing is clear: software engineering is changing, and the profession is in the middle of renegotiating its identity.
Some developers will lean into AI as a force multiplier. Others will defend the primacy of hands-on coding and fundamentals. Most teams will land somewhere in between, combining acceleration with discipline.
At IntelliTect, we see this less as the end of coding and more as an expansion of what it means to build software well. The tools are evolving. The standards of responsibility remain.
If the era of typing every line of code is fading, the era of human judgment, technical rigor, and system-level thinking is only becoming more important.

Turning AI hype into practical software outcomes
If you are sorting through bold claims about AI and software development, IntelliTect can help you separate experimentation from production-ready strategy and build systems you can trust.
