
In the Age of AI, What's Changing?

AI · Software Engineering · Industry · Opinion

AI changes how we interface with our technology. And that's about it.

Every headline: "AI will replace developers." "AI will transform everything." "The end of coding as we know it." The discourse is apocalyptic or utopian. Never measured.

Here's the measured version: AI is a new interface layer between humans and computers. That's genuinely significant. It's also genuinely just an interface change. The problems are the same. The need for judgment is the same. The craft is the same.

If that sounds dismissive, it isn't. Interface changes matter enormously. But they matter in a specific way that history has shown us before — repeatedly, predictably, and always accompanied by the same panic.


What's Actually Changing: The Interface Layer

The way we communicate intent to computers is shifting. Instead of writing every line of syntax, we can describe what we want and collaborate on the implementation. We can review generated code, direct an AI through architectural decisions, and iterate on output in natural language. This is new. It changes the workflow of software development in real and meaningful ways.

But it's an interface change, not a capability change. The computer doesn't do anything fundamentally different. The problems it solves haven't changed. The constraints — performance, security, maintainability, user experience — haven't changed. What changed is the layer between your intent and the machine's execution.

Jef Raskin, who initiated the Macintosh project at Apple, put this precisely in The Humane Interface (2000): "The system's capabilities remain unchanged; only the interface between the human and the machine changes." He was writing about the GUI revolution, but the observation applies to every interface abstraction that has followed.

The mouse didn't make computers fundamentally different. It made them accessible in a new way. The command line didn't die when GUIs arrived; computing simply gained a broader base of users who could now interact graphically as well. The interface layer expanded. It didn't replace.

AI is the same pattern. You still need to know what you want to build, why, and what "good" looks like. You have a new way to express it.


What's NOT Changing: The Hard Parts

Understanding the problem. AI can help you build what you've decided on. It cannot tell you what to build. The hardest part of engineering has always been figuring out what the actual problem is — talking to users, understanding constraints, scoping requirements, identifying the thing that matters versus the thing that's loudest. That's still entirely human.

Making tradeoffs. Every technical decision is a tradeoff. Performance versus readability. Ship now versus ship right. Build versus buy. AI can present options, but the judgment call requires context it doesn't have — your users, your budget, your team, your timeline. Judgment hasn't been automated.

Taste and craft. The difference between code that works and code that's good. Between a product that functions and a product that feels right. AI can generate competent output. Competent isn't the bar. The gap between "it runs" and "it's well-made" is still filled by human judgment, experience, and care.

Accountability. When the system goes down at 2am, AI isn't getting paged. When the architecture decision from six months ago turns out to be wrong, AI isn't explaining it to stakeholders. Ownership hasn't changed. Responsibility hasn't changed. The person who directed the work is still accountable for the result.

Communication. Explaining technical decisions to non-technical people. Translating business requirements into engineering scope. Managing expectations across a team. AI doesn't sit in those meetings.


The Panic Is Familiar

Every major interface abstraction has triggered the same fear. The historical record is consistent enough to be predictable.

1950s: "Compilers will replace programmers." When Grace Hopper developed one of the first compilers in 1952, she met intense resistance. Programmers who wrote assembly code believed automatic programming would make their skills worthless. In her words: "I had a running compiler and nobody would touch it." John Backus, creator of FORTRAN, faced the same pushback — his team at IBM had to prove the compiler's output was nearly as efficient as hand-written assembly before programmers would accept it. (Backus later recounted this resistance in interviews and in his retrospectives on FORTRAN's early history.)

What actually happened: high-level languages created demand for a vastly larger class of programmers. The number of people writing software exploded.

1990s: "Visual Basic will replace real development." The rise of RAD (Rapid Application Development) tools — particularly Visual Basic — triggered a different flavor of the same fear. "Drag-and-drop programmers" were dismissed as "not real developers" across Usenet forums and in publications like Dr. Dobb's Journal. Microsoft's Visual Studio team created the "Mort" persona — the opportunistic developer using visual tools — which leaked publicly and became a cultural lightning rod for the tension between accessibility and craft.

What actually happened: VB became one of the most widely used languages in history, democratized Windows development, and the demand for developers only grew.

2010s: "No-code will replace developers." Gartner predicted in 2019 that by 2024, 65% of application development would happen via low-code and no-code platforms. TechCrunch, Wired, and Harvard Business Review ran years of coverage framing no-code as a fundamental shift in who builds software.

What actually happened: no-code tools thrived for prototyping and simple applications. The U.S. Bureau of Labor Statistics projected software developer employment growing 25% from 2022 to 2032 — far above the average for all occupations. The number of software developers in the U.S. has grown from roughly 500,000 in 1990 to over 1.8 million by 2023.

The pattern is the same every time. Fred Brooks identified why in "No Silver Bullet" (1986): the hard part of software is the essential complexity (understanding the problem), not the accidental complexity (expressing it in code). Each new abstraction removes accidental complexity. The essential work remains, and it grows, because every time building software gets easier, we build more software.

Marc Andreessen called this "software eating the world" in his 2011 Wall Street Journal essay. As software becomes easier to build, its domain of application expands. More software means more need for people who can think clearly about what to build and why. The interface changes. The demand for judgment increases.


What Actually Matters Now

If AI handles more of the syntax, the premium shifts to the people who know what to build and why.

Judgment over syntax. Domain knowledge, user empathy, architectural thinking — these go up in value when routine implementation is automated. The developer who can explain why this approach and not that one becomes more valuable, not less.

Systems thinking. AI is good at local tasks. It's not good at seeing the whole system — dependencies, failure modes, second-order effects, the interaction between subsystems. The ability to think in systems becomes more valuable when the routine work is handled.

Learning to collaborate with AI. This is a genuine new skill. Not "prompt engineering" as a buzzword, but the real discipline of structuring context, reviewing output, maintaining quality standards, and knowing when to trust and when to verify. It's closer to management than to coding — you're directing work, not just writing code.
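To make "reviewing output" and "knowing when to verify" concrete, here is a minimal sketch of one way to structure that loop. Everything in it is illustrative: the generate callable stands in for whatever model call you use, and the test command is just your project's existing suite. The shape is the point: the generated change is accepted only after checks you control pass.

```python
import subprocess
from pathlib import Path
from typing import Callable

def accept_if_verified(
    generate: Callable[[str], str],   # stand-in for any model call: prompt in, code out
    prompt: str,
    target: Path,
    test_command: list[str],          # e.g. ["pytest", "-q"], whatever your project already runs
) -> bool:
    """Apply a generated change only if the project's own tests still pass."""
    original = target.read_text() if target.exists() else ""
    candidate = generate(prompt)
    target.write_text(candidate)
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode != 0:
        target.write_text(original)   # reject and roll back; the bar is yours, not the model's
        return False
    return True
```

The verification step is deliberately boring: the same suite you would run on a colleague's pull request, which is exactly the stance this kind of collaboration requires.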

Building with AI, not just using it. There's a difference between asking AI to write a function and building a methodology, plugin system, and workflow that makes AI a structural part of how you work. The former is consumption. The latter is engineering. Both are valid. Only one creates durable value.
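As a toy illustration of that difference, here is a sketch of a workflow where a model-backed step is registered like any other plugin and passes through the same gates. The names and steps are invented for this example; the structural point is that the AI sits inside a pipeline you own, subject to the same interface and the same checks as everything else.

```python
from typing import Callable

STEPS: dict[str, Callable[[str], str]] = {}

def step(name: str):
    """Register a named workflow step; AI-backed or not, the pipeline treats it the same."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        STEPS[name] = fn
        return fn
    return register

@step("draft")
def draft_with_model(spec: str) -> str:
    # Placeholder for a real model call; swap in your provider's client here.
    return f"# draft generated from spec: {spec}\n"

@step("review")
def review(code: str) -> str:
    # A human-authored quality gate that every draft passes through, generated or not.
    if not code.strip():
        raise ValueError("empty output is not a deliverable")
    return code

def run(spec: str, order: list[str]) -> str:
    artifact = spec
    for name in order:
        artifact = STEPS[name](artifact)
    return artifact

print(run("summarise last month's error budget", ["draft", "review"]))
```

Pasting a draft into your editor is consumption. Making the draft one named step in a pipeline you own, with your gates around it, is the engineering.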


The Honest Assessment

AI does make some work faster. That's real and shouldn't be dismissed. Boilerplate generation, routine refactoring, documentation, research synthesis — these are measurably accelerated.

AI does make some roles less necessary. If a job consisted entirely of producing boilerplate, writing routine CRUD endpoints, or generating standard documentation, that job was already being automated by scripts, templates, and tooling. AI accelerates an existing trend. It doesn't create a new one.

The new jobs and roles will be things we can't fully predict — the same way "web developer" wasn't a job before the web existed, and "mobile developer" wasn't a job before smartphones.

The people who will struggle are the ones who defined themselves by their tool instead of their judgment. If "I write Python" is your identity, AI is threatening. If "I solve problems and Python is one of my tools" is your identity, AI is just another tool in the kit.


The age of AI is real. The changes are real. But they're interface changes, not paradigm changes. The problems are the same. The need for judgment is the same. The value of clear thinking, good taste, and genuine expertise is the same — maybe higher than ever, now that the routine work is being handled.

Stop asking "will AI replace me?" Start asking "am I someone who can only do the parts AI handles, or am I someone who does the parts it can't?"

The tool changed. The craft didn't.