Vibe Coding Isn't New

“The Wheel of Time turns, and Ages come and pass, leaving memories that become legend. Legend fades to myth, and even myth is long forgotten when the Age that gave it birth comes again.”

About six months ago, Andrej Karpathy (may the machines always bless him) coined the term vibe coding in a now-viral tweet. Much has been said about the term and the widespread phenomenon that now pervades much of the online world: from pearl-clutching to nihilistic celebrations of software development breaking free from the iron grip of those pesky computer scientists, who look down upon ordinary non-programmer mortals from their ivory towers (found high up in The Cloud, of course).

There is some nuance to the definition that needs clarifying first - vibe coding as done by software engineers is remarkably different from vibe coding done by everyone else. For people who already know how to code, vibe coding is just laziness - trusting rather than verifying the code the AI generates, and not bothering to introspect, review, and understand it, especially the consequences of the approach it takes and the potential issues with a given implementation. AI assistance, on the other hand, is a powerful tool for augmenting software development capabilities, much like a debugger or an Integrated Development Environment (IDE), and it would be foolish not to utilize every tool available to optimize a development workflow. For everyone else, vibe coding is a not-so-unique opportunity to partake in what was previously a problem-solving methodology limited to those with arcane and esoteric knowledge of programming languages and computer science.

This is not a unique opportunity. When Graphical User Interfaces (GUIs) came about, they expanded the user base of computers to include those unwilling to tinker with ominous black-screen, text-only terminals. There was condescension among certain groups - the UNIX community, for example - that looked down upon the throngs of nonprofessionals now able to blindly frolic their way around computers using the very poorly written Windows GUI. There was no need to spend days getting XFree86 to work, no magical incantations required to get a sound card up and running. Sure, people didn’t need to know what config.sys was, or how it differed from autoexec.bat - and when stuff broke, they asked for help and figured it out.

The point is, with hindsight, it would now be an odd thing to say GUIs were a bad thing, or that the ability for more people to access compute through friendlier UIs was detrimental to society. Given the prevalence of the subsequent touch interfaces, like those on a phone or a tablet, no one much cares that the majority of people don’t understand the inner workings of these machines in great detail - just as most people don’t care much about the inner workings of cars in order to drive where they want to go. AI now provides a tremendously powerful tool that enables a new audience to solve problems. While there may be initial concerns about poorly written vibe-coded applications flooding the world, minor corrections to the AI tooling itself will enable these problems to be fixed at scale - so the solution to vibe-coding issues will be more vibe-coding.

So what happens to traditional programming, and to programmers, then?

Programming in the Age of AI

If your job consists of writing glorified CRUD wrappers, you’re in some trouble; I’d guess about 75% of all Big Tech workers fall into this category. AI can replace you as fast as the bureaucracy your company operates through can move. Luckily for you, in most large organizations of over a thousand people this is pretty slow, and the company probably hasn’t innovated anything meaningful in a decade or so anyway, so there are no real consequences to the bottom line - apart from making EBITDA a viable financial metric again, since AI bills don’t get paid with precious company stock.

For real programmers, traditional pre-AI programming involved understanding the problem space, visualizing a high-level solution, and then iterating over specific components while keeping in mind each component’s interaction with the whole system. Most mental energy and focus were spent mentally simulating data transformations over time through various combinations of user and system interactions.

AI-augmented programming shifts almost all of the focus towards populating the LLM’s context window with the right set of information for it to “compute” the right diff for the codebase, solving the given problem.
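As a rough sketch of what “populating the context window” means in practice, consider packing a task description plus relevant source files into a single prompt under a token budget. Every name here (the token heuristic, the function names, the budget) is illustrative, not any particular tool’s API:

```python
# Hypothetical sketch: assemble a context window by budgeting relevant
# files against a token limit. Names and heuristics are illustrative.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English/code.
    return len(text) // 4

def build_context(task: str, files: dict[str, str], budget: int = 8000) -> str:
    """Pack the task description plus as many relevant files as fit."""
    parts = [f"## Task\n{task}"]
    used = estimate_tokens(parts[0])
    for path, source in files.items():
        cost = estimate_tokens(source)
        if used + cost > budget:
            break  # out of room; a real tool might summarize instead
        parts.append(f"## File: {path}\n{source}")
        used += cost
    return "\n\n".join(parts)

prompt = build_context(
    "Fix the off-by-one error in pagination.",
    {"app/paginate.py": "def page(items, n):\n    return items[:n]\n"},
)
```

Real tools layer retrieval, summarization, and relevance ranking on top of this, but the core job is the same: decide what the model gets to see.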

Next, we want the AI to be able to check its own work. Setting up a test harness - either via unit tests or something like a Puppeteer MCP server - and allowing the AI to iterate through a change/test cycle by itself helps it (and us) reliably converge on a solution.
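The change/test cycle itself is a small loop. This is a minimal sketch, assuming the model call, the patch applier, and the test runner are all pluggable functions (none of these names come from a real tool):

```python
# Hypothetical change/test loop: ask the model for a patch, apply it,
# run the tests, and feed failures back until tests pass or we give up.
import subprocess

def run_pytest() -> tuple[bool, str]:
    """One possible test runner: shell out to pytest (assumed installed)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def converge(ask_model, apply_patch, run_tests, max_rounds: int = 5) -> bool:
    """Iterate: generate a patch, apply it, test; stop when green."""
    feedback = ""
    for _ in range(max_rounds):
        patch = ask_model(feedback)   # LLM call (injected, stubbed in tests)
        apply_patch(patch)            # write the diff to disk
        passed, output = run_tests()
        if passed:
            return True
        feedback = output             # failures go back into the context
    return False
```

The `max_rounds` cap matters: without it, a model that keeps producing near-misses will loop forever and burn tokens.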

Lastly, we inevitably run out of context window space. The challenge then becomes how to combine traditional Turing-complete programming - data structures in RAM, bookkeeping on disk - with the construction of context windows and invocations of LLMs.
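One common shape for that interleaving is a map-reduce over chunks: when the input exceeds one context window, summarize each chunk separately, persist the intermediate results to disk so the run can resume, then combine the summaries in a final call. A sketch, where `call_llm` is a stand-in for any model API:

```python
# Hypothetical map-reduce over an oversized input, with on-disk
# bookkeeping so a crashed run can resume. `call_llm` is a stand-in.
import json
from pathlib import Path

def chunk(text: str, size: int) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_large(text: str, call_llm, cache: Path, size: int = 4000) -> str:
    """Map: summarize each chunk (cached on disk). Reduce: combine."""
    summaries = []
    for i, piece in enumerate(chunk(text, size)):
        slot = cache / f"summary_{i}.json"
        if slot.exists():                        # resume from disk
            summaries.append(json.loads(slot.read_text()))
        else:
            s = call_llm(f"Summarize:\n{piece}")
            slot.write_text(json.dumps(s))
            summaries.append(s)
    return call_llm("Combine these summaries:\n" + "\n".join(summaries))
```

The program, not the model, owns the control flow: chunking, caching, and retries are ordinary code, and the LLM is invoked only where judgment is needed.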

The overall effect is that difficult problems become easier to solve. At the same time, it becomes easier to produce complex, unmaintainable code. It remains - as always - the developer’s responsibility to manage the complexity of a solution and ensure it does not exceed the complexity of the problem.

That’s a role that’s not in danger of going out of fashion anytime soon.