On Vibe Coding
- Ethan Smith
- Mar 28
- 9 min read

Introduction
Vibe coding may be one of the best and worst things 2025 has had to offer. The memes about Claude thinking for 2 hours just to max out your credit card on AWS are almost real. If you fully embrace it and let go, you very well might end up with a dumpster fire of a codebase. It's like letting an unsupervised, eager intern take the steering wheel for a little too long. But it also has the potential to get a lot of work done quickly, and the agentic mode in Cursor is pretty great for small-to-medium-sized projects. As with all things in AI, if we consider not where it is right now, but instead observe the trend from where it was a month ago to where it is now, the near future gets properly interesting.
The term "vibe coding" was coined by Andrej Karpathy in this tweet.

It's no longer just using AI as a copilot or an auto-complete. The code itself is effectively gone. Before you is a conversational chatbot and the outputs of what it's just done, and from that you can keep coaching it to guide its outputs to what you want.
It's equal parts incredible and disastrous.
levelsio has been developing a dogfighting multiplayer game in public, and many others have now created games that are genuinely playable.

On the other side, there are horror stories of shipped products with unexpected backdoors and security risks, the result of sloppy AI-generated code that the developers have no idea how to handle.
Despite not being much of an avid practitioner myself, I, for one, am for this age of vibe coding.
Why?
Because that's where things inevitably have to go. Technology doesn't move backwards.
Demanding Speed
We're reaching a critical mass. The speed of software development is one of the biggest bottlenecks to making things happen. We have every reason to want it to go faster. If you've ever worked at a tech company with a massive monorepo, you know the reality of this. As things get larger, they slow down. Codebases become swamps to traverse, ones everyone would love to refactor one day, but it feels insurmountable to even know where to start.
To some degree, larger things moving relatively slower feels like a fact written into our universe. And maybe it is. Even with this plight, we can discover alternative, faster methods of software development, with scaling laws that are less vulnerable to the slowdowns of size. Moving from a bare terminal to VSCode, git, and other tooling surely helped us wrangle larger software with less catastrophe. We demand the next stage on that roadmap.
Imagine if products like Instagram could be replicated in a day's time. Even more importantly, imagine if all the foundation-level software, CUDA, new coding languages, many of the packages that operating systems rely on, could be developed far more quickly.
One place of innovation with the capacity to profoundly leap technological progress forward is hardware. We know our machines are not running optimally, and there are now many startups proposing alternatives. However, one of the most daunting things about creating novel hardware that deviates from the computer we are familiar with today is how much work needs to be done to have it support all the applications we have now, ideally bug-free.

We've learned a lot and improved how we develop, so it's not 1:1, but it's not entirely dissimilar from the long trek from the origins of computers to where they are now. One way or another, we need to go from a small proof of concept to something that can operate at the nanometer scale and be interfaced with in "subtle" ways, like flipping bit switches through channeled electricity, as opposed to rewiring a circuit by hand.
So the demand is there. Faster software development, done right, has the potential to vastly accelerate our output across all technological niches.
AI coding presently feels like the strongest candidate for substantially accelerating software development. It really comes down to intelligently designing systems, which is handled by either natural or artificial intelligence. Digital artificial intelligence (presently) has the upper hand here:
- It is extensively and quickly improvable.
- It operates at the bare metal.
- While humans are bottlenecked by time spent on thought, as well as converting intention into keystrokes, AI can just emit code directly, straight off the dome.
However, there are also a ton of reasons not to be comfortable handing off the code that runs our lives to the sometimes dimwitted bots.
The CrowdStrike glitch that broke airports across the globe was a rude awakening. Or that time a malicious update to the open source xz-utils package would have introduced a backdoor on every Linux machine that updated to it. These were both pretty unnerving events resulting from pretty small acts. I knew in my head software was fragile, but these moments revealed how much society is really built on a house of cards.
I'm not scared of AI much. I'm scared of bad actors, and of software that is simultaneously crucial to daily life and so much of a swamp that it nears incomprehensibility. By proxy, I get a bit uneasy at the thought of AI accelerating our software while also further obscuring it, risking a point where no one knows how to debug it.
I genuinely feel this dilemma carries a higher risk of catastrophe coming to fruition than fears about rogue sentient AI do. A nightmare scenario: we keep putting more dependence on technology, in our military, our government, our communication, and beyond, without a contingency plan, and then get hit with a glitch or something like an EMP from a solar flare. We'd be in anarchy, effectively sent back to the stone age for a while, until we could pull it together and recover.
So how do we proceed? Eventually we need to surpass the software development bottleneck. There's no way out but through.
Given the original premise, that faster and better software development ultimately demands agents capable of intelligent design, and that we'd like to reduce the barrier between the agent and the code itself, I believe the answer lies in either upgrading human capabilities and our means of interfacing with code, or in AI writing the code. Or a blend of both, which is what present developments feel like.
Current Affairs
Presently, coding languages are structured to be unambiguous and deterministic. Unlike English, there is no debating the meaning of a given line of code, between you and me, but also with the computer. Programs are clear, like "add 2 + 2" or "flip this bit," as opposed to "move the header of this site somewhere else." The computer executes the instruction exactly, and this is generally consistent across all other computers as well.
These two properties have persisted even as we have evolved from assembly to higher-level languages like Python. The key differences lie in abstractions that can improve code length, usability, and readability by hiding away complexity.
Abstraction, I would argue, is key to improving software development (and a key tool to humans in general, written here under "Inverse Gestalt"). Instead of writing a script that writes out all the accesses to the pixels on your screen to raise the red LED to maximum intensity while turning off blue and green, it would be nice to have something short that is read as "turn the screen red" and let the compiler figure out how to make it fast. We'd like to be able to do more with less.
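A minimal sketch of that idea in Python, with a toy in-memory framebuffer standing in for real display memory (the names here are illustrative, not a real graphics API):

```python
# Toy illustration of abstraction: the same "turn the screen red" intent
# expressed at two levels. The framebuffer is a hypothetical stand-in
# for real display memory.

WIDTH, HEIGHT = 4, 3
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

# Low level: spell out every pixel access by hand.
def set_pixel(x, y, r, g, b):
    framebuffer[y][x] = (r, g, b)

# High level: one short call that hides the loop; a compiler or library
# is free to implement it however is fastest.
def fill_screen(color):
    rgb = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}[color]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            set_pixel(x, y, *rgb)

fill_screen("red")
```

The caller only states the intent; how the pixels actually get written is the abstraction's problem.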
The need to preserve unambiguity and determinism, however, is a heavy constraint in designing languages for computers. While the two are preserved, abstracting further in hopes of reducing coding workload can cost substantial flexibility; you end up with something like Scratch, an unlikely choice for frontier AI research.

Ultimately, it's hard to imagine something that can bring order-of-magnitude gains in the speed at which code is developed via our typical routes. High-level libraries like PyTorch and matplotlib are wildly beneficial, but the appetite for greater speed is still very much there.
As with many things in information theory, we can compress further if we don't mind a little loss of precision instead of a 1:1 reconstruction. We could imagine our vaguer, incomplete instructions being filled in and turned into complete code. Think about how it feels to dictate to a coworker, line by line, exactly what code to write, as opposed to giving a general direction and leaving the actual result up to their interpretation.
Channeling Intention into Action, Guided by Abstractions
And this is the broad framework of "vibe coding." Hence the name: instead of conveying exact programs, we convey compressed summaries, or vibes, of what we want to happen. In the specific case of LLM coding, it's explaining what you want, at your chosen level of detail, and letting the AI "upsample" that into fully functional code.
Coding languages, as they stand, are one level of abstraction before being converted into machine code. The AI interface adds another level, allowing us to interface in plain English, though this time some of it is left up to the stochastic interpretation of the AI.
To me, turning our back on the once necessary constraints feels like the only way forward for further gains. Stochasticity, the acceptance of a smidge of randomness, is freeing.
The story of human development centers around channeling intention into action. The onset of intelligence was the realization that your environment could work for you, beyond what a single body could produce. We turned the earth around us into tools to address our corporeal shortcomings. We organize ourselves around common efforts to augment output. At some point we realized the power of the computer, and decided the future was figuring out how to make it do our bidding.

Programming is the language of intention made readable for computers. It's just the current meta, the best medium of our time for converting intention and some effort into maximal output. It's one means of communicating intention, but far from the last.
By this mindset, if a new interface can further shorten the gap between intention and execution, I will happily take it.
There are many ways this could manifest, each varying in the interface we have and how much is "guessed" or filled in by AI.
As it stands, code compilers can already be considered as "guessing" at what we intend in higher-level languages, albeit deterministically. Our code may thoroughly explain to the computer what to execute, but the compiler figures out the last stretch of how to execute it quickly.
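CPython offers a small, concrete instance of this deterministic filling-in: its compiler folds the expression `2 + 2` into the constant `4` before the code ever runs, choosing the execution strategy on our behalf.

```python
import dis

# CPython's compiler folds constant expressions at compile time:
# the compiled bytecode carries the constant 4 and performs no
# addition when it runs.
code = compile("2 + 2", "<example>", "eval")
print(code.co_consts)  # the folded constant 4 lives here
dis.dis(code)          # disassembly shows no addition instruction
```

The same source produces the same bytecode every time; the "guess" is really a fixed, deterministic translation.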
Moving towards the heavy-guessing, stochastic side of the spectrum, I could imagine a coding interface that lets you draw diagrams of neural networks and have an AI write the code for them. Neural networks these days have pretty common parts, though inevitably there will be things we want to do that can't be described as high-level blocks, so we likely can't depend on an infinitely growing library of abstracted building blocks to fit every configuration. Instead, an AI could guess at the intention in real time, fill out much of the needed logic, and also write the code in an optimized fashion based on the full picture of the build.
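As a toy sketch of the deterministic half of that idea (everything here is hypothetical: the block names, the spec format, and the PyTorch-flavored output it emits), a diagram-to-code "upsampler" for the known building blocks might look like this, with an AI left to fill in any block the library doesn't recognize:

```python
# Hypothetical sketch: turn a diagram-like spec into model source code.
# Only a few common blocks are templated; a real system would hand
# anything unrecognized to an AI to fill in.

BLOCKS = {
    "linear": lambda args: f"nn.Linear({args[0]}, {args[1]})",
    "relu":   lambda args: "nn.ReLU()",
    "conv":   lambda args: f"nn.Conv2d({args[0]}, {args[1]}, kernel_size={args[2]})",
}

def upsample(spec):
    """spec: list of (block_name, args) tuples, as if read off a diagram."""
    layers = [BLOCKS[name](args) for name, args in spec]
    body = ",\n    ".join(layers)
    return f"model = nn.Sequential(\n    {body}\n)"

print(upsample([("linear", (784, 128)), ("relu", ()), ("linear", (128, 10))]))
```

The lookup table is exactly the "infinitely growing library" the paragraph argues against; the interesting part is everything that falls outside it.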
I spoke with a friend some time ago about what a good coding experience should feel like. I said that it would be great to feel like Iron Man sweeping his hands around holograms. It is a shame that systems, neural networks, and other abstract structures must be converted into words for the keyboard middleman. I demanded something more visual, something where intention can be expressed spatially. Unironically, something like this. I wanted the route of reducing the bottleneck of communicating intention, while realistically still working symbolically with some AI guessing at my intentions.

My friend shook his head. He said coding should feel like going on a walk while on a call with a really smart software engineer who knows how to implement what you want. He, I think, wanted the route of having the AI take the wheel. That was a level of trust in a machine I'm not ready for. I still have an instinct to be in the driver's seat. But in the long run, I think he's right. Without some kind of superhuman augmentation, AI will eventually be working at a speed and capability such that my involvement would only slow it down.
If we can buy that this is the path forward, the remaining question is:
How do we get AI to a place where we can confidently let it develop for us, AND know for sure that it is ready for this?
I can't say I know for sure, and given the crisis of evaluations on AI models, I question whether anyone has an answer. Though in the near term, I think the baby steps we are taking right now are the right way through. Copiloted coding is now a quintessential part of the coding experience for many. Vibe-coded projects are much more than what they were months ago, and even if they're not production-ready, they are delivering value. We, at the very least, appear to be moving smoothly along to the future.