Vibe coding to Chomsky: will linguistics pop off in a world full of prompting?
Like many developers, I’ve spent most of the past year vibe coding. Across the many tools I’ve tried, my workflow looks something like this: I prompt AI agents in natural language and hope they make the edits I want. When it works, it feels like sorcery. When it fails, I’m left spelunking through probabilistic misfires, trying to reverse-engineer why the model chose a broken string-replace instead of the obvious refactor.
Those moments of chaos keep sending me back to an old question: How formal should a language be if we want machines (or other humans) to get things right?
Chomsky’s detour through programming languages
In 1956 Noam Chomsky set out to describe human syntax with mathematical rigor. Traditional linguistics was rich in intuition but poor in proof, so he looked to the one domain that already treated language as strict, machine-checkable rules: early compiler research.
This was practical for a few reasons:
- Linguistics lacked explicit grammars: descriptive grammars of the day were informal and inconsistent. Chomsky wanted a system that could generate every grammatical sentence—and only those sentences—so his theories could be tested, not just debated.
- Formal-language pioneers had solved the “explicit rules” problem: early work by logicians and compiler pioneers (Kleene, Post, Backus) had formalized notation for regular and context-free grammars and shown how different automata parse them. That gave Chomsky ready-made models (finite automata, push-down automata, Turing machines) to borrow.
- The hierarchy exposed hard limits: by mapping English onto those grammar classes, he showed that finite-state models (good enough for simple programming languages) cannot handle natural-language phenomena such as nested relative clauses. At minimum, human language needs context-free power.
- Syntax could be isolated from meaning: formal grammars let him study “competence” (structural knowledge) independently of “performance” (actual usage), a key move in generative linguistics.
The result was the four-tier Chomsky hierarchy (regular, context-free, context-sensitive, recursively enumerable): a ladder that became a cornerstone of compiler theory and, paradoxically, a new lens on human language.
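That hard limit is easy to demonstrate in code. A language like “n opening brackets, then n closing brackets” (a stand-in for nested relative clauses) defeats any finite-state recognizer, because a finite automaton has no memory to count the nesting depth; a recognizer with a counter handles it in a few lines. A minimal Python sketch (the toy language and function names here are illustrative, not from the post):

```python
import re

# A finite-state pattern can only approximate nesting: it cannot
# require the opening and closing brackets to balance.
fixed_pattern = re.compile(r"^\(*x\)*$")  # happily accepts "((x)" -- unbalanced!

def balanced(s: str) -> bool:
    """Recognize the context-free language '('*n + 'x' + ')'*n.

    The counter is the extra memory (a degenerate stack) that pushes
    this recognizer beyond finite-state power.
    """
    i, n, depth = 0, len(s), 0
    while i < n and s[i] == "(":
        depth += 1
        i += 1
    if i >= n or s[i] != "x":
        return False
    i += 1
    closes = 0
    while i < n and s[i] == ")":
        closes += 1
        i += 1
    return i == n and closes == depth

print(balanced("((x))"))                 # True: properly nested
print(balanced("((x)"))                  # False: unbalanced
print(bool(fixed_pattern.match("((x)"))) # True: the finite-state approximation fails
```

The same counting argument is what rules out finite-state models for center-embedded clauses in natural language.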
Chomsky didn’t turn to computing out of love for compilers; he did it because their precise, mechanizable grammars gave him the only formal tools capable of capturing natural language without hand-waving.
Vibe coding with looser rules
Fast-forward to today. We’ve swung to the opposite extreme in how we interact with machines. Natural-language prompts are fast and expressive, but ultimately imprecise. LLMs are powerful, but the context we supply them, and the precision of that context, matters deeply. Because LLMs flatten code to tokens and make probabilistic guesses, they offer a less repeatable flow than traditional programming: vibe coding is faster, but it significantly expands the guesswork and the debugging surface area.
This is fascinating to me because we’re bumping against the limits of a too-loose system and craving clearer boundaries. Just as Chomsky borrowed programming formalisms to pin down syntax, we may need a new layer of structure—call graphs, type info, symbol tables—to keep our AI coding agents honest. This is why I am building Nuanced.
The case for a linguistics renaissance?
If natural-language interfaces continue to eat software (and they will), I wonder what role the study of linguistics will have, and how it will interact with intermediate representations. This got me thinking about a few questions:
- Formal semantics for prompts: do we want “center this div” to mean the same thing across multiple prompts, across multiple repos?
- Context models: graphs that ground a sentence in the relevant slice of code or data (shameless plug for Nuanced).
- Hybrid languages: the case for intermediate representations halfway between English and a DSL, readable yet unambiguous.
That sounds a lot like linguistics—syntax, semantics, pragmatics—updated for codebases and APIs. The field could find itself back in the spotlight, not in dusty grammars but in AI tooling, prompt engineering, and compiler-as-a-service layers that translate fuzzy intent into deterministic action.
Where Nuanced fits
Nuanced.dev is my attempt to inject that missing structure. We statically analyze your repo, build a call graph, and hand the LLM only the nodes it needs. The prompt stays English; the execution regains Chomsky-level precision. It’s a tiny step toward making vibe coding safe, and maybe toward rekindling the marriage between linguistics and computer science that Chomsky kicked off nearly seventy years ago.
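To make the idea concrete: a call graph lets you start from the function a prompt is about and walk outward to just the code that can actually be reached from it, instead of dumping the whole repo into the context window. Nuanced’s real pipeline is more involved; the following is only a single-file toy sketch using Python’s `ast` module, with hypothetical function names:

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each function to the plain names it calls.

    Toy version: a real tool would resolve imports, methods,
    and aliases across an entire repository.
    """
    graph: dict[str, set[str]] = defaultdict(set)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

def relevant_slice(graph: dict[str, set[str]], entry: str) -> set[str]:
    """Transitive closure of callees from an entry point: the slice
    of the codebase worth handing to the model."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn not in seen:
            seen.add(fn)
            stack.extend(graph.get(fn, ()))
    return seen

code = """
def parse(s): return tokenize(s)
def tokenize(s): return s.split()
def unrelated(): return 42
"""
graph = build_call_graph(code)
print(relevant_slice(graph, "parse"))  # {'parse', 'tokenize'} -- 'unrelated' is pruned
```

Even this toy version shows the payoff: the prompt stays in English, but the model only ever sees the nodes that matter for the edit at hand.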
If you’re experimenting with AI coding tools or have thoughts on the future of linguistics in software, I’d love to hear from you! Feel free to reach me at ayman@nuanced.dev.