I started using Copilot as a GitHub employee when it first emerged, long before it was publicly available, and I found myself skeptical. The early versions felt noisy and distracting—auto-complete suggestions caused more interruption than inspiration. The user experience scattered my attention and focus, a state that was not only difficult for my attention-deficit mind to sustain, but one I had worked for years to cultivate as an engineer.

But then in 2023, something profound shifted when I integrated ChatGPT into my workflow. I discovered that the true power of AI coding tools wasn’t mindless automation, but cognitive augmentation. The idea of building developer tools that serve as “prostheses for the human mind” had guided the applied PLT work of the Semantic team on building code understanding tools. With ChatGPT, I suddenly had a collaborative partner that could help me introspect code, surface hidden perspectives buried in long error messages, and spark ideas or approaches I hadn’t considered. Cursor emerged as a more refined manifestation of this potential—a tool designed for interaction rather than mere replacement.

As with any technology, the promise of better tooling comes with nuanced challenges. Productivity isn’t simply about generating code faster; it’s about maintaining the deep, focused state of engineering problem-solving. The current generation of AI coding assistants introduces subtle friction into this delicate psychological ecosystem. I don’t know whether this inability to focus is a byproduct of the automation and generative outputs of these tools, or whether aspects of my own life (shifting from IC engineer to running a company) have also inhibited deeper focus.

Context remains a critical limitation. These tools often operate in a narrow frame, struggling to understand project-wide implications. Even when project-wide context is available, most engineers draw on numerous other information sources that these tools never see. The result? Hallucinations and outputs that require more debugging time than they save. More insidiously, they risk keeping engineers skimming the surface of problems rather than diving deep.

The most intriguing question now isn’t whether AI can write code, but whether it can actually enhance our cognitive processes and make us more productive. I want tools that amplify human creativity rather than replace focused attention. Sure, some amount of automation is helpful for driving productivity, like writing boilerplate. But deciding where these AI interventions are most useful is a lot like designing a good abstraction: you have to determine the boundaries of the right intervention point, the separation of logic, and the right amount of context.

I’m sure RAG pipelines will be a big part of the answer, but so will the right UX. Right now, we’re still figuring out what that looks like.
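To make the retrieval half of that concrete, here is a minimal sketch of what a RAG-style retrieval step looks like. The names (`embed`, `retrieve`, `build_prompt`) and the toy bag-of-words embedding are my own illustrative assumptions, not any particular product’s implementation; real pipelines use learned embeddings and a vector store, but the overall shape is the same: rank candidate context by similarity to the query, then prepend the top hits to the prompt.

```python
# Hypothetical sketch of the retrieval step in a RAG pipeline.
# Uses a toy bag-of-words embedding; real systems use learned
# embeddings and a vector database, but the pipeline shape holds.
from collections import Counter
import math


def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, snippets: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = embed(query)
    ranked = sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, snippets: list[str]) -> str:
    """Prepend retrieved context to the user's question."""
    context = "\n".join(retrieve(query, snippets))
    return f"Context:\n{context}\n\nQuestion: {query}"


snippets = [
    "def parse_config(path): reads the YAML config file",
    "class Cache: an in-memory LRU cache for query results",
    "def render_page(template): renders HTML from a template",
]
print(build_prompt("how is the config file parsed?", snippets))
```

The interesting design questions live outside this sketch: what counts as a candidate snippet, how much context to include before it crowds out the question, and how the results are surfaced to the engineer without breaking focus.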