A year ago I was skeptical of AI coding tools. Not ideologically — I wasn’t worried about the robot apocalypse or the death of programming. I just didn’t think they’d actually save me time. Every demo I saw felt staged. Every “look how fast” video skipped the part where you spend 20 minutes reading the generated code and fixing the subtle errors you didn’t notice until the tests failed.
I was half right.
The demos are still staged
Let me get this out of the way: most AI tool demos are optimized for looking impressive, not for showing you what your actual day-to-day will look like. The demo picks an easy problem. The model gets it right. Everyone applauds.
Real usage is messier. The model confidently generates code that calls an API endpoint that doesn’t exist. It “refactors” a function and breaks three other things. It writes tests that pass but don’t actually test the thing you care about.
None of this makes the tools useless. It just means you need to calibrate expectations.
What actually compounds
After a year, here’s where I’ve found genuine leverage:
Boilerplate and scaffolding. This is where AI tools are simply better than me. Starting a new project, wiring up a new integration, writing the tenth variation of a similar component — AI handles this faster and with less frustration. The code isn’t surprising. That’s exactly the point.
First drafts of documentation. I hate writing docs. I’m slow at it and I always feel like I’m stating the obvious. AI is fast, doesn’t mind stating the obvious, and produces something I can edit into shape in a fraction of the time it would take to write from scratch.
Explaining unfamiliar code. Drop in a function from a library you’ve never touched and ask what it does. This is surprisingly reliable and has saved me hours of spelunking through source code and Stack Overflow.
Rubber duck debugging at scale. The classic technique of explaining your problem to someone else — even a rubber duck — forces you to articulate what you’re assuming. AI takes this further: it’ll actually respond, ask clarifying questions, and sometimes catch the thing you glossed over.
What doesn’t work well
Architecture decisions. AI tools are trained on a distribution of past decisions. They’ll give you an answer that looks like a plausible architectural choice, but they can’t actually reason about your specific constraints, team capabilities, and long-term maintenance burden. Trust your own judgment here.
Security-sensitive code. Not because AI is malicious, but because subtle security bugs require exactly the kind of careful adversarial thinking that models aren’t great at. Always review security-critical code with human eyes and proper tooling.
Anything where the spec is fuzzy. If you can’t precisely describe what you want, the model can’t give it to you. You’ll get a plausible-looking thing that doesn’t quite fit.
The real skill
The engineers I’ve seen get the most out of AI tools share one trait: they’ve gotten very good at prompting. Not in some mystical way — just in the same way a good manager is good at delegating. Clear instructions. Concrete examples. Specific feedback.
If you treat it like an oracle — just ask and expect a perfect answer — you’ll be disappointed. If you treat it like a capable but literal-minded junior developer who needs clear briefs and explicit feedback loops, you’ll get a lot done.
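To make “clear briefs” concrete, here’s a minimal sketch. The bug, the brief template, and all the names in it are invented for illustration — the point is the shape of the request, not any particular tool’s format:

```python
# Two ways to ask for the same fix. Everything below is invented
# for illustration, not taken from any real codebase or tool.

# Oracle-style: vague, no context, no way to verify the answer.
vague_prompt = "Fix the pagination bug."

# Delegation-style: the kind of brief you'd give a junior developer.
brief_prompt = """\
Task: fix the off-by-one in paginate(items, page, per_page).
Context: page is 1-indexed, but the current slice treats it as 0-indexed.
Concrete example: paginate(list(range(10)), page=2, per_page=3)
should return [3, 4, 5], not [6, 7, 8].
Constraints: keep the function signature unchanged.
Feedback loop: I'll run the check below and paste any failure back.
"""

# The concrete example in the brief doubles as an executable check:
def paginate(items, page, per_page):
    start = (page - 1) * per_page  # convert 1-indexed page to a 0-indexed offset
    return items[start:start + per_page]

assert paginate(list(range(10)), page=2, per_page=3) == [3, 4, 5]
```

The vague prompt leaves the model guessing at what “the bug” even is. The brief gives it a spec, a worked example, and a feedback loop — exactly what you’d hand a literal-minded junior developer.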
That’s where I’ve landed. Not magic. Not useless. A genuine tool, used deliberately.