What's a human for?
How to hone AI engineering
This post is a day late; it took me a lot of trial and error to figure out what I really wanted to do with this blog. Please enjoy!
Last week’s blog generated a number of thoughtful and exciting responses. I’ve gained a sense of how far behind the curve I am on how people are using AI to compound their work.
This post attempts to substantiate the following claims:
The most important skill for software engineers to hone is to set up feedback loops for AI to build and test integrated components of a system.
As implementation gets cheaper, the value shifts entirely to problem selection. The engineers who survive will be the ones who can answer: what’s worth building?
The people who make the shift today from writing great code to writing great prompts will become exponentially more productive, while those who don't will become obsolete as total tech engineering jobs contract.
Disagree? Email olly.k.cohen@gmail.com.
Follow Up on You’re Fired
I'm still making sense of the fact that our two principal engineers, who I was hoping would become my managers, are now gone.
My work this week consisted of integrating their code across Web App -> API Server -> Mobile App. I can confirm that their code is readable and generally excellent. I can also confirm that each piece of code they touched only worked in isolation.
At no point did they seem to test their code against the complete system. They left no evidence of trying to verify that their changes propagated correctly through our full tech stack. They built rooms, not a house.
I still don’t believe I’m a better engineer than either of the ones we let go. But I have absolutely come to believe in working on the integrated system for the duration of the development process instead of building separately and connecting at the end.
For example, the mobile app wasn’t receiving tags from the API server. I didn't debug the mobile code and the server code separately. I gave Claude access to both layers, described what I could see on the phone screen, and let it trace the problem across the boundary.
Claude's fix touched three files in two repositories. No frontend or backend specialist would have written the same solution on their own, and that kind of coordination used to require meetings and handoffs. This is what I mean by integration being the skill.
All software engineers are full stack engineers now.
Tangent: Brief History
I took my first computer science class second semester freshman year of college and sucked at it… but was exhilarated by the joy of writing programs and seeing them run.
A perpetually mediocre computer science student, I chased greatness outside the classroom by starting a Developer Club and a student-run software consulting business with my friend Daniel. My energy and passion for tech entrepreneurship seemed inexhaustible when I graduated in 2021.
By the time I left Amazon at the end of 2023, however, I had absolutely no vision for a future in tech for myself. During my one year there, Amazon ordered all remote employees back to the office, forcing many to move, and then proceeded to conduct three rounds of mass layoffs. I survived all the layoffs, at which point you may recall I got fired for underperformance.
Since I entered the job market in 2021, the dream of becoming a software engineer highly sought after by the world's most innovative companies has died, not just for me but for thousands of computer science grads. Many who didn't lose their jobs are treated as disposable and work long hours with less job security.
Now, in the 2026 rubble of a burned forest, I'm beginning to wonder whether I'm suited for a wildly fast regrowth in tech, should I want that. Skills I never mastered, like cranking out tons of code and meticulous attention to detail, have been devalued. My natural curiosity has always drawn me to big-picture thinking and to learning the interconnectedness of systems, and AI lets me chase both.
I have freedom in my current job to run experiments, ask new questions, harness AI, and build delightful software.
I imagine this must be how Daniel felt when the AI lightbulb went off for him. I am still just a baby in the AI world and have much to learn, especially from my friends who have been around it longer than I have. What follows are selections from my conversations and research over the past week.
Highlights from Friends on AI
I was most surprised by the vast range of utility my friends have derived from AI, from very little to transforming their way of working.
Meta Senior Engineer: I still don’t love AI. I continue to see it hallucinate. I’ve had to read a lot of documentation lately and find mistakes that would have been avoided if the authors didn’t use AI.
Fast-growth Startup Senior Engineer: I think people overestimate short-term impact and underestimate long-term impact. So far, I’ve seen it accelerate good engineering and also accelerate bad engineering. For example, we had one engineer push code with highly sensitive information. The biggest problem we need to solve right now has never been solved, so AI can’t help much with it.
Big Tech UX Designer: It really just takes away busy work. AI should make people super humans at their jobs, not obsolete.
Electrical Engineer at Boeing: I spend a lot of time writing one prompt. Maybe 30 minutes.
Freelance Software Engineer: I use an AI dictation service that I can talk to about an idea: "I'm thinking of designing a tool that integrates with Salesforce…" It then synthesizes my thoughts into a prompt that I can copy and paste into Claude Code.
One pattern I noticed was that skeptics treat AI like a tool: you give it a question, and it spits out a response. The AI lovers describe it as a collaborator; they've learned to set up problems that AI can solve.
I had a fascinating conversation with my high school friend Tom, who works on a search engine for AI (exa.ai). He strongly disagreed with my take that humans' role is facilitating the "last-mile delivery of bug-free, well-functioning software."
Tom made a more ambitious argument that software engineers can now do what they have always been meant to do. They can build at the speed of ideas instead of getting bogged down by tedious implementation for weeks at a time. He then introduced me to the concept of setting up “feedback loops” for AI to work inside, referenced earlier.
After our conversation, I landed on the thesis that our role is to figure out how to set up games that Claude will win. Claude's own revision: our role is knowing which game is worth playing.
I do choose to solve different problems with Claude than I would without. The appendix below contains a real prompt, “If you determine there is no bug, checkout a new fix branch and commit this code. Then, we can implement data reconciliation controls in a subsequent commit.”
In this case, I steer Claude towards building new control systems rather than chasing a potential bug. This is why I think the switch from writing code to writing prompts matters now. Every day I have written code with AI, I have diverged further from the past version of myself at Amazon.
Boris Cherny, the creator of Claude Code, wrote: "There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently."
My focus on prompts has redirected my attention towards engineering Claude’s own behavior. Below is my “Global Rules” file that Claude reads before every single prompt:
Take action on the prompt, and surface uncertainty without blocking on it in your written response
Record questions that arise—design flaws, potential bugs, gaps in understanding—and include them in the output
Stay oriented to what I’m actually trying to accomplish, in addition to what I literally asked for
You always have access to the four repositories in “dev-workplace”. If my request involves changing any of the repositories, you can go ahead.
I wrote this because I noticed Claude would try to write solutions to my prompts without fully understanding the problem or considering my final goals. Instead of abandoning Claude and deciding that it still isn’t capable enough yet, I’m now guiding its behavior across all future prompts.
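For anyone who wants to try this: in Claude Code, standing instructions like these conventionally live in a CLAUDE.md file; a global one at ~/.claude/CLAUDE.md applies across projects, while a CLAUDE.md at a repo's root applies to that project. A sketch of how my rules could be laid out in that format (the exact wording above is mine; the file location is Claude Code's convention, and your setup may differ):

```markdown
# ~/.claude/CLAUDE.md — read by Claude Code before every prompt

- Take action on the prompt, and surface uncertainty without blocking on it.
- Record questions that arise (design flaws, potential bugs, gaps in
  understanding) and include them in the output.
- Stay oriented to what I'm actually trying to accomplish, in addition to
  what I literally asked for.
- You always have access to the four repositories in "dev-workplace";
  changes to them are pre-approved.
```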
Appendix
For the extra curious…
Real Claude Prompt I’ve used
Thanks. Things appear to be working better. Here is the log output.
WARN Excessive number of pending callbacks: 501.
LOG Starting download for Tags...
LOG Fetching tags with lastSyncedOn: 2026-02-05
LOG [TagSync] Fetching deleted tag IDs from server...
LOG [TagSync] Found 8472 deleted tags to remove locally
LOG [TagSync] Deleted 0 tags locally
LOG <=== Tags sync successfully ===>
I can see in the app that 140,805 out of 284,192 tags have been synced from SalesForce. The reasons for this may have nothing to do with an existing bug.
Please investigate Download.tsx and the API endpoint /api/sales-force-project-tags for potential errors in the current implementation. One thing I noticed is that tags are not currently downloading in even batches of 1000, which they did previously.
If you determine there is no bug, checkout a new fix branch and commit this code. Then, we can implement data reconciliation controls in a subsequent commit.
Commentary: Here, I ask Claude to make changes to the backend and the frontend at the same time. This is a mini feedback loop where I manually provide the information on the phone screen to Claude. When it becomes possible to “give Claude its own phone” (maybe in 2027?), this prompt won’t be necessary because Claude would iterate on its own until the app’s tags are synced.
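The reconciliation control I have in mind is simple: download tags in fixed batches, then compare the local count against the server's total and fail loudly on a mismatch instead of logging "sync successful" at 140,805 of 284,192. A minimal TypeScript sketch of the idea; the function names are my own invention, and the fetch is stubbed in place of our real /api/sales-force-project-tags endpoint:

```typescript
// Sketch of a batched download with a count-reconciliation check.
// fetchTagBatch stands in for the real paginated API call.
type Tag = { id: number; name: string };

const BATCH_SIZE = 1000;

// Stub: returns one page of tags. In the real app this would hit the API
// endpoint with offset/limit parameters.
async function fetchTagBatch(serverTags: Tag[], offset: number): Promise<Tag[]> {
  return serverTags.slice(offset, offset + BATCH_SIZE);
}

// Download every batch, then reconcile the local count against the server
// total. A short final batch signals the end of the data.
async function downloadAllTags(serverTags: Tag[]): Promise<Tag[]> {
  const local: Tag[] = [];
  let batch: Tag[];
  do {
    batch = await fetchTagBatch(serverTags, local.length);
    local.push(...batch);
  } while (batch.length === BATCH_SIZE);

  if (local.length !== serverTags.length) {
    // This is the control the prompt asks for: surface partial syncs
    // instead of silently reporting success.
    throw new Error(`Sync incomplete: ${local.length}/${serverTags.length} tags`);
  }
  return local;
}
```

The point isn't the loop, which any engineer could write; it's that a check like this turns a silent partial sync into a signal Claude can iterate against.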


