The value of a developer in the age of Artificial Intelligence
March 3, 2026
What follows is my opinion about what being a "developer" means today. This post is partly a letter to my worried developer friend (whoever they are) about the future, and partly a moment of self-reflection.
Code is not where the value is
The artifact of development is code (adding, removing, updating it), but the value of development is responsibility.
This is not an original idea of mine: I've read it multiple times over the years, but AI is making that concept more relevant than ever.
I'm aware the artifact (code) is often a representation of my responsibility, but I'm valued on the responsibility I'm willing to take for that artifact, whether I wrote it myself or inherited it from others. I will be valued and compensated for taking over and supporting a code base I did not create (thus no artifact on my side), but I will not be valued and compensated for code I've provided without taking responsibility for it.
This distinction of where the value lies helped me navigate my relationship with open-source: I built the code, why am I not being compensated for it? Because I'm not taking responsibility for it. If it breaks I might assist you, but I might do that in times and modes that do not align with your needs.
The frequent move of open-source developers who offer premium support, or build a service out of open-source software, boils down to the promise of responsibility.
This is better understood with a metaphor: cooking is open-source by definition.
Recipes are widely known and documented and anyone can potentially reproduce them. There are "secret" recipes here and there, but they rarely stay secret for long. Recipes can be reverse-engineered by the willing, much the same way software can.
What clients pay for in a restaurant is not the secret recipe: it's the responsibility the establishment is willing to take for providing good food, prepared in acceptable conditions with properly sourced ingredients, served in a pleasant and safe environment.
The most common and widely expected form of this responsibility in software is maintenance: supporting, fixing, and evolving code over time.
Responsibility requires ownership
For any developer to feel they can take responsibility for some code, they have to feel a measure of ownership.
Ownership is knowledge but also attachment and experience.
A developer taking on a new piece of code they like will power through their lack of knowledge and experience because they feel "connected" to the code; they feel they "own" it even if that sense of ownership is not grounded in deep knowledge.
A developer who knows a piece of code like the back of their hand is more likely to think they "own" it (ownership need not be exclusive).
AI provides incredible tools to rapidly acquire knowledge about a piece of code, and that might build and increase the sense of ownership in a developer if that is the end they pursue.
But AI also enables a rhythm of consumption and production of code that risks building confidence ("I'm good", "I fixed the issue", "I added the feature") without building ownership and, thus, responsibility. Confidence can be grounded in results, but results might not be grounded in ownership. "AI did it" is a phrase becoming all too frequent, but it's not new: I think its previous incarnation was "I've found it on Stack Overflow".
Without ownership, there cannot be real responsibility.
Without responsibility, there is no value.
Walking the line
Humans, to this day the only species that can take responsibility for code in a meaningful way, have a negativity bias.
As developers we share stories about that time the server crashed, a bug took down thousands of sites, or we involuntarily introduced a DDoS vulnerability in our code.
Negative experiences weigh on us, thus shape us, more than the positive ones.
Boredom, struggle, racking one's brain over a problem, hours spent in a debugging session tracking down an obscure defect: these are all tolerable negative experiences that shape our confidence in, and ownership of, what we're working on.
When AI paves over that daily collection of smaller, digestible, healthy negative experiences, we're trading ownership for time.
My process is to take a moment to consider whether I should be building ownership of the code in front of me, which translates into whether I'm willing, or expected, to take responsibility for it.
If the answer is no, then I will happily unleash AI to get to a solution as quickly as I can.
If the answer is yes, then I ask myself in what way the tools I have, including AI but not limited to it, allow me to build ownership.
Then I make a plan and use the tools to build that ownership.
It's not a solved problem; it's a daily choice.
Before AI I had no other option but to build ownership.
Now it takes thinking and intention, but where the value lies has not changed: responsibility.
Suggested reading
"Code Simplicity" by Max Kanat-Alexander is a great book that explores the long-term cost of software and the role maintenance plays in it.