> It’s Still Your Code... AI maximalists will read this section and scoff. They’re already vibe coding everything and have little to no idea what the generated code looks like.
This frames the argument as a dichotomy. And to be honest, using the social-media caricature of "vibe coding" as a strawman risks anchoring against something that's a mirage.
There are plenty of good engineers getting good results whilst treating code ownership as a continuum.
> If Claude goes down tomorrow, can you still do your job?
This is a valid counterpoint, but building software already rests on a tricky set of dependencies. The answer here isn't automatically "you need to be able to do everything". It could simply be "also use Codex".
I think the overall point is well made, I just don't agree with the absolute framing. There are things you can hand over to AI safely. Even if you start small and increment, it'll have a decent impact.
Is anyone actually at a company that is purposely trying to use a ton of tokens? It gets expensive really fast.
I've personally heard people talk about token leaderboards at their workplaces. Amazon and Meta reportedly had them, but I'd take that with a decent grain of salt.
We all know it's such an insanely gameable metric you'd be insane to actually use it...
I came across a comedy clip where employees were fighting over how many billion tokens they were using, and I assumed it was a joke.
For about two months; then I assume our fearless leaders saw the bill and wet themselves. Since then, Opus has been off limits lol.
As a fun exercise, replace "AI" with "junior" and "junior" with "mid-level." It holds up pretty well: as a manager, you have responsibility for the work your team does, and "make everyone put in more hours for no reason" is dumb. Maybe it comes across as a bit neglectful of the "juniors" (in particular, it doesn't show any desire to figure out ways for AI/"the juniors" to grow their responsibilities in a sustainable way).
Imagine reading that version as someone who doesn't know how big companies work. "But then they'll just fire all the mid-level managers, since they don't do any of the actual work!" Haha, boy would you be wrong.
> You must understand what your AI generated code does
Absolutely yes.
> You must be able to do your job if your AI tooling disappears
Absolutely not.
Look, I'm an alright programmer. Not good, far from great. Interpreted languages work for me; add all that strong typing and compilation and it starts to go beyond what I'm interested in. Nonetheless, pre-AI, I shipped many very functional, production-grade applications for many companies.
Now I can write stuff in Go and Rust, and it's fantastic. So much faster. The AI likes the strong typing, the testability, the predictability; it all makes total sense. I'm using this stuff all the time, but I have not learned any Go; I'm too busy focusing on the parts the AI cannot do for me, like real requirements gathering, architecture, fit and finish, engaging stakeholders, etc., that still require the human touch. Maybe I could have learned some Go with that time, but at the end of the day my employer is paying me for results, not for my edification!
There are now huge parts of my job I cannot do without AI. Sure, it's like 800-1200 bucks a month of extra cost, but for that low five figures a year I am a much more capable employee. It's easily delivering ROI for me, and therefore for my employer. Instead of sitting around wishing I had a Go developer to ask for help implementing a simple feature in a Terraform provider, I can just fork it, add what I need, try to submit it upstream for inclusion, etc., and the lack of language-specific skills is no longer holding me back.
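To give a feel for the kind of change I mean, here's a minimal sketch of adding one attribute to a resource in a forked provider, using terraform-plugin-sdk/v2; the resource and field names are made up for illustration:

```go
// Hypothetical example only: "example_widget" and "retention_days"
// don't come from any real provider.
package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleWidget() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
			},
			// The forked addition: a setting upstream doesn't expose yet.
			"retention_days": {
				Type:        schema.TypeInt,
				Optional:    true,
				Default:     30,
				Description: "Days to retain the widget's data.",
			},
		},
	}
}
```

The AI writes the schema plumbing and the CRUD wiring; I review it, test it, and own the PR.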
Take away the tool and I can't do that part of the job anymore, sorry. I can still do a lot, but slower, and honestly it would feel like going from a car back to walking now; walking's fun, I do it recreationally for the sheer joy, but when there are hundreds of kilometres to cover in a short amount of time, the car is clearly the correct choice. So too with AI: we've invented the car for computers, and only a fool would pretend he can do everything the same without it now.
If you can't do the job without AI, you can't do the job.
Spoiler alert: if you can't do the job, you're not going to be doing the job much longer.
'If you can't build a TODO list app using only punchcards, then you can't do your job...'
Obviously our ambitions expand with better tools. I now commit to and deliver much more work than before LLMs, and, before then, ditto for frontend frameworks, fourth-generation languages, etc.
There are projects I now start without thinking twice that I never would have considered a few years ago.
That's what productivity looks like, and it makes you more valuable, and your job more secure (up until the ASI kills us all...).
How is this different from saying “if you can’t do the job without the compiler, you can’t do the job”?
AI allows you to do things you could not do before, so it is fair to say you can't do the new job without AI.
What if I can do everything the AI can, like read, interpret, and implement code (and not in a likely copyright-breaking way), but also reason about it better?
in before the mods accuse you of being "too mean"
A better analogy would be "the trebuchet for computers".
"but when there's hundreds of kilometres to cover in a short amount of time, the trebuchet is clearly the correct choice."
You point it in the rough direction and distance you want to go, pull the lever, see if you hit your mark, adjust, pull the lever again, etc.
And once you have dialed in the variables for that particular piece of rock that one time, you write it down in a "skill.md" file and announce to everyone on the team "this trebuchet has been carefully calibrated. Trust it with your other rocks too."
> only a fool would pretend he can do everything the same without it now
Unless you're working in a coding sweatshop, I don't see why you need AI to do what people have been doing for decades just fine without breaking a sweat.
What are you working on?
Your competition's behavior necessarily affects you unless your company has an unassailable moat.
If other companies are able to tolerate larger amounts of tech debt while shipping new features faster, then you'll be out of a job at some point when your company loses market share.
It's fine if you disagree with the idea that AI lets established companies ship faster. I'm not here to argue that. But I think it's pretty easy to empathize with "why might one need to change their behavior due to this new technology?"
> unless your company has an unassailable moat
Is not working in SV enough of a moat?
> If other companies are able to tolerate larger amounts of tech debt while shipping new features faster, then you'll be out of a job at some point when your company loses market share.
I'm saying that B2B services are very common outside of SV and are more focused on stability, compliance, long-term maintenance, and the operational know-how that comes with all that, rather than just shipping new features. It's not that there isn't some competition, but the business is built on much more comprehensive partnerships than just being a software vendor. I can't believe I'm saying this, but "synergy" sometimes isn't just a meaningless buzzword.
When you try to jam "AI" into the mix, the disruption harms the business value. Many, including myself, would like to be enlightened if you think otherwise.
> Unless you're working in a coding sweatshop
You are obviously unaware of what the Silicon Valley companies are asking for and committing to.
The same shit they've always been asking for, judging by what OpenAI and Anthropic are pumping out around their models: bloated, buggy Electron apps that consume gigabytes of memory to display less than 1 KB of fucking text. We are not witnessing better software, even from people who have unlimited capital, unlimited access to frontier models, and a true belief in its potential to replace engineers.
I can do everything the same without it, because I'm still not using it. Why would I want to be a guinea pig for the world's richest companies and atrophy my brain to boot?
uh oh you guys didn't realize you were guinea pigs for products that can permanently alter your mental health?
For another type of incoherent policy: don't restrict your employees to 2025 models and then accuse them of being sticks in the mud when they say the models are inadequate.
Academia is the place with the least coherent policy. In the few institutions whose AI rules I'm aware of, the guide is usually three lines long and basically says "we don't promote usage of it", which is a meaningless phrase. So you end up with students who are not supposed to use it, unless they are international master's students who need it because of language barriers, in which case they're basically allowed to use it however they like, even if it makes a mockery of the rigour of a degree. Lecturers can use it as and when they wish. Then you get researchers who either use it endlessly or not at all, and upper management who use it instead of using their own brains.
DORA.dev (DevOps Research and Assessment) also points to having a clearly communicated stance on AI as a foundational capability.
https://dora.dev/capabilities/clear-and-communicated-ai-stan...
When I see "in the year of our Lord" I immediately tune out the writer. Almost as bad as "Unreasonable Effectiveness"