I get AI-adjacent dev teams going all in, as they're equipped to deal with the hard edges, but this is brutal for everyone else. Like Frank in collections is just trying to make sure people get paid, not worry about whatever it means to be "prompt injected".
Whatever happened to "show, don't tell"? Other productivity boosters certainly didn't need such memos; they were naturally adopted because the benefits were unambiguous. There were no "IDE-first company memos" or "software framework-first company memos"; devs organically picked these up because the productivity gains were immediately self-evident.
Think about the Industrial Age transition from individual craftspeople working in small shops, using hand tools to make things, to working in factories on large-scale assembly lines. The latter is wildly more productive than the former. If you owned a business that employed a bunch of cobblers, then moving them all out of their little shops into one big factory where they can produce 100x as many shoes means you just got yourself 100x richer.
But for an individual cobbler, you basically got fired at one job and hired at another. This may come as a surprise to those who view work as simply an abstract concept that produces value units, but people actually have preferences about how they spend their time. If you're a cobbler, you might enjoy your little workshop, slicing off the edge of leather around the heel, hammering in the pegs, sitting at your workbench.
The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
You might not want to quit that job and get a different job running a shoe assembly line in a factory. Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises" then perhaps you might be more excited about putting down your hammer. But the boss isn't saying that. He's saying "all of the cobblers at the other companies are doing this too, so where are you gonna go?".
Of course AI is a top-down mandate. For people who enjoy reading and writing code themselves and find spending their day corralling AI agents to be a less enjoyable job, the CEO has basically given them a giant benefits cut with zero compensation in return.
Yup. It’s what I’ve come to realize. My job is probably safe, as long as I’m willing to adapt. I still haven’t even tried AI once, and don’t care for it, but I know at some point I probably will have to.
I don’t actually think it’ll be a productivity boost the way I work. Code has never been the difficult part, but I’ll definitely have to show I’ve included AI in my workflow to be left alone.
Oh well…
Why have you never even _tried_ it? It’s very easy to try and surely you are somewhat curious.
I've never had to try lots of things in order to know that I won't like them.
> Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises" then perhaps you might be more excited about putting down your hammer.
... is now the moment to form worker cooperatives? The companies don't really have privileged access to these tools, and unlike many other things that drive increased productivity, there's not a huge up-front capital investment for the adopter. Why shouldn't ICs capture the value of their increased output?
The industrial revolution was extremely hard on individual craftspeople. Jobs became lower paying and lower skilled. People were forced to move into cities. Conditions didn't improve for decades. If AI is anything comparable it's not going to get better in 5-10 years. It will be decades before the new 'jobs' come into place.
Unfortunately, I would expect the boss to say, "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x the shoes to repair".
>Think about the Industrial Age transition from individual craftspeople working in small shops, using hand tools to make things, to working in factories on large-scale assembly lines.
I wouldn't analogize the adoption of AI tools to a transition from individual craftspeople to an assembly line, which is a top-down total reorganization of the company (akin to the transition of a factory from steam power to electricity, as a sibling commenter noted [0]). As it currently exists, AI adoption is a bottom-up decision at the individual level, not a total corporate reorganization. Continuing your analogy, it's more akin to letting craftspeople bring whatever tools they want to work, whether those be hand tools or power tools. If the power tools are any good, most will naturally opt for them because they make the job easier.
>The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
That's certainly a part of it, but I also think workers enjoy and strive to be productive. Why else would they naturally adopt things like compilers, IDEs, and frameworks? Many workers enjoyed the respective intellectual puzzles of hand-optimizing assembly, or memorizing esoteric key combinations in their tricked-out text editors, or implementing everything from scratch, yet nonetheless jumped at the opportunity to adopt modern tooling because it increased how much they could accomplish.
[0] https://news.ycombinator.com/item?id=46976955
> As it currently exists, AI adoption is a bottom-up decision at the individual level, not a total corporate reorganization.
I'm sorry, but did you forget what page this comment thread is attached to? It's literally about corporate communication from CEOs reorganizing their companies around AI and mandating that employees use it.
> That's certainly a part of it, but I also think workers enjoy and strive to be productive.
Agreed! Feeling productive and getting stuff done is also one of the joys of work and part of the compensation package. You're right that to the degree that AI lets you get more done, it can make the job more rewarding.
For some people, that's a clear net win. They feel good about being more productive, and they maybe never particularly enjoyed the programming part anyway and are happy to delegate that to AI.
For other people, it's not a net win. The job is being replaced with a different job that they enjoy less. Maybe they're getting more done, but they're having so little fun doing it that it's a worse job.
>I'm sorry, but did you forget what page this comment thread is attached to? It's literally about corporate communication from CEOs reorganizing their companies around AI and mandating that employees use it.
That’s exactly my point. The fact that management is trying to top-down force adoption of something that operates at the individual level and whose adoption is thus inherently a bottom-up decision says it all. Individual workers naturally pick up tools that make them more productive and don’t need to be forced to use them from the top-down. We never saw CEOs issue memos “reorganizing” the company around IDEs or software frameworks and mandate that the employees use them because employees naturally saw their productivity gains and adopted them organically. It seems the same is not true for AI.
Blacksmiths pretty much existed until the ‘50s and ‘60s for most of the world, making bespoke tools and things. Then they just vanished, for the most part.
We are probably on a similar trajectory.
Goes to show how infested with disconnected management this industry is.
All the tools that improved productivity for software devs (Docker, K8S/ECS/autoscaling, Telemetry providers) took very long for management to realize they bring value, and in some places with a lot of resistance. Some places where I worked, asking for an IntelliJ license would make your manager look at you like you were asking "hey can I bang your wife?".
Remember when companies all forced us to buy smartphones? Or switch to search engines instead of books? Or when Amazon announced it was "react native first"?
I agree with the sentiment you're expressing but, to be fair, companies forcing us all to use smartphones (as consumers or as citizens) is, unfortunately, happening implicitly.
There was an Apple memo like this though that said they were word processing first.
https://writingball.blogspot.com/2020/02/the-infamous-apple-...
That doesn't work in an environment where there are compliance and regulatory controls.
In most companies, you can't just pick up random new tools (especially ones that send data to third parties). The telling part is that these memos give people internal cover to use these tools.
>Other productivity boosters certainly didn't need such memos; they were naturally adopted because the benefits were unambiguous.
This is simply not true. As a counter-example, consider debuggers. They are a big productivity boost, but they require the user to change their development practice and learn a new tool. This makes adoption very hard. AI has a similar issue of being a new tool with a learning curve.
Did companies actually send out memos saying "We're going to be a company that uses debuggers!"
I would have just thought that people using them would quickly outpace the people that weren't and the people falling behind would adapt or die.
>Did companies actually send out memos saying "We're going to be a company that uses debuggers!"
I could believe it. Especially if there are big licensing costs for the debuggers.
>the people falling behind would adapt or die.
It is better to educate people, make them more efficient, and avoid having them die. Having employees die is expensive for the company.
Do they have to die though? I know some folks that use them and others who don't. They both seem to get along fine.
People will voluntarily adopt modest productivity boosters that don't threaten their job security. They will rebel against extraordinary productivity boosters that may make some of their skills obsolete or threaten their career.
You have to remember that our trade is automating things. We're all enthusiasts about automating things, and there's very clearly a lot of enthusiasm about using AI for that purpose.
If anything, the problem is that management wants to automate poorly. The employees are asked to "figure it out", and if they give feedback that it's probably not the best option, that feedback is rejected.
That’s simply not true. Developers hand-writing assembly readily adopted compilers, accountants readily adopted spreadsheets, and farmers readily adopted tractors and powered mills.
That's false. Those things were in fact resisted in some cases. For instance, look up the Swing Riots of the 1830s.
There might be temporary resistance, even violent resistance, but eventually competition will take over. The issue in this case is that we're not looking at voluntary adoption due to a competitive advantage - we're seeing adoption by fiat.
AI is a broad category of tools, some of which are highly useful to some people - but mandating wide adoption is going to waste a lot of people's time on inefficient tools.
The competitive advantage belongs to companies, not engineers. That's exactly the conflict. What you're predicting -- voluntary adoption due to advantages -- is precisely what is happening, but it's happening at the company level. It's why companies are mandating it and some engineers are resisting it. Just like in the riots I mentioned -- introduction of agricultural machinery was a unilateral decision made by landowners and tenant farmers, often directly against the wishes of the laborers.
A well run company would provide an incentive to their employees for increasing their productivity. Why would employees enthusiastically respond to a mandate that will provide them with no benefit?
Companies are just groups of employees - and if the companies are failing to provide a clear rationale to increase productivity those companies will fail.
I'm sorry to say this, but the company does not need employees to respond enthusiastically. They'll just replace the people who resist for too long. Employees who resist indefinitely have absolutely zero leverage unless they're working on a small subset of services or technologies where AI coding agents will never be useful (which rules out the vast majority of employed software developers).
Oh, they can certainly do that (in part evidenced by companies doing that). It's a large cost to the company, you'll get attrition and lose a lot of employee good-will, and it'll only pay off if you're right. Going with an optional system by making such tools available and incentivizing their use will dodge both of those issues and let you pivot if the technology isn't as beneficial as you thought.
Your examples are productivity boosters that don't threaten job security. A human has to provide inputs to the compiler, the spreadsheet, and the tractor.
The tractor, or more generally farm automation, was maybe the biggest single destruction of jobs in human history. In 1800 about 65% of people worked in agriculture, now it's about 1%. Even if AI eliminated every single computer programmer's job it would be a drop in the bucket compared to how many jobs farm automation destroyed.
On the other hand, there were surely memos like "our facility will be using electric power now. Steam is out". Sometimes execs do set a company's direction.
AI adoption is a bottom-up decision at the level of the individual worker. Converting an entire factory is a top-down decision. No single worker can individually decide to start using electricity instead of steam power, but individuals can choose whether/how to use AI or any other individual-level tool.
That transition took 40-50 years. Electrical power in manufacturing was infeasible for a lot of reasons for a long time.
Any company issuing such an edict early on would have bankrupted themselves. And by the time it became practical, no such edict was needed.
That's a choice individual employees couldn't make. Or, at least, one management wouldn't let them make. It'd require a huge amount of spending.
20 years ago or so, we had an exec ask us about our unit tests.
Productive output is a lagging indicator. Using AI tools is theoretically leading???
I hand you a power tool, and your productivity goes up immediately. Your IDE highlights problems, same story. Everyone can observe that this has happened.
It's so sad to see some of these companies completely fail their AI-first communication [1], when they would just get so much from "We think AI can transform the way we work. We're giving you access to all these tools, please tell us what works and what doesn't". And that would be it.
[1] there was a remote universe where I could see myself working for Shopify, now that company is sitting somewhere between Wipro and Accenture in my ranking.
Unfortunately at this scale, when you are this soft on the message, everyone ignores it and keeps doing what they were doing before. Carrot and stick are both required for performance management at this scale. You can argue whether the bet is worth it or not, but to even take the bet, you need a lot more than some resources and a "please".
If performance was the true goal then we'd just naturally see slow adopters underperform and phase out of that company. If you make good tooling available and it is significantly impactful the results will be extremely obvious - and, just speaking from a point of view of psychology, if the person next to you is able to do their job in half the time because they experimented with new tooling _and sees some personal benefit from it_ then you'll be curious and experiment too!
It might be that these companies don't care about actual performance or it might be that these companies are too cheap/poorly run to reward/incentivize actual performance gains but either way... the fault is on leadership.
It's not inevitable, it's just poor leadership. I've seen changes at large organizations take without being crudely pushed top-down and you'd better believe I've seen top-down initiatives fail, so "performance management" is neither necessary nor sufficient.
The executives pushing AI use everywhere aren’t basing it on actual performance (which is an orthogonal concept). It’s just the latest shiny bauble.
Performance management isn't rating how people are doing. It's transforming the resources of the company into something that you want it to do. If they want to transform the current state of the company into something that has AI use as a core capability, that is performance management.
There are good books on this: e.g. https://www.amazon.ca/Next-Generation-Performance-Management...
If everyone is ignoring it, it can't be that great. If it's that great, people will adopt it organically based on how it's useful for them.
HubSpot's CTO was very vocal about how AI is changing everything and how he was supporting it, by offering the domain chat.com to OpenAI, etc. I say "was" because it has toned down quite a bit. I always thought HubSpot would transform into a true AI CRM given how invested the CTO was in the space from the early days.
Now the stock is down from $800+ to $200+ and the whole messaging has changed. The last one I saw on LinkedIn was:
"No comment on the HubSpot stock price. But, I strongly agree with this statement: '...I don't see companies trusting their revenue engine to something vibe-coded over a weekend.'"
The stock dip is likely because of the true AI-native CRMs being built and coming to market, but why couldn't HubSpot take that spot, given the CTO's interest in the space?
I work for a large tech company, and our CTO has just released a memo with a new rubric for SDEs that includes "AI Fluency". We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.
A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. There were 9,700 created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause and that will have to be fixed in the future.
This is absolutely the norm across corporate America right now.
Chief AI Czars enforcing AI usage metrics with mandatory AI training for anyone that isn't complying.
People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.
It's deeply fascinating psychologically and I'm not sure where this ends.
I've never seen any tech theme pushed top down so hard in 20+ years working. The closest was the early 00s offshoring boom before it peaked and was rationalized/rolled back to some degree. The common theme is that the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.
> I've never seen any tech theme pushed top down so hard in 20+ years working.
> The common theme is that the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.
I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.
Other sticky "productivity movements" - or, if you're less generous like me, fads - at the level of the individual and the team, for example agile development methodologies or object oriented programming or test driven development, have generally been invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.
Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not at the leading edge of it mandating company-wide adoption.
What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.
And telling us how to do our jobs. As if they've ever compared the optimized output of clang and gcc on an example program to track down a performance regression at 2AM.
I don't understand how all these companies issue these sorts of policies in lock-step with each other. The same happened with "Return To Office". All of a sudden every company decided to kill work from home within the same week or so. Is there some secret CEO cabal that meets on a remote island somewhere to coordinate what they're going to all make workers do next?
CEOs are ladder climbers. The main skill in ladder climbing is being in tune with what the people around them are thinking, and doing what pleases/maximizes other's approval of the job they are doing.
It's extremely human behavior. We all do it to some degree or another. The incentives work like this:
- If all your peers are doing it and you do it and it doesn't work, it's not your fault, because all your peers were doing it too. "Who could have known? Everyone was doing it."
- If all your peers _aren't_ doing it and you do it and it doesn't work, it's your fault alone, and your board and shareholders crucify you. "You idiot! What were you thinking? You should have just played it safe with our existing revenue streams."
And the one for what's happening with RTO, AI, etc.:
- If all your peers are doing it and you _don't do it_ and it _works_, your board crucifies you for missing a plainly obvious sea change to the upside. "You idiot! How did you miss this? Everyone else was doing it!"
Non-founder/mercenary C-suites are incentivized to be fundamentally conservative by shareholders and boards. This is not necessarily bad, but sometimes it leads to funny aggregate behavior, like we're seeing now, when a critical mass of participants and/or money passes some arbitrary threshold resulting in a social environment that makes it hard for the remaining participants to sit on the sidelines.
Imagine a CEO going to their board today and going, "we're going to sit out on potentially historic productivity gains because we think everyone else in the United States is full of shit and we know something they don't". The board responds with, "but everything I've seen on CNBC and Bloomberg says we're the only ones not doing this, you're fired".
It is investor sentiment and FOMO. If your investors feel like AI is the answer you will need to start using AI.
I am not as negative on AI as the rest of the group here, though. I think AI-first companies will outpace companies that never start to build the AI muscle. From my perspective, these memos mostly seem reasonable.
I agree that a lot of the current push is driven by investor sentiment and a degree of FOMO. If capital markets start to believe AI is table stakes, companies don’t really have the option to ignore it anymore.
That said, I’m not bearish on AI either. I think there’s a meaningful difference between chasing AI for signaling purposes and deliberately building an “AI muscle” inside the organization. Companies that start learning how to use, govern, and integrate AI thoughtfully are likely to outpace those that never engage at all.
From that perspective, most of these memos feel fairly reasonable to me. They’re less about declaring AI as a silver bullet and more about acknowledging that standing still carries its own risk.
If AI is the answer, then there's no reason for a top-down mandate like this. People will just start using it as they see fit because it helps them do their jobs better, instead of it being forced on them, which doesn't sound much like AI is the answer investors thought it was.
No, because as discussed, AI also changes the nature of your job in a way that might be negative for a worker, even if it’s more productive. For example, it may be more fun to ride a horse to your friend’s house, but it’s not faster than a car. Or, as in the previous example, it may be more enjoyable to make a shoe by hand, but it’s less productive than using an assembly line.
I have wondered the exact same thing. It's uncanny how in-sync they all are. I can only suppose that the trend trickles down from the same few influential sources.
This is a great line - evocative, funny, and a bit of wordplay.
I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.
I think one thing that I noticed this week in terms of "eye of the beholder" view on AI was the Goldman press release.
Apparently Anthropic has been in there for 6 months helping them with some back-office streamlining, and the outcome of that so far has been... a press release announcing that they are working on it!
A cynic might also ask if this is simply PR for Goldman to get Anthropic's IPO mandate.
I think people underestimate the size/scope/complexity of big company tech stacks and what any sort of AI transformation may actually take.
It may turn into another cottage industry like big data / cloud / whatever adoption where "forward deployed / customer success engineers" are collocated by the 1000s for years at a time in order to move the needle.
I'm so glad I'm nearer the end of my career than the beginning. Can't wait to leave this industry. I've got a stock cliff coming up late this summer, probably a good time to get out and find something better to do with my life.
> Then, you might even tinker with some AI stuff on your own terms, you never know
Indeed! I'm not like dead set against them. I just find they're kind of a bad tool for most jobs I've used them for and I'm just so goddamn tired of hearing about how revolutionary this kinda-bad tool is.
If you're finding they're a bad tool for most jobs you're using them for, you're probably being closed-minded and using them wrong. The trick with AI these days is to ask it to do something that you think is impossible, and it will usually do a pretty decent job at it, or at least get close enough for you to pick up or to guide it further.
I was a huge AI skeptic but since Jan 2025, I have been watching AI take my job away from me, so I adapted and am using AI now to accelerate my productivity. I'm in my 50s and have been programming for 30 years so I've seen both sides and there is nothing that is going to stop it.
I try them a few times a month, always to underwhelming results. They're always wrong. Maybe I'll find an interesting thing to do with them some day, I dunno. It's just not a fun or interesting tool for me to learn to use so I'm not motivated. I like deterministic & understandable systems that always function correctly; "smart" has always been a negative term in marketing to me. I'm more motivated to learn to drive a city bus or walk a postal route or something, so that's the direction I'm headed in.
Okay, I use OpenCode/Codex/Gemini daily (recently cancelled my personal CC plan given GPT 5.2/3 High/XHigh being a better value, but still have access to Opus 4.5/6 at work) and have found it can provide value in certain parts of my job and personal projects.
But the evangelist insistence that it literally cannot be a net negative in any context/workflow is just exhausting to read and is a massive turn-off. As is the refusal to accept that others may simply not benefit the same way from that different work style.
Like I said, I feel like I get net value out of it, but if my work patterns were scientifically studied and it turned out it wasn't actually a time saver on the whole I wouldn't be that surprised.
There are times where after knocking request after request out of the park, I spend hours wrangling some dumb failures or run into spaghetti code from the last "successful" session that massively slow down new development or require painful refactoring and start to question whether this is a sustainable, true net multiplier in the long term. Plus the constant time investment of learning and maintaining new tools/rules/hooks/etc that should be counted too.
But, I enjoy the work style personally so stick with it.
I just find FOMO/hype inherently off-putting and don't understand why random people feel they can confidently say that some random other person they don't know anything about is doing it wrong or will be "left behind" by not chasing constantly changing SOTA/best practices.
1. execs likely have spend commits and pressure from the board about their 'ai strategy', what better way to show we're making progress than stamping on some kpis like # of agents created?
2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))
3. there is some subset of ai detractors that refuse to use the tools for whatever reason
the metrics pushed by 1) rarely account for 2) and don't really serve 3)
i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo
> (cc / codex / cursor / copilot (jk, nobody should be using copilot))
I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude more directly/not through copilot?
I'm currently using opus in Zed via copilot (I think that's what you're recommending?) and tbh couldn't be happier. It's hard to imagine what better would look like.
The KPI problem is systemic and bigger than just Gen-AI, it’s in everything these days. Actual governance starts by being explicit about business value.
If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.
For some reason the last decade or so we have confused activity with productivity.
(and words/claims with company value - but that's another topic)
I'm so happy I work at a sane company. We're pushing the limits of AI and everyone sees the value, but we also see the danger/risks.
I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does.
Leadership loves AI more than anything they have ever loved before. It's because, for them, the fawning, sycophantic, ego-stroking agents who cheerfully champion every dumb idea they have and help them realize it with spectacular averageness are EXACTLY what they've always expected to receive from their employees.
Demanding everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other colored drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).
Awesome, with that new policy we'll be sure to justify my purple drill evangelist role by showing that our average employee is dependent on purple drills for at least 1/8th of their workload. Who knew that our employees would so quickly embrace the new technology. Now the board can't cut me!
Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.
Throughout the year, must coordinate training & enforce attendance for the people in their department with drill training mandated by the Head of Drilling.
And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.
That kind of makes sense philosophically if your business is trains, but I don't think that their business was AI agents. Although given they have a VP of AI, I have no idea. What a crazy title.
> We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.
Enforced use means one of two things:
1. The tool sucks, so few will use it unless forced.
2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).
I know you're speaking half in jest but the C-suite of my area actually used a tweet by an OpenAI executive as the agenda for an AI brainstorm meeting.
Well that's inspiring. If you're going to follow anyone right now be sure to follow someone from the company that has committed to spending a trillion dollars without ever having a profitable product. Those are the folks who know what good business is!
I have friends who are finance industry CTOs, and they have described it to me in real time as CEO FOMO they need to manage...
Remember, tech is sort of an odd duck in how open people are about things and the amount of cross-pollination. Many industries are far more secretive, and so whatever people are hearing about competitors' AI usage is 4th-hand hearsay from a game of telephone.
edit: noteworthy that someone sent yet another firmwide email about AI today, which was just a link to some Twitter thread by a VC AI booster thinkbro
That sounds awful... Thankfully our CTO is quite supportive of our team's anti-AI policy and is even supportive of posting our LLM ban on job postings. I honestly don't think that I could operate in an environment with any sort of AI mandate...
I mean, get on board or fall behind; that's the situation we're all in. It can also be exciting. If you think it's still just slop and errors when managed by experienced devs, you're already behind.
It's not obvious because the multiplier effect of AI is being used to reduce head count more than to drastically increase net output of a team. Which yeah is scary, but my point is if you don't see any multiplier effect from using that latest AI tools, you are either doing a bad job of using them (or don't have the budget, can't blame anyone for that), or are maybe in some obscure niche coding world?
>the multiplier effect of AI is being used to reduce head count more than to drastically increase net output of a team
This simply isn’t how economics works. There is always additional demand, especially in the software space. Every other productivity-boosting technology has resulted in an increase in jobs, not a decrease.
I try these things a couple times a month. They're always underwhelming. Earlier this week I had the thing work tells me to use (Claude Code with Sonnet 4? something like that) generate some unit tests for a new function I wrote. I had a number of objections about the utility of the test cases it chose to write, but the largest problem was that it assigned the expected value to a test case struct field and then... didn't actually validate the retrieved value against it. If you didn't review the code, you wouldn't know that the test it wrote did literally nothing of value.
Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write and would have done the job correctly at ~$0.00 in compute, but I was curious to see if the AI could do it. Nope.
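For the curious, a rough sketch of the kind of sed & grep one-liner meant here (GNU tools assumed; 'OldField', 'NewField', and the *.go glob are hypothetical placeholders, not from the original comment):

    # list the files that mention the old name, then rewrite
    # every whole-word occurrence in place (GNU sed)
    grep -rl 'OldField' --include='*.go' . | xargs sed -i 's/\bOldField\b/NewField/g'

A second grep for the old name afterwards confirms nothing was missed.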
Trillions of dollars for this? Sigh... try again next week, I guess.
Twice now in this same story, different subthreads, I've seen AI dullards declaring that you, specifically, are holding it wrong. It's delightful, really.
I don't really care if other people want to be on or off the AI train (no hate to the gp poster), but if you are on the train and you read the above comment, it's hard not to think that this person might be holding it wrong.
Using sonnet 4 or even just not knowing which model they are using is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is, perhaps not uncommon, but I think a minority opinion among people who are using the tools effectively. Also a rename falls squarely in the realm of operations that will reliably work in my experience.
This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong.
Again, I don't mean any hate towards the original poster, everyone can have their own approach to AI.
Yeah, I'm definitely guilty of not being motivated to use these tools. I find them annoying and boring. But my company's screaming that we should be using them, so I have been trying to find ways to integrate it into my work. As I mentioned, it's mostly not been going very well. I'm just using the tool the company put in front of me and told me to use, I don't know or really care what it is.
"Hey boss, I tried to replace my screwdriver with this thing you said I have to use? Milwaukee or something? When I used it, it rammed the screw in so tight that it cracked the wood."
^ If someone says that, they are definitely "holding it wrong", yes. If they used it more they would understand that you set the clutch ring to the appropriate setting to avoid this. What you don't do is keep using the screwdriver while the business that pays you needs 55 more townhouses built.
No need to be mean. It's not living up to the marketing (no surprise), but I am trying to find a way to use these things that doesn't suck. Not there yet, but I'll keep trying.
Eh, there's a new shiny thing every 2 months. I'm waiting for the tools to settle down rather than keep up with that treadmill. Or I'll just go find a new career that's more appealing.
I dunno. At some point the people who make these tools will have to turn a profit, and I suspect we'll find out that 98% of the AI industry is swimming naked.
Yeah, I think it'll consolidate around one or two players. Most likely xAI, even though they're behind at the moment. No one can compete with the orbital infrastructure, if that works out. Big if. That's all a different topic.
But I feel you, part of me wants to quit too, but can't afford that yet.
Fall behind what? Writing code is only one part of building a successful product and business. Speed of writing code is often not what bottlenecks success.
Yes, the execution part has become cheap, but planning and strategizing is not much easier. But devs and organizations that keep their head in the sand will fall behind on one leg of that stool.
> I mean, get on board or fall behind; that's the situation we're all in. It can also be exciting.
I am aware of a large company that everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.
Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.
If you think anything that is happening, with the amount of money and bullshit enveloping this LLM disaster, is "exciting", you should put the keyboard down for a while.
Same here in LATAM. We are also an AI-First company now. No customer-first, or product-first, or data-driven (I actually liked the idea behind being data-driven). All the code must be AI-generated by the end of Q1. All the employees must include at least one AI-adoption metric or project in their goals.
Why wouldn't HFT be disrupted by AI? AI-enhanced trading algo designs are likely to be competitive? AI disrupts everything on the computer from the low-end on up. The higher end requires more expensive or custom models that aren't as easy to obtain yet.
HFT is about the last technical domain that could possibly be touched by LLMs. There is next to no good training data and there is zero margin for error.
happy to be corrected but i'm not aware of any direct improvements llms bring to ultra low latency market making; time to first token is just too high (not including coding agents)
from talking to some friends in the space there's some meaningful improvements in tooling, especially in discretionary trading that operates on longer time horizons, where agents can actually help with research and sentiment analysis
It also lines up with what has to be their outlook on the market: their model is especially challenged by AI. Years ago I paid a Ukrainian to write me some scrapers; today that would be a quick project in Cursor. Lots of people used freelancers for cheap art, voice work, etc., and now the low end is all AI.
The trap for many companies is that as everyone automates with AI, their competitive advantage erodes, as they prove that a few centralized models can run their businesses.
What are the moats for businesses in 2030? Purely ownership of physical assets and energy?
I'm not sure a person wrote this website, but FYI on Firefox Nightly the text of the tweet is shown below the blocked tracker in a box labeled "Content from blocked embed". It doesn't have images or longer posts, so not that useful for this specific website, but it's a nice feature. It also gives you a link to the tweet so you can easily open it in a private window or XCancel if you want to.
If you hire good people and set proper incentives, they will figure out the best way to do something. If leadership has to direct employees to use AI, it's a bad sign. If it were a huge boon to productivity, you wouldn't need to force people to use it.
What are the AI-never companies doing? May be a useful comparison. Is the AI work actually improving the bottom line, or is it being used to assuage noisy shareholders that think AI is a hack for infinite profit?
This reads like they don't want AI, they just want tooling. More, better tooling. AI is just a scapegoat/easy out for writing more tooling that makes them more efficient.
Isn’t there tons more, like the note from Andy Jassy at Amazon and the CEO at Airwallex etc? Maybe you can use an ai agent to find all the other big examples? ;-)
Also notice how almost all the stocks of these companies that have announced AI-first initiatives, except Meta, are at best flat, or down by more than 20% YTD.
People have always resisted change, especially change that modifies the way they work. They’d rather work on the same thing for life. To get them to adopt new tools you need to do this stuff.
And yes, people did resist IDEs (“I’m best with my Emacs” - no you weren’t), people resisted the “sufficiently smart compiler”, and so on. What happened was that they were replaced by the sheer growth in the industry providing new people who didn’t have these constraints.
The software-defined storage company croit.io announced it at their Workation in May 2023. AI is just another tool and people have to understand that it's not going away. As a company, you still need people to make use of this tool.
I get ai-adjacent dev teams going all in, as they're equipped to deal with the hard edges, but this is brutal for everyone else. Like Frank in collections is just trying to make sure people get paid, not worry about whatever it means to be "prompt injected".
Whatever happened to "show, don't tell"? Other productivity boosters certainly didn't need such memos; they were naturally adopted because the benefits were unambiguous. There were no "IDE-first company memos" or "software framework-first company memos"; devs organically picked these up because the productivity gains were immediately self-evident.
Think about the Industrial Age transition from individual craftspeople working on small shops using hand tools to make things into working in factories on large-scale assembly lines. The latter is wildly more productive than the former. If you owned a business that employed a bunch of cobblers, then moving them all out of their little shops into one big factory where they can produce 100x as many shoes means you just got yourself 100x richer.
But for an individual cobbler, you basically got fired at one job and hired at another. This may come as a surprise to those who view work as simply an abstract concept that produces value units, but people actually have preferences about how they spend their time. If you're a cobbler, you might enjoy your little workshop, slicing off the edge of leather around the heel, hammering in the pegs, sitting at your workbench.
The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
You might not want to quit that job and get a different job running a shoe assembly line in a factory. Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises" then perhaps you might be more excited about putting down your hammer. But the boss isn't saying that. He's saying "all of the cobblers at the other companies are doing this to, so where are you gonna go?".
Of course AI is a top-down mandate. For people who enjoy reading and writing code themselves and find spending their day corralling AI agents to be a less enjoyable job, then the CEO has basically given them a giant benefits cut with zero compensation in return.
Yup. It’s what I’ve come to realize. My job is probably safe, as long as I will be willing to adapt. I have still not even tried AI once, and don’t care for it, but I know at one point I probably will have to.
I don’t actually think it’ll be a productivity boost the way I work. Code has never been the difficult part, but I’ll definitely have to show I have included AI in my workflow to be left alone.
Oh well…
Why have you never even _tried_ it? It’s very easy to try and surely you are somewhat curious.
I've never had to try lots of things in order to know that I won't like them.
> Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises" then perhaps you might be more excited about putting down your hammer.
... is now the moment to form worker cooperatives? The companies don't really have privileged access to these tools, and unlike many other things that drive increased productivity, there's not a huge up-front capital investment for the adopter. Why shouldn't ICs capture the value of their increased output?
The industrial revolution was extremely hard on individual craftspeople. Jobs became lower paying and lower skilled. People were forced to move into cities. Conditions didn't improve for decades. If AI is anything comparable it's not going to get better in 5-10 years. It will be decades before the new 'jobs' come into place.
Unfortunately, I would expect the boss to say, "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x the shoes to repair".
>Think about the Industrial Age transition from individual craftspeople working on small shops using hand tools to make things into working in factories on large-scale assembly lines.
I wouldn't analogize the adoption of AI tools to a transition from individual craftspeople to an assembly line, which is a top-down total reorganization of the company (akin to the transition of a factory from steam power to electricity, as a sibling commenter noted [0]). As it currently exists, AI adoption is a bottom-up decision at the individual level, not a total corporate reorganization. Continuing your analogy, it's more akin to letting craftspeople bring whatever tools they want to work, whether those be hand tools or power tools. If the power tools are any good, most will naturally opt for them because they make the job easier.
>The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
That's certainly a part of it, but I also think workers enjoy and strive to be productive. Why else would they naturally adopt things like compilers, IDEs, and frameworks? Many workers enjoyed the respective intellectual puzzles of hand-optimizing assembly, or memorizing esoteric key combinations in their tricked-out text editors, or implementing everything from scratch, yet nonetheless jumped at the opportunity to adopt modern tooling because it increased how much they could accomplish.
[0] https://news.ycombinator.com/item?id=46976955
> As it currently exists, AI adoption is a bottom-up decision at the individual level, not a total corporate reorganization.
I'm sorry, but did you forget what page this comment thread is attached to? It's literally about corporate communication from CEOs reorganizing their companies around AI and mandating that employees use it.
> That's certainly a part of it, but I also think workers enjoy and strive to be productive.
Agreed! Feeling productive and getting stuff done is also one of the joys of work and part of the compensation package. You're right that to the degree that AI lets you get more done, it can make the job more rewarding.
For some people, that's a clear net win. They feel good about being more productive, and they maybe never particularly enjoyed the programming part anyway and are happy to delegate that to AI.
For other people, it's not a net win. The job is being replaced with a different job that they enjoy less. Maybe they're getting more done, but they've having so little fun doing it that it's a worse job.
>I'm sorry, but did you forget what page this comment thread is attached to? It's literally about corporate communication from CEOs reorganizing their companies around AI and mandating that employees use it.
That’s exactly my point. The fact that management is trying to top-down force adoption of something that operates at the individual level and whose adoption is thus inherently a bottom-up decision says it all. Individual workers naturally pick up tools that make them more productive and don’t need to be forced to use them from the top-down. We never saw CEOs issue memos “reorganizing” the company around IDEs or software frameworks and mandate that the employees use them because employees naturally saw their productivity gains and adopted them organically. It seems the same is not true for AI.
Blacksmiths pretty much existed until the ‘50s and ‘60s for most of the world, making bespoke tools and things. Then they just vanished, for the most part.
We are probably on a similar trajectory.
Goes to show how infested with disconnected management this industry is.
All the tools that improved productivity for software devs (Docker, K8S/ECS/autoscaling, Telemetry providers) took very long for management to realize they bring value, and in some places with a lot of resistance. Some places where I worked, asking for an IntelliJ license would make your manager look at you like you were asking "hey can I bang your wife?".
Remember when companies all forced us to buy smartphones? Or switch to search engines instead of books? Or when Amazon announced it was "react native first"?
I agree with the sentiment you're expressing but, to be fair, companies forcing us all to use smartphones (as consumers or as citizens) is, unfortunately, happening implicitly.
There was an Apple memo like this though that said they were word processing first.
https://writingball.blogspot.com/2020/02/the-infamous-apple-...
That doesn't work in an environment where there are compliance and regulatory controls.
In most companies, you can't just pick up random new tools (especially ones that send data to third parties). The telling part is giving internal safety to use these tools.
>Other productivity boosters certainly didn't need such memos; they were naturally adopted because the benefits were unambiguous.
This is simply not true. As a counter example consider debuggers. They are a big productivity boost, but it requires the user to change their development practice and learn a new tool. This makes adoption very hard. AI has a similar issue of being a new tool with a learning curve.
Did companies actually send out memos saying "We're going to be a company that uses debuggers!"
I would have just thought that people using them would quickly outpace the people that weren't and the people falling behind would adapt or die.
>Did companies actually send out memos saying "We're going to be a company that uses debuggers!"
I could believe it. Especially if there are big licensing costs for the debuggers.
>the people falling behind would adapt or die.
It is better to educate people, make them more efficient, and avoid having them die. Having employees die is expensive for the company.
Do they have to die though? I know some folks that use them and others who don't. They both seem to get along fine.
People will voluntarily adopt modest productivity boosters that don't threaten their job security. They will rebel against extraordinary productivity boosters that may make some of their skills obsolete or threaten their career.
You have to remember that our trade is automating things. We're all enthusiasts about automating things, and there's very clearly a lot of enthusiasm about using AI for that purpose.
If anything, the problem is that management wants to automate poorly. The employees are asked to "figure it out", and if they give feedback that it's probably not the best option, that feedback is rejected.
That’s simply not true. Developers hand-writing assembly readily adopted compilers, accountants readily adopted spreadsheets, and farmers readily adopted tractors and powered mills.
That's false. Those things were in fact resisted in some cases. For instance, look up the swing riots of the 1830s.
There might be a temporary resistance from violence but eventually competition will take over. The issue in this case is that we're not looking at voluntary adoption due to a competitive advantage - we're seeing adoption by fiat.
AI is a broad category of tools, some of which are highly useful to some people - but mandating wide adoption is going to waste a lot of people's time on inefficient tools.
The competitive advantage belongs to companies, not engineers. That's exactly the conflict. What you're predicting -- voluntary adoption due to advantages -- is precisely what is happening, but it's happening at the company level. It's why companies are mandating it and some engineers are resisting it. Just like in the riots I mentioned -- introduction of agricultural machinery was a unilateral decision made by landowners and tenant farmers, often directly against the wishes of the laborers.
A well run company would provide an incentive to their employees for increasing their productivity. Why would employees enthusiastically respond to a mandate that will provide them with no benefit?
Companies are just groups of employees - and if the companies are failing to provide a clear rationale to increase productivity those companies will fail.
I'm sorry to say this, but the company does not need employees to respond enthusiastically. They'll just replace the people who resist for too long. Employees who resist indefinitely have absolutely zero leverage unless they're working on a small subset of services or technologies where AI coding agents will never be useful (which rules out the vast majority of employed software developers).
Oh, they can certainly do that (in part evidenced by companies doing that). It's a large cost to the company, you'll get attrition and lose a lot of employee good-will, and it'll only pay off if you're right. Going with an optional system by making such tools available and incentivizing their use will dodge both of those issues and let you pivot if the technology isn't as beneficial as you thought.
Your examples are productivity boosters that don't threaten job security. A human has to provide inputs to the compiler, the spreadsheet, and the tractor.
The tractor, or more generally farm automation, was maybe the biggest single destruction of jobs in human history. In 1800 about 65% of people worked in agriculture, now it's about 1%. Even if AI eliminated every single computer programmers' job it would be a drop in the bucket compared to how many jobs farm automation destroyed.
On the other hand, there were surely memos like "our facility will be using electric power now. Steam is out". Sometimes execs do set a company's direction.
AI adoption is a bottom-up decision at the level of the individual worker. Converting an entire factory is a top-down decision. No single worker can individually decide to start using electricity instead of steam power, but individuals can choose whether/how to use AI or any other individual-level tool.
That transition took 40-50 years. Electrical power in manufacturing was infeasible for lot of reasons for a longtime.
Any company issuing such an edict early on would have bankrupted themselves. And by the time it became practical, no such edict was needed.
That's a choice individual employees couldn't make. Or, at least, one management wouldn't let them make. It'd require a huge amount of spending.
20 years ago or so, we had an exec ask us about our unit tests.
Productive output is a lagging indicator. Using AI tools is theoretically leading???
I had you a power tool, and your productivity goes up immediately. Your IDE highlights problems, same story. Everyone can observe that this has happened.
It's so sad to see some of these companies completely fail their AI-first communication [1], when they would just get so much from "We think AI can transform the way we work. We're giving you access to all these tools, please tell us what works and what doesn't". And that would be it.
[1] there was a remote universe where I could see myself working for Shopify, now that company is sitting somewhere between Wipro and Accenture in my ranking.
Unfortunately at this scale, when you are this soft on the message, everyone ignores it and keeps doing what they were doing before. Carrot and stick are both required for performance management at this scale. You can argue whether the bet is worth it or not, but to even take the bet, you need a lot more than some resources and a "please".
If performance was the true goal then we'd just naturally see slow adopters unperform and phase out of that company. If you make good tooling available and it is significantly impactful the results will be extremely obvious - and, just speaking from a point of view of psychology, if the person next to you is able to do their job in half the time because they experimented with new tooling _and sees some personal benefit from it_ then you'll be curious and experiment too!
It might be that these companies don't care about actual performance or it might be that these companies are too cheap/poorly run to reward/incentivize actual performance gains but either way... the fault is on leadership.
It's not inevitable, it's just poor leadership. I've seen changes at large organizations take without being crudely pushed top-down and you'd better believe I've seen top-down initiatives fail, so "performance management" is neither necessary nor sufficient.
The executives pushing AI use everywhere aren’t basing it on actual performance (which is an orthogonal concept). It’s just the latest shiny bauble.
Performance management isn't rating how people are doing. It's directing the resources of the company toward what you want it to do. If they want to transform the current state of the company into something that has AI use as a core capability, that is performance management.
There are good books on this: e.g. https://www.amazon.ca/Next-Generation-Performance-Management...
If everyone is ignoring it, it can't be that great. If it's that great, people will adopt it organically based on how it's useful for them.
HubSpot's CTO was very vocal about how AI is changing everything and how he was supporting it by offering the domain chat.com to OpenAI, etc. I say "was" because it has toned down quite a bit. I always thought HubSpot would transform into a true AI CRM given how invested the CTO was in the space from the early days.
Now the stock is down from $800+ to $200+ and the whole messaging has changed. The last one I saw on LinkedIn was: "No comment on the HubSpot stock price. But, I strongly agree with this statement: '...I don't see companies trusting their revenue engine to something vibe-coded over a weekend.'"
The stock dip is likely because of the true AI-native CRMs being built and coming to market, but why couldn't HubSpot take that spot, given the CTO's interest in the space?
I work for a large tech company, and our CTO has just released a memo with a new rubric for SDEs that includes "AI Fluency". We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.
A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. There were 9,700 created in a month or so. Imagine the amount of tech debt, security holes, and business logic mistakes this orgy of agents will cause and will have to be fixed in the future.
edit: typo
This is absolutely the norm across corporate America right now. Chief AI Czars enforcing AI usage metrics with mandatory AI training for anyone that isn't complying.
People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.
It's deeply fascinating psychologically and I'm not sure where this ends.
I've never seen any tech theme pushed top down so hard in 20+ years working. The closest was the early 00s offshoring boom before it peaked and was rationalized/rolled back to some degree. The common theme is C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings.
> I've never seen any tech theme pushed top down so hard in 20+ years working.
> The common theme is C-suite thinks it will save money and their competitors already figured it out, so they are FOMOing at the mouth about catching up on the savings.
I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.
Other sticky "productivity movements" - or, if you're less generous like me, fads - at the level of the individual and the team, for example agile development methodologies or object oriented programming or test driven development, have generally been invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.
Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not at the leading edge of it mandating company-wide adoption.
What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.
And telling us how to do our jobs. As if they've ever compared the optimized output of clang and gcc on an example program to track down a performance regression at 2AM.
I don't understand how all these companies issue these sorts of policies in lock-step with each other. The same happened with "Return To Office". All of a sudden every company decided to kill work from home within the same week or so. Is there some secret CEO cabal that meets on a remote island somewhere to coordinate what they're going to all make workers do next?
CEOs are ladder climbers. The main skill in ladder climbing is being in tune with what the people around them are thinking, and doing what pleases/maximizes others' approval of the job they are doing.
It's extremely human behavior. We all do it to some degree or another. The incentives work like this:
And the one for what's happening with RTO, AI, etc.: Non-founder/mercenary C-suites are incentivized to be fundamentally conservative by shareholders and boards. This is not necessarily bad, but sometimes it leads to funny aggregate behavior, like we're seeing now, when a critical mass of participants and/or money passes some arbitrary threshold, resulting in a social environment that makes it hard for the remaining participants to sit on the sidelines. Imagine a CEO going to their board today and saying, "we're going to sit out on potentially historic productivity gains because we think everyone else in the United States is full of shit and we know something they don't". The board responds with, "but everything I've seen on CNBC and Bloomberg says we're the only ones not doing this, you're fired".
It is investor sentiment and FOMO. If your investors feel like AI is the answer you will need to start using AI.
I am not as negative on AI as the rest of the group here, though. I think AI-first companies will outpace companies that never start to build the AI muscle. From my perspective these memos mostly seem reasonable.
I agree that a lot of the current push is driven by investor sentiment and a degree of FOMO. If capital markets start to believe AI is table stakes, companies don’t really have the option to ignore it anymore. That said, I’m not bearish on AI either. I think there’s a meaningful difference between chasing AI for signaling purposes and deliberately building an “AI muscle” inside the organization. Companies that start learning how to use, govern, and integrate AI thoughtfully are likely to outpace those that never engage at all. From that perspective, most of these memos feel fairly reasonable to me. They’re less about declaring AI as a silver bullet and more about acknowledging that standing still carries its own risk.
You might be misreading negative sentiment towards poor leadership as negative sentiment towards AI.
If AI is the answer, then there's no reason for a top-down mandate like this; people will just start using it as they see fit because it helps them do their jobs better. The fact that it has to be forced on them doesn't sound much like AI is the answer investors thought it was.
No, because as discussed, AI also changes the nature of your job in a way that might be negative for a worker, even if it's more productive. I.e., it may be more fun to ride a horse to your friend's house, but it's not faster than a car. Or, as in the previous example, it may be more enjoyable to make a shoe by hand, but it's less productive than using an assembly line.
I have wondered the exact same thing. It's uncanny how in-sync they all are. I can only suppose that the trend trickles down from the same few influential sources.
> Is there some secret CEO cabal that meets on a remote island somewhere
I mean.. recent FBI files of certain emails would imply.. probably, yes.
Probably yes? Definitely. See also articles like this one [1]. These guys all run in the same circles and the groupthink gets out of control.
https://www.semafor.com/article/04/27/2025/the-group-chats-t...
> FOMOing at the mouth
This is a great line - evocative, funny, and a bit of wordplay.
I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.
One thing I noticed this week, in terms of the "eye of the beholder" view on AI, was the Goldman press release.
Apparently Anthropic has been in there for 6 months helping them with some back-office streamlining, and the outcome of that so far has been... a press release announcing that they are working on it!
A cynic might also ask if this is simply PR for Goldman to get Anthropic's IPO mandate.
I think people underestimate the size/scope/complexity of big company tech stacks and what any sort of AI transformation may actually take.
It may turn into another cottage industry like big data / cloud / whatever adoption, where "forward deployed / customer success engineers" are co-located by the 1000s for years at a time in order to move the needle.
At least they are consistently applying this to all roles instead of only making tech roles suffer through it like they do with interview processes
I'm so glad I'm nearer the end of my career than the beginning. Can't wait to leave this industry. I've got a stock cliff coming up late this summer, probably a good time to get out and find something better to do with my life.
Then, you might even tinker with some AI stuff on your own terms, you never know. :)
Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-)
> Then, you might even tinker with some AI stuff on your own terms, you never know
Indeed! I'm not like dead set against them. I just find they're kind of a bad tool for most jobs I've used them for and I'm just so goddamn tired of hearing about how revolutionary this kinda-bad tool is.
If you're finding they're a bad tool for most jobs you're using them for, you're probably being closed-minded and using them wrong. The trick with AI these days is to ask it to do something that you think is impossible, and it will usually do a pretty decent job at it, or at least get close enough for you to pick up or to guide it further.
I was a huge AI skeptic but since Jan 2025, I have been watching AI take my job away from me, so I adapted and am using AI now to accelerate my productivity. I'm in my 50s and have been programming for 30 years so I've seen both sides and there is nothing that is going to stop it.
I try them a few times a month, always to underwhelming results. They're always wrong. Maybe I'll find an interesting thing to do with them some day, I dunno. It's just not a fun or interesting tool for me to learn to use so I'm not motivated. I like deterministic & understandable systems that always function correctly; "smart" has always been a negative term in marketing to me. I'm more motivated to learn to drive a city bus or walk a postal route or something, so that's the direction I'm headed in.
Okay, I use OpenCode/Codex/Gemini daily (recently cancelled my personal CC plan given GPT 5.2/3 High/XHigh being a better value, but still have access to Opus 4.5/6 at work) and have found it can provide value in certain parts of my job and personal projects.
But the evangelist insistence that it literally cannot be a net negative in any context/workflow is just exhausting to read and is a massive turn-off. As is the refusal to accept that others may simply not benefit the same way from a different work style.
Like I said, I feel like I get net value out of it, but if my work patterns were scientifically studied and it turned out it wasn't actually a time saver on the whole I wouldn't be that surprised.
There are times where after knocking request after request out of the park, I spend hours wrangling some dumb failures or run into spaghetti code from the last "successful" session that massively slow down new development or require painful refactoring and start to question whether this is a sustainable, true net multiplier in the long term. Plus the constant time investment of learning and maintaining new tools/rules/hooks/etc that should be counted too.
But, I enjoy the work style personally so stick with it.
I just find FOMO/hype inherently off-putting and don't understand why random people feel they can confidently say that some random other person they don't know anything about is doing it wrong or will be "left behind" by not chasing constantly changing SOTA/best practices.
> you're probably being closed minded and using it wrong
> I was a huge AI skeptic but since Jan 2025,
> I'm in my 50s and have been programming for 30 years
> there is nothing that is going to stop it.
I need to turn this into one of those checklists like the anti-spam one and just paste it every time we get the same 5 or 6 clichés
Maybe not everyone finds them as useful for their everyday tasks as you do? Software development is quite a broad term.
1. execs likely have spend commits and pressure from the board about their 'ai strategy', what better way to show we're making progress than stamping on some kpis like # of agents created?
2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))
3. there is some subset of ai detractors that refuse to use the tools for whatever reason
the metrics pushed by 1) rarely account for 2) and dont really serve 3)
i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo
> (cc / codex / cursor / copilot (jk, nobody should be using copilot))
I seem to be using claude (sonnet/opus/haiku, not cc though), and have the option of using codex via my copilot account. Is there some advantage to using codex/claude more directly/not through copilot?
copilot is a much worse harness, although recently improvements in base model intelligence have helped it a bit
if you can, use cc or codex through your ide instead, oai and anthropic train on their own harnesses, you get better performance
I'm currently using opus in Zed via copilot (I think that's what you're recommending?) and tbh couldn't be happier. It's hard to imagine what better would look like.
oh, i meant copilot as in microsoft copilot in vscode. i havent used zed so can't speak to it but if it works for you it works!
The KPI problem is systemic and bigger than just Gen-AI, it’s in everything these days. Actual governance starts by being explicit about business value.
If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.
For some reason the last decade or so we have confused activity with productivity.
(and words/claims with company value - but that's another topic)
I'm so happy I work at a sane company. We're pushing the limits of AI and everyone sees the value, but we also see the danger/risks.
I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does.
Leadership loves AI more than anything they have ever loved before. It's because, for them, the fawning, sycophantic, ego-stroking agents who cheerfully champion every dumb idea they have and help them realize it with spectacular averageness are EXACTLY what they've always expected to receive from their employees.
This feels like a construction company demanding that everyone, from drywaller to admin assistant, go out and buy a drill.
Can I modify your example to:
Demanding everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other color of drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).
Better, yeah.
Awesome, with that new policy we'll be sure to justify my purple drill evangelist role by showing that our average employee is dependent on purple drills for at least 1/8th of their workload. Who knew that our employees would so quickly embrace the new technology. Now the board can't cut me!
It's really cascaded down too.
Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.
Throughout the year, they must coordinate training & enforce attendance for the people in their department, with drill training mandated by the Head of Drilling.
And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.
Drilling cannot fail, it can only be failed.
This is literally happening in non-tech finance firms where people in non-tech roles are being judged on their AI adoption.
Some companies swear by this. CP Rail is notorious for training everyone to drive a train.
That kind of makes sense philosophically if your business is trains, but I don't think that their business was AI agents. Although given they have a VP of AI, I have no idea. What a crazy title.
> We also have a dashboard with AI Adoption per developer, that is being used to surveil the teams lagging on the topic. All very depressing.
Enforced use means one of two things:
1. The tool sucks, so few will use it unless forced.
2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).
3. They discovered it's something they can measure so they made a metric about it.
4. They heard from their golf buddy who heard from his racquetball buddy that this other CTO at this other shop is saving lots of money with AI
I know you're speaking half in jest but the C-suite of my area actually used a tweet by an OpenAI executive as the agenda for an AI brainstorm meeting.
Well that's inspiring. If you're going to follow anyone right now be sure to follow someone from the company that has committed to spending a trillion dollars without ever having a profitable product. Those are the folks who know what good business is!
It'll never cease to amaze me how many powerful people can't tell advice from advertising.
I am at less than half jest here.
I have friends who are finance-industry CTOs, and they have described it to me in real time as CEO FOMO they need to manage...
Remember tech is sort of an odd duck in how open people are about things and the amount of cross pollination. Many industries are far more secretive and so whatever people are hearing about competitors AI usage is 4th hand hearsay telephone game.
edit: noteworthy someone sent yet another firmwide email about AI today which was just linking to some twitter thread by a VC AI booster thinkbro
Or it has an annoying learning curve.
Reminds me of those little gadgets that move your mouse so that you show up as online on Slack.
I’d just add a cron job to burn some tokens.
That sounds like a lot of work - maybe you could burn some tokens asking AI to write a cron to burn some tokens for you?
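A minimal sketch, for the truly lazy (the schedule and the claude -p one-shot invocation are purely illustrative; swap in whatever CLI your usage dashboard actually meters):

    # hypothetical crontab entry: burn a few tokens at the top of each
    # working hour so the "AI adoption" dashboard stays green
    0 9-17 * * 1-5 claude -p "Reply with the single word: ok" >/dev/null 2>&1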
That sounds awful... Thankfully our CTO is quite supportive of our team's anti-AI policy and is even supportive of posting our LLM ban on job postings. I honestly don't think that I could operate in an environment with any sort of AI mandate...
That seems just as bad but the opposite direction.
I mean get onboard or fall behind, that's the situation we're all in. It can also be exciting. If you think it's still just slop and errors when managed by experienced devs, you're already behind.
The obvious pulling ahead from early AI adopters/forcers will happen any moment now... any moment
It's not obvious because the multiplier effect of AI is being used to reduce head count more than to drastically increase net output of a team. Which yeah is scary, but my point is if you don't see any multiplier effect from using that latest AI tools, you are either doing a bad job of using them (or don't have the budget, can't blame anyone for that), or are maybe in some obscure niche coding world?
>the multiplier effect of AI is being used to reduce head count more than to drastically increase net output of a team
This simply isn’t how economics works. There is always additional demand, especially in the software space. Every other productivity-boosting technology has resulted in an increase in jobs, not a decrease.
Well that's certainly and obviously how it's working at the moment in the software industry.
We're in the transition between traditional coding jobs and agentic managers (or something like that)
I try these things a couple times a month. They're always underwhelming. Earlier this week I had the thing work tells me to use (claude code sonnet 4? something like that) generate some unit tests for a new function I wrote. I had a number of objections about the utility of the test cases it chose to write, but the largest problem was that it assigned the expected value to a test case struct field and then... didn't actually validate the retrieved value against it. If you didn't review the code, you wouldn't know that the test it wrote did literally nothing of value.
Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write, done the job correctly, and cost ~$0.00 in compute, but I was curious to see if the AI could do it. Nope.
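(For the record, the 15-second version I mean, with hypothetical field names and assuming GNU sed:)

    # rename old_field -> new_field in every file that mentions it;
    # \b keeps it from also matching names like old_field_extra
    grep -rl 'old_field' . | xargs sed -i 's/\bold_field\b/new_field/g'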
Trillions of dollars for this? Sigh... try again next week, I guess.
Twice now in this same story, different subthreads, I've seen AI dullards declaring that you, specifically, are holding it wrong. It's delightful, really.
I don't really care if other people want to be on or off the AI train (no hate to the gp poster), but if you are on the train and you read the above comment, it's hard not to think that this person might be holding it wrong.
Using sonnet 4 or even just not knowing which model they are using is a sign of someone not really taking this tech all that seriously. More or less anyone who is seriously trying to adopt this technology knows they are using Opus 4.6 and probably even knows when they stopped using Opus 4. Also, the idea that you wouldn't review the code it generated is, perhaps not uncommon, but I think a minority opinion among people who are using the tools effectively. Also a rename falls squarely in the realm of operations that will reliably work in my experience.
This is why these conversations are so fruitless online - someone describes their experience with an anecdote that is (IMO) a fairly inaccurate representation of what the technology can do today. If this is their experience, I think it's very possible they are holding it wrong.
Again, I don't mean any hate towards the original poster, everyone can have their own approach to AI.
Yeah, I'm definitely guilty of not being motivated to use these tools. I find them annoying and boring. But my company's screaming that we should be using them, so I have been trying to find ways to integrate it into my work. As I mentioned, it's mostly not been going very well. I'm just using the tool the company put in front of me and told me to use, I don't know or really care what it is.
"Hey boss, I tried to replace my screwdriver with this thing you said I have to use? Milwaukee or something? When I used it, it rammed the screw in so tight that it cracked the wood."
^ If someone says that, they are definitely "holding it wrong", yes. If they used it more they would understand that you set the clutch ring to the appropriate setting to avoid this. What you don't do is keep using the screwdriver while the business that pays you needs 55 more townhouses built.
No need to be mean. It's not living up to the marketing (no surprise), but I am trying to find a way to use these things that doesn't suck. Not there yet, but I'll keep trying.
Try Opus?
Eh, there's a new shiny thing every 2 months. I'm waiting for the tools to settle down rather than keep up with that treadmill. Or I'll just go find a new career that's more appealing.
It seems that the rate of change will only accelerate.
I dunno. At some point the people who make these tools will have to turn a profit, and I suspect we'll find out that 98% of the AI industry is swimming naked.
Yeah, I think it'll consolidate around one or two players. Most likely xAI, even though they're behind at the moment. No one can compete with the orbital infrastructure, if that works out. Big if. That's all a different topic.
But I feel you, part of me wants to quit too, but can't afford that yet.
Fall behind what? Writing code is only one part of building a successful product and business. Speed of writing code is often not what bottlenecks success.
Yes, the execution part has become cheap, but planning and strategizing is not much easier. But devs and organizations that keep their heads in the sand will fall behind on one leg of that stool.
> I mean get onboard or fall behind, that's the situation we're all in. It can also be exciting.
I am aware of a large company that everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.
Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.
If you believe anything that is happening, given the amount of money and bullshit enveloping this LLM disaster, you should put the keyboard down for a while.
Anyone with more than 2 years of professional software engineering experience can tell this is complete nonsense.
Fiverr CEO goes on a pretty dark rant about how people who don’t upskill and convert to AI workflows will have to change professions, etc.
Then concludes his email with:
> I have asked Shelly to free up time on my calendar next week so people can have conversations with me about our future.
I assume Shelly is an AI, and not human headcount the CEO is wasting on menial admin tasks??
Same here in LATAM. We are also an AI-First company now. No customer-first, or product-first, or data-driven (I actually liked the idea behind being data-driven). All the code must be AI-generated by the end of Q1. All the employees must include at least one AI-adoption metric or project in their goals.
The Klarna guy:
>The misconceptions about Klarna and AI adoption baffle me sometimes.
>Yes, we removed close to 1,500 micro SaaS services and some large. Not to save on licenses, but to give AI the cleanest possible context.
If you remove all your services...
seems to be a pattern:
[Company that's getting disrupted by AI: Fiverr, Duolingo]: rush to adopt internal AI to cut costs before they get undercut by competition
[Company that's orthogonal: Box, Ramp, HFT]: build internal tools to boost productivity, maintain 'ai-first' image to keep talent
[Company whose business model is AI]: time to go all in
Why wouldn't HFT be disrupted by AI? AI-enhanced trading algo designs are likely to be competitive? AI disrupts everything on the computer from the low-end on up. The higher end requires more expensive or custom models that aren't as easy to obtain yet.
HFT is about the last technical domain that could possibly be touched by LLMs. There is next to no good training data and there is zero margin for error.
Because HFT is about adversarial play with a lot of hidden information.
Relevant article from two days ago https://www.latent.space/p/adversarial-reasoning
i mean llms
happy to be corrected but im not aware of any direct improvements llms bring to ultra low latency market making, time to first token is just too high (not including coding agents)
from talking to some friends in the space theres some meaningful improvements in tooling especially in discretionary trading that operate on longer time horizons where agents can actually help w research and sentiment analysis
I like to think if someone can't be bothered to write something, I can't be bothered to read it.
That Fiverr one seems especially cold somehow. I guess it lines up with their business model though.
It also lines up with what has to be their outlook on the market; their model is especially challenged by AI. Years ago I paid a Ukrainian freelancer to write me some scrapers; today that would be a quick project in Cursor. Lots of people used it for cheap art, voice work, etc., and now the low end is all AI.
Reminds me a bit of the old f*ckedcompany website, where internal layoff notices and other insider gossip got posted.
May I suggest directly hosting the images and text instead of embedding the X and LinkedIn posts.
You could use AI to summarize the website without X and LinkedIn embedding code.
The trap for many companies is that as everyone automates with AI, their competitive advantage erodes, as they prove that a few centralized models can run their businesses.
What are the moats for businesses in 2030? Purely ownership of physical assets and energy?
If the person who wrote this is reading, many of us block tracking in our browsers and all we see for each of those X embeds is
"X trackers and content blocked
Your Firefox settings blocked this content from tracking you across sites or being used for ads."
Screenshots don't track me so they would be ok.
I'm not sure a person wrote this website, but FYI on Firefox Nightly the text of the tweet is shown below the blocked tracker in a box labeled "Content from blocked embed". It doesn't have images or longer posts, so not that useful for this specific website, but it's a nice feature. It also gives you a link to the tweet so you can easily open it in a private window or XCancel if you want to.
If you hire good people and set proper incentives they will figure out the best way to do something. If leadership has to direct employees to use AI it's a bad sign. If it were a huge boon to productivity, you won't need to force people to use it.
What are the AI-never companies doing? May be a useful comparison. Is the AI work actually improving the bottom line, or is it being used to assuage noisy shareholders that think AI is a hack for infinite profit?
> Within weeks, CEOs across industries posted their own versions. The AI-first memo became a genre. Here they all are.
That may be all the publicly-posted ones, but I'm skeptical. They have 11.
There were a lot more internal memos.
This reads like they don't want AI, they just want tooling. More, better tooling. AI is just a scapegoat/easy out for writing more tooling that makes them more efficient.
CEOs want productivity. They don't want employees. Employees are a cost center.
Employees also make for customers. Not sure how AIs and robots will be replacing human consumers once CEOs realize their dream.
Isn’t there tons more, like the note from Andy Jassy at Amazon and the CEO at Airwallex etc? Maybe you can use an ai agent to find all the other big examples? ;-)
It's easier to adopt AI when starting from scratch or when the code base is well maintained.
This is "AGI".
Also notice how the stocks of almost all of these companies that have announced AI-first initiatives (Meta excepted) are at best flat, or down by more than 20% YTD.
What does that tell you?
People have always resisted change, especially change that modifies the way they work. They'd rather work on the same thing for life. To get them to adopt new tools you need to do this stuff.
And yes, people did resist IDEs ("I'm best with my Emacs" - no you weren't), people resisted the "sufficiently smart compiler", and so on. What happened was that they were replaced by the sheer growth in the industry providing new people who didn't have these constraints.
The software-defined storage company croit.io announced it at their Workation in May 2023. AI is just another tool and people have to understand that it's not going away. As a company, you still need people to make use of this tool.
"Cool. I quit."
[flagged]
"Please don't fulminate."
https://news.ycombinator.com/newsguidelines.html