> Apple runs on Anthropic at this point. Anthropic is powering a lot of the stuff Apple is doing internally in terms of product development, a lot of their internal tools…They have custom versions of Claude running on their own servers internally.
--Mark Gurman, Bloomberg https://x.com/tbpn/status/2016911797656367199
Apple seems to purposefully have decided to sit out the arms race.
Probably a smart time to rent rather than buy, if they plan on buying in a downturn.
Okay, but why is the Siri team sitting out transformers? I really want to move past the "Dragon NaturallySpeaking" experience with a bolted-on decision tree.
Who's doing it better? I have yet to hear from a Google or Amazon user who has a transformatively better experience, and I think that's why they haven't jumped so far: they have hundreds of millions of users with daily habits that they don't want to lightly disturb.
Claude. I switched my phone assistant to Claude, and it does everything Google (used to) do, like setting alarms and timers, but it also does everything Claude can do.
How did you do that?
> I think that’s why they haven’t jumped so far because they have hundreds of millions of users who have daily habits that they don’t want to lightly disturb.
I don't think that's part of their decision-making. Liquid Glass moved most things around for seemingly little more than novelty, and that's not the first time.
Right now Alexa+ and Gemini are objectively better.
The best is ChatGPT voice mode. It understands non-English words and accents amazingly well, and even though the LLM behind it isn't the full-fledged model, I can have deep conversations with it for an hour without it missing a beat.
Siri doesn't need to have conversations with you; ChatGPT can do that. But it should be able to perform the actions you'd do on your phone.
"objectively better" is a subjective statement :)
My preference, however, is for a voice-control UX like the one I've had with my Amazon Echo and "classic" Alexa for the past 10 years. I'd best describe it as a "voice-driven command line," much like your OS's CLI shell: its interactions are predictable, even if that means I need to know which commands are valid in a given context. Predictability and reliability are what matter most for home-automation integrations.
...but computer interaction with an LLM-driven "AI agent" is anything but predictable. When Amazon opted everyone into Alexa+, I agreed to give it a go and see whether it really made things better, and it did not. I opted out of Alexa+ and went back to something actually reliable.
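The "voice-driven command line" model described above is essentially a fixed grammar: a finite table of verbs mapped to handlers, with nothing probabilistic in between. A minimal sketch (all command names and handlers here are made up for illustration, not taken from any real assistant):

```python
# Minimal sketch of a "voice-driven command line": a fixed verb-to-handler
# table. Out-of-grammar input fails loudly instead of being "interpreted".

def turn_on(device: str) -> str:
    return f"OK: {device} on"            # stand-in for a real device call

def set_timer(spec: str) -> str:
    return f"OK: timer for {spec}"       # stand-in for a real timer call

COMMANDS = {
    "turn on": turn_on,
    "set timer": set_timer,
}

def dispatch(utterance: str) -> str:
    text = utterance.lower().strip()
    for verb, handler in COMMANDS.items():
        if text.startswith(verb):
            return handler(text[len(verb):].strip())
    # Predictability: unknown input is rejected, never guessed at.
    return "ERROR: unknown command"

print(dispatch("Turn on kitchen lights"))
print(dispatch("play something relaxing"))
```

The point of the sketch is the failure mode: a command grammar says "no" to anything it doesn't know, whereas an LLM agent will attempt some interpretation, which is exactly the unpredictability the comment objects to.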
Whenever I see one of these comments, it's always from someone who tried it at launch and gave up after a bad experience. And often other people reply that that was essentially the 1.0 version and the current 2.0 version is much better. As someone who uses none of these products (neither the old voice assistants nor the AI ones), it's really hard to tell whether any of these anecdotes mean anything.
You could have tried Alexa+ at launch, when it was shoddy compared to plain Alexa, and maybe it's better now. But equally, none of the people who call it "amazing" in its current iteration qualify their statements by comparing and contrasting the old version with the new one. That makes them seem either unqualified to judge how much "better" it is than the old version, or at worst shills (paid or not). The most charitable read is that they're comparing, say, day-one Alexa+ with the current Alexa+, without any comparison to the original Alexa.
...which is to say that it really feels like no clear conclusions can be drawn from any of this.
I concur that ChatGPT voice mode is excellent. I can't think of anything to knock it for, other than that for whatever reason it never 'hears' my kids, but that's probably because it's not intended for multi-participant chats?
For one-on-one use, though, it is a really outstanding experience, especially since they tamped down the way-over-the-top humanisms.
Plus, if someone else does it better (or differently), I bet Apple has a team and technology at a 90%-done state, waiting to jump on it, pick it apart, and make it better. I don't think they're doing nothing.
> Who’s doing it better?
Any of the Whisper-based apps on the App Store.
Yesterday my Google Home Mini gave me the current temperature in Fahrenheit. I live in Canada and use a Pixel. Dumbest fucking AI going. May as well give it to me in coulombs per hectare.
Here have an anecdote: Gemini Assistant is pretty good.
Gemini will be replacing the legacy Siri:
https://blog.google/company-news/inside-google/company-annou...
I think it's the same reason macOS and iOS UX has degraded so much over the past decade: Apple's focus shifted toward hardware independence.
The 2010s were marked by Intel's lazy product lineup: year after year it pumped out rehashes of older products, iterating on its 14nm lithography with increasingly minor architectural improvements until AMD overtook it. In the process, Apple's partnership with Intel became a liability it had to resolve, and the push to a unified ARM architecture was no small feat.
If you ask me, degrading the user experience for the sake of that focus isn't justified. It's a trillion-dollar company, and has been for a while. Sure, it could have tackled both, but what do I know.
In any case, I think it explains really well why Siri feels so abandoned.
I dunno. Apple has always had a pretty high level of hardware independence, and one could imagine that even if Intel had kept producing great chips, the ARM architecture would have replaced it eventually. Certainly the timeline shifted (and I'm glad for it), but I don't know whether that really impacted Siri. If anything, Siri seems to have been pushed to the bottom of the pile in favor of projects like the Apple Car and the Vision Pro OS on one side, and the demand to grow services revenue on the other.
The A series is their own chip design, not a PowerPC or Intel design.
These are CPUs they built for their own purposes, which is next-level hardware independence.
They're valued at $4T and have hundreds of billions hoarded. They could run fifty billion-dollar startup projects and not feel it. Imagine a startup getting handed a billion dollars ... plus the vast knowledge Apple already has access to.
There's no way they couldn't do a better Siri. For some reason, they just ... won't.
It's one of the biggest and wealthiest companies in the world, but your comment seems to imply they have to pick and choose what they pursue. They really don't, especially when it's hardware vs. software.
> It's a trillion dollar company, and has been for a while. Sure it could have tackled both, but what do I know.
I didn't imply it; it's explicit in my comment. It's what their actions show. Their updates make their systems worse and worse, Tim Cook is out, and Siri is in shambles. It might have been something else, but I'm willing to give them the benefit of the doubt, because the alternative is sheer stupidity.
I always found that Apple had pretty mediocre software quality; it's always been a very strong hardware company first and foremost.
They have a great kernel, drivers, and low-level engineering, but the stack above that has a lot of questionable stuff.
I only partly agree with this. The answer is maddeningly more complicated.
Some parts of their software stack, higher up than the kernel, are actually pretty great. There's a lot of really brilliant stuff in their system frameworks, and in SwiftUI, Cocoa, and UIKit. I've been using Linux at home recently, and I find myself missing some of it.
But, on the flip side, you suddenly hit maddening bugs, crashes, or terrible developer-experience papercuts. And, of course, there's the App Store, which is just evil. For my next app I'm going to go notarization-only and see how that goes...
The comment above is on to something. I find CarPlay to be much more valuable, and much more of a lock-in to the iPhone, than Siri. I don't think I could ever go back to using the infotainment systems that ship with cars, so it makes sense why they might prioritize it over Siri. And in the context of CarPlay, Siri's simplicity is nice. I really only need it to execute a few simple commands: looking up directions, making calls, reading and sending texts, playing a podcast, etc.
I don't dispute that, but Apple made its business on the premise of being the best in the business in terms of UX. Note though that you can have great UX powered by mediocre software, so those aren't mutually exclusive.
I think they could never make it good enough at the right price.
You have to remember all of the AI companies are making cash bonfires. People aren't going to stop buying iPhones because Siri can only do what it does now.
If Apple focuses on hardware and skips the pay-for-inference bubble, they'll come out the other side with the best consumer hardware for local inference, hardware everybody already owns, and that is going to eat the whole industry's lunch.
Nvidia is going to have a hard time convincing people they need to buy $1,000 LLM-inference hardware. Apple isn't going to have a hard time convincing people to buy the next generation of phone, tablet, or laptop.
> They have custom versions of Claude running on their own servers internally.
This is the important point.
Sending their internal code, documentation, secret tokens, etc. to Anthropic would be completely irresponsible.
But if they are running the models on their own servers, why not!
Was it even publicly known that Anthropic offered this capability? I wasn't aware on-prem Claude was a thing.
I'm suspicious of that take from Mark Gurman. That's a lot of detail around pricing and "holding Apple over a barrel" as it relates to the Siri deal, and it reads like nice PR spin from Anthropic.
Anthropic probably couldn't give the uptime guarantees that Google can, right?
Apple is a pretty difficult company to deal with on a B2B basis.
If you have terms that conflict with theirs, they aren't very flexible. Anthropic can be similarly difficult, and its needs from a business perspective probably don't align with Siri. I would imagine Google has a more flexible, longer-term approach to absorbing risk in a revenue-share arrangement than Anthropic, which generally wants cash.
Anthropic's only purpose is to juice whatever KPIs are going to increase its IPO market cap.
Yeah, that makes more sense to me than "Anthropic had them over a barrel," which seemed quite odd given the relative cash positions and installed bases of the two firms.
Tbh I thought their purpose was to power the war machine
Gurman might be the only leaker in tech who, so far, doesn't seem to fuck around: low miss rate, rarely exaggerates. Of course that could change, and he could always get insider info that is wrong.
The reporting says it's running on their own hardware.
Internal dev tools, but the point I'm making relates to the discussion about choosing Gemini over Claude for their consumer-facing products.
Gurman is clearly Apple's preferred go-to for leaking info.
Unrelated:
Yuck. A lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?
It's trending in that direction. If you want genuine conversation with humans, it's best to start looking for small, private communities that have and enforce LLM policies that align with your desires. Public social media is universally trash, don't waste your time there. I think HN is still worth visiting for now, but it's getting harder to justify spending time here with the quantity of garbage-quality LLM articles and even many comments.
It’s not that they’ve lost their identity— it’s that… { “error”: “Claude Max limits exceeded” }
Dead internet. Twitter is 95% bots now, especially when it comes to any topic relevant to corporations.
It's not about contributing to the conversation — it's about the fake internet points.
It's not about the fake internet points — it's about manipulating people to support companies they otherwise wouldn't.
That's why he said fake internet points.
These points might be fake, but they are far from being useless, and actually have monetary value.
There is a market for buying and selling "aged" Hacker News accounts (about $15 for ~500 points).
By purchasing just ~300 karma points, founders can unlock an uplift worth tens of thousands of dollars in visibility on the home page (clients and investors).
So the LLM comments are not here just for fun, they are clearly farming points.
Ironically, it also increases actual human engagement. That way, the day Y Combinator wants to announce something, they already have a bigger audience than if engagement were low.
Like the shilling you mentioned, these bots can also pile on downvotes and flag a competitor's service.
Essentially the same as on Reddit. If you have incentive, you have a market.
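Taking the figures quoted above at face value, the arithmetic behind the incentive is stark:

```python
# Rough cost of the karma described above, using the thread's own numbers.
price_per_account = 15.0     # USD for an "aged" account
points_per_account = 500     # approximate karma on such an account
points_needed = 300          # karma claimed to be enough for visibility

cost = points_needed * (price_per_account / points_per_account)
print(f"~${cost:.2f} buys the claimed front-page credibility")
```

A single-digit dollar outlay against a payoff claimed to be in the tens of thousands is exactly the kind of asymmetry that creates a market.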
You've hit the nail on the head with that observation! And honestly? The points are all that matters.
I find it hilarious that your comment has an emdash.
We're getting to a point where we're going to have to consistently start putting content in that AI is banned from writing, just to prove that we're human.
arse
Only a matter of time (if not already) before there are counter-LLMs or whatnot that convince free-rein LLM agents to go mine cryptocurrency for the attacker or run propaganda campaigns.
Also, at some point someone will figure out how to reliably produce non-smelly LLM replies.
Yep, path of least resistance, unfortunately. Any recommendations, other than Discord, for where to have meaningful online interactions with actual humans?
You're absolutely right!
(sorry couldn't resist)
If I were a sociopath who didn’t care at all about the commons I’d be ruining by doing so, I suppose I’d find it intellectually interesting to set up a ClaudeyLemonZest and see how people react to various settings.
Come join the party at ClaudeyLemonParty
People become so lazy with AI. They don't even check what they commit.
Anything that goes to production should follow a four-to-six-eyes rule, with at least one reviewer who can review the changes in isolation.
If tools or LLMs help them with it, that's fine, but there should always be at least two humans involved: one making changes, one verifying. And if something like this happens, they're both culpable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.
To be honest, for some reason I expected most of Apple to eschew Claude and AI coding.
I'm not sure why. It just doesn't feel very Apple-like.
Because unlike Apple Intelligence, Claude is useful?
I'm also not sure why you'd think that. Apple has been at the forefront of "AI" for years, running models locally and optimizing its CPUs for local workloads to, e.g., identify people, places, and pets (much appreciated lmao), create slideshows, and subtly improve photos taken on the device.
They've had it built into Xcode for a while now, and I imagine internally a lot longer.
Whilst tempting, I think it is important not to read too much into this.
It is no secret that Apple has an enormous R&D budget.
It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.
So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives. Apple is an enormous multinational company, it is unlikely they have zero-AI on-site.
What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.
I'm sure journalists and Anthropic would love to have you believe otherwise, but I think we need to keep our feet on the ground here and accept the reality is more old-school.
After all, as others have pointed out here, while the rest of Silicon Valley has been shoveling truckloads of cash at AI, Apple has been patiently sitting, watching the bandwagon trundle along the rails.
> It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.
Having worked there, I can say this is a perfect description of the organization.
> So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives.
> What is almost guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide.
100% agree
The risk of embarrassment is too great for vibe coding. Apple's brand is TRUST, and people don't trust AI. A slip like this erodes their brand.
Not really; almost all active software developers use AI nowadays. One survey of 121,000 developers across 450+ companies found that 92.6% use an AI coding assistant at least once a month, and roughly 75% use one weekly.
It's weird to believe that large corporations should be ashamed to use AI. It's standard engineering practice at this point; refusing it would be like refusing autocomplete because autocomplete isn't right 100% of the time.
Especially considering that Apple added it as a headline feature to the latest Xcode releases…
Original link: https://x.com/aaronp613/status/2049986504617820551?s=20
Dozens of comments, but not a single "What was in their Claude.md?"
The "what" is in the screenshots...
Screenshots aren't very accessible though.
You’re expected to read the ~article~ twitter thing :)
So much FUD (and bot replies dogpiling on?) in that thread. It's just a file that specifies some structure for the project. Nothing super secret.
X somehow manages to get worse for this as time goes on.
Seems like at some point most of the actual humans just gave up on replying.
It's not super secret, no. It's just embarrassing that the instructions for their AI agents doing coding and pushing deployments don't tell them not to push the Claude.md files. It demonstrates that they haven't fed their AI prompts through AI yet, because it would have added a clause for that.
Have you never used Claude? It regularly ignores directives, no matter how they're worded or how many times they're repeated. It's also hierarchical: org-wide rules would live in a higher-level directory than repo rules or component rules. This is obviously just a tiny snippet of their prompts.
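The hierarchy described above might look something like this; the layout is a hypothetical illustration of how Claude Code picks up CLAUDE.md files at multiple levels, not a claim about Apple's actual repo structure:

```
org-root/
├── CLAUDE.md            # org-wide rules (e.g. "never commit agent config")
└── some-repo/
    ├── CLAUDE.md        # repo-level rules
    └── component/
        └── CLAUDE.md    # component-level rules, most specific to the code
```

One way to keep such files out of anything that ships publicly is a filter at publish time rather than an instruction to the agent, e.g. a `.gitattributes` line like `CLAUDE.md export-ignore` so `git archive` exports omit them (again, an illustration of the mechanism, not Apple's actual process).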
I really hope it's not churning out massive amounts of code for macOS and iOS, or we're in for some pretty interesting times in the next year or so.
Is it really a mistake? OpenAI's own agent SDK also has a Claude.md file. That's not an indication that OpenAI internally use Claude, rather, it's there because the SDK has multi-model support.
It was a mistake yes. And they corrected it. Why would you assume they would do this intentionally?
I don't think you need to even see any files to realize much of Apple's software is vibe-coded by now.
Had some issues where my monitor apparently saw a connection to my Mac Mini, but the Mac Mini displayed black; it had somehow gotten out of sync with the monitor. Sleeping the display controller and then waking it solved it.
Gathered a bunch of data and wanted to submit a report. I've been an Apple Developer Program member for like two days now, and I wanna be a good c̶u̶s̶t̶o̶m̶e̶r̶ user, so I opened up Feedback Assistant.
It asks me for my email, I input it, press enter. A password input appears, but keyboard focus doesn't move there automatically. I know it's such a tiny nitpick, but tiny shit like this makes it so obvious that not a single person actually tried this UX. 10-15 years ago, Apple would never release something that wasn't perfect, but now these UX rough edges are absolutely everywhere across the OS.
I ended up not logging in at all and wrote my fix into a tiny fix-display.swift file, which I'll run when it happens instead.