I love the idea and I would like to build something like this. But the few attempts I have made using Whisper locally have so far been underwhelming. Has anyone gotten results with small Whisper models that are good enough for a use case like this?
Maybe I've just had a bad microphone.
Which model do you use? I use large usually, on a GPU. It's fast and works really well. Be aware though that it can only recognise one language at a time. It will autodetect if you don't specify one.
Of course the smaller models don't work nearly as well and they are often restricted to English. Large works great for me though it does require GPU hardware to be responsive enough, even with faster-whisper or insanely-fast-whisper.
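For anyone comparing setups, here is a minimal sketch of that kind of configuration using faster-whisper on a GPU, with the language pinned rather than autodetected (the model name and audio path are placeholders):

```python
# Minimal sketch: GPU transcription with faster-whisper, language pinned
# rather than autodetected. Model name and audio path are placeholders.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.wav", language="en")

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```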
> Maybe I've just had a bad microphone.
Yeah, I would definitely double-check your setup. At work we use Whisper to live-transcribe-and-translate all-hands meetings and it works exceptionally well.
+1 this. Whisper works insanely well. I've been using the medium model as it has yet to mistranscribe anything noticeable, and it's very lightweight. I even converted it to a Core ML model so it runs accelerated on Apple silicon. It doesn't run *that* much faster than before... but it ran really fast to begin with. For anyone tinkering, I've had much success with whisper.cpp.
What was the process of converting it like? I assume you then had to write all of the inference code as well?
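For a rough idea of the general approach (not necessarily the exact script whisper.cpp ships): the usual route is to trace Whisper's audio encoder with PyTorch and hand the trace to coremltools. A sketch under those assumptions:

```python
# Rough sketch: trace Whisper's audio encoder with PyTorch and convert the
# trace with coremltools. This mirrors the general approach, not whisper.cpp's
# exact conversion script.
import torch
import whisper
import coremltools as ct

model = whisper.load_model("medium")
encoder = model.encoder.eval()

# Dummy log-mel spectrogram input: 80 mel bins x 3000 frames (30 s of audio).
dummy_mel = torch.randn(1, 80, 3000)
traced = torch.jit.trace(encoder, dummy_mel)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="mel", shape=dummy_mel.shape)],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,  # allow the Neural Engine to be used
)
mlmodel.save("whisper_encoder.mlpackage")
```

As far as I understand, whisper.cpp built with Core ML support then loads the compiled encoder alongside its regular ggml model, so the decoding/inference code does not have to be rewritten from scratch.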
This feels like the right direction: useful voice assistants without cloud dependencies. Would love to see where the performance bottlenecks are on commodity hardware.
Man, I'd really love it if this were just a product/app I could download and use a UI to configure/teach.
But this guide gives me what I need to make that, I think, so a big thank you for this!
Why an LLM-written article, though?
This summary-like style — with heavy formatting and every (!) paragraph as a bulleted list — drives me nuts tbqh. Especially in lengthy texts, it just looks... noisy, bland, and sometimes confusing.
What format would you prefer? We're not using ChatGPT to write, and we've experimented with this format. The other articles may have a better format?
This is great, thanks for putting this together.
Haven't followed it through yet, but does this model run successfully on an iPhone?
My 9 year old ran a Qwen 0.6B model using Ollama quite well; anything else was too slow to offer a good UX.
Oh, a nine year old PHONE.
I was thinking there was a fourth grader out there deploying models when at that age I was still learning multiplication tables.
My son just turned 9 today so I was like, "Wow! I wonder if my kid would be interested in doing this?"
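For reference, a minimal sketch of driving a small local model like that through Ollama's Python client; the model tag below is an assumption and depends on what has actually been pulled locally:

```python
# Minimal sketch: chatting with a small local model through Ollama's Python
# client. The model tag is an assumption; use whatever tag `ollama list`
# shows for the model you pulled.
import ollama

response = ollama.chat(
    model="qwen3:0.6b",
    messages=[{"role": "user", "content": "Set a timer for ten minutes."}],
)
print(response["message"]["content"])
```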
MLC[0] indicates that it can run models in the 8B range on iOS, but 1-3B sounds more reasonable to me.
[0] https://llm.mlc.ai/docs/deploy/ios.html#bring-your-own-model
I've noticed recently (maybe I missed an announcement) that Siri now functions locally for at least some commands. Try putting an Apple Watch in airplane mode and asking it to set a timer or reminder.
Siri has had limited offline functionality since at least iOS 15? Although I don't think most users noticed at the time, since most of Siri's command vocabulary is for things that require a network connection...
Why haven't Apple taken a look at the data and then hardcoded handlers for the top ~1000 usages???
They are doing this, just at a mind-numbingly slow pace. They seem to add controls for brightness and power but don't make it clear what works when offline. It's not even worth trying because there's no guide or documentation on what commands would be available. You just have to go into airplane mode and try asking stuff. Awful UX
Cool project and nice write-up!
Does Apple even allow you to replace Siri with another assistant? For the longest time on Android, all non-Google assistants were crippled by not being able to listen in the background or use the assistant hardkey, gestures, or shortcuts. I'm not sure if the Google assistant still has privileges others don't, but I wouldn't be surprised in the least.
Part of the problem is the wake word “Hey Siri” is actually handled by a separate coprocessor (AOP) with the model compiled into the firmware. While anything is technically possible, it isn't as simple as just letting the Google app run in the background, since the AP is asleep when any of these gestures happen. You could probably set up the Action button on the side to open an assistant, but that's going to be a less pleasant experience (the app might not be open, etc.).
Details are listed below:
https://machinelearning.apple.com/research/hey-siri
There are open solutions for that, like openWakeWord and microWakeWord (the latter can even run on an ESP32!)
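For a rough idea of what that looks like in practice, here is a small sketch using openWakeWord; it assumes 16 kHz, 16-bit mono audio frames coming from a microphone loop handled elsewhere, and that the bundled pretrained models have already been downloaded:

```python
# Rough sketch: wake-word detection with openWakeWord. Assumes 16 kHz, 16-bit
# mono audio frames from a microphone loop handled elsewhere, and that the
# bundled pretrained models have already been downloaded.
import numpy as np
from openwakeword.model import Model

oww = Model()  # loads the available pretrained wake-word models

def is_wake_word(frame: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if any loaded wake-word model fires on this audio frame."""
    scores = oww.predict(frame)  # {model_name: confidence score}
    return any(score > threshold for score in scores.values())
```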
The training is a lot of work though and requires a lot of material. For Home Assistant's voice preview model they had tens of thousands of volunteers record the "okay nabu" wake word, and even so it doesn't work quite as well as "Hey Siri" on Apple devices.
Same with Android phones - a super-specific hardcoded phrase is much easier to fit within the power budgets required for an "always on" part of the device.
It's why a manufacturer (like Samsung) can change that sort of thing on their devices, but it's not realistically something an end user (or even an app) can customize in software. It's not some "arbitrary" limitation.
Back in 1992 or so the NeXT could distinguish (was it 16 or) 64 fixed, trained, phrases. Point being, it doesn’t take too much compute with a finite vocabulary.
But wouldn't adding your own phrases require a reflash of parts of firmware in this context?
I think people would be fine with having to call it Siri if only they could replace the actual assistant.
I presume you could pretty easily use the new-ish Action button to run a custom shortcut that brings up an alternative assistant app.
More or less. This is what Perplexity does.
I saw an article about this and downloaded the Perplexity app but I was unable to figure out if this was true? Do I need a paid tier? I just quickly worked through the free sign up and couldn't sort it out. The demo looked really slick. Is it worth pursuing?
Faithful year-and-a-half user of ChatGPT on my iPhone, which has made me loathe Siri for how dumb she is in every sense of the word!
When will OpenAI (with the help of Microsoft) release a GPT phone to compete with the iPhone? I'm so tired of the boring iPhone! Give me a GPT phone where, from my lock screen, GPT does everything for me. Fingers crossed :) it's secretly in the works!
So build your own crappy agent-assistant?
In earnest though, I'm certain we'll see a community replacement of Siri by end-of-year if the iPhone permissions model allows it or there's some workaround. IDK what the limitations are here but I'm eagerly awaiting the community to step in where Siri has failed.
> So build your own crappy agent-assistant?
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something." - https://news.ycombinator.com/newsguidelines.html
(Your comment would be fine without that first bit.)
The assistant is only half the story here. This looks like a great, well-defined tutorial project to learn how to put this stuff together locally.
A better one.* Siri can't even tell me the weather right; this assistant is an elevated version that works first and performs great second.
Crappy? Dude, Siri at one point couldn't even tell you what today's date was. The bar is on the ground.
I do not know if it is because I have been trained to make simple requests, but there are only a half dozen things I would verbally ask a robot.
- time of day
- calendar date
- weather
- set a timer
- simple math calculation
That’s 90% of the functionality right there.
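That short list is also roughly the surface area a minimal local assistant has to cover. As a toy illustration (every handler here is hypothetical), routing those few intents can be as simple as keyword matching on the transcript:

```python
# Toy sketch: routing a transcribed request to one of a handful of local
# handlers via keyword matching. All handler behaviour here is hypothetical.
from datetime import datetime

def handle(request: str) -> str:
    text = request.lower()
    if "timer" in text:
        return "Starting a timer."             # would hand off to a timer service
    if "time" in text:
        return datetime.now().strftime("It's %H:%M.")
    if "date" in text:
        return datetime.now().strftime("Today is %A, %B %d.")
    if "weather" in text:
        return "Checking the local forecast."  # would call a weather provider
    return "Sorry, I can't do that yet."

print(handle("What time is it?"))
```

(The "timer" check comes before "time" so that "set a timer" isn't mistaken for a time-of-day request.)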
For me I ask a lot of things like "How do I say <xxx> in Spanish". It's better than a google translate because it's not quite as literal, it will translate to proper colloquialisms if necessary.
I'm sorry, something went wrong with your "what time is it" request.
Oh you asked "what time is it in Miami?" Because I already cut you off after "what time is it".
This happens on a daily basis when I'm not talking right into my phone.
It's even better asking it to play a playlist I have made and downloaded in Apple Music only for it to say "I'm going to need your permission to access your Spotify data" and play something completely random.
Even saying something like "play the playlist ____ in Apple Music" doesn't help; it cuts the "in Apple Music" part off.
And incredibly it manages to get those wrong a non-trivial amount of the time.
Funny you mention this. My usage is the same. I suspect we’ve all been trained to expect little-to-nothing from these assistants.
Think Different