Meanwhile they are pushing AI transcription and note taking solutions hard.
Patients are guilted into allowing the doctors to use it. I have gotten pushback when asked to have it turned off.
The messaging is that it all stays local. In reality it’s not and when I last looked it was running on Azure OpenAI in Australia.
I spoke to a practice nurse a few days ago to discuss this.
She said she didn’t think patients would care if they knew the data would be shipped off site. In her view, people’s problems are not that confidential and their health data is probably online anyway, so who cares.
It's honestly such a big problem. One of my colleagues uses an AI scribe. I can't rely on any of his chart notes because the AI sometimes hallucinates (I've already informed him). It also tends to write a ridiculous amount of unnecessary detail while leaving out the important things (med rec, consults, etc.), so I still need to comb through patient charts myself. In the end it creates more work for me. And if my colleague ever gets a college complaint, I have no clue how he's going to navigate any AI-generated errors. I'm all for AI, and it's great for things like copywriting, brainstorming and code generation. But from what I'm seeing, it's creating a lot more headache in the clinical setting.
If you're wondering why this guy doesn't just check the AI scribe notes: probably because, with the amount of detail it writes, he'd be better off just writing a quick SOAP note himself.
It feels very much like AI is creating AI lock-in (if not AI _vendor_ lock-in) by creating so much detailed information that it's futile to consume it without AI tools.
I was updating some GitLab pipelines and some simple testing scripts, and it created 3 separate 300+ line README-type metadata files (I think even the QUICKSTART.md was 300 lines).
Is there nothing like HIPAA there or what?
Very little protections. The entire medical records of a significant percentage of the NZ population were stolen recently and put up for sale online. Zero consequences for the medical practices who adopted the hacked software.
Many AI companies, including Azure with their OpenAI hosting, are more than willing to sign privacy agreements that allow processing sensitive medical data with their models.
The devil is in the details. For example, OpenAI does not have regional processing for AU [0], and their ZDR (zero data retention) does not cover files [1]. Anthropic's ZDR [2] also does not cover files, so as a patient/consumer you really need to be careful to ensure that your health data, or other sensitive data, being processed by SaaS frontier models is not contained in files. That asks a lot of the medical provider: they'd have to know how their systems work, and they won't, which is why I will never opt in.
[0] https://developers.openai.com/api/docs/guides/your-data#whic...
[1] https://developers.openai.com/api/docs/guides/your-data#stor...
[2] https://platform.claude.com/docs/en/build-with-claude/zero-d...
The New Zealand Chief Digital Officer allowed Australian cloud providers to be used because, many years ago, there weren't suitable NZ data centers.
Health NZ adopted Snowflake. It was about costs/fancy tech. We have always had data centres. Nobody *needs* snowflake. They could have used Apache Spark.
Yeah, no privacy or security there. There are some tools explicitly designed to help healthcare providers produce better notes faster, and a couple of them are AMAZING. I'm an AI-half-empty guy, keenly aware of its shortcomings, and I deploy it thoughtfully; even with my skepticism, there are a couple of tools that are just plain great. I think using LLMs to create overviews and summaries is a great use of the tech.
The one my doctor was using got my obs numbers completely wrong.
We had to correct them at the end of the consultation.
Gotta break a few eggs to save 2 minutes of thinking and work
The union rep gets it - people improvise when you cut their tools and then threaten discipline for improvising.
That memo is how you make staff hide things instead of asking for help.
The scarier part though is that LLM-written clinical notes probably look fine. That's the whole problem. I built a system where one AI was scoring another AI's work, and it kept giving high marks because the output read well. I had to make the scorer blind to the original coaching text before it started catching real issues. Now imagine that "reads well, isn't right" failure mode in clinical documentation.
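A toy sketch of the failure mode described above (purely illustrative, not the commenter's actual system): a "judge" that rewards surface polish will score a content-free note highly, while a judge that checks only whether facts from the source survive catches the emptiness. Both judge functions here are hypothetical stand-ins for LLM calls.

```python
def fluent_judge(note: str) -> float:
    """Naive judge: rewards notes that merely read well (proxied by length)."""
    return min(1.0, len(note.split()) / 15)

def blind_fact_check(note: str, source_facts: set[str]) -> float:
    """Style-blind judge: scores only on how many source facts survive."""
    hits = sum(1 for fact in source_facts if fact.lower() in note.lower())
    return hits / len(source_facts)

# Facts established during the consultation (hypothetical).
facts = {"penicillin allergy", "BP 150/95", "metformin 500mg"}

# A polished note that captures none of them.
note = ("Patient presents in good spirits. Comprehensive review performed. "
        "Plan discussed at length and patient verbalised understanding.")

print(fluent_judge(note))            # 1.0 - reads well, scores high
print(blind_fact_check(note, facts)) # 0.0 - zero facts retained
```

The point of the sketch: "reads well" and "is right" are independent axes, and an evaluator that can see stylistic cues will conflate them unless it is forced to score against the source content alone.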
Nobody's re-reading the phrasing until a patient outcome goes wrong.
Physicians need to have it pounded into them that every hallucination is downstream harm. AI has no place in medicine. If they insist on it, then all transcripts must be stored with the raw audio, accessible side by side, with each line of transcript time-coded. It's the only way to use these tools safely while guarding against hallucinations.
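A minimal sketch of what "transcript stored with the raw audio, time-coded line by line" could look like as a data structure. All names and the storage path are hypothetical; the point is that every transcript line carries an offset into the source recording, so any disputed sentence can be replayed against the audio.

```python
from dataclasses import dataclass

@dataclass
class TranscriptLine:
    start_s: float  # offset into the raw audio, in seconds
    end_s: float
    speaker: str
    text: str

# A consultation record keeps the audio reference and the time-coded
# lines together (path is an illustrative placeholder).
record = {
    "audio_uri": "s3://consult-archive/2024-06-01/visit-123.flac",
    "lines": [
        TranscriptLine(12.4, 15.1, "patient",
                       "I've had chest tightness since Tuesday."),
        TranscriptLine(15.3, 18.0, "doctor",
                       "Any pain radiating down the arm?"),
    ],
}

def audit_span(line: TranscriptLine) -> str:
    """Return the audio span to replay when verifying this transcript line."""
    return f"{record['audio_uri']} @ {line.start_s:.1f}-{line.end_s:.1f}s"

print(audit_span(record["lines"][0]))
# s3://consult-archive/2024-06-01/visit-123.flac @ 12.4-15.1s
```

With this shape, a reviewer checking a suspect line in the generated note can jump straight to the corresponding seconds of audio instead of trusting the transcription.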
FYI, AI adoption in health in NZ is moving forward, for example https://www.rnz.co.nz/news/national/589774/emergency-doctors...
This is just about not using free/public AI tools.