From the dialogues in the pictures it doesn’t sound like they are using anyone’s emails for training. The messaging indicates it’s more like using them as context and/or generating embeddings for RAG. Unless there’s something else I’m not aware of.
I know that Google does a lot of bad stuff, but we don’t need to make up stuff they just aren’t doing.
This doomsday messaging and alarmism only serves to degrade the whole cause.
edit: and before someone says that they don’t want that either, then let’s criticize it for what it is (opting users into a feature without consent). We don’t need to make stuff up; it really doesn’t help.
>When smart features are on, your data may be used to improve these features. Across Google and Workspace, we’ve long shared robust privacy commitments that outline how we protect user data and prioritize privacy. Generative AI doesn’t change these commitments — it actually reaffirms their importance. Learn how Gemini in Gmail, Chat, Docs, Drive, Sheets, Slides, Meet & Vids protects your data.
That's what I get when I click "Learn more" while toggling the smart features on/off.
It may not do it now, but I really don't like the implications. Especially a tone of "it's not actually bad, it's good!"
I agree. What I really don’t like is that we have to choose between having smart search and giving up our data.
Is it too much to ask to be able to not give up data for “improvements” but keep the functionality?
> It may not do it now
I don't believe "may" is being used to indicate possibility, but rather permission.
That is to say, there's no reason to think it's not being used, given that wording.
This seems to indeed be confirmed over at https://support.google.com/mail/answer/14615114
"Your data stays in Workspace. We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission."
But then if the terms include a vague permission and/or license to use the data for improving the results, the text is factually correct while obscuring the fact that they do in fact solicit your permission and thus use the data, with your permission.
Discovering new settings that I was opted in to without being asked does not scream good faith.
Separately, their help docs are gibberish. They must use this phrase 20 times: "content is not used for training generative AI models outside of your domain without your permission," without telling you whether that checkbox is that permission, where that permission is set, or indeed whether it is set at all. From reading their documentation, I cannot tell if that checkbox in Gmail allows using my data outside my organization or not.
Link for the lazy: https://mail.google.com/mail/u/0/#settings/general
Find "Smart Features", uncheck, and press save, which will reload Gmail.
Then find "Google Workspace smart features" and click the "Manage Workspace smart features settings" button and unselect everything.
They disable so many features when you remove "Smart features", e.g. Grammar, Spelling, Autocorrect, Smart Compose/Reply (those templated suggested replies), Nudges, Package tracking, and Desktop notifications. Google really wants to punish you for doing that.
Turning off the inbox categories feature was particularly annoying. They had that feature for a good decade before deciding that just collecting my data wasn't enough.
Yep I went through this sad journey with my gmail this week. Got tired of seeing "coming soon" packages cluttering up my inbox, so I looked into how to turn them off. It turned off the categories. Reminds me of the dark pattern used by many apps, where if you turn off notifications to avoid ads/spam, you also lose useful notifications.
There's an entire setting for "Turn on package tracking" alone. You do not need to disable Smart Features altogether.
It is unbelievably manipulative that they tied this organizational feature with this new LLM-training scam they have running now.
Learning to not rely on inbox categories does make it easier for me to finally leave Gmail for a real email provider though, so maybe this will all work out in the end.
ugh, yeah, I was going to turn this off but not having inbox categories is a deal breaker
For me this feature has been disabled for as long as I remember it being there. I think it predates the AI craze by quite some time.
What do you believe you have gained by throwing those switches?
I think I’m improving my security posture. For example, I’d hope that prompt injection attacks against Gemini AI will be less likely to scoop my data.
These switches don't control whether your emails are used to train models. They control whether you get to use machine inference features on your own emails.
It makes Gmail a normal email client again?
Reclaiming my English.
I'm Australian, I use Australian idioms, spelling, contractions, et al in emails to regular contacts for 20+ years via gmail (Yet Another Early Gmail Invite User).
Despite having selected UK English (there's no 'Strayla option) gmail via the web still insists on suggesting I morph into a cookie cutter middle north American Engrish typer.
Yes, Engrish .. AmerEngrish is an abomination.
Feck that shite. Hard.
Google will use your family's pictures and memories "to improve Google’s AI assistants, like Smart Compose or AI-generated replies."
Someone will ask a Google AI service to generate an image some day and your daughter will be used. And that's one of the least worrying outcomes.
That's what training means, yeah, but it isn't clear Google is actually doing what the article claims it is.
They're claiming that these options allow Google to use your data to train its AI, but that's not what it says at all. Where are they getting that idea from?
https://support.google.com/mail/answer/15604322?sjid=1029580...
> When smart features are on, your data may be used to improve these features.
"Training on our data" has turned into a catchphrase like "taking our guns" or "banning our books": dumb propaganda for the anti-AI crowd to enrage people. Whether a personalized AI-based experience is useful can be debated, but everything has to be twisted into culture wars; that's just how media is nowadays.
Thank god they also got consent from the non google people who sent the email.
Ha! I disabled the smart features in Gmail and then used Gemini to ask something unrelated. The message ended with:
"By the way, to unlock the full functionality of all Apps, enable Gemini Apps Activity." with a link to myactivitydotgoogledotcom
Google can read the emails of everyone who corresponds with a Gmail user - which is almost everyone - and they can't opt out.
It's like with Gemini and Google smart devices. You need to opt in to data training to use Gemini apps, which means you can't access basic features like asking Gemini to turn off your smart light bulbs. Essentially, Google prevents you from using any smart features unless you allow training on your own data. Even to access basic features like chat history, you need to enable Gemini Apps Activity, which allows Google to train on all of your conversations. This applies even to paid tiers.
Hey, if anyone wants to use AI to draft replies but wants to make sure their data isn't trained by it, I've built a Chrome extension that does exactly that! You can plug in your API key, and it supports all SOTA models.
https://jetwriter.ai
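For what it's worth, the "bring your own API key" pattern such an extension describes is straightforward: the draft request goes directly from the user to the model provider, so the data handling is governed by that provider's API terms rather than by the extension. A minimal sketch, assuming an OpenAI-compatible chat endpoint (the URL, model name, and `build_draft_request` helper here are illustrative, not the extension's actual code):

```python
# Sketch of a BYO-key draft-reply request against an OpenAI-compatible
# chat endpoint. Only the request is built here; sending it is left to
# whatever HTTP client the extension uses.
import json

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

def build_draft_request(api_key, email_text):
    # The user's own key authenticates the call, so the traffic falls
    # under the provider's API data policy, not the extension's.
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "gpt-4o-mini",  # illustrative model name
            "messages": [
                {"role": "system", "content": "Draft a brief, polite reply."},
                {"role": "user", "content": email_text},
            ],
        }),
    }

req = build_draft_request("sk-example", "Can we move our call to 3pm?")
print(req["headers"]["Authorization"])
```

Note the caveat: whether API traffic is excluded from training is the provider's policy to set, not something a client-side extension can enforce.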
The article is useful overall, but the following line is such bad journalism that I want to call it out, since we can't afford to have people dumbed-down.
> The reason behind this is Google’s push to power new Gmail features with its Gemini AI, helping you write emails faster and manage your inbox more efficiently.
Funny how the law and the corporations see the meaning of "optional" differently.
actually, I do want them to read my auto subscribed newsletter slop for random services that I register to. I do want them to use that messy, unorganized, faulty or maybe even useless data to train their AI models.
Oh well, I left Gmail years ago; now it's only used for junk mail. Good luck loading the ads going to my Gmail into your AI :)
What did you move to? I've been thinking of moving my personal email off of Gmail for a while. It's just 15+ years of accounts to untangle.
I long ago made a "spam/unimportant site verification" account, so that's set.
My own domain.
Alternatively these exist: https://www.fsf.org/resources/webmail-systems
It's just for "improving", not training! Why would anyone assume the worst? They're the good guys.. more like a cuddly teddy bear than a monopolistic megacorp! Surely they'd explicitly tell us exactly what they were doing before they do it. Stop fear-mongering, people! anxiously checks the price of my GOOG stock
Reading comprehension and media literacy at all-time lows, below what cognitive scientists formerly believed was a hard floor of "absolute zero comprehension".