MediaNama’s Take: Google’s decision to default users into allowing their inboxes to train its AI models shifts the responsibility for privacy onto people who may not be proactive in protecting their own data. Opt-out systems claim to offer choice but instead exploit users’ inattention and probable lack of awareness, especially since Google did not send out any notifications to inform them of this change.
Additionally, an email inbox remains one of the few digital spaces where users reasonably expect confidentiality and privacy, and Google’s default opt-in raises a significant red flag when it comes to building a trustworthy AI system.
Similarly, the uneven global rollout, which varies with users’ geographic location, exposes another core problem: Google adopts privacy-protective defaults only when regulators mandate them, underscoring the need for stricter AI regulation that can protect citizens’ data.
What’s the News?
Google has recently updated its AI training policy to use the content of emails to train the large language models (LLMs) that power AI chatbots like Gemini, as well as other AI-based tools such as Google AI Studio and NotebookLM. The more problematic aspect of this change is that Google did not notify users of the policy change and opted them into AI training by default.
Under Google’s “Smart Features”, the setting defaults to opt-in for both personal email accounts and professional/work email accounts, the latter of which may contain more commercial and confidential information, raising significant privacy concerns. Users can turn the feature off by opening the settings menu from the top-right corner, going to “General Settings”, and clearing the checkbox under “Smart Features”.
Why is the implementation uneven across countries?
Google’s AI policy on using Workspace content and activity to train systems like Gemini varies significantly across countries due to the wide differences in privacy laws. In the European Economic Area, Japan, Switzerland, and the United Kingdom, Google disables “smart features” by default to comply with strict rules that require clear consent and robust data protection, including the GDPR.

In regions with less stringent privacy regulations, Google enables these features by default and uses the data to improve tools such as Gmail’s Smart Compose. The policy gap underscores the impact of legal frameworks on the deployment and utilisation of AI across various jurisdictions.
Why is India exempt from default opt-outs?
The uneven implementation of Google’s AI policies across countries is largely driven by regional privacy laws that regulate the use of personal data for purposes such as training generative AI models. In countries with strict, enforceable privacy regulations, such as those in the European Economic Area (EEA), the United Kingdom, Switzerland, and Japan, Google cannot automatically opt users into data processing for AI training without explicit consent.
The region-specific regulations that forced Google to adopt default opt-outs are:
Privacy regulations in the European Economic Area, the United Kingdom, Switzerland, and Japan impose strict limits on using personal data for AI training, especially when companies collect that data for communication services such as email. These laws require companies to obtain explicit, informed consent before they use personal data for new purposes, including training generative AI models.
The GDPR in the EEA and UK, along with Switzerland’s Federal Act on Data Protection, enforces purpose limitation and prohibits repurposing data without fresh consent from the data subjects. Japan’s APPI similarly bars the use of personal data for AI training without explicit consent. In June 2023, Japan’s Personal Information Protection Commission warned that entering personal data into AI systems could violate the law if the data is stored or reused for training. Regulators in these regions can impose significant fines for violations, underscoring the strong enforcement and legal constraints that govern data privacy.
In contrast, countries like the U.S. and India, and many nations across APAC, LATAM, and Africa, lack such stringent data protection regulations, allowing Google to adopt default opt-in settings in regions where privacy laws either don’t exist or are weakly enforced.

