Google quietly took another step toward becoming the de facto guardian of your online identity. Its upgraded “Results About You” and non‑consensual explicit imagery (NCEI) tools promise stronger protection against doxxing and deepfake porn—if you’re willing to hand Google some of your most sensitive data first.
This is not just a feature update; it’s a test of how far we’re ready to go in outsourcing personal safety to a single, ad‑driven platform. In this piece, we’ll unpack what changed, what it really protects you from, how it reshapes power on the web, and why European‑style regulation will increasingly define the limits of tools like these.
The news in brief
According to Ars Technica’s reporting, Google has upgraded two of its personal‑safety tools that sit on top of Search.
First, the “Results About You” dashboard can now continuously scan the web for additional categories of personal data: government‑issued ID numbers such as passport numbers, driver’s licence numbers, and US Social Security numbers. To enable this, users need to provide at least part of those identifiers: Google requires a full driver’s licence number, while for passports and Social Security numbers only the last four digits are needed (a rough sketch of how that kind of partial matching could work in principle follows below).
Second, the tool for reporting and removing non‑consensual explicit imagery (NCEI)—including AI‑generated deepfake porn—has been streamlined. Users can now start a removal request directly from the three‑dot menu on any image result, specify whether it’s real or AI‑generated, and batch multiple images into a single report.
As Ars Technica notes, neither tool deletes content from the original website. Instead, if Google approves a request, it de‑indexes the content so it no longer appears in Search. Both tools also now support ongoing monitoring, sending alerts when new matching content is detected. The ID‑scanning upgrade is live, and the enhanced NCEI flows are rolling out to “most countries”.
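To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch of matching a partial identifier against scraped page text. Everything in it—the regex, the find_possible_matches function and the last_four parameter—is an assumption for the example; Google has not disclosed how its scanning actually works.

```python
import re

# Illustrative sketch only: NOT Google's implementation. It shows the idea of
# matching a user-supplied partial identifier (the last four digits of a US
# Social Security number) against SSN-like strings found in page text.

# SSNs written as XXX-XX-XXXX, XXX XX XXXX, or nine unbroken digits.
SSN_PATTERN = re.compile(r"\b(\d{3})[- ]?(\d{2})[- ]?(\d{4})\b")

def find_possible_matches(page_text: str, last_four: str) -> list[str]:
    """Return SSN-like strings whose last four digits match the reference value."""
    hits = []
    for match in SSN_PATTERN.finditer(page_text):
        digits = "".join(match.groups())
        if digits[-4:] == last_four:
            hits.append(match.group(0))
    return hits

# Hypothetical usage against a crawled page:
sample = "Leaked record: John Doe, SSN 123-45-6789, phone 555-0100."
print(find_possible_matches(sample, "6789"))  # -> ['123-45-6789']
```

Even this toy version hints at the real trade‑off: to recognise your number when it appears in the wild, the system has to hold some reference to it, which is exactly the data‑sharing bargain discussed below.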
Why this matters
At a surface level, this looks like an unambiguous win for users: less doxxing, fewer humiliating deepfakes, more control. But the trade‑offs are bigger than they appear.
Who benefits?
- Individuals at high risk of harassment—journalists, activists, vulnerable groups, public figures—gain a much more practical way to mitigate doxxing and pornographic abuse.
- Ordinary users who never learned advanced search operators or reputation‑management tricks suddenly have a one‑click “find and scrub” option for some of the worst kinds of exposure.
Who loses?
- Data brokers and shady “people search” sites that thrive on surfacing personal information in Google results suddenly face more friction and more automated de‑listings from search results.
- Abusers, extortionists and stalkers lose one of their most powerful weapons: easy discoverability via the world’s dominant search engine.
Yet there’s a second power shift: from the open web to Google.
To work, the system needs a reference set of sensitive data—your IDs, your email addresses, your phone numbers—stored and matched by Google. That deepens a long‑term trend: your safety from the web increasingly depends on the same company that monetises your behaviour across it.
This is not purely sinister—Google is arguably the only actor with both the index and the infrastructure to do this at global scale. But it reinforces an uncomfortable reality: meaningful privacy on today’s web often means negotiating a deeper relationship with one of a handful of tech giants.
The immediate implication: if you are at real risk of doxxing or explicit image abuse, not using these tools could become the bigger gamble. We’re heading toward a world where opting out of Google’s protective ecosystem may itself be a security vulnerability.
The bigger picture
These upgrades sit at the intersection of three major trends.
1. The industrialisation of harassment via AI
Deepfake porn and synthetic harassment have moved from niche to mainstream in just a few years. The barrier to entry used to be Photoshop skills and time; today it’s a prompt and a GPU. Open models and lax moderation on some AI platforms mean mass‑produced abuse is practically free.
Google’s faster NCEI reporting and batch handling are a direct response to this scale problem. When an attacker can generate 500 images overnight, a one‑form‑per‑image workflow is simply unusable.
2. The “right to be forgotten” becomes a product feature
Europe’s right‑to‑be‑forgotten jurisprudence forced search engines to accept that they are not neutral indexes but gatekeepers that can and must remove certain lawful content from results. Over the last decade, we’ve watched this principle slowly transform from a legal exception into a mainstream feature set: personal dashboards, takedown flows, alerts.
Google’s “Results About You” is essentially a consumer‑friendly front end to that broader regulatory pressure. What started as a legal right has evolved into a productised service, complete with notifications and settings toggles.
3. Platforms as personal security providers
Big platforms are increasingly judged not just on what they host, but on what they fail to prevent: abuse, fraud, impersonation, image‑based sexual violence. Meta’s content reporting tools, X’s anti‑doxxing policies, TikTok’s privacy controls—none exist purely out of corporate benevolence. They reflect a shift in public expectation: if you mediate reality, you share responsibility for its harms.
Google’s latest move signals where search is heading: from a neutral index to a personalised perimeter, where what you see—and what others can see about you—is actively managed by default.
The European / regional angle
For European users, these tools don’t exist in a vacuum; they sit on top of an already strong legal toolbox: GDPR, the Digital Services Act (DSA) and soon the EU AI Act.
Under GDPR, intimate imagery touches data about a person’s sex life, one of the “special categories” given the strongest protection, and national identification numbers get their own tightened regime. In theory, you already have rights to erasure, objection and restriction of processing. In practice, exercising those rights across hundreds of obscure sites is impossible for most people. Google’s scanning effectively turns those paper rights into something usable: you assert your identity once, and the system hunts for matches.
The DSA meanwhile obliges very large platforms and search engines (Google Search is designated a “very large online search engine”) to offer user‑friendly reporting mechanisms and to mitigate systemic risks like online violence against women. The streamlined NCEI reporting and proactive monitoring features look very much like a product response to that regulatory climate.
The upcoming EU AI Act will impose transparency obligations around deepfakes and risk‑management duties for high‑risk AI systems. While Search itself may sit at the edge of that framework, AI‑generated sexual content is exactly the kind of harm regulators have in mind. Expect Brussels to ask: if Google can detect these images well enough to filter them for specific complainants, why can’t it detect and downgrade them by default?
For European companies that offer privacy‑tech or online‑reputation services, Google’s encroachment is a mixed blessing. On one hand, it normalises the idea that individuals should monitor and manage their digital footprint. On the other, it may crowd out smaller players who cannot match Google’s index depth, pushing them toward niche, high‑touch services instead of mass‑market tools.
Looking ahead
Three trajectories to watch over the next two to three years:
1. From opt‑in to default protection
Today, you must tell Google which IDs to monitor. Over time, expect a shift toward more implicit protection: automatic detection of obvious national IDs, smarter pattern recognition for doxxing, perhaps even browser‑level or OS‑level “personal data vaults” that hook into search engines’ safety APIs.
This raises regulatory questions: at what point does scanning for “your” data become general content surveillance, and how transparent must that be?
2. Abuse, appeals and edge cases
Any tool that hides content can be weaponised. Abusive partners might try to remove public records or news reports documenting their behaviour. Political actors might push to downrank legitimate criticism under the banner of “personal data”.
We should expect more disputes at the boundary between privacy and public interest. Google will need clearer, appealable processes—and regulators under the DSA will have opinions about how those are designed.
3. Competitive and regulatory convergence
Microsoft’s Bing, privacy‑focused engines like DuckDuckGo, and social platforms will be pushed to offer comparable protections, or risk looking negligent. At the same time, regulators—especially in the EU—will push for baseline standards: similar takedown flows, similar timelines, similar transparency.
In other words, what looks like a Google feature today may become tomorrow’s industry compliance checklist.
The bottom line
Google’s upgraded safety tools are both genuinely useful and strategically self‑serving. They make doxxing and deepfake abuse harder, while deepening our reliance on a single gatekeeper to police what the world can see about us.
If you are realistically at risk, using these tools is probably worth the additional data you hand over. But we should not confuse Google’s evolving role as our de facto safety provider with a substitute for strong, enforceable rights and independent oversight.
The real question for the next decade is simple: who do you want guarding the doors to your digital life—tech giants, regulators, or some yet‑to‑be‑invented mix of both?



