Are you ready to fight deepfakes?

July 9, 2025
The rise of deepfakes is upsetting the balance between visual trust and digital proof. What was science fiction yesterday has become an economic, operational, and legal reality.

And the most exposed companies are precisely those that are regulated, hold sensitive data, and are subject to strict identification obligations: banks, insurance companies, public services, and payment platforms.

A threat that takes many forms

What is a deepfake?

A deepfake is a video, audio, or image generated or altered by artificial intelligence, designed to imitate the appearance, voice, or behavior of a real person.

But in 2025, it is no longer just a matter of faces pasted onto videos:

We are talking about complete synthetic profiles: forged digital identity cards, AI-generated proofs of address, credible resumes, cloned voices, and personalized videos.

All of it realistic enough to fool a human... or an automated system.

Key figures 2024-2025

  • +281% increase in the detected use of synthetic identity documents
  • +900% increase in cases in Europe, +110% in Germany, +3,400% in Canada
  • Most targeted sectors: e-commerce, edtech, crypto platforms, neobanks

Why it's a serious problem for regulated businesses

Regulatory compliance risk

A well-designed deepfake can pass through KYC/AML processes (identity verification, anti-money laundering), directly undermining the legal foundations of regulated industries. This exposes organizations to:

  • Administrative sanctions from regulatory bodies (AMF, ACPR, CNIL, etc.) for failure to implement sufficient fraud prevention measures.
  • Loss of accreditation or licensing, particularly for financial institutions, insurers, or digital ID providers.
  • Civil and criminal liability if customers or partners are harmed due to insufficient authentication safeguards.
  • Financial losses caused by successful fraud, refund obligations, and investigation costs.

In high-stakes industries, failing to stop a deepfake is no longer a technical issue; it’s a compliance failure.

Security and access risk

Deepfakes are used not just for identity theft but to gain access to highly sensitive or restricted environments. 

This includes:

  • Opening accounts under fake or synthetic identities, allowing money laundering or fraud.
  • Accessing healthcare portals, impersonating doctors or patients to retrieve confidential data.
  • Bypassing corporate authentication flows, gaining admin-level access to internal tools, payroll, or customer data.
  • SIM swap attacks, using manipulated documents or identities to hijack phone numbers and intercept 2FA.

These attacks render traditional authentication ineffective:

  • OTP by SMS? Can be intercepted.
  • Dynamic selfie? Can be deepfaked.
  • Static biometric? Can be spoofed.

Major reputational risk

A single deepfake incident can cause disproportionate brand damage:

  • Falsified video of a CEO announcing false information (layoffs, merger, political statement) spreading on social media.
  • Fake online meetings with high-level managers giving fraudulent orders to employees or partners.
  • Deepfake phone calls used to manipulate customer support or validate high-risk transactions.
  • Impersonation in media with manipulated interviews affecting public perception.

In a digital world saturated with manipulated content, trust becomes your most fragile asset.

What European regulations say about deepfakes

ETSI TS 119 461

  • European standard applicable to trust service providers
  • Requires reinforced proof of life, supervised video capture, and traceability
  • Objective: to prevent automated biometric attacks

eIDAS 2 & EUDI Wallet

  • The European Digital Identity Wallet requires identification means that are resistant to AI-based attacks (deepfakes, spoofing)
  • Identification = strong proof + interoperability
  • Strict conditions for service providers (technical guarantees + auditability)

FIDA (Financial Data Access)

  • Access to financial data (banks, insurance, crypto) will only be possible if proof of the applicant's real identity is provided
  • Its entry into force is conditioned on that of eIDAS 2

National regulations

  • France: SREN law (2024) → 1 year in prison for the unauthorized distribution of deepfakes (+2 years if online)
  • Denmark: bill to give individuals copyright over their own image and voice
  • UK: Creation of sexual deepfakes = criminal offense, unlimited fines

The answer: real-time proof of identity

At ShareID, our conviction is simple: you can no longer trust what you see, but you can trust what we prove.

Our authentication technology is:

  • Tied to the user's official identity through verification of a government-issued ID, which is then linked to both active and passive liveness detection of the user.
  • Based on the Zero Knowledge Proof concept → we store no personal or biometric data (see the illustrative sketch after this list)
  • Frictionless, with re-authentication in 3 seconds via a simple smile
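To make the "prove without storing" idea more concrete, here is a minimal, illustrative Python sketch of a salted-commitment check. It is not ShareID's actual implementation and not a full zero-knowledge proof protocol; the names enroll, verify, and identity_template are hypothetical stand-ins for whatever derived, non-reversible identity representation a real system would use.

```python
import hashlib
import hmac
import os

# Illustrative sketch only: a salted commitment conveys the principle of
# verifying an identity attribute without ever persisting the raw data.
# This is NOT ShareID's implementation and not a true zero-knowledge proof.

def enroll(identity_template: bytes) -> tuple[bytes, bytes]:
    """Return (salt, commitment); the raw template is discarded afterwards."""
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + identity_template).digest()
    return salt, commitment

def verify(fresh_template: bytes, salt: bytes, commitment: bytes) -> bool:
    """Re-derive the commitment from a fresh capture and compare in constant time."""
    candidate = hashlib.sha256(salt + fresh_template).digest()
    return hmac.compare_digest(candidate, commitment)

# Usage: the verifier keeps only (salt, commitment), never the template itself.
salt, commitment = enroll(b"derived-identity-template")
assert verify(b"derived-identity-template", salt, commitment)   # genuine user
assert not verify(b"forged-template", salt, commitment)         # impostor rejected
```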

The result? A system that is resistant to deepfakes, phishing, and identity fraud.

Conclusion 

In the face of deepfakes, inaction is no longer an option. Anything can be imitated, but only a strong digital proof allows you to authenticate with certainty.

Let's talk about it 👉 Contact us
