Startups
Tackling the Threat: How European Startups are Combatting Deepfakes and AI Fraud
The cost of creating convincing synthetic media has collapsed – and so has society’s ability to distinguish between real and fabricated information. So what are Europe’s rising startups doing about it?
According to the latest study by Dutch cybersecurity startup Surfshark, reported losses linked to deepfakes have now surpassed €1.3 billion, with €860 million stolen in 2025 alone, up €500 million year-on-year.
As Oliver Quie, CEO of British cybersecurity startup Innerworks commented: “We’re facing AI-powered deception that can mimic legitimate users with frightening accuracy. Existing security companies have become obsolete because they assume threats will behave differently than legitimate users.”
Only a few years ago, producing a one-minute deepfake video could cost anywhere between €257 and €17,000, depending on quality. With the arrival of widely accessible AI video tools such as Veo 3 and Sora 2, that same minute can now be generated for just a few euros.
This dramatic price collapse has made deception cheaper to run and far easier to scale.
Scalable deception and “lost pet” scams
As costs fall, new categories of fraud have emerged. One striking example is the lost pet scam – fraudsters now generate AI-made images of supposedly found pets, tricking anxious owners into paying small ‘recovery fees’, often around €43, in the hope of reuniting with their animals.
“As the cost of fabricating lifelike images and videos approaches zero, scammers are industrialising deception,” said Miguel Fornes, Information Security Manager at Surfshark. “The lost-pet scam is a clear example: it exploits emotion for small sums, making victims less suspicious and far less likely to pursue legal action. For criminals, that’s an ideal model for mass-scale fraud.” (Translated)
Fornes adds that such small-ticket scams are only part of the picture. The larger threat comes from deepfake-enabled investment schemes and identity spoofing.
Deepfakes have been used in corporate recruitment processes to bypass background checks, including one case where a cybersecurity company unwittingly hired a North Korean hacker who successfully faked his video interview and credentials.
A wave of European startups fights back
So how is Europe fighting back?
The surge in AI-driven deception has triggered a corresponding wave of innovation – and investment – across Europe. So far this year, EU-Startups has reported on several funding rounds targeting the detection and prevention of deepfake-enabled fraud.
- Acoru (Madrid, Spain) – raised a €10 million Series A just today to help banks predict and prevent AI-powered fraud and money laundering before transactions occur. Its platform monitors pre-fraud intent signals and uses consortium-based intelligence sharing to stop scams at the source.
- IdentifAI (Cesena, Italy) – secured €5 million in July 2025 to expand its deepfake detection platform, which analyses images, video and voice to authenticate content and flag AI-generated material. The startup reports an increase in demand from newsrooms and law-enforcement clients since early 2024.
- Trustfull (Milan, Italy) – raised €6 million in July 2025 to broaden its fraud-prevention suite to cover “deepfake scams and large-scale phishing campaigns.” The company said this is a pivotal moment for the global fraud detection and prevention market, which is projected to nearly triple from €28.4 billion in 2024 to €77.4 billion by 2030.
- Innerworks (London, UK) – raised €3.7 million in August 2025 to expand its AI-powered platform for stopping synthetic-identity and deepfake-driven fraud. The company reported that fraud attempts using deepfakes have risen by over 2,000% since 2022.
- Keyless (London, UK) – closed a €1.9 million round in January 2025 to strengthen its privacy-preserving biometric technology, designed to thwart injection attacks and deepfake-based identity spoofing. It reports its clients have seen a 73% reduction in Account Takeover (ATO) fraud and 81% reduction in help desk costs.
Together, these startups reflect a continental effort to counteract a new layer of cyber-risk. Italy and the UK each field two active ventures in the space, suggesting developing national clusters around biometric and deepfake-detection innovation.
Regulatory backdrop: EU policies tighten in 2025
The rising economic impact of AI-enabled deception has coincided with new EU-level measures aimed at increasing accountability and transparency in artificial intelligence and digital services.
In February 2025, key provisions of the EU Artificial Intelligence Act began to apply. The Act requires clear labelling of AI-generated content and transparency whenever individuals interact with AI systems. These rules directly target the misuse of generative tools for manipulation or fraud, including deepfakes and voice cloning.
AI systems that deceive or exploit vulnerable users can now be classified as posing an “unacceptable risk,” making them illegal within the EU market.
Meanwhile, under the Digital Services Act (DSA), large online platforms are now obliged to assess and mitigate systemic risks arising from manipulative or fraudulent content – a framework that extends to deepfake media used in phishing and impersonation scams.
The financial sector has also been addressed. In July 2025, the European Banking Authority (EBA) issued an opinion highlighting how AI is being exploited for money laundering and fraud, including through fabricated identities and deepfake documents. It urged financial institutions to adapt anti-money-laundering (AML) systems to account for AI-enabled risks – a move that aligns closely with the missions of startups such as Acoru and Trustfull.
Together, these policy developments show that 2025 is shaping up as the year Europe tightened its legal net around AI misuse – introducing a compliance-driven incentive for startups combating fraud and synthetic deception.
What can you do?
“AI has changed the face of fraud and money laundering. You simply cannot expect technology built in 2010 to combat fraud happening in 2025,” – Pablo de la Riva Ferrezuelo, Co‑founder and CEO of Acoru.
His words reflect a wider sentiment among Europe’s cybersecurity founders: that a new generation of technology – built for an era of synthetic media and AI-powered deception – is urgently needed.
In 2025, founders and investors alike are recognising a shift in the landscape, as regulation, awareness and funding converge to fuel a race between scammers and defenders. Cheap generative AI has driven the surge in fraudulent activity, with perpetrators manipulating voices, identities and video content; in response, a new wave of European startups is applying the same technology to detect and block malicious activity before it reaches unsuspecting victims.
Industry experts point to vigilance and education as the first line of defence against such scams. Their recommendations include:
- deploying robust cybersecurity and identity-verification tools, and training staff regularly;
- verifying any unexpected request – especially one involving money or sensitive information – through a trusted channel;
- scrutinising media for subtle inconsistencies, and resisting emotional appeals or artificial urgency;
- enabling multifactor authentication wherever it is available.
High-risk teams – those in finance, HR and customer support in particular – should also be equipped with in-person verification protocols, and contacts from unfamiliar domains or virtual numbers should be treated as potential red flags. Taken together, these proactive measures can help protect individuals and businesses from significant financial losses.
Surfshark’s study drew on the AI Incident Database and Resemble.AI to analyse deepfake-related incidents from 2017 to September 2025, focusing on cases of falsified video, image or audio content reported in the media, with particular emphasis on fraud incidents that caused quantifiable financial losses. Incidents were classified into 12 sub-categories to map the impact of deepfake technology on cybersecurity.
More detailed findings are available via Surfshark’s research hub.