AI Scam Hits Italy’s Elite with Cloned Voice of Defence Minister Guido Crosetto
Guido Crosetto
Image Source: Wikipedia
In a striking example of artificial intelligence being weaponized for deception, some of Italy’s most prominent business figures have fallen victim to a sophisticated scam involving the cloned voice of Defence Minister Guido Crosetto. The scam, which has sparked outrage and prompted legal action, saw fraudsters mimic Crosetto’s voice to trick high-profile entrepreneurs into transferring large sums of money under the guise of a national emergency. Milan prosecutors are investigating four formal complaints, with more expected, highlighting the growing threat of AI-driven fraud in Italy.
[Read More: O2 Launches "AI Granny" Daisy to Combat Scammers by Wasting Their Time]
A Convincing Ruse Powered by AI
The scam centered on a fabricated story claiming that Italian journalists had been kidnapped in the Middle East, requiring urgent financial assistance for their release. Using advanced AI technology, the perpetrators replicated Crosetto’s voice to make phone calls that appeared authentic, even spoofing the defence ministry’s phone number in Rome to bolster credibility. Among the targets were fashion icon Giorgio Armani, Prada chair Patrizio Bertelli, former Inter Milan owner Massimo Moratti, and a member of the Beretta firearms dynasty—individuals known for their wealth and influence.
In at least one confirmed case, the scam succeeded. An unnamed entrepreneur, convinced by the cloned voice and a promise of reimbursement from the Bank of Italy, made two transfers totalling €1 million to a Hong Kong account. The victim grew suspicious only after the fact and contacted Crosetto directly, prompting the minister to go public “so that no one falls into the trap”, as he stated in a social media post on February 10, 2025. He also revealed that two more individuals had reached out to him with similar experiences.
[Read More: AI Scams Target Hong Kong Legislators with Deepfake Images and Voice Phishing Tactics]
The Targets: Italy’s Business Titans
The list of those targeted reads like a who’s who of Italian industry. Complaints have been lodged by Moratti, now chair of the Saras Group energy firm, and Lucia Aleotti, a board member of the Menarini pharmaceutical company. Reports indicate Armani’s staff were contacted, and a complaint from his team is anticipated. Other notable figures include Diego Della Valle of Tod’s, Marco Tronchetti Provera of Pirelli, and members of the families behind the Esselunga supermarket chain and Beretta arms manufacturer.
Moratti, the first to file a complaint, described the scam’s chilling realism to La Repubblica: “It all seemed real, they were good, it could happen to anyone”. His experience underscores how even seasoned business leaders can be ensnared by AI’s ability to replicate human voices with uncanny accuracy. Meanwhile, Aleotti credited her assistant, Chiara, with thwarting the scam at Menarini. The caller, posing as a “Dr. Giovanni Montalbano” from the defence ministry, insisted on speaking to company leaders about a supposed national security issue tied to the North Atlantic Treaty Organization (NATO). Chiara’s skepticism, honed by past dubious offers (including sales pitches for purported artworks by Caravaggio and Leonardo da Vinci), prompted her to flag the foreign callback number as suspicious.
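Chiara’s instinct points at a simple defence that holds up even when caller ID is spoofed: verify a callback number against an independently obtained official number before trusting it. A minimal sketch of that rule is below; the numbers and the `OFFICIAL_NUMBERS` directory are illustrative placeholders, not real ministry contacts.

```python
# Toy verification rule: trust a callback request only when the number
# matches an official number looked up independently (e.g. from the
# organization's own website), never one supplied by the caller.
# All numbers below are hypothetical placeholders.
OFFICIAL_NUMBERS = {"+39 06 0000 0000"}

def normalize(number: str) -> str:
    """Keep only digits (plus a leading '+') so that spacing and
    punctuation differences don't defeat the comparison."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return ("+" if number.strip().startswith("+") else "") + digits

def is_trusted_callback(number: str) -> bool:
    """Return True only if the callback number matches a listed one."""
    official = {normalize(n) for n in OFFICIAL_NUMBERS}
    return normalize(number) in official

print(is_trusted_callback("+39 06 0000 0000"))  # matches the listed number
print(is_trusted_callback("+852 5555 0100"))    # foreign number is rejected
```

The key design point is that the reference list comes from a source the recipient controls; a spoofed incoming number, like the fake ministry line used in this scam, never enters the comparison.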
AI Technology: A Blessing or a Curse?
The scam’s success hinged on voice cloning, a technology that uses machine learning to analyze and replicate a person’s speech patterns from minimal audio samples. Experts note that such tools, once the domain of science fiction, are now widely accessible, requiring only a few seconds of recorded audio (readily available from public speeches or media appearances) to generate convincing fakes. Crosetto himself acknowledged the perpetrators’ sophistication, telling an Italian TV program on February 9, 2025: “They are professional scammers who clearly have both the technology and ability to identify targets”.
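The core idea behind voice cloning is that a short clip can be compressed into a compact “speaker embedding” that captures how a voice sounds, which a generative model then reuses to synthesize new speech. The toy sketch below illustrates only the embedding-and-similarity step with made-up numbers standing in for audio; real systems learn these embeddings with neural networks, and nothing here reflects any actual cloning tool.

```python
import math

def embed(samples, dim=4):
    """Toy 'speaker embedding': average fixed-size windows of a sample
    list into a small feature vector. Real voice-cloning systems learn
    such embeddings with neural networks from seconds of speech."""
    window = max(1, len(samples) // dim)
    return [sum(samples[i * window:(i + 1) * window]) / window
            for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two short "recordings" of the same speaker (synthetic values, not audio):
clip_a = [0.10, 0.40, 0.35, 0.90, 0.85, 0.30, 0.20, 0.60]
clip_b = [0.12, 0.38, 0.33, 0.88, 0.90, 0.28, 0.22, 0.58]

# Embeddings of two clips from the same voice score close to 1.0,
# which is what lets a cloned voice pass as the real one.
print(round(cosine(embed(clip_a), embed(clip_b)), 3))
```

The unsettling implication, and the reason a few seconds of public audio suffice, is that once such an embedding exists, a synthesizer can produce arbitrary new sentences that score as “the same speaker”.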
This incident is part of a broader surge in AI-enabled fraud across Italy. In early February, an elderly woman lost €30,000 to a scammer who, impersonating her daughter’s voice, claimed the daughter’s husband had caused a car accident and needed money urgently for legal fees. The proliferation of such cases has raised alarm about the misuse of AI, which can exploit trust in ways traditional scams cannot.
[Read More: Deed Fraud and AI: How Scammers Use Technology to Steal Property Ownership Rights]
Legal and Public Response
Milan prosecutors are now piecing together the scam’s scope, with Crosetto set to file his own complaint following the unauthorized use of his voice. The defence minister’s decision to publicize the fraud aims to protect others, but it also underscores the challenge of combating tech-savvy criminals. Authorities have yet to identify the perpetrators or trace the Hong Kong account, suggesting a complex international operation.
For the victims, the experience has been a sobering lesson. While some, like Menarini, escaped unscathed thanks to vigilant staff, others suffered significant financial losses. The targeting of patriotic entrepreneurs—chosen, Crosetto suggested, for their willingness to aid Italy in a crisis—adds a cynical twist to the scheme.
The Broader Implications
This high-profile case casts a spotlight on the dual nature of AI: a tool of innovation and a weapon for deceit. As voice cloning technology becomes more accessible, experts warn that such scams could proliferate, targeting not just the elite but everyday citizens. Italy’s experience may prompt tighter regulations on AI use, though balancing innovation with security remains a daunting task.
For now, the nation’s business leaders are on high alert, and the public is left grappling with a new reality: in the age of AI, even a familiar voice may not be what it seems. As investigations continue, this scam serves as a stark reminder of technology’s power—and its potential for harm.
[Read More: AI Scams Take Over 2024: Top 10 Threats and How to Stay Safe]
Source: The Guardian