As Google Ireland faces scrutiny over its AI practices, concern is growing about the criminal misuse of artificial intelligence. Scammers are increasingly using AI voice cloning to impersonate family members or colleagues and steal sensitive information.
Impersonation scams have long been a threat, but artificial intelligence (AI) is making them more dangerous. A recent Deloitte report predicts that generative AI will lead to a sharp rise in fraud-related losses in the coming years.
The report highlights a case in Hong Kong in which an employee unknowingly transferred US$25 million to fraudsters who used deepfake video technology to impersonate her company's chief financial officer and authorize the payment. AI voice cloning is a growing concern for financial institutions as well.
“Gen AI could enable fraud losses to reach US$40 billion in the United States by 2027, from US$12.3 billion in 2023 — a compound annual growth rate of 32%,” the Deloitte report said.
Deepfakes and AI Voice Cloning: What Experts Recommend
Generative AI’s ability to produce deepfake content is a key factor in the projected rise. AI-driven systems can “self-learn,” improving their ability to bypass security measures and evade detection.
Dr Thomas Hyslip, a cybersecurity expert, told WBTV that AI makes fraud easier to commit, for example by cloning voices. “You can clone somebody’s voice and then make phone calls or, you know, use it to try to trick voice recognition. Some of the banks now use voice recognition in place of the phone number,” he said.
The FBI has also acknowledged that AI-generated synthetic audio is becoming easier to create and harder to detect. Criminals are using the technology to defraud individuals, businesses, and financial institutions.
To safeguard against these scams, the FBI urges people to stay alert to possible voice cloning and deepfakes. The Deloitte report, for its part, recommends coupling modern detection technology with human intuition.
“Banks can also focus on developing new fraud detection software using internal engineering teams, third-party vendors, and contract employees, which can help foster a culture of continuous learning and adaptation,” said the report.
Gairika holds positions in BTC. This article is provided for informational purposes only and should not be construed as financial advice. The Shib Magazine and The Shib Daily are the official media and publications of the Shiba Inu cryptocurrency project. Readers are encouraged to conduct their own research and consult with a qualified financial adviser before making any investment decisions.