ChatGPT, OpenAI’s popular AI chatbot, is widely known for answering questions and assisting with everyday tasks. However, recent reports show that some threat actors are misusing the technology for harmful activities, such as creating malware and advising on criminal tactics.
A new study highlights a troubling use of GPT-4o, the model behind ChatGPT’s real-time voice API, in financial scams. Researchers at the University of Illinois Urbana-Champaign (UIUC) found that cybercriminals could abuse its real-time voice capabilities to impersonate people convincingly and trick victims into making bank or crypto transfers, buying gift cards, or even sharing personal credentials.
The UIUC team simulated a range of scams, such as using GPT-4o to pose as a trusted individual and walk victims through transferring funds on legitimate sites like Bank of America. According to researcher Richard Fang, the tests showed success rates between 20% and 60%, depending on the complexity of the scam, with each attempt involving up to 26 online actions and lasting up to three minutes.
While bank transfer scams were less successful because of the complex site navigation involved, credential-theft scams targeting Gmail and Instagram succeeded 60% and 40% of the time, respectively. The cost for cybercriminals to execute these scams is low: an average of $0.75 (around Rs 63) per attempt, rising to approximately $2.51 (around Rs 211) for bank transfer scams.
OpenAI acknowledged the findings and said it is continually working to prevent misuse while preserving the chatbot’s helpfulness. The company also noted that studies like UIUC’s help it strengthen ChatGPT’s defences against potential abuse.
In recent months, AI voice cloning scams have surged. Bharti Airtel’s chairman, Sunil Mittal, recently reported being shocked at how realistic an AI-cloned recording of his voice sounded.