Online fraudsters not only impersonate relatives and police officers but also manipulate artificial intelligence (AI) tools to generate hundreds of scam scenarios targeting users.
Cybersecurity expert Ngo Minh Hieu said that fraudsters manipulate AI, creating hundreds of scam scenarios in minutes - Photo: VU TUAN
According to the social enterprise Chongluadao.vn, fraudsters have used AI to generate malware, write scam scripts, and fake voices and images through deepfakes.
According to cybersecurity expert Ngo Minh Hieu (Hieu PC), a representative of Chongluadao.vn, one dangerous trick is to "trick AI" into loading malware. "They create fake audio or image files with embedded malware that the AI does not recognize. When the AI system processes them, the malware activates and takes control," said Hieu PC.
Fraudsters have, for example, used AI to fake the voices and images of relatives on FaceTime calls to trick victims into transferring money.
Recently, the frequency of fraud has increased significantly with the support of AI tools. By harnessing AI to commit fraud, scammers overcome language and geographical barriers, and their schemes are becoming more sophisticated and dangerous.
Expert Hieu PC observed that no matter what tool is used to commit fraud, cybercriminals always follow a script. This is a conclusion his team of associates drew from receiving and processing hundreds of online fraud reports.
Common forms of fraud include impersonating relatives or employees of state agencies, the police, and electricity companies. More sophisticated scenarios lure victims into investment traps, paid "tasks," or online dating.
Cybersecurity experts say that the first step to avoid having your image faked is not to share personal images publicly on social networks. Calls and messages asking you to transfer money, click on links, or provide OTP codes are 99% likely to be scams.
Hackers' Tricks for Attacking AI
According to cybersecurity experts, an "adversarial attack" is a technique hackers use to deceive AI: deliberately crafted, misleading inputs cause the AI to misinterpret data or be exploited. As a result, the AI may run malicious code inside a system or carry out the commands given by the fraudster.
Fraudsters exploit this weakness to bypass AI, especially AI protection systems (such as antivirus software, voice recognition, or banking transaction checks).
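To make the "adversarial attack" idea concrete, below is a minimal sketch in the style of the fast gradient sign method (FGSM): a small, deliberately chosen change to an input makes a classifier flip its decision. The tiny linear model, its hand-picked weights, and the "legitimate vs. fraud" labels are illustrative assumptions, not details from the article; real attacks target much larger systems such as malware scanners or voice-recognition models, but the principle of nudging an input just enough to change the verdict is the same.

# Illustrative sketch only (assumed toy model, not the article's example):
# an FGSM-style perturbation flips a classifier's prediction.
import torch
import torch.nn as nn

# Toy "detector": 2 input features -> 2 classes, with fixed, hand-picked
# weights so the example is deterministic.
model = nn.Linear(2, 2, bias=False)
with torch.no_grad():
    model.weight.copy_(torch.tensor([[1.0, 0.0],     # logit for class 0 ("legitimate")
                                     [-1.0, 0.0]]))  # logit for class 1 ("fraud")

x = torch.tensor([[0.3, 0.0]], requires_grad=True)   # original input, classified as 0
label = torch.tensor([0])

# 1. Loss of the correct prediction, and its gradient with respect to the input.
loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

# 2. FGSM step: move the input a small amount in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * x.grad.sign()

# 3. The slightly perturbed input is now misclassified.
with torch.no_grad():
    print("original prediction:   ", model(x).argmax(dim=1).item())      # prints 0
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())  # prints 1

The key point of the sketch is that the attacker does not need to break into the model at all; feeding it a carefully distorted input is enough to make a protection system look the other way.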
Source: https://tuoitre.vn/lua-dao-mang-lua-ca-ai-tao-kich-ban-thao-tung-tam-ly-20250228163856719.htm