What do you think about the trend of hackers "weaponizing" AI to carry out cyber attacks and fraud?

Dr. Nguyen Tuan Khang: According to IBM's 2024 X-Force Threat Intelligence Index, Asia-Pacific, including Vietnam, was the region that suffered the most cyber attacks in the world in 2023, with manufacturing the hardest-hit industry.

Attackers' main methods are still phishing campaigns targeting vulnerable people and the exploitation of security flaws to install malware. In addition, an emerging trend in 2024 is cyber attacks involving artificial intelligence (AI).

A report by Wired points out that many attackers are using generative AI to guide their hacks, build fraudulent chatbots, and fake other people's faces and voices in images and videos using Deepfake technology.

However, alongside this trend, information security systems are also starting to integrate AI features, such as watsonx. Artificial intelligence can be exploited, but it can also replace humans in analyzing, monitoring, and identifying data and in predicting attack scenarios, thereby improving defense capabilities and minimizing information security risks.

Cyber security expert Nguyen Tuan Khang. Photo: Trong Dat

Deepfake scams are becoming more and more common. With the rapid development of AI, how dangerous will these attacks be in the future?

Dr. Nguyen Tuan Khang: Basically, Deepfake is a technology that helps hackers create fake digital identities and thereby impersonate others. It will be a dangerous problem because the technology is becoming more and more sophisticated.

To combat Deepfakes, the first thing to do is to determine whether a person’s image or voice is AI-generated. There is currently no universal tool that can detect Deepfakes immediately because attackers are constantly developing new models.

In addition to Deepfake detection, there is another technique for dealing with it: using technology to analyze behavior. From an organizational and business perspective, it is necessary to develop a system that combines both of these techniques, as in the sketch below.
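
To illustrate that combination, here is a minimal sketch in Python. Everything in it, from the scoring functions to the thresholds, is a hypothetical stand-in for trained models rather than any specific product's API.

```python
# Sketch of the layered defense described above: score the media with a
# deepfake detector and score the session with a behavioral profiler,
# then combine both signals. The functions below are hypothetical
# stand-ins for trained models, not a real product's API.

def deepfake_score(media: bytes) -> float:
    """Stand-in detector: 0.0 = likely real, 1.0 = likely AI-generated."""
    return 0.0  # replace with a real detection model

def behavior_anomaly_score(events: list[dict]) -> float:
    """Stand-in profiler: how far the session deviates from the user's
    normal patterns (device, location, typing rhythm, timing)."""
    return 0.0  # replace with a real behavioral model

def should_flag(media: bytes, events: list[dict]) -> bool:
    m = deepfake_score(media)
    b = behavior_anomaly_score(events)
    # Flag when either signal is strong, or both are moderately elevated,
    # so neither technique has to catch the attack on its own.
    return m >= 0.7 or b >= 0.7 or (m + b) / 2 >= 0.5
```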

In recent times, there have been cyber attacks in which hackers secretly planted malware in a company's system. The malware lies in wait, analyzing all activity and then creating a fake identity to carry out malicious intentions. As Deepfake technology develops, combined with the ability to create AI-generated video, these types of attacks will become much more dangerous in the future.

With the escalation of Deepfake cyberattacks, how can we protect the elderly, children and other vulnerable groups from scammers?

Dr. Nguyen Tuan Khang: The elderly and children are often attacked by scammers using a technique called social engineering, a term for attacks that manipulate human behavior.

Hackers can now use AI in combination with data collection, mining, and analysis to identify people who are likely to be scammed and then find ways to attack them. Besides raising awareness in the community, we must also accept that cases where users are scammed will occur, and use technology to detect and prevent them.

A warning by Thanh Luong Ward Police (Hanoi) about scammers impersonating police officers to trick people into transferring money. Photo: Trong Dat

Recently, there was a case in which a bank employee suspected that an elderly woman who came to transfer money showed signs of being scammed. That employee promptly stopped the transaction and reported it to the authorities. Banks' IT systems now have technology that can take over such tasks from humans.

The value of technology is that even when the sender is verified as the real person, the system will still block the transaction if it suspects that the behavior is being manipulated by someone else. Such tools are called fraud and forgery mitigation systems.
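
A minimal sketch of that idea: the transfer is held whenever behavior looks manipulated, even though identity verification passed. The fields, amounts, and thresholds are illustrative assumptions, not any bank's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    identity_verified: bool   # sender passed identity checks
    amount_vnd: int           # transfer amount in VND
    recipient_is_new: bool    # first transfer to this recipient
    deviation_score: float    # 0.0 = typical for this customer, 1.0 = highly unusual

def decide(t: Transfer) -> str:
    if not t.identity_verified:
        return "reject"
    # Identity alone is not enough: a verified customer may be acting
    # under a scammer's instructions, so behavior is checked as well.
    if t.recipient_is_new and t.deviation_score >= 0.8:
        return "hold_for_review"  # pause the transfer and alert staff
    if t.amount_vnd >= 500_000_000 and t.deviation_score >= 0.5:
        return "hold_for_review"
    return "approve"

# An unusual large transfer to a new recipient is held, even though the
# sender's identity was verified.
print(decide(Transfer(True, 600_000_000, True, 0.9)))  # hold_for_review
```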

Is it time for Vietnam to have regulations to manage AI and to put AI research, development, and use into a legal framework?

Dr. Nguyen Tuan Khang: Regulations for managing AI have been discussed for a long time, but there is still much controversy. For example, the parking lot in my area uses an AI system to recognize license plates, yet thefts still occurred. The controversy then arose over whose fault it was: should the apartment owner, the security guard, or the unit that developed the AI system be responsible?

Since then, the building has changed its rules: residents can opt in to AI license-plate recognition for convenience, but they have to accept the risks. Those who agree can use the automatic doors; those who don't have to park their cars the old way. We need regulations like that.

Similarly, IBM once developed an AI system to support cancer treatment. If the system prescribes a medicine but the patient still cannot be saved after taking it, is it the doctor's fault or the AI's fault?

I think AI regulation needs to be specific, clearly stating what can and cannot be done when developing AI applications. To make the world safer, the most basic regulation we can adopt is to require large money transfers to be biometrically authenticated. In such a situation, people whose identity information has been stolen can completely avoid losing money.
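
As a rough illustration of that rule, the sketch below gates large transfers behind a biometric check. The threshold and matcher are assumptions for illustration, not a statement of any actual regulation.

```python
LARGE_TRANSFER_VND = 10_000_000  # hypothetical threshold, not an actual rule

def biometric_match(live_sample: bytes, enrolled_template: bytes) -> bool:
    """Stand-in for a face or fingerprint matcher against the enrolled template."""
    return live_sample == enrolled_template  # replace with a real matcher

def authorize_transfer(amount_vnd: int, live: bytes, enrolled: bytes) -> bool:
    if amount_vnd < LARGE_TRANSFER_VND:
        return True  # small transfers: the usual password/OTP flow suffices
    # For large sums, stolen credentials alone are not enough: the scammer
    # would also have to pass a live biometric check against the real owner.
    return biometric_match(live, enrolled)
```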

Thank you, sir.
