Cybercriminals are leveraging generative artificial intelligence (AI) tools such as ChatGPT to craft phishing emails aimed at businesses and other targets, according to a report by cybersecurity firm SlashNext. In a survey of more than 300 cybersecurity professionals in North America, nearly half said they had encountered a phishing attack targeting a business, and 77% said they had personally been targeted by a bad actor.
SlashNext CEO Patrick Harr said the findings reinforce concerns about how generative AI is contributing to the rise of scams. Fraudsters often use AI to develop malware or social engineering scams to increase their success rates.
The report also found that an average of 31,000 online scams occur every day.
ChatGPT's launch in late 2022 coincided with the period in which SlashNext observed a spike in phishing attacks, Harr added.
An Internet crime report from the US Federal Bureau of Investigation (FBI) found that business email compromise, a scheme in which fraudulent emails are sent to businesses, caused about $2.7 billion in losses in 2022.
While there has been some debate about the true impact of generative AI on cybercrime, Harr believes chatbots like ChatGPT are being weaponized for cyberattacks. In July, for example, SlashNext researchers discovered two malicious chatbots, WormGPT and FraudGPT, that cybercriminals were using to carry out sophisticated phishing campaigns.
Hackers are using generative AI and natural language processing (NLP) models to commit phishing, says Chris Steffen, director of research at Enterprise Management Associates. By using AI to analyze old correspondence and articles and to mimic government or corporate documents, attackers can produce phishing emails that are extremely convincing and hard to distinguish from legitimate messages.
To combat the rise in attacks, organizations need to raise security awareness among employees and stay alert for suspicious emails or activity. Another defense is to deploy email filtering tools that use AI and machine learning to block phishing messages before they reach users. Organizations should also conduct regular security audits, identify system vulnerabilities and gaps in employee training, and promptly address known issues to reduce the risk of attack.
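To make the filtering idea concrete, below is a minimal sketch of the kind of statistical classification such email filters perform, using a toy naive Bayes model in pure Python. This is an illustration only, not the method of any product mentioned above; the class name, word lists, and training examples are all invented for the example.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesFilter:
    """Toy naive Bayes classifier: phishing vs. legitimate email."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.doc_counts = {"phish": 0, "ham": 0}

    def train(self, text, label):
        """Record one labeled example ('phish' or 'ham')."""
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text):
        """Return the log-odds that the message is phishing (positive = phishing)."""
        total_docs = sum(self.doc_counts.values())
        log_odds = (math.log(self.doc_counts["phish"] / total_docs)
                    - math.log(self.doc_counts["ham"] / total_docs))
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["ham"])
        phish_total = sum(self.word_counts["phish"].values())
        ham_total = sum(self.word_counts["ham"].values())
        for word in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the estimate.
            p_phish = (self.word_counts["phish"][word] + 1) / (phish_total + len(vocab))
            p_ham = (self.word_counts["ham"][word] + 1) / (ham_total + len(vocab))
            log_odds += math.log(p_phish) - math.log(p_ham)
        return log_odds

    def is_phishing(self, text):
        return self.score(text) > 0

# Invented training examples for illustration only.
f = NaiveBayesFilter()
f.train("urgent verify your account password immediately", "phish")
f.train("your invoice payment is overdue click this link now", "phish")
f.train("meeting notes attached from this morning's standup", "ham")
f.train("lunch on friday to celebrate the release", "ham")

print(f.is_phishing("urgent click to verify your password"))  # True
print(f.is_phishing("notes from the meeting on friday"))      # False
```

Real filtering products combine far richer signals (sender reputation, URL analysis, header anomalies) with much larger trained models, but the underlying principle is the same: score each message against patterns learned from known-bad and known-good mail.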