Artificial intelligence (AI) tools like ChatGPT have been a global sensation since early 2023, but they are not always used for positive purposes. Recently, a security expert discovered a way to get ChatGPT to generate malicious code during testing.
Aaron Mulgrew, a security researcher at Forcepoint, shared the risks of writing malicious code with OpenAI's chatbot. Although ChatGPT is designed to refuse requests to create malware, Mulgrew found a way around that safeguard: instead of asking for a complete program, he prompted the AI to write the code one small piece at a time. When he combined the pieces, Mulgrew realized he had an undetectable data-stealing tool in his hands, sophisticated enough to rival the most advanced malware in circulation today.
Each individual line of code generated by ChatGPT, when combined, can become sophisticated malware.
Mulgrew's discovery serves as a wake-up call about the potential for exploiting AI to create dangerous malware without the need for any hacking groups, and without the creators even writing a single line of code themselves.
Mulgrew's malware is disguised as an ordinary desktop application that can launch automatically on Windows devices. Once inside the operating system, the malware "infiltrates" the machine's files, including Word documents, image files, and PDFs, searching for data to steal.
Once it finds the information it wants, the program breaks the data into fragments and hides them inside image files on the computer. To avoid detection, those images are then uploaded to a folder on Google Drive. What makes the malware especially dangerous is that Mulgrew could fine-tune it and improve its evasion capabilities simply by entering further prompts into ChatGPT.
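Hiding data inside image files, as described above, is a long-known technique called steganography. Mulgrew's actual code has not been published, so the following is only a minimal, harmless sketch of the general idea: writing payload bits into the least-significant bit of each "pixel" byte, a change too small for the eye to notice. All names and the fake pixel buffer are illustrative, not taken from the real malware.

```python
# Illustrative LSB steganography sketch (not Mulgrew's code).
# Hides payload bits in the lowest bit of each pixel byte.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload in the least-significant bit of each pixel byte."""
    # Expand payload into individual bits, most-significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for this image")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the least-significant bits."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        data.append(byte)
    return bytes(data)

# Round-trip demo on fake "pixel" data standing in for a raw image buffer.
image = bytearray(range(256)) * 4
stego = embed(image, b"secret")
print(extract(stego, 6))  # b'secret'
```

Because each pixel byte changes by at most 1, the carrier image looks unchanged to a casual viewer, which is why this kind of hiding is hard for simple scanners to spot.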
Although this was a private test by a security researcher and no attacks were carried out outside the test environment, cybersecurity experts still recognized the danger of such uses of ChatGPT. Mulgrew noted that he does not have much programming experience himself, yet OpenAI's safeguards were still not strong or smart enough to stop his experiment.