Vietnam.vn - Nền tảng quảng bá Việt Nam

ChatGPT users at risk of information theft

Báo Kinh tế và Đô thịBáo Kinh tế và Đô thị30/09/2024


ChatGPT's long-term memory feature is a new feature introduced by OpenAI in February 2024 and expanded in September.

Recently, security researcher Johann Rehberger recently revealed a serious vulnerability related to this feature.

It is known that this new feature helps chatbots store information from previous conversations. Thanks to that, users do not have to re-enter information such as age, interests or personal views every time they chat. However, this has become a weakness for attackers to exploit.

ChatGPT users at risk of information theft
ChatGPT users at risk of information theft

Johann Rehberger showed that hackers could use a technique called prompt injection—inserting malicious instructions into the memory, forcing the AI ​​to obey. These commands would be delivered through untrusted content such as emails, documents, or websites.

Once these fake memories are stored, the AI ​​will continue to use them as real information in conversations with users, which could lead to the collection and misuse of users' personal data.

Rehberger provided a specific example by sending a link containing a malicious image that caused ChatGPT to store a false memory. This information would affect ChatGPT's future responses. In particular, any information the user entered would also be sent to the hacker's server.

Accordingly, to trigger the attack, the hacker only needs to convince ChatGPT users to click on a link containing a malicious image. After that, all of the user's chats with ChatGPT will be redirected to the attacker's server without leaving any trace.

Rehberger reported the bug to OpenAi in May 2024, but the company only considered it a security flaw. After receiving evidence that user data could be stolen, the company released a temporary patch on the web version of ChatGPT.

While the issue has been temporarily fixed, Rehberger notes that untrusted content can still use Prompt injection to insert fake information into ChatGPT's long-term memory. This means that in certain cases, hackers could still exploit the vulnerability to store malicious memories to steal personal information long-term.

OpenAI recommends that users regularly check ChatGPT's stored memories for false positives, and the company also provides detailed instructions on how to manage and delete memories stored in the tool.



Source: https://kinhtedothi.vn/nguoi-dung-chatgpt-co-nguy-co-bi-danh-cap-thong-tin.html

Comment (0)

No data
No data

Same tag

Same category

36 military and police units practice for April 30th parade
Vietnam not only..., but also...!
Victory - Bond in Vietnam: When top music blends with natural wonders of the world
Fighter planes and 13,000 soldiers train for the first time for the April 30th celebration

Same author

Heritage

Figure

Business

No videos available

News

Political System

Local

Product