Google Researchers Exploit ChatGPT, Gathering Personal Data

Researchers at Google have made a concerning discovery about OpenAI's popular chatbot, ChatGPT. They found that personal information belonging to real people can be surfaced through ChatGPT queries, potentially exposing sensitive data such as names, email addresses, and phone numbers. Although the chatbot is trained to answer queries without reproducing its training data, the Google researchers were able to extract more than 10,000 unique, verbatim memorized training examples using specially crafted prompts. The vulnerability raises broader questions about the security of machine-learning systems and underscores the need for renewed security analysis of deployed models. With over 180 million users and 1.5 billion visits to its website, ChatGPT's exposure deserves attention and further investigation to protect user privacy.