close
close

topicnews · September 20, 2024

Data leak at Star Health Telegram, Samsung ban and more

Data leak at Star Health Telegram, Samsung ban and more

1. A bug in ChatGPT lets user chat titles leak

In March 2023, a bug allowed users to view other users’ chat titles using ChatGPT. Although OpenAI CEO Sam Altman denied any access to chat content, it still angered users about their privacy as the bug output titles such as “Development of Chinese Socialism.” Within hours, OpenAI disabled the chatbot and promised a follow-up investigation into the technical findings.

2. Google Gemini Advanced data leak

Google’s Gemini AI was found to have a weakness, especially when used with Google Workspace or Gemini API. The chatbot accidentally revealed private data such as passwords when asked in a certain way. But when asked directly, it refused to reveal the passphrase. However, when asked indirectly, it revealed sensitive information. This weakness was very significant considering how secure Google AI is with data.

3. GPT-3.5 Turbo reveals personal information

Recently, in late 2023, it was discovered that researchers were able to extract personal email addresses, including 30 of New York Times employees, from GPT-3.5 Turbo. This breach made it possible to bypass the model’s privacy query limitations, but most importantly, it immediately became clear that ChatGPT could reveal private information if sufficient adjustments were made. This implies that alarms had already been raised about how such generative AI tools could reveal confidential information.

4. OpenAI data leak and hacker attack

In the second case, OpenAI was accused of a hacking attack in which a hacker stole users’ conversations, leaving their proposals and presentations of confidential deals visible to anyone. Despite the good protocol followed by users, the incident was traced back to Sri Lanka. This is the second major leak since the security flaw that exposed OpenAI’s payment information in the March 2023 incident.

5. Samsung bans ChatGPT due to sensitive data leaks

Samsung has banned the chatbot tool from accessing its internal files after confidential technical codes belonging to one of its employees were stolen. The company therefore reviewed and strengthened its security practices for the use of AI systems, given the risk of data leaks containing confidential information to some AI tools. It expressed the growing potential to raise alarm in organizations regarding the use of AI tools in corporate structures and data protection.