1. March 2023 Data Exposure Incident
In March 2023, OpenAI disclosed a data exposure caused by a bug in the open-source redis-py client library. For a brief window, approximately 1.2% of ChatGPT Plus subscribers had personal data exposed, including names, email addresses, and partial payment information (not full credit card numbers). OpenAI patched the issue promptly and notified affected users.
2. Regulatory Actions on Privacy
In December 2024, the Italian data protection authority fined OpenAI €15 million for violations of European privacy law, including inadequate legal grounds for processing user data and a lack of transparency. The ruling followed an earlier temporary ban of ChatGPT in Italy, which had prompted OpenAI to adjust its data-usage consent options.
3. Inference of Sensitive Data
Research from ETH Zurich demonstrated that ChatGPT could infer personal details such as race, occupation, or location from casual user interactions. Although such inference is not an intended feature, it raises concerns about how personal attributes might be deduced from user data without explicit consent.
4. Prompt Injection Vulnerabilities
In May 2024, cybersecurity researchers disclosed that ChatGPT (GPT-4 and GPT-4o) was vulnerable to prompt injection attacks. These attacks could be used to silently collect user data or monitor interactions when the memory feature was enabled, exposing a new frontier of risk in AI-assisted privacy breaches.
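To make the attack class concrete, here is a minimal, illustrative sketch (not OpenAI's actual defence) of one common first-line mitigation: scanning untrusted content for instruction-like phrasing before it reaches a model with memory enabled. The pattern list is a hypothetical heuristic and would be far from exhaustive in practice.

```python
import re

# Hypothetical heuristic patterns for instruction-like phrasing often
# seen in prompt injection attempts (illustrative, not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"send .* to http",
    r"remember (this|the following)",
]

def looks_like_injection(text: str) -> bool:
    """Return True if untrusted text contains common injection phrasing."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Real defences layer such filters with stricter measures, such as separating untrusted content from instructions at the prompt level, since keyword heuristics alone are easy to evade.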
5. February 2025 Alleged Credential Leak
A threat actor in early 2025 claimed to possess login credentials for over 20 million ChatGPT accounts, offering them on the dark web. OpenAI stated that no confirmed evidence of a platform-wide breach was found, but the incident prompted renewed scrutiny of account security practices.
6. April 2025 – Misdirected Data Incident
A user reported that, after uploading a photo of a plant to ChatGPT for care advice, the AI responded with another individual’s personal documents, including a CV and club membership data. This raised urgent questions about the handling and isolation of uploaded content.
7. SSRF Security Vulnerability
A server-side request forgery (SSRF) vulnerability was disclosed in March 2025. SSRF flaws let an attacker supply a crafted URL that the server then fetches on the attacker's behalf, potentially reaching internal systems that should not be exposed. While there was no evidence of mass exploitation, the disclosure underscored the importance of ongoing security patching.
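As a rough sketch of how SSRF is commonly mitigated (this is a generic defence, not the fix applied in this case), a server can validate user-supplied URLs and refuse to fetch anything that resolves to a private, loopback, or link-local address:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private or loopback addresses,
    a common first line of defence against SSRF."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host: refuse to fetch
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Production-grade defences also pin the resolved IP for the actual request (to avoid DNS rebinding) and restrict redirects, since a check-then-fetch gap can itself be exploited.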
8. Persistent Sessions After Password Resets
Some users reported that active sessions remained valid even after a password reset, pointing to possible flaws in session termination and account access control. The issue is reportedly under review by the platform's security team.
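The expected behaviour can be sketched with a per-user session-version counter (an illustrative pattern, not OpenAI's implementation): every token records the version it was issued under, and bumping the version on password reset invalidates all outstanding sessions at once.

```python
class UserSessions:
    """Toy model of session invalidation via a per-user version counter."""

    def __init__(self) -> None:
        self.session_version = 1   # bumped whenever the password is reset
        self.tokens: dict[str, int] = {}  # token -> version at issue time

    def issue_token(self, token: str) -> None:
        self.tokens[token] = self.session_version

    def is_valid(self, token: str) -> bool:
        # A token is only valid if issued under the current version.
        return self.tokens.get(token) == self.session_version

    def reset_password(self) -> None:
        # Advancing the version invalidates every existing session.
        self.session_version += 1
```

The reported behaviour corresponds to skipping the version bump (or an equivalent revocation step), so old tokens keep validating after the reset.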
Recommendations for Users
- Do not share sensitive information (e.g., medical, financial, or identification data) in chats.
- Change passwords regularly and use two-factor authentication.
- Delete or minimize stored chat history when possible.
- Monitor for updates on security incidents or new features impacting data privacy.
While ChatGPT provides powerful assistance, its use involves important trade-offs between utility and data security. Awareness and caution are essential for safe interaction.

Fun fact: This article was generated by AI, then curated by the team at Bold Voices.