DeepSeek AI Chatbot Exposed: 1M Sensitive Records Leaked, Misinformation Raises Concerns

Image Credit: Sergey Zolkin | Unsplash

DeepSeek, a Chinese-developed AI chatbot, has surged in popularity but is now under scrutiny for serious privacy risks and its role in spreading false information.

[Read More: DeepSeek AI Faces Security and Privacy Backlash Amid OpenAI Data Theft Allegations]

DeepSeek AI Chatbot: A Brief Overview

Launched in January 2025, the DeepSeek AI chatbot has rapidly gained popularity, surpassing 2 million downloads within weeks. Its rise to the top of the iOS App Store's free app rankings highlights its appeal.

[Read More: DeepSeek’s R1 Model Redefines AI Efficiency, Challenging OpenAI GPT-4o Amid US Export Controls]

Data Breach Puts Sensitive Information at Risk

On January 30, 2025, cybersecurity firm Wiz revealed a significant security lapse in DeepSeek AI's data management practices. Wiz discovered that a publicly accessible ClickHouse database belonging to DeepSeek was left unsecured, exposing over a million lines of sensitive information. This data included user chat histories, API keys, backend operational details, and other critical internal data.

[Read More: OpenAI's Data Leak: Unveiling the Cybersecurity Challenge]

Unsecured Database Allowed Unauthorized Access

Wiz's investigation identified that the database was accessible without any authentication, allowing unrestricted access to its contents. This lack of security measures meant that anyone with knowledge of the database's location could potentially retrieve, modify, or delete data.
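To illustrate why this matters, here is a minimal sketch of how an unauthenticated ClickHouse HTTP endpoint can be queried. The host, port, and table name below are hypothetical stand-ins, not details of DeepSeek's actual deployment; the point is that ClickHouse's standard HTTP interface (port 8123 by default) will run SQL for anyone who can reach it if authentication is not configured.

```python
# Minimal sketch: querying a ClickHouse server whose HTTP interface
# (default port 8123) is reachable without credentials.
# The host and table names below are hypothetical.
import requests

ENDPOINT = "http://exposed-db.example.com:8123"  # hypothetical address

# With no authentication configured, a plain GET request runs SQL directly.
tables = requests.get(ENDPOINT, params={"query": "SHOW TABLES"}, timeout=10)
print(tables.text)  # lists every table the server exposes

# Once table names are known, their contents can be read the same way.
rows = requests.get(
    ENDPOINT,
    params={"query": "SELECT * FROM chat_logs LIMIT 5 FORMAT JSONEachRow"},
    timeout=10,
)
print(rows.text)
```

Because reads and writes travel over the same unauthenticated channel, an attacker in this scenario could not only extract data but also modify or delete it.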

[Read More: AI Data Collection: Privacy Risks of Web Scraping, Biometrics, and IoT]

User Data, API Keys, and Internal Details Left Unprotected

The exposed data encompassed a wide range of sensitive information, including:

  • User Chat Histories: Logs of interactions between users and DeepSeek's AI assistant, potentially containing personal or confidential information.

  • API Keys and Secret Keys: Credentials that could allow unauthorized access to DeepSeek's internal systems or third-party services.

  • Backend Operational Details: Information about the internal workings of DeepSeek's infrastructure, which could be exploited to identify further vulnerabilities.

[Read More: AI Scams Take Over 2024: Top 10 Threats and How to Stay Safe]

What the Data Breach Means for Users

The exposure of such sensitive data underscores significant vulnerabilities in DeepSeek's data management and security protocols. Unauthorized access to API keys and backend details could allow malicious actors to exploit DeepSeek's systems, potentially leading to data breaches, service disruptions, or unauthorized data manipulation. Moreover, the exposure of user chat histories raises serious privacy concerns, as personal and potentially sensitive user information was left unprotected.

[Read More: AI-Powered Global Gambling Scam Exposed: Over 1,300 Fake Sites Targeting Victims Worldwide]

How DeepSeek Responded to the Security Breach

Upon being alerted by Wiz, DeepSeek promptly secured the exposed database. However, the incident highlights the critical importance of implementing robust security measures, especially for companies handling large volumes of sensitive data. Regular security audits, proper authentication protocols, and continuous monitoring are essential to prevent such exposures.
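As one concrete example of what such monitoring might look like, the sketch below checks whether any host in an inventory answers ClickHouse SQL over HTTP without credentials and flags it. The host list and port are assumptions for illustration only, not a description of DeepSeek's setup.

```python
# Minimal audit sketch: flag ClickHouse HTTP endpoints that execute SQL
# without requiring credentials. Hosts below are hypothetical.
import requests

HOSTS = ["db-internal-1.example.com", "db-internal-2.example.com"]  # hypothetical inventory

def is_open_clickhouse(host: str, port: int = 8123) -> bool:
    """Return True if the server runs SQL over HTTP with no authentication."""
    try:
        resp = requests.get(
            f"http://{host}:{port}", params={"query": "SELECT 1"}, timeout=5
        )
        return resp.status_code == 200 and resp.text.strip() == "1"
    except requests.RequestException:
        return False

for host in HOSTS:
    if is_open_clickhouse(host):
        print(f"WARNING: {host} accepts unauthenticated queries")
```

Running a check like this on a schedule, alongside requiring credentials and restricting database ports to internal networks, is the kind of routine control that would have caught this exposure early.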

[Read More: Cisco AI Defense: Tackling Security Risks in Enterprise AI Systems]

DeepSeek AI Chatbot’s Misinformation Problem

A comprehensive study by NewsGuard, an organization specializing in assessing information reliability, evaluated DeepSeek's performance in responding to news and information prompts. The findings were published on January 29, 2025.

  • Accurate Responses: The audit found that DeepSeek's chatbot achieved an accuracy rate of 17% in delivering news and information.

  • Repetition of False Claims: In 30% of cases, the chatbot reiterated false information.

  • Vague or Unhelpful Responses: Approximately 53% of the time, DeepSeek's answers were either vague or not useful.

Taken together, the 30% of responses repeating false claims and the 53% that were non-answers amount to an overall fail rate of 83%, ranking DeepSeek’s chatbot lowest among the 11 chatbots tested and highlighting significant challenges in delivering reliable information.

[Read More: Top 10 AI Chatbots You Need to Know in 2025]

Comparing AI Models in Fact-Checking

A similar study was conducted in June 2023, comparing multiple AI chatbots to assess their fact-checking capabilities. Titled "News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4, Bing AI, and Bard in News Fact-Checking", the study by Kevin Matthe Caramancion found that AI models demonstrated moderate proficiency, with an average score of 65.25 out of 100. OpenAI's GPT-4 stood out with a score of 71.

[Read More: AI Breakthrough: OpenAI’s o1 Model Poised to Surpass Human Intelligence]

Censorship in Chinese AI Limits Fact-Checking Accuracy

Censorship within Chinese-developed applications, such as DeepSeek AI, significantly impacts their fact-checking capabilities. These platforms often adhere to stringent government regulations that restrict discussions on politically sensitive topics, leading to self-censorship and the omission of critical information. For instance, when users inquired about events like the Tiananmen Square protests or the status of Taiwan, DeepSeek either refused to respond or provided heavily censored answers.

This enforced censorship limits the scope of information these AI models can access and disseminate, thereby compromising their ability to provide accurate and comprehensive fact-checking. By excluding or altering information on sensitive subjects, the models may inadvertently propagate misinformation or present biased perspectives, undermining their reliability as fact-checking tools.

[Read More: Why Did China Ban Western AI Chatbots? The Rise of Its Own AI Models]


Source: NY Post, Wiz, NewsGuard, arXiv, The Guardian
