Chinese AI company DeepSeek has paused new user sign-ups for its DeepSeek-V3 chatbot, citing an ongoing “large-scale” cyberattack aimed at disrupting its services.
DeepSeek quickly caught the spotlight last week after introducing an advanced AI model that reportedly rivals or surpasses the models of major US companies at a much lower cost.
The announcement of its new model triggered a massive sell-off in the US stock market, signaling the intensifying competition in the AI sector.
Rising Popularity Attracts Cyberattacks
After the unveiling of its highly competitive AI model, DeepSeek has seen a dramatic increase in attention from cybercriminals or, as some speculate, corporate adversaries.
The cyberattack coincides with DeepSeek surpassing ChatGPT to become the top downloaded app on the App Store, prompting the company to halt new user registrations.
“Due to large-scale malicious attacks on DeepSeek’s services, registration may be busy. Please wait and try again,” a message on DeepSeek’s website reads. “Registered users can log in normally. Thank you for your understanding and support.”
Details of the Cyberattack
While specific details about the nature of the cyberattack have not been disclosed, DeepSeek is suspected to have been the victim of a distributed denial-of-service (DDoS) attack targeting its API and web chat platforms.
A DDoS attack involves overwhelming a service with more traffic than it can handle, which depletes the system’s resources and causes significant disruption until the attack is stopped or mitigated.
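A common first line of defense against that kind of flood is to cap how quickly any single client can hit the service. The sketch below is purely illustrative and does not describe DeepSeek’s actual defenses; it shows a minimal per-client token-bucket rate limiter in Python, with the class name, rates, and sample IP address invented for the example.

```python
import time
from collections import defaultdict


class TokenBucket:
    """Illustrative per-client token bucket: each client may make `rate`
    requests per second, with bursts of up to `capacity` requests."""

    def __init__(self, rate=5.0, capacity=10):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # tokens remaining per client
        self.last_seen = defaultdict(time.monotonic)  # last refill time per client

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[client_id]
        self.last_seen[client_id] = now
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False  # budget exhausted: drop or delay this request


if __name__ == "__main__":
    limiter = TokenBucket(rate=2.0, capacity=5)
    # A burst of 20 requests from one hypothetical client IP: the first few
    # pass, the rest are rejected until tokens refill.
    results = [limiter.allow("198.51.100.7") for _ in range(20)]
    print(results.count(True), "allowed,", results.count(False), "rejected")
```

In practice, large-scale DDoS mitigation happens at the network edge (CDNs, scrubbing services, upstream filtering) rather than in application code, but the same budget-per-client idea applies.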
Interestingly, new users can still access DeepSeek by logging in through their Google accounts, which requires sharing personal information with DeepSeek, such as their name, email address, language preference, and profile picture.
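For context, those are the standard profile fields a relying party receives through Google’s OpenID Connect sign-in flow. The sketch below is a minimal illustration of fetching them from Google’s userinfo endpoint; the access token is a placeholder, and this is not DeepSeek’s code.

```python
import requests  # third-party `requests` package

# Placeholder access token, as would be obtained after the user completes
# Google's OAuth consent screen (hypothetical value).
ACCESS_TOKEN = "ya29.example-token"

# Google's OpenID Connect userinfo endpoint returns the profile fields the
# signed-in user has consented to share with the application.
resp = requests.get(
    "https://openidconnect.googleapis.com/v1/userinfo",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
profile = resp.json()

# Typical fields: name, email address, locale (language preference), and
# profile picture URL.
print(profile.get("name"), profile.get("email"),
      profile.get("locale"), profile.get("picture"))
```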
Security Flaws Exposed Amidst Attacks
DeepSeek is also facing intense scrutiny from the cybersecurity community after a prominent cybersecurity firm, KELA, reported successfully “jailbreaking” the DeepSeek model to produce dangerous outputs.
The report from KELA’s AI Red Team highlighted that they could manipulate the model in various scenarios to generate harmful outputs, including crafting ransomware, forging sensitive documents, and providing detailed guides for producing toxins and explosives.