Delving into the Dangers of DeepSeek

DeepSeek exploded onto the AI scene in late January of this year. The company’s model demonstrated that the People’s Republic of China (PRC) had nearly closed the gap with U.S. AI companies. Its claims to deliver AI more cheaply, with greater energy efficiency, and without high-end chips rattled the stock market, as they suggested that many of the competitive advantages U.S. AI companies were presumed to hold might not exist. At the same time, DeepSeek raised alarms around the world about its security risks. Thus far, Italy, Taiwan, Australia, and South Korea have blocked or banned access to the app on government devices, citing national security concerns about its data management practices. In the United States, federal agencies such as NASA and the U.S. Navy have instructed employees not to use DeepSeek for the same reason.
DeepSeek’s arrival also comes at a time when U.S. President Donald Trump is reenvisioning U.S. AI leadership. In his first weeks in office, Trump revoked the Biden administration’s executive order on AI regulation, requested a new AI action plan within 180 days, and pushed for greater AI leadership from the private sector. While the new administration is still shaping U.S. AI policy, DeepSeek presents risks that may affect its calculus in balancing innovation and security. These same risks also pose challenges for the United States’ partners and allies, as well as for the tech industry.
DeepSeek’s open-source structure means that anyone can download and modify the application. While open-source models can be made secure when built with strong safety guardrails, DeepSeek’s design allows users to alter not only its functionality but also its safety mechanisms, creating a far greater risk of exploitation. The absence of robust safeguards leaves the model particularly vulnerable to jailbreaking, in which attackers bypass what little safety infrastructure exists to force the model to generate harmful content. A recent Cisco study underscored this weakness, finding that DeepSeek failed to block a single harmful prompt in its security assessments, including prompts related to cybercrime and misinformation. By comparison, OpenAI’s GPT-4o blocked 86 percent of harmful prompts, while Google’s Gemini blocked 64 percent. Further research indicates that DeepSeek is 11 times more likely than other AI models to be exploited by cybercriminals, underscoring a critical weakness in its design.
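To make concrete how assessments like Cisco’s work, the sketch below runs a simple block-rate evaluation: known-harmful prompts are sent to the model under test, and the share of refusals is counted. The placeholder prompts, refusal heuristics, and query_model stub are illustrative assumptions, not the study’s actual harness or methodology.

```python
# Minimal sketch of a harmful-prompt "block rate" evaluation, in the spirit
# of the Cisco study cited above. The placeholder prompts, refusal
# heuristics, and query_model() stub are illustrative assumptions only.

# Stand-ins for prompts drawn from a harm benchmark (e.g., HarmBench).
HARMFUL_PROMPTS = [
    "<cybercrime prompt from a harm benchmark>",
    "<misinformation prompt from the same benchmark>",
]

# Heuristic phrases that commonly signal a refusal in a model's reply.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm not able to")


def query_model(prompt: str) -> str:
    """Placeholder for a real chat-completion call to the model under test."""
    raise NotImplementedError("wire this to the model's API")


def block_rate(prompts: list[str]) -> float:
    """Return the fraction of harmful prompts the model refuses to answer."""
    blocked = sum(
        1
        for p in prompts
        if any(m in query_model(p).lower() for m in REFUSAL_MARKERS)
    )
    return blocked / len(prompts)


# Cisco reported a 0 percent block rate for DeepSeek on tests of this kind,
# versus 86 percent for GPT-4o and 64 percent for Gemini.
```

Published evaluations typically use trained classifiers rather than keyword matching to judge refusals, but the metric is the same: a model with no effective guardrails scores zero.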
Western companies such as OpenAI, Anthropic, and Google take a more controlled approach to reduce these risks. They implement oversight through their application programming interfaces (APIs), limiting access and monitoring usage in real time to prevent misuse. Companies like OpenAI and Anthropic invest substantial resources in AI security and in aligning their models with what they define as “human values.” They have also collaborated with organizations like the U.S. AI Safety Institute and the UK AI Safety Institute to continuously refine safety protocols through rigorous testing and red-teaming.
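As a concrete illustration of that API-side oversight, the sketch below screens a user prompt with OpenAI’s publicly documented moderation endpoint before forwarding it to a chat model. The two-step gate is a simplified stand-in for the real-time monitoring these companies describe, not a depiction of any provider’s internal pipeline.

```python
# Sketch of API-side oversight: screen a request with a provider's
# moderation endpoint before it reaches the chat model. A simplified
# stand-in for real-time usage monitoring, not any provider's actual
# internal pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def screen(text: str) -> bool:
    """Return True if the moderation model flags the text as harmful."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].flagged


def answer(prompt: str) -> str:
    """Refuse flagged prompts; otherwise forward them to the chat model."""
    if screen(prompt):
        return "Request refused: flagged by the moderation layer."
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


if __name__ == "__main__":
    print(answer("Explain how phishing emails typically work."))
```

Because the check runs on the provider’s servers, a user cannot simply edit it out; that server-side control is precisely what an openly downloadable model like DeepSeek lacks.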
While these Western companies focus on building secure AI systems that emphasize transparency, accountability, and long-term safety, the PRC’s approach appears to be driven by a Chinese Communist Party (CCP) imperative to create competitive models as fast as possible. Rather than ensuring robust security at every stage of development, DeepSeek sacrifices these protections to satisfy the CCP’s desire for speed and influence, increasing the model’s potential for misuse.
The consequences of these vulnerabilities are significant. Research has consistently shown that while AI systems from leading U.S. firms can increase the efficiency of cyber operations, they have not enabled novel offensive capabilities. Malicious cyber actors have misused Western AI models from OpenAI, Anthropic, and Google to streamline certain attacks, such as crafting more convincing phishing emails or refining existing malicious code, but these models have not fundamentally altered the nature of cyberattacks.
DeepSeek’s lack of safety guardrails and open-source design, on the other hand, allow malicious actors to perform actions that Western models still largely prevent. DeepSeek enables users to generate fully functional malware from scratch, including ransomware code, without technical expertise, letting malicious cyber actors level up their efforts, scale their operations, and automate attacks that would otherwise require more skill and time. Security researchers at Check Point confirmed that criminal cyber networks are actively using DeepSeek to generate infostealer malware that extracts login credentials, payment data, and other sensitive information from compromised devices. Hackers have also exploited the model to bypass banking anti-fraud systems and automate financial theft, lowering the technical bar for committing these crimes.
Beyond its design risks, DeepSeek is the latest tool in the PRC’s cyber espionage toolkit for obtaining more comprehensive intelligence and supporting the country’s strategic and geopolitical objectives. The platform’s Terms of Service state that DeepSeek is “governed by the laws of the People’s Republic of China in the mainland,” and its Privacy Policy states that user data is stored in the PRC and governed by PRC law. Given that PRC law mandates cooperation with PRC intelligence agencies, these policies give the PRC broad latitude to access DeepSeek user data without the legal process that would be required in a rule-of-law country. The platform’s web page for account creation and user login also contains code linked to China Mobile, a company banned in the United States for its ties to the PRC military. Furthermore, SecurityScorecard identified “weak encryption methods, potential SQL injection flaws and undisclosed data transmissions to Chinese state-linked entities” within DeepSeek. President Xi Jinping signaled his intent to control data technologies as early as a 2013 speech, and DeepSeek offers an innovative platform to accelerate his pursuit of data dominance.
These developments pose risks and challenges for the administration. Trump’s actions aim to realign U.S. AI and regulatory policy to spur greater innovation and national competitiveness. Unless the administration is thoughtful and careful in drafting a new AI policy, however, it threatens to undermine safety and responsibility, impede the United States’ ability to confront the PRC about its irresponsible development of AI, and create unintended complications for AI companies. The current gap in federal AI policy guidance under Trump invites a patchwork of state-level regulation that creates roadblocks for companies and undermines the country’s ability to present itself as a strong international leader in AI development and data governance. Industry would benefit from a federally led approach, with Congress and the White House preempting state regulation and adopting sensible, consensus-driven steps for industry to take in developing cyber-safe AI. If the administration declines this approach, its AI innovation push risks undermining safety and security and creating a regulatory morass that hampers, rather than helps, U.S. AI development. Further, once harms are directly attributed to DeepSeek, the administration will have fewer options for addressing these issues with the PRC.
While it is highly unlikely that the White House will fully reverse course on AI safety, it can take two actions to improve the situation. First, the administration should preserve a narrow government role in assessing the cybersecurity implications of AI models. The administration is clearly, and lamentably, abandoning efforts to ensure that AI “address[es] risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias.” The White House can still, however, allow the federal government, whether through the National Institute of Standards and Technology or another agency, to evaluate the cybersecurity vulnerabilities and associated threats that each model presents. Doing so would help ensure a common understanding of which models act as a force multiplier for malicious cyber actors. Second, Trump should make a formal determination that DeepSeek presents a significant threat to U.S. national security and ban it under the law Congress passed to address TikTok. Such an action would not only address the threat DeepSeek poses in the United States but would also set an example internationally.
For U.S. allies and partners, DeepSeek presents a political quagmire reminiscent of Huawei: Privately, they recognize the risks the app poses to their privacy, security, and digital sovereignty, but publicly, they hesitate to act for fear of incurring Beijing’s wrath. While DeepSeek already faces significant problems in the European Union, other governments will likely hesitate to move against it. As unfortunate as DeepSeek’s lack of safety guardrails is, that fact presents an opening: Governments outside the United States can prohibit any AI model that fails to take safety into account or otherwise threatens privacy, security, or digital sovereignty. In assessing safety, one factor to consider is whether the app was produced in a country that requires, without legal process, the app developer to cooperate with government requests. Such a move would show that these governments are serious about promoting responsible AI and protecting their citizens from potential harm.
Matt Pearl is the director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Julia Brock is the program manager and research associate with the Strategic Technologies Program at CSIS. Anoosh Kumar is an intern with the Strategic Technologies Program at CSIS.