Guarding What Matters: A Nonprofit Perspective on the DeepSeek Dilemma
By Dellen Burk-Flores
Working in the nonprofit sector, we often juggle limited resources, tight budgets, and the need to adopt technologies that help us make a bigger impact. But what happens when the very tools we rely on to advance our mission turn out to be wolves in sheep's clothing? Enter DeepSeek, an AI platform that arrived like a well-timed gift. Is a tool that seems too good to be true actually a Trojan horse in disguise?
"Free" Technology vs. Custom Solutions
Nonprofits are no strangers to the allure of free or low-cost tech solutions. At dsmHack, we understand this appeal firsthand, which is why our annual charity hackathon is dedicated to creating safe, customized, and thoughtful solutions for nonprofits at no cost to them. We believe technology should empower, not exploit. Every tool we build is designed with the nonprofit's mission at its heart, ensuring organizations can focus on what they do best: serving their communities with confidence and integrity.
At dsmHack, it is our responsibility to safeguard information. Our volunteers collaborate closely with nonprofit leaders to develop solutions that prioritize security and sustainability—without hidden strings attached. We uphold strict ethical boundaries and carefully limit the data we access to ensure the organization’s safety is at the heart of every decision we make.
So how do we feel about the DeepSeek AI model? Keep reading to find out…
[Image: A Trojan horse made of computer parts, generated by Dellen's MidJourney Bot in Discord.]
The Hidden Cost of Convenience
DeepSeek's shiny exterior masks a deeper reality. Beneath its sleek interface, it harvests keystrokes, tracks device IDs, logs user inputs, and maps our digital habits. Under Chinese law, companies can be compelled to share their data with the government, and DeepSeek is no exception. For nonprofits handling sensitive information about vulnerable populations, donors, and community partners, this isn't just a breach of privacy; it's a violation of trust.
Furthermore, like other AI models, DeepSeek is prone to hallucinations—generating false information and presenting it as fact. For nonprofits, especially those in the health, legal, and financial sectors, the risk of sharing inaccurate information can have serious consequences. Accuracy isn’t just a matter of credibility; it directly impacts the well-being, trust, and safety of the communities these organizations serve.
DeepSeek's open-source release presents an additional layer of risk when organizations host it themselves. While open-source platforms offer transparency and collaboration, they can also expose critical vulnerabilities when not properly secured. Bad actors can exploit weaknesses, inject malicious code, and compromise security on a scale that proprietary software typically guards against. This openness, without rigorous safeguards, turns a promising tool into a ticking time bomb.
Security tests have exposed DeepSeek as more than flawed; it is a liability. Sensitive data, including internal logs and software keys, has been left exposed, and even basic cybersecurity measures can't contain the risks. For organizations that depend on confidentiality, this is unacceptable, and potentially catastrophic.
According to a recent report by Qualys, DeepSeek failed over half of the jailbreak tests designed to assess its security resilience. These tests reveal glaring weaknesses in its ability to prevent unauthorized data access, allowing malicious actors to bypass safeguards and manipulate the system. These failures aren't technical oversights; they are fundamental flaws in DeepSeek's architecture. Without robust security measures, even casual bad actors could exploit the platform, putting nonprofit data at significant risk.
Lastly, ethical AI isn't just a buzzword; it's a necessity, especially when the technology interacts with sensitive data and vulnerable communities. The AI Code of Ethics emphasizes principles like transparency, accountability, privacy, and security, and serves as a guiding framework to ensure systems are developed and deployed responsibly.
No AI system is foolproof, but DeepSeek fails in ways that are, quite frankly, egregious. Bias, security gaps, and ethical challenges persist in all man-made systems, which is why continuous evaluation, improvement, and ethical reflection are critical. DeepSeek not only ignores these ethical imperatives but actively undermines them through opaque data practices and insufficient security measures.
The Illusion of Cost-Effectiveness
DeepSeek boasted about its minimal development costs, claiming efficiency that seemed too good to be true, and it was. The figures conveniently omit critical expenses like infrastructure, robust security protocols, and comprehensive data management systems. It's akin to claiming you've built a state-of-the-art community center for $100,000 while forgetting the cost of land, utilities, safety inspections, and long-term maintenance. The real price of DeepSeek isn't reflected in its budget sheets; it's hidden in the risks it transfers to its users.
What DeepSeek doesn’t advertise is its true revenue model: data mining. While the platform appears free, it profits from the vast amounts of information it collects from users. This data becomes a stolen commodity, sold or leveraged for financial gain. In essence, the system isn’t designed to serve your organization; it’s designed to extract value from it. This hidden cost undermines the illusion of savings, revealing a business model that profits from compromising the very data nonprofits strive to protect.
Why Nonprofits Can’t Afford to Be Naive
Our work is built on trust—with donors, beneficiaries, and the communities we serve. When that trust is compromised, the ripple effects are profound. The lesson from DeepSeek is clear: convenience must never come at the cost of security. Even with VPNs, burner accounts, and firewalls, if the core system is compromised, we are all vulnerable.
What We Can Do Now
Audit Your Tech Stack: Review every tool your organization uses, especially those that are free or low-cost, and understand where your data goes and who controls it. Even open-source code can be full of hidden surprises.
Prioritize Security: Invest in secure platforms, even if they come with a price tag. The cost of a data breach far outweighs the savings from free software.
Educate Your Team: Cybersecurity isn’t just an IT issue; it's an organizational responsibility. Train staff to recognize red flags and practice safe digital habits. When using Artificial Intelligence, ensure they understand the associated risks and how to implement safeguards to prevent the exposure of sensitive information.
Educate Yourself on Local Policies: Chapter 715 of the Iowa Code is a great place to start.
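One way to put the "Educate Your Team" step into practice is to make sure sensitive details never leave your systems in the first place. The sketch below is a minimal, hypothetical example in Python: it scrubs obvious identifiers (emails, phone numbers, Social Security numbers) from text before that text is sent to any hosted AI service. The patterns and the `redact` helper are illustrative assumptions, not a complete solution; a real deployment would need patterns tuned to your organization's data.

```python
import re

# Hypothetical redaction patterns; extend these for the kinds of
# sensitive data your organization actually handles.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholders before the text
    leaves your systems (e.g., before any call to a hosted AI API)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Example: a donor note that should never reach a third-party model verbatim.
note = "Donor Jane Roe (jane.roe@example.org, 515-555-0123) pledged $5,000."
print(redact(note))
```

A simple gate like this is no substitute for vetting the platform itself, but it gives staff a concrete habit: nothing goes to an external model until it has passed through redaction.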
Final Thoughts
In our pursuit of impact, we can’t lose sight of the fundamental need to protect the data entrusted to us. Every new tool tests how well we balance convenience with security. Let’s choose wisely. In the end, it’s not about being paranoid—it’s about being prepared. Because when trust is your currency, you can’t afford to gamble with it.