Protecting Privacy in a Post-Roe AI World
Putting Consent, Safety, and People First in the Age of Health Tech
This post is for nonprofit and sexual and reproductive health (SRH) leaders who want to explore AI responsibly, without compromising safety, equity, or care.
Picture this: a teenager living in a trigger-law state that has banned abortion uses a chatbot late at night to ask how to access abortion care. They assume the chat is private. But behind the scenes, their messages and metadata are stored, unencrypted, and potentially accessible to vendors, platforms, or law enforcement.
In states where abortion is banned or criminalized, even searching for or discussing abortion access can be risky. Digital conversations that are poorly secured could be subpoenaed or misused as evidence, particularly in cases where people self-manage abortions or help others do so. That kind of vulnerability isn’t just a design flaw. It is a real threat to bodily autonomy and legal safety.
AI is already showing up in sexual and reproductive health spaces. Chatbots answer sensitive health questions. Scheduling tools predict no-show rates. Data dashboards track contraceptive supply levels in clinics. At its best, AI can help overwhelmed teams do more with less, especially in underfunded and overstretched environments.
But not every tool is built with care or with your community in mind. The decisions we make today about AI will not just shape our tech stack. They will shape who gets access, who gets left behind, and whose safety is put at risk. That is why ethics, privacy, and equity cannot be afterthoughts. They need to be part of the design from the start.
Privacy Is a Reproductive Rights Issue. Full Stop.
After Roe v. Wade was overturned, digital privacy became a frontline concern for SRH providers and advocates. Suddenly, data that once seemed harmless, like a user’s messages with a chatbot or the dates entered into a period tracker, could be weaponized.
In Nebraska, a teen and her mother were charged after Facebook messages about acquiring abortion pills were obtained by police through a search warrant. That case set off alarms across the reproductive justice field. In states with abortion bans, data from apps that track menstrual cycles, store sexual health histories, or record appointment bookings may be subpoenaed or sold to third parties.
To protect users, organizations must:
Audit every AI system that touches personal data
Encrypt everything, always (see the sketch after this list)
Avoid collecting more than you need
Stay away from third-party plug-ins that mine user behavior
Write privacy policies in plain language
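To make the second and third items concrete, here is a minimal sketch of what data minimization and encryption at rest could look like in a Python chatbot backend. It assumes the open-source cryptography package, and every field name in it is hypothetical.

```python
# A minimal sketch of data minimization plus encryption at rest for chat logs.
# Assumes a Python backend and the open-source "cryptography" package
# (pip install cryptography). Field names here are hypothetical.
from cryptography.fernet import Fernet

# In production the key would live in a secrets manager, never in code.
KEY = Fernet.generate_key()
fernet = Fernet(KEY)

# Collect only what the service actually needs: no names, emails,
# IP addresses, or device identifiers.
ALLOWED_FIELDS = {"session_id", "message_text", "timestamp"}

def minimize(record: dict) -> dict:
    """Drop every field that is not strictly required."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def encrypt_message(text: str) -> bytes:
    """Encrypt a chat message before it is written to storage."""
    return fernet.encrypt(text.encode("utf-8"))

def decrypt_message(token: bytes) -> str:
    """Decrypt only when a legitimate, audited need arises."""
    return fernet.decrypt(token).decode("utf-8")

# Example: store the minimized, encrypted record instead of the raw chat.
raw = {
    "session_id": "abc123",
    "user_email": "teen@example.com",  # dropped by minimize()
    "message_text": "How do I access abortion care?",
    "timestamp": "2024-05-01T02:14:00Z",
}
safe = minimize(raw)
safe["message_text"] = encrypt_message(safe["message_text"])
```

Encryption protects data sitting in storage, but the strongest protection is still not keeping sensitive messages at all, or deleting them on a short, published schedule.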
In a post-Roe world, digital privacy isn’t just a technical issue. It is a matter of safety, dignity, and reproductive freedom.
Bias Is Built In. That Is Why We Have to Build Against It.
AI systems do not start from scratch. They learn from existing data. And existing data reflects a society shaped by racism, sexism, ableism, and more.
One widely used healthcare algorithm assigned lower risk scores to Black patients with the same health conditions as white patients, simply because it used past healthcare spending as a proxy for need. This ignored systemic barriers to care.
Fertility tracking apps have also been found to underperform for users with irregular menstrual cycles. This includes many people with PCOS, endometriosis, and other hormonal conditions. These conditions disproportionately affect Black, Latinx, and disabled communities.
To design against bias:
Use diverse, representative training datasets
Test across multiple user identities and health contexts
Build equity checks into your development process (see the sketch below)
Engage people with lived experience of exclusion
Bias will not fix itself. We have to design it out, deliberately and continuously.
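One way to make an equity check concrete: before launch, compare how a model performs for each group it will serve, not just on average. Below is a minimal sketch in Python, assuming pandas and a table of predictions with self-reported demographic labels; all column names are hypothetical.

```python
# A minimal sketch of a disaggregated equity check: compare accuracy and
# false-negative rate across self-reported demographic groups instead of
# relying on one global average. Column names below are hypothetical.
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Summarize model performance separately for each subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub["prediction"] == 1) & (sub["actual"] == 1)).sum()
        fn = ((sub["prediction"] == 0) & (sub["actual"] == 1)).sum()
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": (sub["prediction"] == sub["actual"]).mean(),
            # A high false-negative rate means real needs are being missed.
            "false_negative_rate": fn / (tp + fn) if (tp + fn) else float("nan"),
        })
    return pd.DataFrame(rows)

# Example: flag any group whose false-negative rate is notably worse than the rest.
# report = subgroup_report(predictions_df, group_col="race_ethnicity")
# print(report.sort_values("false_negative_rate", ascending=False))
```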
Transparency Is Not a Feature. It Is a Foundation.
In 2023, the mental health platform Koko gave users responses generated by an AI chatbot without telling them. About 4,000 people unknowingly received messages written with a machine's help. The backlash was swift.
In SRH, where trust is fragile and stigma is real, transparency is not optional.
Make it obvious:
Say “This chat is powered by AI.”
Do not bury it in fine print
Give users a clear opt-out or human contact option
Explain what data is stored and for how long (see the sketch below)
Transparency builds trust. Without it, your tech risks undermining the very care it was meant to support.
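As one illustration, here is a minimal sketch of how a chatbot backend could attach the disclosure, opt-out path, and retention notice to every reply instead of relying on fine print. All names and wording here are hypothetical, and any retention promise shown to users has to match what your systems actually do.

```python
# A minimal sketch of building disclosure into every chatbot reply rather than
# burying it in fine print. All names and wording here are hypothetical.
from dataclasses import dataclass

DISCLOSURE = "This chat is powered by AI, not a person."
OPT_OUT = "Type TALK to reach a human health educator instead."
RETENTION = "Messages are deleted after 24 hours and are never sold or shared."

@dataclass
class BotReply:
    answer: str
    disclosure: str = DISCLOSURE
    opt_out: str = OPT_OUT
    retention: str = RETENTION

def render(reply: BotReply) -> str:
    """Return the answer with the disclosure and notices attached, every time."""
    return f"{reply.disclosure}\n\n{reply.answer}\n\n{reply.opt_out}\n{reply.retention}"

print(render(BotReply(answer="Here is how emergency contraception works...")))
```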
Real Equity Starts with Listening
When Planned Parenthood built "Roo," its chatbot for teens, they did not guess what young people wanted. They asked.
They interviewed high school students around the country. The feedback was clear: anonymity mattered. So did a tone that felt respectful and real. One student said, “I do not want it to sound like a parent or a principal. I want it to sound like someone who gets it.”
That input shaped everything, from the bot’s language to how it handled sensitive questions.
Build equity by:
Co-designing tools with your users
Compensating community members for their input
Testing with diverse users, not just internal staff
Creating feedback loops and advisory boards
Equity does not just mean good intentions. It means shared power, accountability, and responsiveness.
Practical Guardrails You Can Put in Place Now
You do not need to be a tech company to lead with integrity. Start here:
1. Create an internal AI ethics policy
Follow the World Health Organization’s guidance on the ethics and governance of AI for health to define principles like safety, non-discrimination, consent, and fairness.
2. Run an AI risk assessment
Use the CDC’s digital tools framework to check for bias, legal exposure, or privacy gaps before you launch.
3. Audit and improve continuously
Do not launch and leave. Monitor outcomes. Build in user feedback. Adjust as needed.
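For step 3, a lightweight feedback loop can be enough to start. Here is a minimal sketch, assuming a Python backend: it logs anonymous thumbs-up or thumbs-down ratings and flags the tool for human review when satisfaction dips. The threshold and field names are illustrative, not prescriptive.

```python
# A minimal sketch of "do not launch and leave": log user feedback on each
# AI answer and flag the tool for human review when satisfaction drops.
# Threshold and field names are illustrative, not prescriptive.
import json
from datetime import datetime, timezone

FEEDBACK_LOG = "feedback_log.jsonl"
REVIEW_THRESHOLD = 0.80  # trigger a human audit if helpful ratings fall below 80%

def record_feedback(session_id: str, helpful: bool) -> None:
    """Append one anonymous thumbs-up or thumbs-down rating to a local log."""
    entry = {
        "session_id": session_id,  # no names, emails, or IP addresses
        "helpful": helpful,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def needs_human_review() -> bool:
    """Return True when the share of helpful ratings drops below the threshold."""
    with open(FEEDBACK_LOG, encoding="utf-8") as f:
        ratings = [json.loads(line)["helpful"] for line in f if line.strip()]
    if not ratings:
        return False
    return sum(ratings) / len(ratings) < REVIEW_THRESHOLD
```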
When ethics are baked into your process, not patched on afterward, your technology becomes part of the solution, not another layer of harm.
Up Next in the Series
If you are still figuring out where AI fits into your work, you are not alone. Many SRH organizations are in the early stages, curious about what is possible but unsure where to start or how to decide whether AI is the right tool for a specific challenge.
That is exactly where we are headed next.
In our upcoming post, we will help you build confidence around the basics:
How do you know if AI is the right fit for a problem you are trying to solve?
What kinds of tools are out there, and which ones actually support mission-driven work?
What questions should you ask before piloting anything?
We will walk through a simple decision-making framework and share real-world examples that show how SRH organizations can test small ideas without overcommitting to big investments.
Whether you are AI-curious or cautiously skeptical, this next step will help you build clarity and move from hype to helpful.
Make sure you are subscribed so you do not miss it.
Have you started using AI in your SRH work? Hit reply or leave a comment. We would love to hear what is working (and what is worrying you).

