AI and Privacy in Therapeutic Work

The Problem

As therapists, we work with some of the most sensitive information people share. Medical records. Trauma histories. Family secrets. The kind of material that, if exposed, could genuinely harm someone.

Most current AI tools require sending this data to cloud servers owned by companies like OpenAI, Anthropic, or Google. Even with privacy policies in place, that’s an exposure I’m not comfortable with.

What I’m Exploring

I’m deep in the technical weeds learning about:

Local LLMs - Language models that run entirely on your own computer, no internet connection needed. Tools like Ollama and LM Studio make it possible to run a wide range of open-source models locally.

On-Device Processing - The technical architecture required to keep everything local while still being useful for clinical work.

Privacy-First Design - How to build tools that are secure by default, not as an afterthought.
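
To make that last point less abstract, here is a minimal sketch of what “secure by default” could mean for something as simple as a session note: encrypt it before it ever touches disk. This assumes Python’s cryptography package; the file names and the key handling are placeholders for illustration, not a finished design.

```python
# Minimal sketch: a session note is encrypted before it is ever written to disk.
# Assumes the `cryptography` package (pip install cryptography). The paths and
# key handling below are illustrative placeholders, not real key management.
from pathlib import Path

from cryptography.fernet import Fernet

KEY_PATH = Path("vault.key")           # hypothetical local key file
NOTE_PATH = Path("session_note.enc")   # only ciphertext is ever stored here


def load_or_create_key() -> bytes:
    """Load the local encryption key, creating one on first run."""
    if KEY_PATH.exists():
        return KEY_PATH.read_bytes()
    key = Fernet.generate_key()
    KEY_PATH.write_bytes(key)
    return key


def save_note(text: str) -> None:
    """Encrypt the note and write only the ciphertext to disk."""
    fernet = Fernet(load_or_create_key())
    NOTE_PATH.write_bytes(fernet.encrypt(text.encode("utf-8")))


def read_note() -> str:
    """Read the ciphertext back and decrypt it in memory."""
    fernet = Fernet(load_or_create_key())
    return fernet.decrypt(NOTE_PATH.read_bytes()).decode("utf-8")


if __name__ == "__main__":
    save_note("Placeholder text only - never real clinical content.")
    print(read_note())
```

It isn’t a complete answer (key management is its own problem), but it captures the default I’m after: plaintext never sits on disk.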

Current Tools I’m Testing

  • Claude Code - For building small applications
  • Local LLMs - Experimenting with models that can run offline (a quick sketch of what this looks like follows this list)
  • Obsidian - For note-taking with local storage
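
Here is the kind of small experiment I mean. Ollama exposes a local HTTP API on the machine it runs on, so a few lines of Python are enough to talk to a model without anything leaving the computer. This assumes Ollama is installed and running on its default port and that a model has already been pulled; the model name below is just an example.

```python
# Minimal sketch: prompting a locally running model through Ollama's HTTP API.
# Assumes Ollama is running on this machine (default port 11434) and a model
# such as "llama3" has already been pulled; the model name is only an example.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # local endpoint; nothing leaves the machine


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]


if __name__ == "__main__":
    # Placeholder prompt only; client information should never be pasted in here.
    print(ask_local_model("Summarize reflective listening in two sentences."))
```

If the network connection were switched off, this would still work; the whole exchange stays on localhost.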

The Learning Curve

I’m learning to code as I go: building small, malleable applications and reading people like Ethan Mollick and Simon Willison, who are thinking carefully about practical AI use.

The goal isn’t to become a software engineer. It’s to understand the technology well enough to build (or advocate for) tools that respect the fundamental privacy needs of therapeutic work.

Questions I’m Sitting With

  • What’s the minimum viable AI assistance that would actually help clinicians without compromising privacy?
  • How do we balance convenience with security?
  • What does “good enough” security look like in a therapeutic context?
  • Can local models match cloud models for the specific tasks therapists need?

More to come as this exploration continues.