Image: SFD Media LLC
Meet My Co-Therapist. Its Name Is Borg.
If you’ve ever left a therapy session thinking, “I’m not quite sure I understood that,” you’re not alone. These days many clients don’t wait until the next session to figure things out. They open a chatbot and ask for clarity. In seconds, they get an answer that seems to make sense—and then comes the uneasy feeling:
Is this shortcut helping you grow or undermining the whole point?
Secretly Loving It or Losing Your Soul?
As a therapist I have daily conversations with clients about AI, and many have used a chatbot between sessions. What they get from these “conversations” is immediate, and the quality seems good, but they have mixed feelings about it. They’re concerned about their data and privacy, and, quite frankly, some are asking, “If AI is so good, why am I paying you?”
So is AI in therapy a tool, a threat, or both? And where’s the line?
On one hand, there are undeniable benefits. Clients who live in rural areas, who work odd hours, or who appreciate at least the appearance of anonymity online may find themselves looking for an always-on, cost-efficient, nonjudgmental confidant who won’t show up as a room parent in the same kindergarten class. This human-like companion remembers the details you give it and consolidates all the information it’s been fed.
But cracks appear quickly.
Dr. Cyborg doesn’t catch the nuance in a long pause, or the hesitation in your voice when you say “yeah” with a rising inflection. AI, for all its “wisdom,” is also programmed with bias, including a high degree of agreeableness and a default to siding with you. That means its outputs carry risks of distorted information, hallucination (the term used when AI makes up something inaccurate or nonexistent), or plain bad advice. And then there’s the most human concern of all: confidentiality. Who owns those late-night confessions once they’re typed into the cloud?
Illinois recently forced the issue by becoming the first state to regulate AI in therapy, although the American Psychological Association (APA) has been all over this since at least mid-2023. It’s now illegal there for AI programs to deliver therapy without a licensed professional overseeing them. Chatbots can’t make diagnoses or set treatment plans on their own. Therapists can use AI as an administrative helper for things like scheduling, note-taking, and organizing referrals, but clients must give informed consent, and violations come with fines of up to $10,000 per case.
Illinois’ move to draw clear boundaries on AI in therapy isn’t just about protecting clients; it also reflects what many therapists are already wrestling with behind the scenes.
The Other Side of the Couch
Clinicians are split. Some see AI as a promising tool, a way to efficiently document sessions and review literature while optimizing their time and expertise. Others worry about blurred boundaries, agency, privacy, and accuracy.
Dr. Jodi Halpern, a bioethicist, recently said, “psychotherapies … are based on developing vulnerable emotional relationships with a therapist. I’m very concerned about having an AI bot replace a human in a therapy that’s based on a vulnerable emotional relationship.” Kirstin Aschbacher, PhD, an associate professor and data scientist at the University of California San Francisco, agreed and expanded on this point: “Somewhere along the way, we have lost touch with the unexpected power of deep attunement to others, in a non-judgmental, curious, and caring way that gives rise to healing.”
Without a human to temper the information, ensure it aligns with the client’s therapeutic needs, and identify a corresponding intervention, AI simply tries to please the user by producing an answer, and it can offer nothing beyond what it finds on the internet or in the sources it was fed.
As Shannon Vallor so succinctly put it, AI tools simply “generate new content that looks or sounds right. Whether it’s right is another matter altogether.”
Digital Answers Come at a Price
So, what does responsible use look like? Consider the “stranger on the bus” test: Would you tell your deepest, darkest struggles to a stranger sitting next to you on a bus? Don’t just fork over your information unless you know the rules, the costs, and the benefits. Ask questions like: Where is your information stored? How is it being used? Is a human reviewing it? Any health care provider that uses AI should be able to give you those answers, and also give you a chance to say “Nope!” if that’s how you feel about it.
But AI shouldn’t be looked at only as a threat, and the future of therapy isn’t a choice between humans and machines. AI won’t replace the comfort of a human nod, the warmth of shared silence, or the feeling of being truly seen, but it can sharpen the tools in your therapist’s hands and open doors to care that once felt locked.
The danger comes when we stop questioning, hand over our privacy, and forget that machines are built to serve profit—not people.
Stay curious.
Stay skeptical.
And above all, remember: The strongest tool in any therapy—AI or not—will always be your own voice.
*******
MEDICAL ADVICE DISCLAIMER
DISCLAIMER: This website does not provide medical advice. For health or wellness-related content, SFD Media LLC emphasizes that information about medicines, treatments, and therapeutic goods (including text, graphics, and images) is provided for general information only. No material on this site is intended to substitute for professional medical advice, diagnosis, or treatment. Users are advised to independently evaluate and verify the accuracy, reliability, and suitability of the information before relying on it. You should not rely on the content as a substitute for professional medical advice. Consult with a physician or other health care professional for any health concerns or questions you may have. SFD Media LLC is not responsible for any action taken based on the information provided on this website. The use of any information provided on this website is solely at your own risk.