Using AI Instead of Therapy: What Works, What’s Risky, and How to Use It Safely


Written By: Undefeated Healthcare Editorial Team

Reviewed By: Chase Butala, MS, LPC, LCPC

12/30/2025

Introduction — why people are turning to AI for mental health

Large language models and purpose-built mental-health chatbots (like Woebot, Wysa and other conversational agents) are now widely available on phones and web browsers. They offer immediate, low-cost, always-on conversational support and skill-building exercises (CBT-style prompts, mood tracking, coping strategies). For some users this fills a real access gap: roughly half of people who could benefit from therapy do not receive it, and AI can act as a low-barrier entry point. Recent trials and reviews show promise for short-term symptom reduction in some contexts — but important limits remain. 

What AI-as-therapy looks like for different people (concrete examples)

  • The busy professional: Uses an AI chatbot for 10–15 minute CBT-style exercises between meetings (thought records, breathing prompts) when they don’t have time for a session. This can reduce immediate distress and reinforce skills learned in therapy.

  • The isolated young adult: Finds comfort in nightly conversations with a conversational agent that provides validation and sleep hygiene tips. This may reduce loneliness but risks creating emotional dependence if used as the only source of support.

  • The person with mild-to-moderate anxiety or depression: Complements therapy with AI exercises and mood tracking; randomized trials suggest some chatbots reduce symptoms short-term when designed with therapeutic frameworks.  

  • Someone with complex trauma or suicidality: May be tempted to use AI for coping — but this is high risk because current chatbots can fail to detect crisis cues reliably and can produce inappropriate or harmful responses. Professional oversight is critical.  

What the evidence shows (short summary with sources)

  • Meta-analyses and randomized trials report that some therapy-focused chatbots can reduce anxiety and depressive symptoms in short interventions and certain populations. These tools often blend CBT techniques and coaching-style prompts.  

  • Commercial products (Wysa, Woebot, and others) have published clinical evidence and are running further trials; the quality of that evidence varies by study design, duration, and population.

  • At the same time, researchers and professional bodies warn about misinformation, “hallucinations” (fabricated or unsafe content), biased outputs, and the lack of robust safeguards for crisis situations. Regulators and psychology organizations are increasingly scrutinizing AI tools.  

How AI can be helpful — and practical strategies to get the most from it

AI can be a useful adjunct — not a replacement — when used intentionally. Try these strategies:

  1. Use AI for skills practice and structure — ask for CBT exercises, guided breathing, or behavioral activation tasks and then do them.

  2. Be specific with prompts — the clearer and more focused your question, the better (e.g., “Help me make a 15-minute grounding exercise for panic symptoms” beats “help me calm down”). AI answers are shaped heavily by input.

  3. Treat outputs as hypotheses, not diagnoses — view suggestions as starting points to test, not definitive clinical advice.

  4. Keep a record — save useful exercises, wording that resonates, or patterns the AI notes about your mood. Bring those notes to a therapist.

  5. Set boundaries — limit use (for instance, not during crisis) and avoid using AI as the only support for severe or persistent symptoms.

  6. Verify facts and safety instructions — if AI gives specific medical/safety guidance, confirm it with a clinician or authoritative source.

These approaches use AI’s strengths (repeatable practice, accessibility) while reducing risks from vague prompts or overreliance. (See studies showing better outcomes when AI tools follow evidence-based CBT structure.) 

Key risks and limitations

  • Crisis detection and safety gaps: Many systems are not certified crisis responders and may fail to escalate or follow safe protocols reliably. People at risk of self-harm must contact emergency services or a crisis line, not rely on AI.  

  • Bias & feedback loops: AI reflects the data and prompts it was trained on — and the user’s own wording. That means you can receive responses that confirm your framing (confirmation bias) or reinforce unhealthy narratives. AI may underrepresent cultural nuance or produce biased suggestions.  

  • Hallucinations & misinformation: Large models sometimes fabricate facts, attribute false studies, or give unsafe “medical” advice. Always verify clinical recommendations.  

  • Emotional dependence & boundary blurring: Because AI is always available and nonjudgmental, some users can form attachments or avoid seeking human care, which can worsen outcomes for complex conditions. Recent reporting and professional warnings document these harms.  

  • Regulatory and privacy concerns: Rules vary by region and product, and conversations with a general-purpose chatbot may not be protected by the confidentiality rules that bind licensed clinicians. Some jurisdictions have begun restricting AI-only therapy, and regulators (the FTC, state licensing boards) are paying attention.

Why you should bring AI insights to a therapist

AI can help you notice patterns, practice skills, and generate questions — but a licensed clinician provides clinical judgment, nuance, diagnosis, evidence-based treatment planning, and risk management that AI cannot reliably deliver. Bringing your AI-generated notes, prompts that helped you, or concerning outputs into sessions lets a therapist validate, correct, and integrate those insights into safe, effective care. Professional organizations advise treating AI as a tool to augment — not replace — professional mental health services. 

Quick practical checklist (what to do right now)

  • Use AI for short exercises and tracking, not for crisis care.

  • Save and bring meaningful AI outputs to your therapist.

  • If AI gives advice about self-harm or specific medical guidance, stop and consult a clinician or emergency services.

  • Cross-check factual claims from AI against reputable sources.

  • If you notice increased reliance on AI or worsening symptoms, contact your mental health provider.

Final takeaways

AI offers powerful, accessible ways to practice mental-health skills and lower short-term barriers to support — and early research shows promise for specific, evidence-based chatbot programs. However, there are real risks: bias, hallucination, safety gaps, and emotional dependence. Use AI thoughtfully and always involve a licensed therapist (or crisis services when appropriate) to interpret findings, manage risk, and build long-term change. The best outcome combines the accessibility of AI with the clinical oversight of trained professionals. 
