The Ethical Minefield of AI: A Guide for Clinicians
Your clients are using AI mental health tools between sessions, whether you know it or not. Here's the ethical framework every clinician needs before recommending one of these tools, or before responding when a client brings one into treatment.
One of your clients — maybe more than one — is using an AI companion between your sessions. They may have mentioned it. They likely haven't. In the absence of a clinical conversation about it, they are making their own determination about what the tool is, what it's for, and whether you need to know.
This is not a hypothetical future problem for the profession. It is a present one. And the ethical framework most clinicians are working from was not built for it.
A Rapidly Emerging Liability Landscape
The legal and ethical scaffolding around AI in clinical settings is still being constructed in real time, but the outlines are becoming clear — and they place significant responsibility on the practicing clinician.
A 2025 scoping review in JMIR Mental Health of ethical challenges in conversational AI and mental health care found that clinicians are increasingly exposed on three fronts: informed consent (whether clients understand what AI tools are and are not capable of), responsibility attribution (who is liable when an AI tool produces a harmful output during treatment), and duty of care (whether a clinician has a professional obligation to address AI use they become aware of).
A 2023 analysis in Frontiers in Digital Health argued that framing AI mental health tools as "your robot therapist" creates a category confusion with serious ethical weight: clients who believe they are in a therapeutic relationship with an AI tool may be disclosing, forming attachments, and making decisions on the basis of that belief. As the paper notes, one cannot give fully informed consent to a process one does not understand.
The Informed Consent Question
Here is the question the literature is increasingly forcing: if you recommend an AI tool to a client as a between-session support, what is your informed consent obligation?
A 2018 analysis in JMIR Mental Health examining direct-to-consumer digital psychotherapy applications found that most platforms effectively limit their own liability through terms of service, but that this does not relieve the recommending clinician of their ethical obligations. The clinician who recommends a tool retains a duty to ensure the client understands what they are consenting to, including what data is collected, how it is used, and what the tool cannot do in a crisis.
A 2025 paper in Healthcare (MDPI) offering ethical decision-making guidelines for mental health clinicians in the AI era identified informed consent and client autonomy as the two pillars most frequently compromised by current AI mental health tools — and recommended that clinicians develop a documented consent process specific to any AI adjunct they incorporate into treatment.
The Duty of Care Problem in Crisis Moments
The most acute ethical exposure is not the routine between-session journaling app. It is the moment when a client in distress turns to an AI tool instead of, or before, calling a crisis line or contacting you.
This is not theoretical. Research has documented cases of AI tools mishandling suicidal ideation; a Brown University study published in late 2025 identified mishandling of crisis situations as one of fifteen distinct ethical violation categories across major AI chatbot platforms. The failure mode is consistent, and it is documented.
The ethical question for the clinician: if a client disclosed to you that they had told an AI companion they were thinking about suicide, and the AI responded inadequately, what is your professional and legal position? Do your session notes reflect that you addressed AI use? Is there a documented clinical rationale for why you did or didn't address it?
These questions do not yet have settled answers. But they are being asked in professional liability contexts now, and the clinicians who have not addressed them proactively are more exposed than those who have.
A Working Framework
The literature does not argue that AI tools have no place in clinical support. What it argues is that the clinician's ethical obligations do not disappear because a third-party tool is involved. A practical framework, drawn from the 2025 guidance reviewed here, suggests three minimum standards.
First, ask. Make AI tool use a standard intake and check-in question, the same way you ask about medications or alcohol. Normalize the conversation so clients know they can disclose it without judgment.
Second, document. If you discuss a client's AI tool use — or deliberately choose not to restrict it — record the clinical reasoning. This is not bureaucratic self-protection. It is the same documentation standard you apply to any clinical adjunct.
Third, distinguish. There is a meaningful clinical difference between an AI tool that supports structured reflection and makes no claim to crisis support, and one that positions itself as a therapeutic relationship. The research supports adjunct use for low-acuity applications. It does not support unsupervised high-acuity use. Your recommendation should reflect that distinction.
At EQ, we built our platform explicitly around clinician partnership rather than clinician replacement. The data we generate is emotional metadata — never the content of what a client writes. The clinician remains the responsible party. We are the instrument, not the practitioner.
The ethical minefield is real. But navigating it thoughtfully is how the profession earns the trust that the technology alone cannot provide.
Learn more about the EQ clinician partnership model at my-eq.com/partner-with-eq.
Citations:
Rahsepar Meadi, M., Sillekens, T., Metselaar, S., et al. (2025). Exploring the ethical challenges of conversational AI in mental health care: scoping review. JMIR Mental Health.
Khawaja, Z., & Bélisle-Pipon, J.-C. (2023). Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health.
Martinez-Martin, N., & Kreitmair, K. (2018). Ethical issues for direct-to-consumer digital psychotherapy apps: addressing accountability, data protection, and consent. JMIR Mental Health.
Pillay, Y. (2025). Ethical decision-making guidelines for mental health clinicians in the artificial intelligence (AI) era. Healthcare (MDPI).
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research.