
Point/Counterpoint: Empowering Patients vs. ‘Do No Harm’ in the Age of AI


Artificial intelligence is playing a growing role in psychiatry and mental health care—providing chatbot therapy to patients, changing clinician workflows, and influencing the therapeutic alliance. How much of that is a good thing—and where do we go from here?


Introduction

As psychiatrists practicing in an era of rapidly evolving technology, we are responsible for meeting the clinical needs of the present and training the workforce of the future. As a past chair of APA's Council on Communications and a career medical educator, I have found that the voices of our trainees are often the most nuanced when considering the role of emerging technology in shaping practice, training, and professionalism. For this inaugural column in Psychiatric News' new "Point/Counterpoint" series, we have recruited two inspiring psychiatry residents to debate the ethics of artificial intelligence in psychiatry.

Should psychiatrists take a cautionary approach to AI due to the risk of disseminating misinformation and failing to observe clinical safeguards for patients in crisis? Or should we embrace AI as a pathway for patient autonomy and shared decision-making? We invite readers to consider the pros and cons and come to their own conclusions.

Empowering Patients

In medicine, we live in a state of managed tension, guided by two companion virtues: non-maleficence (the obligation to “do no harm”) and beneficence (the imperative to “do good”). Psychiatrists know this delicate balance well. We must at times involuntarily hospitalize a patient or administer medication against their will. This stripping of autonomy is justified only through a greater beneficence: the patient’s ultimate return to stability.

This same tension now sits at the heart of the debate over AI. Picture this now-common scenario: Your patient on a six-month waitlist has been using an AI chatbot for ad hoc therapy. On the surface, it's working, helping them manage their panic attacks. However, it has also systematically reinforced their medication anxiety, and they've abruptly stopped their lithium. When this comes to your attention, what do you do? And what about when a cutting-edge model provides well-reasoned counterarguments that directly challenge your recommendations?

The temptation to recoil is strong. It is also, I believe, a mistake. Our primary duty must be beneficence, which requires that we respect patient autonomy and engage with the world they live in. As clinicians, we are fallible, overworked, subject to cognitive biases, and not available 24/7. We should welcome new tools that help us be better. With proper, clinician-led guardrails, embracing AI is not a threat; it’s the most significant advancement for beneficence in decades. It enhances patient autonomy, mitigates burnout, and helps us communicate more empathetically.


Argument 1: Patient Autonomy

If the internet first challenged the doctor-patient paradigm, AI is the force that will decisively reshape it. Patients are moving from passive recipients to active, informed participants. The era of "Dr. Google" was a chaotic prologue, flooding patients with information that was frequently useless, false, or terrifying. Is this different? Yes, dramatically: The new generation of AI offers personalized education and triage that is rigorously benchmarked. One frontier model, GPT-5, scored above 95% on average across USMLE Steps 1 through 3 (Yang et al., 2025). This far exceeds the average physician's scores, and while "hallucinated" inaccuracies remain possible, it demonstrates a level of medical knowledge that is undeniably sophisticated.

The result is a fundamentally different encounter. Bolstered by AI, patients arrive more informed and ready for true shared decision-making. Now, instead of debunking a vague, internet-fueled fear like "Will this medication make me a zombie?" we can immediately engage in a productive discussion about valid concerns, such as akathisia or metabolic side effects. This democratizes medical knowledge, shifting the old paternalistic hierarchy toward a collaborative partnership. We may fear that this jeopardizes the therapeutic alliance, but the opposite is true.

We practice in an environment of fractured trust. While trust in the U.S. health care system plummeted from 71.5% to 40.1% post-pandemic (Perlis et al., 2024), "my provider" remains the single most trusted source for health information (Edelman, 2025). Simultaneously, the majority of physicians agree that misinformation has increased, creating a toxic "fog" that our patients must navigate (The Physicians Foundation, 2023).

In this context, where patients are drowning in misinformation and trust us personally but not the system, a vetted, evidence-based AI is not the enemy; it is our ally. It is a force multiplier for rational thought, corroborating our counsel long after the patient has left the office. The alliance is broken when we act as gatekeepers, dismissing the patient’s own research. It’s strengthened when we partner with them, collaboratively reviewing the information—even AI-generated—within the safety net of our clinical judgment.

Argument 2: Burnout

AI also functions as a direct instrument of beneficence by acting as an antidote to burnout. It can liberate clinicians from the crushing administrative burdens that steal time from human-centered care. AI-powered scribes can save clinicians thousands of hours per year on documentation (Tierney et al., 2025). Those hours represent the restoration of our practice, of human connection, of eye contact. Executed thoughtfully to offload administrative tasks, AI has the power to revive the soul of medicine while improving efficiency and extending reach.


Argument 3: Empathetic Communication

Perhaps counterintuitively, AI can help us become more effective and empathetic communicators. A 2025 systematic review found that in 13 of 15 studies, AI chatbots were perceived as significantly more empathetic than human health care professionals (Howcroft, 2025). Yes, the AI isn’t “feeling,” and we have to be cautious about sycophantic responses, but it is a “co-writer” able to execute the principles of empathetic communication that we, busy or fatigued, may forget.

Early clinical data are also measuring up: A 2025 randomized controlled trial of a generative AI therapy bot demonstrated a 51% reduction in depressive symptoms, with users reporting a "therapeutic alliance" comparable to human therapy (Heinz et al., 2025). Accordingly, the American Psychological Association has issued new ethical guidance on AI, underscoring that while these tools are promising, they require rigorous human oversight (American Psychological Association, 2025).

The Imperative to Act

The ethos of “do no harm” is not an obstacle to AI but a solvable design challenge. The risks of misinformation, algorithmic bias, and data privacy are real. But we don’t manage these risks by rejecting the tools. As a clinician who develops and implements AI-based technologies for both patients and clinicians, I don’t see a mysterious black box. Rather, I see a new clinical tool that, like any other, demands deliberation, governance, and clinician leadership.

At our most conservative, we could adopt a “harm reduction” approach and acknowledge that AI is here and it’s time to reckon with it. But I would argue that our duty of beneficence demands more than a passive stance; it calls for a proactive role today involving active oversight, rigorous validation, and clinician-led systems. This mirrors how we guide a patient on internet research: We partner with them to maintain the clinical safety net.

The paternalistic "doctor-knows-all" model is over. If we fight the AI tide, we risk being seen as Luddites, acting not in our patients' best interest but in the self-interest of preserving our own supremacy. The risks of AI are manageable; the risk of failing our duty of beneficence by forgoing AI's benefits is far greater.

References