Please use this identifier to cite or link to this item: http://hdl.handle.net/10071/36775

| Author(s): | Santos, J. M.; Shah, S.; Gupta, A.; Mann, A.; Vaz, A.; Caldwell, B. E.; Scholz, R.; Awad, P.; Allemandi, R.; Faust, D.; Banka, H.; Rousmaniere, T. |
| Date: | 2026 |
| Title: | Evaluating the clinical safety of large language models in response to high-risk mental health disclosures |
| Journal title: | Practice Innovations |
| Volume: | N/A |
| Reference: | Santos, J. M., Shah, S., Gupta, A., Mann, A., Vaz, A., Caldwell, B. E., Scholz, R., Awad, P., Allemandi, R., Faust, D., Banka, H., & Rousmaniere, T. (2026). Evaluating the clinical safety of large language models in response to high-risk mental health disclosures. Practice Innovations. https://doi.org/10.1037/pri0000316 |
| ISSN: | 2377-889X |
| DOI (Digital Object Identifier): | 10.1037/pri0000316 |
| Keywords: | Large language models; Crisis intervention; Ethics; Mental health |
| Abstract: | As large language models increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular large language models (Claude, Gemini, DeepSeek, ChatGPT, Grok 3, and LLAMA) to user prompts simulating crisis-level mental health disclosures. Using a coding framework developed by licensed clinicians, the study assessed five safety-oriented behaviors: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in a global assessment, while Grok 3, ChatGPT, and LLAMA underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or kept the conversation open. These findings suggest that while large language models show potential for emotionally attuned communication, none currently meets satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure the ethical deployment of AI in mental health settings. |
| Peer reviewed: | yes |
| Access type: | Open Access |
| Appears in Collections: | CIES-RI - Articles in international peer-reviewed scientific journals |
Files in This Item:
| File | Size | Format |
|---|---|---|
| article_117447.pdf | 349.6 kB | Adobe PDF |