ChatGPT is one of the most advanced and rapidly evolving chatbots based on large language models, which is why we explored its use and limitations in mental health disorders. It performs well on tasks ranging from simple questions to complex medical examinations. While current technology cannot replace the expertise and judgment of skilled psychiatrists, it can assist in the early detection of mental health problems, patient evaluation, differential diagnosis, psychotherapy, and the planning and conduct of medical research. Related technology has already been applied to classify psychiatric disorders from neuroimaging data, to build models based on electroencephalograms (EEG), and to use a range of patient characteristics for diagnosing and predicting mental disorders. These deep learning models have shown good diagnostic accuracy, suggesting that genetics and registry data could be combined to predict both a mental disorder diagnosis and its progression in a clinically relevant, cross-diagnostic setting before clinical assessment.

New technologies can also support clinicians by reducing the heavy clinical workload and bureaucratic tasks, such as handling admissions and managing paperwork, that earlier research has linked to burnout, allowing them to focus more on direct patient care.

Chatbots can also be beneficial in psychotherapy. The therapist's emotions and the emotional alignment between therapist and client are crucial factors influencing the process and outcomes of therapy. A study conducted during the COVID-19 pandemic showed that technology can provide an effective first level of counseling support. This implies that GPT models may develop cognitive empathy over time, allowing ChatGPT to identify users' emotions with notable accuracy; nevertheless, systematic testing is needed to ensure a non-superficial comparison between human and artificial intelligence.
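Such testing can be made concrete. Below is a minimal, illustrative evaluation harness, not taken from the paper: it asks a GPT model, via the OpenAI Python SDK, to label the emotion in a message and scores the answers against gold labels. The model name, prompt, and test items are assumptions; a real evaluation would use a validated, human-annotated dataset and many more items.

```python
# Illustrative sketch only (not from the paper): a tiny harness for
# systematically testing how well a GPT model labels the emotion in a
# user message. Model name, prompt, and test items are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]

# Invented placeholder items; a real study would use an annotated dataset.
test_items = [
    ("I haven't slept in days and everything feels hopeless.", "sadness"),
    ("I finally got the job I wanted!", "joy"),
]

def classify_emotion(text: str) -> str:
    """Ask the model for a single emotion label from the fixed set."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute as needed
        messages=[
            {"role": "system",
             "content": "Label the dominant emotion in the user's message. "
                        "Answer with exactly one of: " + ", ".join(EMOTIONS) + "."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

correct = sum(classify_emotion(text) == gold for text, gold in test_items)
print(f"accuracy: {correct}/{len(test_items)}")
```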
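The diagnostic models mentioned at the start of this section follow a common recipe: extract features from a recording or registry, train a classifier, and report held-out accuracy. Below is a minimal sketch of that recipe, again purely illustrative, with synthetic numbers standing in for real EEG band-power features and diagnostic labels.

```python
# Illustrative sketch only: a small neural-network classifier trained on
# synthetic stand-ins for EEG band-power features (delta, theta, alpha,
# beta, gamma). All data here are randomly generated placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 200 "subjects" x 5 band-power features; labels 0 = control, 1 = case,
# made weakly dependent on the alpha/beta features so there is signal.
X = rng.normal(size=(200, 5))
y = (X[:, 2] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# AUC on held-out subjects as a simple stand-in for "diagnostic accuracy".
print("test AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```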

Illustration by Mariana Sofía Jiménez.
AI can also play a significant role in the prevention and early detection of mental health problems. Patients frequently turn to ChatGPT for information about their symptoms, possible diagnoses, and treatment options. Ensuring privacy and adhering to professional, ethical, and legal standards is crucial when processing training data. This is especially important in mental health settings, where the disclosure of sensitive personal information increases the risk of data misuse and of harmful advice. Current uses of ChatGPT in mental health care are constrained by its design as a general-purpose chatbot rather than a specialized psychiatric tool. Even so, the model is useful for routine psychiatric and administrative tasks. Because ChatGPT lacks clinical reasoning and experience, it can omit important clinical details from patient summaries and medical records. The most prudent approach is therefore to employ AI systems as supplementary tools for mental health professionals, used under close supervision to safeguard the safety and quality of patient care.

As GPT technology evolves, it holds significant promise for psychiatry, including integration into diagnostics, psychotherapy, and early detection of mental health issues. To deploy these advancements responsibly and effectively, it is crucial to develop and refine professional ethical standards and practice guidelines.
Info:
The article was presented at the Information Society 2024 conference, 7-11 October 2024, Ljubljana, Slovenia.
DOI: 10.70314/is.2024.chtm.6
Full text:
https://is.ijs.si/wp-content/uploads/2024/10/IS2024_-_CHATGPT_in_MEDICINE_paper_6-1.pdf