ChatMED in Medicine conference during Information Society 2024 – Ljubljana, Slovenia


The “ChatMED in Medicine” session, held during the Information Society 2024 Multiconference from October 7–10 in Ljubljana, Slovenia, brought together researchers, clinicians, and AI experts to explore the transformative role of Large Language Models (LLMs) in healthcare. The event showcased innovative approaches, critical discussions, and new methodologies aimed at integrating AI into medical diagnostics, communication, and patient care.

Highlights of the Conference

  1. Innovative Research Showcased:
    • Topics ranged from AI-augmented diagnostic systems and ethical communication frameworks to privacy-preserving AI deployment through federated learning. The presented works reflected the diverse and cutting-edge potential of AI in healthcare.
  2. First Best Paper Award:
    • The inaugural Best Paper Award was presented to Alexander Perko for his groundbreaking research on “Using Combinatorial Testing for Prompt Engineering of LLMs in Medicine”. His work introduced a novel testing methodology for optimizing AI prompts to enhance the reliability of LLM responses in medical settings. The study not only addressed the critical issue of AI “hallucinations” but also proposed a robust validation framework, paving the way for safer and more effective AI applications in healthcare.
  3. Featured Research:
  • AI-Augmented Diagnostic Systems:
    • Kennedy Addo presented a study exploring how deep learning algorithms improve diagnostic accuracy for diseases such as cancer, cardiovascular conditions, and neurological disorders. The proposed framework integrates AI into clinical workflows to support real-time decision-making, underscoring AI’s role in reducing diagnostic errors and enriching clinical insight.
  • Cultural Sensitivity in LLMs for Healthcare Communication:
    • The work by Gordana Petrovska Dojchinovska and colleagues emphasized the importance of tailoring LLMs such as GPT-4, LLaMA, and GatorTron to local linguistic and cultural needs. This ensures that patient-doctor communication is both accurate and culturally respectful, addressing challenges such as bias and ethical concerns.
  • Prompt Engineering for Reliable Medical Queries:
    • Alexander Perko and Franz Wotawa introduced a combinatorial testing methodology to optimize prompts for ChatGPT in medical contexts. This framework aims to minimize risks associated with misleading AI outputs by generating diverse query variants and validating responses against benchmarks.
  • Standards for LLM Use in Clinical Diagnosis:
    • Researchers from the University Clinical Center Niš examined the potential and current limitations of LLMs in clinical diagnostics. Their framework highlighted the need for robust, continuous evaluation and the integration of real-world data to enhance AI’s safety and effectiveness in healthcare.
  • The “HomeDOCtor” App: Extending GPT-4 for Slovenian Healthcare:
    • Matic Zadobovšek and collaborators developed the “HomeDOCtor” app, which integrates verified Slovenian medical knowledge into GPT-4. This app provides accessible health counseling, demonstrating how LLMs can be fine-tuned for regional healthcare needs while emphasizing data privacy and system reliability.
  • Testing ChatGPT for Medical Diagnostic Tasks:
    • A semi-automated evaluation compared ChatGPT’s performance in diagnosing medical symptoms against established medical expert systems. The study identified areas where the LLM aligns with professional diagnostic tools and areas where improvements are still necessary.
  • Applications in Mental Health:
    • Research from the University Clinical Center Niš explored ChatGPT’s applications in psychiatry, including routine administrative tasks, psychotherapy support, and diagnostic assistance. While emphasizing the irreplaceable role of human therapists, the study underscored AI’s potential in easing clinician workloads.
  • Federated Learning for Secure AI Deployment in Healthcare:
    • Zlate Dodevski and colleagues proposed a federated learning framework to address privacy and data-sharing challenges in training LLMs for healthcare. This approach enables collaborative model development across institutions while maintaining stringent data security standards.

The session underscored the importance of interdisciplinary collaboration and ethical AI development in healthcare. With its diverse presentations and recognition of outstanding research, the “ChatMED in Medicine” session set a high standard for innovation and responsibility in advancing AI for better patient outcomes.

Congratulations again to Alexander Perko for his exceptional contribution to the field!
