An online survey of UK GPs by the BMJ has revealed that one in five are using generative AI tools such as ChatGPT in clinical practice, despite a lack of training.
In a bustling general practice in East London, Dr. Sarah Bennett hastily finishes her last patient of the day. As the clock ticks towards five, she pulls up ChatGPT on her computer, an AI tool she has only recently started using. “It helps me summarize cases and draft patient letters,” she admits, a mix of relief and unease coloring her voice. Like Dr. Bennett, one in five of her GP colleagues across the UK are turning to generative AI—tools that promise efficiency but are still shrouded in uncertainty.
The Rise of AI in Healthcare
A recent survey published by the BMJ revealed that 20% of general practitioners (GPs) are using generative AI tools such as ChatGPT in their clinical practice. Conducted in February 2024, the survey sampled 1,006 GPs, 205 of whom reported actively using these digital assistants. Of those, nearly a third used AI to generate documentation, and a similar proportion to suggest differential diagnoses.
Dr. Charlotte Blease, a key author of the study and an associate professor at Uppsala University, states, “GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning.” However, she cautions that these benefits come with risks, such as the potential to embed subtle errors and biases into clinical work.
What’s Driving Adoption?
- Efficiency Gains: GPs are under immense pressure to manage workloads, and AI tools can streamline administrative processes.
- Lack of Training: Despite their growing use, GPs reported minimal training on AI tools, raising questions about proficiency and reliability.
- Support in Clinical Reasoning: Many GPs find AI helpful for decision support, albeit with caveats regarding accuracy.
Among the respondents, the most popular tool was ChatGPT. Dr. Bennett’s experience aligns with findings detailed by Dr. Blease, who noted that GPs value the AI’s capability to handle routine tasks. “Generative AI can condense vast amounts of information, yet the nuances of patient care may still elude it,” Dr. Blease adds, highlighting the need for caution.
The Risks of AI Integration
The burgeoning use of AI in clinical settings is not without its challenges. Experts are warning against an uncritical embrace of these technologies. Dr. Hafez Jamali, a healthcare technology researcher, suggests that “the risks associated with generative AI extend beyond technical errors.” He emphasizes patient privacy concerns, noting, “Doctors may inadvertently expose sensitive patient data to tech companies.” Dr. Blease echoes this sentiment, urging the medical community to safeguard patient information while leveraging AI’s benefits.
The survey’s findings also resonate with a broader study commissioned by the Health Foundation, which showed that 76% of NHS staff and 54% of the public support the use of AI in patient care. However, enthusiasm alone does not resolve the complexity of integrating these technologies responsibly into clinical workflows.
Voices of Caution
Despite the potential for AI integration, figures such as Dom Cushnan, director of AI, imaging, and deployment at NHS England, remain cautiously optimistic. “Our health systems must ensure that AI tools are evidence-based and appropriate for clinical use,” he stated during a conference on digital health. Cushnan voiced concern over the excitement surrounding generative AI, advocating for a rigorous validation process before these tools can find their place in healthcare.
Ensuring the Integrity of AI Tools
As the use of generative AI tools grows, the medical community is tasked with an important dual responsibility: to educate themselves and their trainees about not only the advantages but also the risks these tools pose. The potential for algorithmic biases and errors remains a pressing concern, as Dr. Bennett reflects on her newfound reliance on AI: “It’s like having an intern with you, but you need to double-check their work.”
This sentiment underscores a broader need in the medical field: the establishment of guidelines and policies surrounding the use of AI technologies. Dr. Blease articulates this necessity, stating, “The medical community must find ways to provide clear guidance on these tools, emphasizing both their utility and the ethical implications involved.”
Moreover, as surveys like the BMJ’s show a shift towards acceptance of AI, what remains essential is a balanced perspective: one that acknowledges the limitations of generative models and the paramount importance of patient safety. Excitement about AI’s potential should not overshadow the vigilance required to maintain integrity in healthcare practice.
For Dr. Bennett, the journey with AI is just beginning. “I see the potential, but we’re still finding our way. It’s like every new tool in medicine; we have to learn together, but cautiously,” she muses. As the landscape evolves, maintaining a focus on patient-centered care while exploring innovative solutions will be crucial—making it clear that the future of healthcare lies at the intersection of technology and compassion.
Source: www.digitalhealth.net