
Hype, hope, and hard lessons are suddenly everywhere when it comes to artificial intelligence. But the turbulent technology has long made waves in healthcare: from IBM Watson's failed foray into the field (and the long-held hope that AI tools might one day beat doctors at detecting cancer in medical images) to the now-realized problem of algorithmic racial bias.

But behind the public churn of fanfare and failure lies the chaotic, largely untold reality of health systems quietly working to introduce AI tools. New research led by researchers at Duke University, posted online as a preprint, opens the door to these messy implementations and delves into the lessons learned. Drawing on interviews with 89 professionals involved in deployments at 11 medical institutions, including Duke Health, Mayo Clinic, and Kaiser Permanente, the authors assemble a working framework that health systems can follow when they attempt to deploy new AI tools.

And new AI tools are popping up all the time. Just last week, a study in JAMA Internal Medicine found that ChatGPT (version 3.5) decisively beat doctors at providing high-quality, empathetic answers to medical questions people posted on the subreddit r/AskDocs. The responses, judged subjectively by a panel of three physicians with relevant medical expertise, suggest that AI chatbots such as ChatGPT could help tackle the growing burden of medical messages submitted through online patient portals.

This is no small problem. Increased patient messages are associated with higher rates of physician burnout. According to the study authors, an effective AI chat tool could not only ease this exhausting burden and free up doctors to focus their efforts elsewhere, but could also reduce unnecessary clinic visits, promote patient adherence to medical guidance, and improve overall patient health outcomes. In addition, better messaging responsiveness could improve equity by providing more online support to patients who are less likely to schedule appointments, such as those with mobility issues, work restrictions, or fear of medical bills.

Real AI

As with many of the possibilities for AI tools in healthcare, that sounds like a great deal. But there are some major limitations and caveats to this research that make the real potential of the application harder to gauge than it seems. For one, the kinds of questions people post on a public forum don't necessarily represent the ones they would ask of a doctor they know and (hopefully) trust. And the quality and types of answers volunteer physicians provide to random people on the Internet may not match those they give their own patients, with whom they have established relationships.

But even if the core results of the study held up in real doctor-patient interactions through real patient portal messaging systems, chatbots would need a lot more to reach their lofty goals, according to the preprint study led by the Duke researchers.

To save time, AI tools must be well integrated into a health system's clinical applications and into each physician's established workflow. Clinicians would need reliable, in some cases round-the-clock, technical support in case something goes wrong. And doctors would need to strike a balance of trust in the tool: one in which they don't blindly pass AI-generated responses to patients without review, but also know they won't have to spend so much time editing responses that it nullifies the tool's usefulness.

And after managing all of that, a health system would have to build an evidence base showing that the tool works as hoped in its particular setting, which means developing systems and metrics to track outcomes such as health status.

These are heavy demands on an already complex and cumbersome healthcare system. As the preprint's researchers note in their introduction:

Drawing on the Swiss cheese model of pandemic defense, every layer of the healthcare AI ecosystem currently contains large holes, making the broad diffusion of poorly performing products inevitable.

The study identified an eight-point framework based on the steps that executives, IT leaders, and front-line clinicians take when deciding on an implementation. The process involves: 1) identifying and prioritizing a problem; 2) identifying how AI could potentially help; 3) developing ways to measure the AI's outcomes and successes; 4) figuring out how to integrate it into existing workflows; 5) validating the safety, efficacy, and equity of the AI in the healthcare system prior to clinical use; 6) rolling out the AI tool with communication, training, and trust-building; 7) monitoring it; and 8) updating or decommissioning the tool over time.



