In 2011, IBM entered the healthcare space with a bold promise to transform medicine with its artificial intelligence system, Watson. They quickly forged alliances with the biggest names in healthcare, including Sloan Kettering, Mayo Clinic, CVS Health, and Johnson & Johnson. Breathless claims of Watson’s potential made headlines everywhere, spurred on by a winning appearance on Jeopardy. The future was here, or so it seemed.

A decade later, Watson is for sale. What went wrong?

The story of IBM Watson is a cautionary tale for any technology that puts marketing before results. IBM led with a utopian vision that it couldn’t back up with evidence, technology, or the resources to make it work. And that’s a shame, because as the most visible clinical AI in the world, Watson’s failure overshadows the real-world impact clinical AI is already having.

We are now far beyond the early days of clinical AI that IBM Watson entered into. Over the last decade, evidence has accumulated demonstrating that clinical AI can improve patient outcomes, raise the quality of care, and lower costs. Now, depending on which study you cite, somewhere between 70% and 90% of hospitals have an AI strategy in place, whether they’ve adopted the technology already or plan to do so in the future.

Indeed, when implemented correctly, AI can empower clinicians to make more informed decisions for their patients that ultimately save lives. The key phrase here is “when implemented correctly.” IBM brought the big ideas, but putting them into practice was an afterthought. Predictably, its top-down approach was met with resistance.

Doctors spend roughly a decade of their lives learning how to be doctors, and then they continue to learn throughout their careers. It’s only natural for them to be skeptical that an AI system would know their patients better than they do. I should clarify here that clinical AI is not designed to replace the judgment of clinicians, but rather to augment it with information they may not have been aware of or had at their disposal. Despite this, misconceptions about AI’s role in healthcare persist, and the idea that AI replaces doctors is another source of resistance.

If there’s anything we’ve learned from years of AI implementations, it’s that failing to anticipate this resistance is a recipe for failure. Tech giants from IBM to Apple to Facebook have a habit of focusing on the revolutionary potential of their technology. But when it comes to healthcare, the finer details of how it works in practice cannot and should not be glossed over. IBM was right about AI’s transformative potential for healthcare, but that potential depends on AI being communicated and understood by its end users.

To be sure, building that understanding requires thorough education and training. But in my experience, what’s more important is trust. AI is often perceived as a black box, with little transparency into how it arrives at its insights and recommendations. For AI to be effective, clinicians need to trust that these insights will help them do their jobs better. That trust can only be built by taking the time to listen to and understand clinicians’ goals, concerns, motivations, and frustrations. Clinicians need to be involved from the start in any AI implementation; it can’t be imposed on them without their buy-in.

Cultivating clinician champions is another effective strategy for building the trust and confidence necessary for a successful AI implementation. These champions are the clinical leaders who are involved early on in the implementation, and who can advocate on behalf of AI and influence their peers. After all, clinicians are more likely to trust the peers they work with every day than a technology vendor from the outside.

Clinicians also need to trust that the AI they use is free from bias and won’t worsen existing inequities in healthcare. This is certainly a valid concern — a widely used clinical AI algorithm recently made headlines for prioritizing care for white patients over Black patients. Bias can’t be an afterthought; it should be actively addressed in AI’s development and implementation and communicated to users. To prevent bias, AI should be trained on data representative of the populations it is used on. It should also incorporate data on social determinants of health into its analysis, helping clinicians understand the social and economic causes of existing health inequities and how to mitigate them in their patients.

Another key to a successful AI implementation is understanding how clinicians operate on a day-to-day basis. Clinicians already spend two hours with their EHR for every hour with patients; they don’t need another administrative burden taking up their time. AI should complement clinicians’ existing workflows rather than adding to them. Otherwise, it breeds resentment and frustration.

Finally, clinicians need to see that AI is delivering the results they were promised. When they see the impact in terms of fewer unplanned admissions, readmissions, sepsis cases, or whatever other metric the AI targets, it’s easier to trust the technology. These results won’t happen overnight, but when they do, clinicians will be reassured that the AI insights they use each day are making a difference.

At the end of the day, IBM Watson’s demise was the inevitable result of putting the cart before the horse. They asked care teams to trust their technology without putting in the work necessary to build that trust. But although the era of clinical AI may have started with Watson, it certainly doesn’t end with Watson. As AI becomes more ubiquitous in healthcare with each passing month, there are important lessons to be learned here about what it takes to make the idealistic promises of AI a reality.

