Bringing AI into clinical practice:
observations from the domain of radiology

Bomi Kim
PhD candidate, KIN Center for Digital Innovation

“We should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists.”―Geoffrey Hinton, “Godfather of Deep Learning”, in 2016

Those five years have passed, and deep learning has certainly made dazzling technological advances. Still, why do we see so few real-life examples of deep learning applications in radiology, let alone deep learning outperforming radiologists? (To be fair, Geoff Hinton did later take back his words.) What are some of the remaining hurdles in bringing AI, not only deep learning, into clinical practice?

To provide a perspective on these questions, I combine insights from our recent study on the AI discourse of the radiology community [1] with insights from my two-year field observation at a tertiary hospital in Europe that holds a pioneering position on AI implementation.

While AI technology itself is far from complete, we find that many hurdles lie outside the scope of the technology. Below I summarize some of these hurdles, drawing on the two studies mentioned above, and derive practical lessons for both AI companies and healthcare institutes that wish to implement AI.

What are some hurdles in bringing AI into clinical practice?
  1. Technological solutions that are detached from real problems at work

Surprisingly many current AI applications are based on an insufficient understanding of how radiologists work in practice and what they need or want. Enticed by novel technical possibilities, some companies (unfortunately) set out to solve problems that do not exist, bring little value, or cannot easily be scaled up in practice. Technical solutions that are not grounded in the actual work of radiologists not only fail to reach the workstations; in the worst case, they waste years of effort, hard work and investment.

  2. Roadblocks in the chain of communication and decision making

Implementing an AI application in clinical practice is often a large-scale, inter-organizational project which entails communicating and collaborating with a complex web of stakeholders. Managers and IT specialists of the hospital may initiate the process with the AI company, but for optimal integration into the pre-existing workflow, end-users (clinicians) need to get on board, as does the provider of the clinical front-end system, which in radiology is the PACS (picture archiving and communication system). Having such a complex, dispersed network of stakeholders means that the project can hit various roadblocks and easily lose its momentum.

  3. Mismatch between expectations and current capabilities of AI

Radiologists’ expectations of AI vary greatly: some fear that AI will automate their work, while others believe that AI is no different from the computer-aided diagnosis and detection systems they have seen over the past decades. Yet others think AI will be a cure-all for the problems they encounter at work. Currently, most radiologists lack opportunities to encounter AI at work, and their exposure to the topic via conferences, webinars, news and scientific articles differs significantly.

Expectations at both extremes make the change process difficult and costly for the organization. On the one hand, pessimistic expectations fuel resistance to the change and to the implementation of a new technology; on the other, overblown expectations are soon deflated and quickly turn into distrust once clinicians see what AI presently has to offer.

How can these hurdles be overcome?
  1. Grounding the technological solution in actual work practices

To make sure that the AI application is not detached from the work practices of clinicians, AI companies should identify viable use cases from the start. The best way to do this is to involve ‘end’ users from the ‘beginning.’ Consider the quote below from a radiologist who validated a certified AI application, only to learn that it offered little clinical value:

“You really need the input from the clinical perspective (…) how it really works on different data, in a different hospital, in a different setting, (…) how to get it implemented? How would we [clinicians] like to use it? Because you can make an algorithm, but if you don’t know how to implement it, don’t get some grip on the ideas around that, then it’s a loss of effort”.

To answer such questions clearly, a deep understanding of how clinicians work is necessary. Common methods such as interviews and surveys have their limits, since not all problems can be neatly verbalized. Complementing them with on-site observation of work can be very helpful for gaining further insight into the situated work practices of clinicians.

Furthermore, AI companies should acknowledge the significant variations in work practices across countries, cultures, and types of institutes. Primary care institutes are likely to have different patient cases and a different tempo of work than tertiary institutes or cancer institutes, which can be crucial when implementing an AI application. The exact same application can be a viable investment in one context but not in another.

  2. Managing the chain of communication and decision making

To prevent the often large-scale, inter-organizational AI implementation project from stalling, it is very important for organizations to have dedicated personnel who oversee and coordinate the whole process. This includes seemingly trivial but in practice crucial tasks such as sending out reminders, clarifying responsibilities, continuously keeping everybody on board, and knowing whom to ask for what. Dedicated personnel can also run multiple steps of the implementation process in parallel to keep the project on schedule.

  3. Educating end-users to ground their expectations of AI

To prevent expectations from diverging and becoming detached from what the technology currently has to offer, organizations should educate end-users on AI and foster trust. Since AI has little presence in the everyday work of clinicians at many hospitals, additional efforts need to be made. Education can take the form of presentations by the leadership, lunch seminars, or demo sessions with AI companies.

What seems exceptionally effective in aligning expectations with “grounded” technological possibilities is showing concrete examples of what AI can do in clinicians’ everyday work context. For example, demonstrating exactly what the output of the application will look like and how it will be incorporated into the existing workflow seems to be an effective way to converge expectations and foster trust in AI, as one manager notes:

“If we can show them some kind of how this would work in clinical practice, maybe that also makes them rest assured. (…) I think it’s an expectation management. Also to show there is no malevolent force willing to take their work and this kind of stuff”.

Approaching AI implementation as a socio-technical process

If there is one message I would like readers to take from this blog post, it is that implementing AI is not merely a technological challenge, but also an organizational and cultural one. Hence, for a successful implementation of AI in clinical practice, AI companies and healthcare institutes should pay close attention to a wider scope of ‘the social’, beyond just ‘the technological’.

 

——————————————

[1] Kim, B., Koopmanschap, I., Mehrizi, M. H. R., Huysman, M., & Ranschaert, E. (2021). How does the radiology community discuss the benefits and limitations of artificial intelligence for their work? A systematic discourse analysis. European Journal of Radiology, 136, 109566.