Community feedback: Shadow AI is worse than expected... physicians trust AI over their co-workers
Last week, I wrote about ungoverned ChatGPT adoption in hospitals. I received an incredible amount of feedback - here are the best parts.
When I mentioned shadow AI in my last post (healthcare professionals who simply download and use ChatGPT for clinical tasks), I wasn’t sure if people would care.
160k impressions and 500 reactions on LinkedIn later, I learned that shadow AI is far more widespread and severe than I anticipated. I'll share and comment on the best bits of feedback below.
1) Healthcare professionals trust LLMs more than their peers
Greg Pollock, Research Director at the cyber risk company UpGuard, shared an impressive survey. First, they asked employees across industries about their shadow AI usage. More importantly, they also asked about trust: would you rather trust the AI or your co-worker?
I’ll use his words:
Overall use of unapproved AI tools was comparable to other industries (around 75% of respondents) but in Healthcare they were less likely to be part of a daily workflow compared to other industries. That frequency of use may increase if other industries are taken as examples of what’s to come.
When asked to rank trust in AI tools vs coworkers, managers and search engines, workers in Healthcare skewed the strongest toward managers and AI tools. They even said they trust peers less than those working in Finance.
As for how to make money off shadow IT, the gold standard is Microsoft's enterprise license agreement. Let the doctors do whatever they want, monitor usage at the network level, and at the end of the year OpenAI comes back with a bill... for the hospital.
His last point is important: If OpenAI decides to intensify its sales efforts towards hospitals, it could directly monetize this shadow adoption. It could guarantee responsible use and install guardrails for a tool that's already rolled out - a convenient offer for hospitals.
So, will OpenAI do that? With their latest update to ChatGPT's usage policies, it seemed for a moment as if they had turned away from healthcare. The policies now explicitly warn against relying on ChatGPT for medical advice. Some people prematurely concluded that ChatGPT would no longer give medical advice at all (spoiler: it still does). However, it's fair to assume this is just a legal maneuver and that OpenAI won't ignore such a lucrative market long-term.
2) Hospitals are reacting - some have officially banned ChatGPT
Some readers reminded me that, from the top down, ChatGPT use is often already banned. I'm not aware of any ChatGPT bans in German hospitals, but in Australia this seems to be the case:
Are these bans effective at all? I am very skeptical. Humans are humans and will ignore them; providing physicians with a better alternative seems like the ideal solution. Many people in the community seemed to agree. I'll quote Arne from Clinomic here:
In my personal network, using ChatGPT isn’t really a taboo among healthcare professionals anymore. It rather shows you’re proactive and exposing yourself to the inevitable AI trend.
3) Is healthtech like fintech? And which LLM is best?
Two further feedback points were brought up repeatedly:
Many pointed out that shadow AI is not just a healthcare problem: across industries, shadow AI usage is surprisingly similar, especially in financial services, which is also highly regulated and risk-averse. I'll quote UpGuard's survey once again:
Others asked: Is ChatGPT even the best tool to use in hospitals? What about other free LLM tools like Perplexity? It's a relevant question: do healthcare professionals actively debate it? Do they have a favourite? If anyone has data on the "market share" of ChatGPT, Gemini, Perplexity & co. in hospitals, please let me know.
My takeaway: The tension between secret ChatGPT usage and slow official procurement needs to be resolved. In the US, OpenEvidence seems to be the winner in this space. Last I heard, they were processing 8.5m (!) requests from registered physicians per month, earning them a $3.5B valuation.
Can someone actually build an “OpenEvidence for Europe”? I don’t mean product-wise, but in terms of viral distribution…
Speak soon,
Lucas