AI Diagnosed Patients 4x Better Than Doctors
A few recent conversations revealed just how far behind most people are on what AI can already do, especially in fields you wouldn’t expect.

Over the last couple of weeks I've had an unusual number of conversations about AI with people who aren't "in AI".
- 20% denial
- 70% cluelessness
- 10% enthusiasm
All of them at least a few years behind what's possible, although some do use ChatGPT's image model - yay!
I think the biggest surprise was the use of AI in the medical field (especially to those working in it).
Here's what I told them:
AI Beat Doctors at Their Own Game, By a Lot

Microsoft's AI system (MAI-DxO) diagnosed complex medical cases correctly 85% of the time.
Human doctors did just 20%.
That's not a typo. The AI was more than four times better at figuring out what's wrong with patients than experienced physicians with 5-20 years of practice.
How They Tested It
The researchers used real cases from the New England Journal of Medicine. The kind of super complex cases that stump multiple specialists. Think rare diseases, weird symptoms, cases that take teams of doctors to solve.
Unlike those multiple-choice medical tests AI usually aces, this was the real deal: start with a patient's symptoms, ask follow-up questions, order tests, and work your way to a diagnosis. Just like doctors do.
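That setup can be pictured as a tiny "gatekeeper" simulation. This is a minimal sketch of the idea, not Microsoft's actual benchmark: the case format, names, prices, and the trivial diagnosis rule are all hypothetical, and the real system obviously reasons over far richer data.

```python
# Toy sketch of a sequential-diagnosis setup (all names and data are
# hypothetical). A "gatekeeper" holds the full case; the agent only
# learns what it explicitly asks for, and each test adds to a cost tab.

class Gatekeeper:
    def __init__(self, case):
        self.case = case
        self.cost = 0

    def ask(self, question):
        # Follow-up questions are free; return the recorded answer if any.
        return self.case["answers"].get(question, "not documented")

    def order(self, test):
        # Ordering a test reveals its result and accrues its price.
        result, price = self.case["tests"][test]
        self.cost += price
        return result

# One hypothetical case: vignette, question answers, priced tests.
case = {
    "vignette": "34-year-old with fever after travel",
    "answers": {"travel history": "returned from a malaria-endemic area"},
    "tests": {"blood smear": ("ring forms seen", 25),
              "mri brain": ("unremarkable", 1200)},
    "diagnosis": "malaria",
}

gk = Gatekeeper(case)
clue = gk.ask("travel history")   # free follow-up question
result = gk.order("blood smear")  # targeted, cheap test
guess = "malaria" if "ring forms" in result else "unknown"
print(guess, gk.cost)             # right answer without the $1200 scan
```

The point of the design: an agent gets scored not only on reaching the right diagnosis but on *how* it got there, which is exactly why the cost comparison below is possible.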
It's Also Cheaper
And this is wild... The AI didn't just get more diagnoses right. It also spent less money on tests to get there. While doctors often order excessive tests (a well-documented source of wasted healthcare spending), the AI was more strategic about what it actually needed.
What This Means for You
If you're one of the 50 million people already asking health questions on search engines and AI tools, this tech could (at some point) give you much better answers.
For your doctor visits, imagine having an AI assistant that can consider every possible diagnosis at the same time. No more "I'm not sure, let me refer you to a specialist" when dealing with complex symptoms, and no more specialists ignoring every part of your body outside their area of expertise.
But AI can't replace that human touch, right?

Here's the part that challenges assumptions: AI might be better at the "human" side of medicine as well.
A study published in JAMA found that when healthcare professionals compared ChatGPT responses to doctor responses for patient questions, they preferred the AI 78.6% of the time. The AI scored 21% higher on quality and was rated 9.8 times more empathetic than human doctors.
In this cross-sectional study, a team of licensed healthcare professionals compared physicians' and the chatbot's responses to 195 randomly drawn patient questions asked publicly on a social media forum. The chatbot's responses were preferred over the physicians' and rated significantly higher for both quality and empathy.
So much for bedside manner being limited to humans.
Scope & Caveats:
- The human physicians in the Microsoft study were deliberately isolated from colleagues, reference texts, and digital tools to make a clean head-to-head comparison. Real-world clinicians rarely work that way.
- The cases are among the most difficult published. As far as I know, performance on routine primary-care problems is still untested.
- MAI-DxO is a research demo, not a cleared medical device. Regulatory trials, safety validation, and prospective studies are still ahead.
I think my point here is how much further along AI is in some areas than people think.
Cheers, Zvonimir