AI In Medicine

Self-taught systems beat MDs at predicting heart attacks:

All four AI methods performed significantly better than the ACC/AHA guidelines. Using a statistic called AUC (in which a score of 1.0 signifies perfect discrimination between patients who will and won't have an event), the ACC/AHA guidelines hit 0.728. The four new methods ranged from 0.745 to 0.764, Weng’s team reports this month in PLOS ONE. The best one—neural networks—correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms. In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved. That’s because prediction often leads to prevention, Weng says, through cholesterol-lowering medication or changes in diet.

To be honest, while it’s statistically significant, I’d have expected a bigger improvement than that. And it’s not clear how useful the prediction is if the recommendations that follow from it aren’t science-based, as prescribing cholesterol-lowering drugs or dietary changes generally isn’t.
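For context on the numbers above: AUC measures discrimination rather than accuracy. It is the probability that a randomly chosen patient who had an event receives a higher risk score than a randomly chosen patient who didn't, which is why 0.728 vs. 0.764 is a modest but real gap. A minimal sketch of the statistic, using invented scores and labels (not data from Weng's study):

```python
# AUC as the probability that a random positive case outranks a random
# negative case. Scores and labels below are illustrative only.

def auc(scores, labels):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked
    correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # model risk scores
labels = [1,   1,   0,   1,   0,   0]      # 1 = had a cardiac event

print(auc(scores, labels))  # prints 0.8888888888888888
```

One mis-ranked pair out of nine gives 8/9 ≈ 0.89 here; a perfect ranking would give 1.0, and coin-flipping gives 0.5.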

3 thoughts on “AI In Medicine”

  1. Only a 75% hit rate? You’re right, that isn’t very good. It suggests a “Garbage In, Garbage Out” problem: the records they are feeding the AI didn’t ask the right questions, and, like you say, we need better statistics on which recommendations actually work for which type of person.

    1. The bigger improvement is effectiveness. Such a system could process patient diagnostic data in real time and directly support medical staff. It wins in terms of speed, cost, and its ability to be continuously improved.

  2. I’d tweak the predictions so they hit the same false positive rate but fewer false negatives. I’m guessing the current guidelines have far too many false negatives given the consequences of missing a case. Even better would be to make the case for an even more sensitive “high risk of heart attack” threshold for the test.
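The trade-off this comment describes is a choice of decision threshold on the same risk scores: lowering the threshold catches more true events (fewer false negatives) at the cost of more false alarms. A toy sketch, with scores and labels invented for illustration:

```python
# Moving the decision threshold trades false negatives for false positives.
# Scores and labels are invented for illustration.

def confusion(scores, labels, threshold):
    """Return (false negatives, false positives) at a given threshold."""
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return fn, fp

scores = [0.9, 0.8, 0.6, 0.45, 0.3, 0.1]
labels = [1,   1,   0,   1,    0,   0]

print(confusion(scores, labels, 0.5))   # prints (1, 1): one event missed
print(confusion(scores, labels, 0.4))   # prints (0, 1): no misses, same alarms
```

Here dropping the threshold from 0.5 to 0.4 eliminates the missed event without raising any extra false alarms; in general the two error rates move in opposite directions, and where to sit on that curve is a clinical judgment, not a modeling one.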
