Unearned generalizations are less applicable

September 22, 2020

Suppose you hear Eric Topol mention that IBM’s Watson—which you remember from Jeopardy!—has fallen significantly short of its medical aspirations. That’s neat, huh? Everyone is talking about how AI is going to take over the world, yet IBM spent a bunch of money on this supercomputer and it’s still bad at medicine. You take this generalization, become pessimistic about AI, and feel compelled to mention Watson’s failings whenever related topics come up. But this is all you know: Watson sucks at being a doctor.

This generalization doesn’t help you reason about what is hindering AI progress, whether other teams are faring better, which approach IBM took and how it could be improved, or whether a robot will greet you at your doctor’s appointment next decade. And since you can’t elaborate when queried, it’s not even a particularly compelling piece of information to share.

In contrast, this generalization is wonderfully useful to Dr. Topol. It is the result of his in-depth study of the field; he knows the nuances. For him, the generalization acts merely as an anchor for knowledge and reasoning about things like the current state of AI in medicine, which approaches are unworkable, how fast the field is moving, etc.

Earning generalizations is necessary for real learning.