AI, global health, and racism

All models are wrong. Some are useful.

Mark Shrime, MD, PhD

--

Back in 1976, the British statistician George Box wrote,

“All models are wrong, but some are useful.”

Back then, artificial intelligence wasn’t a thing. Large language models, like those behind ChatGPT or Google Bard, were the stuff of imagination. And we were decades away from being able to ask Midjourney to create a picture of Jeff Bezos as a K-pop star.

[Image: Jeff Bezos as a K-pop star. Source: Midjourney Discord, daily theme, 17 August 2023]

Simpler times.

But… now, with a few keystrokes, we can transform the erstwhile richest man in the world into whatever we want. Which means journalists can write breathless thinkpieces about the dangers of AI, while others extol its messiahood. Today, I want to do neither of those and look, instead, at AI for what it is: a model. One that’s wrong, and also useful.

AI and the white savior

Let’s start with a fairly shocking paper by Alenichev, Kingori, and Grietens, published last week in The Lancet Global Health.

First, some background.

Global health has a colonialism problem. In the way we talk about our work, we have been guilty of stereotypes: those that imply that global health is something that happens to other people, over there; those…
