There are real-world problems that can only be solved using AI and machine learning. I get that. If you want to provide actionable odds on the next pitch in baseball, you're going to need a system that can build models from billions of stats in real time. Not a great place for a fallible human, even a Rain Man or a team of them.
However, my concern is with using machine learning to solve problems that teams of experts can already solve.
The headline here is: "How an AI Sent an Early Warning about the Coronavirus Outbreak"
That, of course, is total bullshit.
When you read a little deeper in the article, you see this:
However, AI doesn’t eliminate the need for epidemiologists and virologists who are fighting the spread on the front lines. For example, BlueDot uses epidemiologists to confirm its algorithm’s results.
So AI is the lead here, and not the epidemiologists confirming its results?
There is established science around epidemiology and disease, and a whole class of humans who use data and algorithms to predict these things.
So scientists create and validate those models, but the AI is somehow getting credit? The idea that scientists haven’t been warning about and watching for this kind of respiratory disease for years is utter inanity.
Attributing to AI what humans are accomplishing contributes to the moral confusion people have around bots and AI. I would argue that articles like this are unethical: they make AI sound like it has something to say about these matters of deep expertise.
Finding patterns is what AI does well. Warning humans about the spread of disease does not require some new generation of tools to detect those patterns; it requires our governments to stop prevaricating and focus on protecting their citizens from COVID-19 and from economic woes.
We don’t need to listen to AI, we need to listen to the CDC.
Oh, and wash your hands.