An article that caught my attention recently, and spurred many late-night, ramen-fueled link-hopping binges, is The World After Coronavirus by Yuval Noah Harari, author of the acclaimed book Sapiens. Unlike the flood of articles that enumerate the impact COVID-19 is having second-to-second on our daily lives, Harari focuses on the world we will inhabit coming out of this pandemic. He paints COVID-19 as a catalyst that fast-forwards the usually lethargic march of history.
Decisions that in normal times could take years of deliberation are passed in a matter of hours. Immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger. Entire countries serve as guinea-pigs in large-scale social experiments. What happens when everybody works from home and communicates only at a distance? What happens when entire schools and universities go online?
The truth is that many of these “emergency measures” will be here to stay even after the crisis is alleviated, because there will always be a next crisis to prevent. Of course, none of these measures are inherently harmful or negative, but the expedited process of implementing them in an emergency can lead to unintended, long-term consequences if we aren’t careful.
In the article, he frames this pandemic as an “important watershed in the history of surveillance”. It could normalize the use of mass-surveillance tools in countries that have so far refrained from them, and it marks one of the first times “under the skin” surveillance (i.e., body temperature) has been deployed at mass scale. This could be the turning point for governments choosing between policing an ignorant population and promoting a self-motivated, well-informed one.
AI Solutionism and Algorithmic Determinism
This got me thinking: which milestone decisions in the tech industry has COVID-19 accelerated, and whose repercussions will reverberate long beyond this crisis? The main one I’ve observed is the decision to rely exclusively on machine learning (ML) and AI to power many of the processes that were previously only ML-aided or completed entirely by humans. This cursory decision to automate entire processes, necessitated by the resource shortages COVID-19 has caused, can set a dangerous precedent for relying on these nascent ML models long after the virus is eradicated.
AI Solutionism, the philosophical idea of using AI to solve every problem imaginable and the de facto religion of Silicon Valley, has been one of the most polarizing topics in the tech community for some time. The debate has raged for years, with even pop-culture icons weighing in, most recently Childish Gambino on his new album (highly recommended) in the track Algorhythm, which asks, “Life, is it really worth it? The algorithm is perfect.”
The debate has always been between the “utopian belief that our algorithmic saviour has arrived” and the “dystopian notion that AI will destroy humanity”. But the real question is not “can AI solve our problems?” but “should we use AI to solve our problems?”, because not everything that can be automated should be.
Give a small boy a hammer, and he will find that everything he encounters needs pounding.
In this article, aptly titled AI Solutionism, Dr. Polonski describes the perils of using machine learning, at least in its current fledgling form, to solve every problem. He recounts the disastrous recommendation to use AI in American courtrooms to calculate criminal risk scores that help judges make more “data-driven decisions”. At first glance this sounds harmless, even beneficial, but the AI-generated risk scores were found to amplify structural racial discrimination.
Simply adding a neural network to a democracy does not mean it will be instantly more inclusive, fair or personalized.
All these machine learning models and so-called artificial intelligences may not be as “intelligent” as you think. They aren’t magic, and no matter how capable they are at approximating latent patterns or structures in nature, they are only as good as the data they are trained on.
And the truth is: human biases bleed into our algorithms.
I’m not saying the particular programmer who created the model is biased, but rather that the underlying data it is trained on is. The world isn’t a fair, egalitarian utopia, no matter how much we fight for it to be. So when we train AI to calculate criminal risk scores against the historical backdrop of systemic, institutionalized racism in our country, how can we expect it to be anything but racist?
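To make that concrete, here is a toy sketch of the mechanism. It is entirely synthetic and my own illustration, not the actual courtroom system Polonski describes: two groups have identical underlying risk, but the historical labels are skewed against one group, and a proxy feature leaks group membership. A model trained on those labels dutifully reproduces the skew.

```python
# A synthetic sketch: bias in the labels becomes bias in the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical)
true_risk = rng.normal(0, 1, n)      # identical risk distribution for both groups

# Historical labels: a biased process over-flagged group B at the same true risk.
arrested = (true_risk + 0.8 * group + rng.normal(0, 1, n)) > 0.5

# Features the model sees: true risk, plus a proxy correlated with group.
neighborhood = group + rng.normal(0, 0.5, n)
X = np.column_stack([true_risk, neighborhood])

model = LogisticRegression().fit(X, arrested)
scores = model.predict_proba(X)[:, 1]
print(f"mean risk score, group A: {scores[group == 0].mean():.2f}")
print(f"mean risk score, group B: {scores[group == 1].mean():.2f}")
# Same true-risk distribution, systematically higher scores for group B:
# the model never saw "group", yet it learned the bias through the proxy.
```

No one in this pipeline intended to discriminate; the proxy did the work. That is what “human biases bleed into our algorithms” looks like at the level of code.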
The more we treat these machine learning models as black boxes, the more dangerous they become. In another article, titled Algorithmic determinism and the limits of artificial intelligence, Dr. Polonski details how these algorithms can only amplify our existing biases and deepen social divisions. Many of us are totally unaware that we are already delegating countless daily decisions to artificial intelligence. Where to eat. What songs to listen to. What articles to read. Which items to shop for. All these AI-driven decisions hidden under the veneer of convenience and utility.
These machine learning models can only use data from previous actions to predict our needs in the future. And this is quite problematic because, as Polonski says, “[machine learning] tends to reproduce established patterns of behavior, providing old answers to new questions … precluding our need for experimentation and exploration, while ignoring the multiplicity of our identity.” And thus societal progression becomes an ouroboros, a snake content with swallowing its own tail, refusing to shed its skin. The toy sketch below shows how quickly such a loop closes.
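This is my own minimal illustration, not anything from Polonski’s article: a recommender that optimizes purely on past clicks, shown to a user who mostly clicks whatever appears. Every number here is made up; the feedback loop is the point.

```python
# A toy rich-get-richer loop: recommend from past clicks, amplify early noise.
import random
from collections import Counter

random.seed(1)
topics = ["politics", "science", "art", "sports", "cooking"]
clicks = Counter({t: 1 for t in topics})   # start with no real preference

for _ in range(1000):
    # Recommend in proportion to what was clicked before...
    shown = random.choices(topics, weights=[clicks[t] for t in topics])[0]
    # ...and the user, seeing nothing else, usually clicks what is shown.
    if random.random() < 0.9:
        clicks[shown] += 1

total = sum(clicks.values())
for topic, count in clicks.most_common():
    print(f"{topic:9s} {count / total:.0%} of all clicks")
# Early random clicks get amplified; the mix drifts far from uniform,
# and exploration is crowded out: old answers to new questions.
```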
Pressure is to evolve, take a bite of the apple (Ooh)
We crush it into the sauce, how do we know the cost?
How do we know the truth without feeling what could be false? (Ooh)
Freedom of being wrong, freedom of being lost

Donald Glover, Algorhythm
Ethical AI
Now that COVID-19 has expedited the transition to algorithmic decision making, it is more important than ever, coming out of this emergency, to invest resources in ethical AI. There are already people working on important problems like setting up processes to audit algorithms for bias and defining what fairness even means. However, part of the responsibility falls on us: education, awareness, and pushing for policy changes that enforce accountability around machine learning applications.
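As a hint of what “auditing for bias” can look like in practice, here is a minimal, hypothetical sketch of one common check, demographic parity: whether a model flags two groups at very different rates. It is only one of several mutually incompatible fairness definitions (equalized odds and calibration are others), which is exactly why defining fairness is itself an open problem.

```python
# A minimal, hypothetical bias-audit sketch: the demographic parity gap.
import numpy as np

def demographic_parity_gap(flagged: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(flagged[group == 0].mean() - flagged[group == 1].mean())

# Stand-in data: binary decisions from some model, plus group membership.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, 1000)
flagged = rng.random(1000) < (0.20 + 0.15 * group)   # group 1 flagged more often

print(f"demographic parity gap: {demographic_parity_gap(flagged, group):.2f}")
# A large gap doesn't prove discrimination on its own, but it is exactly
# the kind of number an audit should surface and force someone to justify.
```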
The battle for ethical AI has even been brought to the world’s largest machine-learning conference, with groups pushing for “machine-learning papers to include a section on societal harms, as well as the provenance of their data sets”. Big tech companies like Facebook, Google, and Microsoft also have a major role to play here, and it should be a requirement for them to set up diverse ethics boards with actual decision-making power.
These are the kinds of processes we need solidly in place before our everyday lives become ever more dominated by algorithmic decision making. “We shape our algorithms; thereafter, they shape us.” So let’s make sure these algorithms paint a prettier picture than the one we live in now.