We live in a world of data. Many aspects of our lives depend on it. Yet many people struggle with data literacy, unable to decipher the meaning hidden in plain sight. Meanwhile, as machine automation advances at a rapid pace, public mistrust of AI and algorithmic decision-making is growing alongside it. Humans and machines each make their own mistakes, but together they can complement each other’s strengths. It’s time we rethought our approach to humans vs. machines to achieve the best of both worlds. Let’s look at a few scenarios.
In 2005 an 80-year-old man checked himself into the urgent care clinic of a big city hospital. We’ll call him Peter (although that was not his real name). He had a fever and a serious cough, but the doctors were optimistic that his pneumonia was manageable. The staff took him up to a ward to be given intravenous antibiotics.
At the same time, just a few curtains down the same corridor, another patient arrived. This man, a diabetic, had a nasty skin infection and he too needed to be admitted. The doctor ordered a fingerstick test to check his glucose levels and he was sent up to a ward.
In the confusion of a busy walk-in center full of people needing attention, a duplicate of Peter’s barcoded wristband ended up on the diabetic man’s wrist. So, when his glucose results were ready, they were automatically (and erroneously) uploaded to Peter’s electronic medical notes.
Meanwhile, over on the other ward where the real Peter lay, a resident doctor noticed his gigantically high blood sugar reading. She began to write up an equally gigantic order of 10 units of insulin – the correct thing to do, based on the test result. Little did she know that in Peter’s perfectly normal, non-diabetic bloodstream, that would have been enough to drop him into a coma, and would almost certainly have proved fatal.
But something gave the resident pause. Peter wasn’t down as a diabetic, he had no symptoms of diabetes, and there was no reason for him to have suddenly become hyperglycemic. Knowing the magnitude of the decision to administer the drug, she re-ran the test – and saved Peter’s life in the process. A human making the final decision was essential to a good outcome in this case.
Automation, when used naively and combined with non-validated data, can have the pernicious effect of weakening human vigilance. But human instincts also have a habit of letting us down. Half a century earlier, a young Danny Kahneman was tasked by the Israeli army with deciding which new recruits would become successful military commanders. He set about observing them while they performed a difficult group activity – using a plank of wood to get everyone over a wall without touching it – watching to see who showed natural signs of leadership. With Kahneman convinced he had done an excellent job of assessing their abilities, the recruits were sent off to perform their roles as fighter pilots, armored units, and infantry soldiers.
But then Kahneman decided he should validate his predictions. Checking back to see how the ‘stars’ had performed in officer training, he found they had often been woefully disappointing. Meanwhile, some of the recruits he had labeled as inadequate went on to excel. His assessments, based on all the nuance and experience of human judgment, had proved to be totally worthless.
Kahneman subsequently tried to come up with an objective way to measure the characteristics he thought might be relevant for military success. He wrote a list of the things he thought should be important – pride, sense of duty, capacity for independent thought and so on – and devised a questionnaire to quantify each characteristic. It was a sincere attempt to strip away the psychological biases in human decision-making, and in doing so he hit upon a system that actually worked, one that is still used by the Israeli army today.
All the evidence of the intervening decades backs up Kahneman’s findings: humans are a mess of indecision, bias, presupposition, mood, stress, fatigue, and mistakes. An automated process might not be perfect, but letting people have the final say will likely lead to more errors, not fewer. In a world where we struggle to understand data effectively, combining human and machine capabilities can bring a sharper focus to how we view the world around us and help us see clearly the opportunities it presents.
So how do you square the two? How do you choose between humans, who excel at understanding context and nuance but cannot make consistent decisions, and automated processes, which are far better at being objective but don’t understand the decisions they’re making?
The answer comes in recognizing that, while both humans and machines are flawed, they are flawed in different ways. When it comes to combining them, you could start, naively, by thinking about the technology first, and expect human operators to fill in the gaps of what the system can’t yet do. Or (better) you can do things the other way around.
The contrast between the technology-first and human-first approaches is well illustrated by the development of driverless cars in recent years. Humans aren’t very good at paying attention for long periods of time, and driverless cars with human monitors have struggled to live up to their early promise. Meanwhile, collision avoidance systems – which use much of the same technology – are a good example of building a system around the human, allowing the driver to remain alert and in control, but stepping in as an emergency backup when the driver has missed something or fails to react under pressure.
A partnership can go further, by freeing up the cognitive load on the human while still allowing them to sense check the results and take context and nuance into account. Think of sat nav systems that offer users three options to choose from, thus reducing the risk of a system directing people to drive off cliffs or out into the ocean (both of which really happened with earlier systems).
We live in a world where data is relevant to everything and is everywhere. Looking at how we use it – how we work with data to drive action and positive change – is a great place to start. This is the approach to automation that I’m hoping to see more of going forward.
The time for the rhetoric of humans vs. machines has passed. Instead, we need to move toward a future focused on exploiting the strengths of each and accommodating the weaknesses of both, aiming to create better partnerships between humans and machines.
Hannah Fry is Professor in the Mathematics of Cities at the Centre for Advanced Spatial Analysis at UCL. She is a best-selling author, award-winning science presenter and the host of popular podcasts and television shows. Hannah writes for the New Yorker, and her book Hello World – How to be human in the age of the machine won the 2020 Asimov Prize.