As any or all of the V's increase, data rapidly outstrips human abilities to process, work with, and react to it. We need intelligent machines to help lift the cognitive burden of working with these massive datascapes, and to help us with the high-speed decision making that is now part of everyday business practice.
Andrew Ng pointed out in a recent Harvard Business Review article how artificial intelligence (AI) or machine learning can assist businesses today:
“If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”
We can have machines step in and take over these simple cognitive tasks, in the same way we created mechanisms to take on physical tasks. But as the task chains get more complex, decision dependencies creep in. Handing over decision control to algorithms accelerates decision making, but at that speed and scale the impact of errors and biases can be magnified. We may have handed over control of the decision, but rarely responsibility for the action or effect.
At the end of the day it’s people who feel the impact of errors, and it’s a person who is held accountable. Madeleine Clare Elish calls these situations ‘moral crumple zones’: the point where we have handed control over to the machine and it fails, leaving the human to absorb the impact of that failure. These can be small or large, from the human customer service representative who has to ‘soak up’ the emotions of a customer when an automated decision has gone against them (“computer says no”), to the mass misidentification and misclassification of individuals that reveals underlying bias in the model, or even the non-driving ‘driver’ held responsible for an accident caused by a ‘self-driving’ car. If we continue to use intelligent machines as high-velocity, automated decision systems, then we must build in oversight and the ability for humans to act and intervene if we are to avoid being caught in the moral crumple zones of the models.
But what if we didn’t think of AI as the unquestionable voice of logic, or as a way to relieve us of decision making? What if we instead saw it as a creative tool? Recently there have been many reports of machines creating art. But in almost all these cases it’s really that the machine has applied a process or technique to create a set of artifacts that reflect an artistic style. Even in the most elegant and sophisticated of these examples, the machine does not drive the work. The machine did not ‘decide’ to initiate it. The human has the creative intent, sets the project in place, and trains the machine. Art and creativity are ultimately about intention. That’s why, even when we want to hand over our decisions to machines, the impact of those decisions must remain our responsibility, as the intent was ours in the first place.