Volume, Variety, Velocity

What do the Three V's of data mean 15 years after their introduction by Doug Laney?

When Gartner’s Doug Laney introduced the 3 V's of data (volume, variety and velocity) back in 2001, he framed the need for machines to step in and help us understand our data.

As any or all of the V's increase, data rapidly escapes our human ability to process it, work with it and react to it. We need intelligent machines to help lift the cognitive burden of working with these massive datascapes and to support the high-speed decision making that is now part of everyday business practice.

Andrew Ng pointed out in a recent Harvard Business Review article how artificial intelligence (AI) or machine learning can assist businesses today:

“If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

We can have machines step in and take over these simple cognitive tasks in the same way we created mechanisms to take on physical tasks. But as the task chains get more complex, decision dependencies creep in. Handing over decision control to algorithms accelerates decision making, but with that speed and scale the impact of errors and biases can be magnified. We may have handed over the control of the decision, but rarely the responsibility for the action or effect.

At the end of the day it’s people who feel the impact of errors, and it’s a person who is held accountable. Madeleine Clare Elish calls these situations ‘moral crumple zones’: the point where we have handed control over to the machine and it fails, leaving the human to absorb the impact of that failure. These failures can be small or large: the human customer service representative who has to ‘soak up’ the emotions of a customer when an automated decision has gone against them (“computer says no”); the mass misidentification and misclassification of individuals, revealing underlying bias in the model; or the non-driving ‘driver’ held responsible for an accident caused by a ‘self-driving’ car. If we continue to use intelligent machines as high-velocity, automated decision systems, then we must build in oversight and the ability for humans to act and intervene if we are to avoid being caught in the moral crumple zones of the models.

But what if we didn’t think of AI as the unquestionable voice of logic, or as a way to relieve us of decision making? What if we instead saw it as a creative tool? Recently there have been many reports of machines creating art, but in almost all of these cases the machine has simply applied a process or technique to produce a set of artifacts that reflect an artistic style. Even in the most elegant and sophisticated of these examples, the machine does not drive the work. The machine did not ‘decide’ to initiate it. The human has the creative intent, sets the project in place and trains the machine. Art and creativity are ultimately about intention. That’s why, even when we want to hand over our decisions to machines, the impact of those decisions must remain our responsibility: the intent was ours in the first place.

So how about focusing on that intent? After all, that’s why new, interesting, challenging, innovative things happen. Let’s consider leveraging the machine for its ability to explore and iterate around a problem, not simply to supply a binary answer. There has been some amazing work in this area, often called generative design, in which a discrete problem space is explored algorithmically at the hands of the designer. A great example is the Autodesk and Bionic studio redesign of the Airbus A320 partition. The design team used algorithms based on natural processes to generate and iteratively improve the partition structure, creating a wall that’s 45% lighter than the original while achieving the same load-bearing strength. It’s that active engagement with the machine as a tool, rather than as a black-box decision switch, that makes it interesting.
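The explore-and-iterate loop behind generative design can be sketched in a few lines. This is a deliberately toy illustration, not the Autodesk/Airbus method: the real work used sophisticated nature-inspired algorithms and real structural simulation, whereas every function, load value and "strength" model below is invented for the sketch. The shape of the process is the same, though: propose many candidate designs, keep only those that still meet the requirement, and iterate from the lightest survivor.

```python
import random

# Hypothetical per-member loads each section of the partition must carry.
LOADS = [0.2, 0.5, 0.9, 0.3, 0.7, 0.4, 0.8, 0.6]

def feasible(thickness):
    # Invented strength rule: a member of thickness t safely carries load t*t.
    return all(t * t >= load for t, load in zip(thickness, LOADS))

def weight(thickness):
    # Invented weight model: total material used.
    return sum(thickness)

def generate_candidates(base, n=20, step=0.05):
    # Generative step: mutate the current design into n candidate variants.
    return [[max(0.05, t + random.uniform(-step, step)) for t in base]
            for _ in range(n)]

def generative_design(base, iterations=200):
    best = base
    for _ in range(iterations):
        for candidate in generate_candidates(best):
            # Keep a candidate only if it still bears its loads AND is lighter.
            if feasible(candidate) and weight(candidate) < weight(best):
                best = candidate
    return best

random.seed(0)
base = [1.0] * 8                      # uniform, over-engineered starting design
optimized = generative_design(base)
print(weight(base), round(weight(optimized), 2))
```

The loop thins members that carry little load and leaves heavily loaded ones thick, so the result is lighter than the uniform design while every member still meets its strength requirement. The designer sets the intent (the loads, the constraints, the objective); the machine explores the space.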

Finding ways to use intelligent machines not to absolve us of decisions, but to help us rapidly explore new ways of working and fresh ideas, is a far more interesting future than simply being left accountable for the mess.

Photo credit: Dan Ruscoe via Foter.com / CC BY

