I recently hosted a roundtable breakfast discussion on ethics, specifically the ethical challenges that arise through the capture, storage, mining, and exploitation of more data than ever before.
Most of the attendees were senior executives from banks and insurance companies, and I think most agreed that while ‘Big Data’ is the latest industry buzz phrase, financial companies around the world have been dealing with vast quantities of data for a very long time.
So the idea is nothing new, but the way organizations are able to use their data is evolving, thanks to the rise of sophisticated analytical platforms, the proliferation of massive data storage capacity, and the realization that if you are going to keep all this information, you might as well do something with it.
So where does ethics come into this? Surely our ethical standards have not changed in the last 20 years? Are we more or less ethical than we used to be? I think not.
The challenge seems to come from the fact that we can now ask some incredibly personal questions about our customers, which are derived from the data we store about them.
Those questions can give insight into their personal circumstances and preferences; where they shop, where they socialize, what they like to eat or drink, how healthy their lifestyles are, how safely they drive, and where they might want to go on holiday.
Used responsibly, this information can benefit the individual and offer them more relevant goods and services.
Controversially, the same information could be used in a way that is detrimental to the individual. For example, it would be possible to derive a profile of an individual which indicated an unhealthy lifestyle.
Would it then be ethical to price an insurance policy using a profile which negatively impacted the consumer? Surely people have the right to buy burgers and cakes without worrying about the future impact on their insurance.
What about the data available through social media - how can that be used in a responsible way?
Companies would theoretically have access to vast amounts of data about my lifestyle - from postings on Facebook, or bike rides on Strava, even hotels I’ve stayed in and reviewed on TripAdvisor.
All of this information allows my life to be profiled, and I’m hoping that the decisions all go in my favor. But somehow I doubt they will.
Many companies have appointed ‘Ethics Officers’ who in theory can help to address some of these issues. Of course, many of these people come from legal departments, and in many cases they are more concerned that their employers operate within the law than that they meet any particular ethical standard.
This raises many questions. Whose responsibility is it to define what is ethical? Should there be a policy which is agreed centrally and then followed by all? Should we agree on principles and allow people to use good judgement? Is what’s legal necessarily ethical? Has the law caught up with advances in technology? Should companies publish their ethical policies?
There are more questions than answers: some things that are legal may be unethical, and some things that are ethical may not be legal.
What is clear is that companies need to define standards of ethics, and in tandem they need a data strategy which sets out what information should be stored, for how long, and how it is used, secured, and governed.
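A data strategy like the one described above can even be made machine-checkable: a simple policy that records, per category of data, how long it may be kept and for which purposes it may be used. The sketch below is a minimal illustration only; the categories, retention periods, and permitted uses are invented for the example and do not come from any real policy.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical retention policy: category -> (max retention, allowed uses).
# All entries here are illustrative assumptions, not an industry standard.
RETENTION_POLICY = {
    "transaction_history": (timedelta(days=365 * 7), {"fraud_detection", "reporting"}),
    "browsing_behaviour":  (timedelta(days=90),      {"personalisation"}),
    "social_media_posts":  (timedelta(days=30),      {"sentiment_analysis"}),
}

def is_use_permitted(category: str, collected_on: datetime, purpose: str,
                     now: Optional[datetime] = None) -> bool:
    """Return True only if the record is still within its retention window
    and the proposed purpose is on the approved list for its category."""
    now = now or datetime.utcnow()
    if category not in RETENTION_POLICY:
        return False  # default-deny: unclassified data may not be used at all
    max_age, allowed_uses = RETENTION_POLICY[category]
    return (now - collected_on) <= max_age and purpose in allowed_uses
```

The design choice worth noting is the default-deny stance: data whose category or purpose is not explicitly approved simply cannot be used, which is closer to "ethical by policy" than relying on individual judgement after the fact.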
If companies are going to keep detailed records about my life, then I expect them to treat those records with care, assuming they want to retain my business.
With big data comes big responsibility.