For years, that relationship has sat in the background as companies collected, stored and used our data without much challenge from us. But that has changed. As the use of data becomes increasingly apparent in our digital world (for instance, in the ads and content we are served), we have naturally become more attuned to and interested in how it is used, and how we want our relationship with it to develop moving forward.
There is one big barrier to that development, however, and that is trust. It pervades any conversation around data, particularly as artificial intelligence (AI) and machines play an ever-expanding role in our lives. The question on everyone’s lips is: As AI’s role advances, is our relationship with data poised to change in relation to machines?
To help answer that question, we roped in Professor of Emergent Technologies, Dr. Sally Eaves, to share her perspective in Qlik’s new Active Intelligence magazine.
Unsurprisingly, her answer is that it’s complicated – “it’s ‘Yes’ with respect to the human-machine interface evolving from information system to automation to autonomous agent (to varying degrees). In other words, a move from master-servant to teammates or partners bringing together complementary strengths. But it is ‘No’ with respect to the question of intent. I would argue that, in its current state, AI is not close to having its own intentions or mental states.”
Despite the duality of the answer, Sally provides clarity around AI “trustworthiness” in the shape of three domains: “the technology, the system it is in, and the people behind/interacting with it” along with five key pillars within these domains: “the capacity for AI development and decision-making to be human-led, trainable, transparent, explainable and reversible”.
For me, there are two particular pillars that stand out. The first is transparency. Trust comes from transparency and consistency. Good governance, good lineage and good data underpin Active Intelligence, the state of continuous intelligence from real-time, up-to-date information designed to trigger immediate actions. Ultimately, you can’t trust the output if you don’t understand the input.
The second is the human-led element, which my colleague Elif Tutuk discussed with Sally for the article. Elif said that “we need to get human trust into analytics and data and provide good collaboration between data producer and consumer.” This human trust is also key to enabling the collaboration with machines mentioned above, and it is what will ultimately unlock the future of data and data analytics.
To build human trust into analytics and data and ensure that the trust relationship between humans and machines is a positive one, Active Intelligence needs to take center stage.
But don’t just take my word for it – click here to read Sally’s article in full, where she explains in more detail what it is going to take to change how we move forward with data from “It’s complicated” to “In a relationship.”