The Sliding Doors for Responsible AI

By James Fisher


In this blog series, I have been exploring the “sliding doors” – or divergent paths – organizations can take with data and analytics. Sometimes, choosing the wrong door means missing out on creating the most value with your data. But in some instances, it can also lead you down a more serious path: a breach of regulatory compliance.

We have certainly seen this with the General Data Protection Regulation (GDPR), the privacy law enacted by the European Union in 2018 to regulate how organizations handle personal data. It once impacted my favorite airline, British Airways, which faced a fine of £20 million for a data breach that compromised the personal and financial details of over 400,000 customers…

And now the newest kid on the legislation block is the EU AI Act, the world’s first comprehensive AI law, promising stiff penalties for companies that fail to comply.

Does this mean all the AI fun is behind us? Hardly – AI continues to offer great value opportunities for organizations, and as I have previously advocated, it should always be used responsibly. But what does that look like in an era of new regulations? Let’s first explore where things can go wrong…

Wrong door

When ChatGPT first exploded onto the scene, it wasn’t long before seemingly everyone was playing with it – without quite realizing the risks of feeding private, confidential data into the tool. For example, there was the case of an executive who cut and pasted his company's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck, and the case of the doctor who input a patient's name and medical condition and asked ChatGPT to craft a letter to the patient's insurance company. These were common, simple use cases for ChatGPT in which no one paused to weigh the risk of a breach of confidential data…

In fact, even before AI-specific laws were passed, failing to create or use AI responsibly could spell serious trouble for organizations. Existing laws around fair use of the intellectual property and personal data of others came into play in how foundation models were trained, particularly those using public data that is proprietary or personal in nature. Just because this information was public does not mean it could be used by third parties for their own purposes (e.g., building for-profit AI models by scraping the public web). And while generative AI tools such as ChatGPT can be incredibly valuable, generative AI can hallucinate and produce inaccurate results: the output should generally be reviewed and corrected by a human before use. Air Canada learned this recently when its chatbot went rogue, providing erroneous guidance on policies and discounts to customers – when the correct information was available right on the airline’s website! This led the airline to withdraw the chatbot from its service and issue a customer refund. Even law firms went down the wrong path: early last year, a New York lawyer was caught using fictitious ChatGPT-generated cases in briefs – and claimed not to have realized that the generated information could be incorrect; and much more recently, a full year after ChatGPT’s public release, a lawyer in Canada was caught citing fake cases in a similar incident.

While I do think some lessons have already been learned over the last year, the new EU AI Act and other regional laws that will certainly follow will now require organizations to take a harder look at their practices. With generative AI starting to move from hype to reality inside organizations, and with increased usage and value creation leading to increased scrutiny, I would say this new legislation is welcome.

Right door

If the rise of generative AI taught us one thing last year, it is that things can move really fast, often leaving organizations in catch-up mode: waiting for the regulatory landscape to settle is not an option. Just as GDPR set the tone for other global privacy laws, the EU AI Act is expected to be the “North Star” of the AI regulatory space. Trends are certainly emerging: responsible and transparent model training, accuracy testing, use limitation, bias elimination, and human oversight.

So, let’s start with what we know about the EU AI Act. I asked my colleague Roy Horgan, Privacy Counsel and Data Protection Officer at Qlik, to provide his assessment. “It’s a lengthy document at 300 pages…” he said, “But, at a high level, it regulates specific uses of AI as well as compliance of large language models (“LLMs”) like ChatGPT which can be used for a variety of purposes. For specific uses, these are bucketed into (i) prohibited (e.g., biometric systems used to infer characteristics like race) which are banned, (ii) high risk (e.g., ranking CVs), which have the heaviest compliance burdens (e.g., registration), (iii) limited risk (e.g., chatbots), which have notice requirements and (iv) no-risk (e.g., spam filters) which are free from compliance obligations. LLMs are subject to obligations like testing and information requirements like disclosing what data the model was formed from. Timing is subject to change, but likely end of 2024 for the prohibited uses, and the other parts coming mostly in summer 2025.”

With this knowledge in hand, here are some steps I recommend you take to ensure you stay on the right path.

  1. Do you have an AI policy for your organization? If not, you should establish one. At Qlik, we have published a policy that sets out the parameters for use of AI. This policy applies to all Qlik employees and contractors, who are expected to comply with the rules of this policy when using generative AI at Qlik. These rules are designed to balance the benefits of using generative AI, while protecting Qlik’s confidential information, customer expectations and intellectual property and complying with applicable laws, regulations, and ethical standards.

  2. Are you considering implementing Data Products? If not, you should. In my previous post, I explained why taking a product-centric approach to data management can benefit your generative AI initiatives by providing the necessary data lineage and governance.

  3. Do you know which vendors in your supply chain use AI? Just as environmental, social, and governance (ESG) information from suppliers is key to compliance with ESG legislation, so it will be for AI. While organizations have some time before the AI Act comes into force, a key part of compliance will be ensuring that the AI tools your organization uses are compliant and that your workforce uses them only for permitted purposes. I therefore recommend you integrate this into your processes now, so you can start collecting this information.

  4. Have you made progress in establishing a trusted, AI-ready data foundation? If not, it is never too late to begin. I have written a number of times on this topic, including in a blog last year, where I outlined 5 essential ways to ensure that your data foundation is secure and ready for generative AI.

Ultimately, I can’t help but feel a sense of déjà vu from GDPR. When that law was enacted, Qlik undertook comprehensive processes to ensure that our organization was compliant and could be trusted by our customers, and that they could use our products with confidence and leverage them in their own solutions. At the same time, we worked with our customers on their own journeys to help them meet their compliance goals. I had the opportunity to discuss this recently with Qlik’s new AI council: the similarities (another EU law with global implications and high fines), the differences (every business holds personal data of some sort, so there is no escaping GDPR, but businesses that neither create nor rely on AI won’t be caught by the AI Act), and a looming complex regulatory landscape, with other countries like the US likely to release their own laws – each likely taking a very different approach when they do.

As always, Qlik is keen to partner with you to ensure that your data foundation is trusted and ready for AI. To learn more, please be sure to register for Qlik Connect on June 3-5 in Orlando, where our AI council will join me on the main stage to discuss your pressing topics on AI, including how to maintain ethical integrity. You will also get the opportunity to take part in inspiring discussions through the many breakout sessions offered at the event. I hope to see you there!
