Trust & AI

At Qlik, five core values guide how we solve our customers' challenges and how we approach the world: we make an impact, we are genuine, we succeed together, we take ownership, and we lead with innovation.

We believe artificial intelligence (AI) can have a positive impact on organizations, industries, and the world. Our strategy is to develop innovative AI products and capabilities with ethical integrity and to enable our customers to embrace the complexity and harness the potential of AI in their organizations. We believe these goals are symbiotic: AI can only be developed effectively and used with confidence when it rests on a sound ethical foundation.

We are deeply committed to the responsible development and deployment of our technology in ways that earn people’s trust.

Qlik’s Principles for Responsible AI

Reliability

We design our products for high performance and availability so customers can safely and securely integrate and analyze data and use it to make informed decisions.

Customer control

We believe customers should always remain in control of their data and how it is used, so we design our products with fine-grained security controls, down to the row (data) and object level.

Transparency and explainability

We design our products to make it clear when customers engage with AI. We strive to make clear the data, analysis, limitations, and models used to generate AI-driven answers so our customers can make informed decisions about how they use our technology.

Observability

We design our products so customers can understand lineage, access, and governance of data, analytics, and AI models used to inform answers and automate tasks.

Inclusiveness

We believe diversity, equity, inclusion, and belonging drive innovation, and we will continue to foster these values through our product design and development.

AI governance at Qlik

  • AI policy – Qlik team members comply with our Principles for Responsible AI and follow processes that ensure the ethical and compliant use of any AI products developed by Qlik and any AI tools used within our organization.

  • AI governance product development review – Qlik's product development process includes an AI compliance review to ensure that any AI in our products complies with our Principles for Responsible AI and with relevant laws.

  • AI committee – We have an established, cross-functional team in place to ensure our AI strategy is effective and remains so in this fast-changing landscape.

  • AI council – We established an external council of renowned AI subject-matter experts from around the world to help guide Qlik's product direction and inform our roadmaps.

  • Privacy and security – Qlik has long-standing Privacy and Information Security programs to ensure that our data and our customers’ data are protected.

  • New law tracking – Qlik has a process and dedicated staff in place to monitor upcoming legislation that affects our business, such as new AI laws, assess those laws, and adjust our AI compliance program accordingly.

Frequently Asked Questions

Will you let me know when I am using AI in Qlik products?

One of Qlik's Principles for Responsible AI is to be transparent when our customers engage with AI in our products so they can make informed decisions about how they use our technology. We mark AI-powered capabilities and features with a simple sparkle graphic or written note. We are currently reviewing our products to ensure we are clear everywhere customers use AI.

Will my data be used to train an AI model?

At Qlik, we believe customers should always be in control of their data, including who uses it and how it is used, and we offer robust data quality, lineage, and governance capabilities backed by industry-leading security to help customers manage their data. Therefore, customer data will only be used to train an AI model if the customer has selected that option. Below are some examples:

  • Third-party large language model (LLM) connectors – If a customer has chosen to integrate responses from a third-party LLM into a Qlik product, Qlik follows the customer’s choice of LLM provider and the customer’s decisions regarding implementation of the LLM. From the Qlik side, when customers use the third-party LLM connectors available with our products, user-generated questions and context are encrypted and passed via API to the customer’s LLM on a bring-your-own-key basis for processing; answers are returned using the same APIs and integrated directly into the customer’s analytic application for user consumption (a sketch of this pattern follows this list).

  • Generative AI for unstructured data – Customers choose the unstructured data they want to use with Qlik Answers; that data is used to improve and refine answers generated for users and remains exclusive to the customer's tenant. It is not used to improve the underlying LLM or to improve answers generated for other customers.

  • Predictive AI and AutoML – The data customers choose to use with Qlik AutoML is used for model selection and refinement over time, but only for the operation of these features within that customer’s tenant. It is not used to improve the underlying algorithms used by Qlik or to improve models created or used by other customers.
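The bring-your-own-key flow described for the third-party LLM connectors can be pictured with a short sketch. The snippet below is a minimal illustration only, not Qlik's actual connector code: the endpoint URL, header names, payload shape, and environment variable are assumptions modeled on a generic chat-completions-style API. What it shows is that the customer supplies both the LLM endpoint and the API key, the question and context travel over an encrypted HTTPS call, and the answer returns on the same channel for display in the analytic application.

```python
"""
Hypothetical sketch of a bring-your-own-key LLM connector call.
The endpoint, header names, and payload shape are illustrative
assumptions, not Qlik's actual connector implementation.
"""
import os
import requests

# The customer supplies and controls both the LLM endpoint and the API key.
LLM_ENDPOINT = "https://llm.customer-chosen-provider.example/v1/chat/completions"  # assumption
API_KEY = os.environ["CUSTOMER_LLM_API_KEY"]  # bring-your-own-key: held by the customer

def ask_llm(question: str, context: str) -> str:
    """Send a user question plus analytic context to the customer's LLM and return the answer."""
    payload = {
        "messages": [
            {"role": "system", "content": f"Context from the analytics app:\n{context}"},
            {"role": "user", "content": question},
        ]
    }
    # HTTPS encrypts the request in transit; the customer's own key authenticates it.
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    # Response shape follows a common chat-completions convention (also an assumption).
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    answer = ask_llm(
        question="Which region had the highest sales growth last quarter?",
        context="Table: sales_by_region (region, quarter, revenue, growth_pct)",
    )
    print(answer)  # the answer would then be rendered back inside the analytic application
```

Because the key and endpoint belong to the customer, switching LLM providers or revoking access is a customer-side decision; in this pattern the connector holds no credentials of its own.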

Will the use of AI result in my data being transmitted to a region different from where my tenant is hosted?

No. AI models reside in the same region the customer selected to host its tenant.

If you have further questions about Responsible AI at Qlik, please contact us at ai-compliance@qlik.com.