Every year, January 28th marks International Data Privacy Day. This day is a timely reminder of the importance of safeguarding data and enabling trust, all the more so now that we live in a world increasingly shaped by AI systems. For AI models to be accurate, they must be trained on massive amounts of data. The last thing you want, however, is for that data to unnecessarily include sensitive personal or confidential business information. This raises critical questions about how such data is collected, used and protected. If it is not handled responsibly, the result can be reduced consumer trust in AI systems and compliance problems.
The EU AI Act, the world’s first comprehensive AI law, aims to address these concerns by establishing a groundbreaking framework for developing and deploying AI systems. As well as mandating norms such as transparency and explainability, it regulates the development of large AI models. Critically, it also looks at the end-use of the AI system and takes a risk-based approach depending on the potential impact on people. Some end-uses are prohibited outright, while high-risk AI applications, such as those used in recruitment or law enforcement, are subject to strict requirements. As with the General Data Protection Regulation (GDPR), companies could face significant fines for non-compliance, with the first obligations taking effect as early as February 2025.
It is important to note the interplay between the EU AI Act and the GDPR. The GDPR, with its focus on individual rights, complements the AI Act by filling the gap in scenarios where AI systems use personal data. Both regulations emphasize transparency and accountability in systems that can significantly affect people. The GDPR and similar privacy laws around the world have long regulated “automated decision-making” – that is, where automated systems (AI or otherwise) make autonomous decisions that significantly affect people’s lives, such as whether a candidate proceeds to interview or a loan application is accepted. Where such automated decision-making systems involve AI, they are now subject to additional regulation.
While regulations like the EU AI Act provide a crucial framework for responsible AI, any organisation building AI applications must also ensure that its models are trained on trustworthy and unbiased data to avoid inaccurate, unfair or discriminatory results. Personal data should only be included in training data where it is lawful to do so, and measures such as opt-outs should be considered to respect individuals’ rights.
The responsible use of data with AI is not just about complying with regulations; it's about building trust and ensuring fairness. According to a Qlik study, although 88% of businesses know that AI is fundamental to their success, factors including a lack of trust, a shortage of skills and data governance challenges are hampering AI projects.
At Qlik, data privacy is not just a legal necessity; it is a fundamental part of our values. Our commitment goes beyond compliance to create a secure and trustworthy environment for our employees and our customers. As AI becomes ever more relevant, these values matter even more: they ensure we use AI responsibly and develop AI solutions our customers can trust.
Data Privacy Day serves as a call to action for individuals and organizations alike to prioritize data privacy in the age of AI. By embracing responsible AI practices, we can harness the power of AI while safeguarding fundamental rights and fostering trust in this transformative technology.
Learn more about Qlik’s commitment to Trust & AI