As organizations move to the cloud to automate processes, customers are beginning to embrace data lakes. It's easy to put data into a data lake, but it's hard to derive value from it and reconstruct the data so it's analytics-ready. For instance, you can take transactions from a mainframe and deposit them into a data lake, which is wonderful, but how do you create any analytic insights? How can you ensure all those frequently updated files added to the lake can be reconstructed into a queryable dataset? In the past, users have done this manually. With Qlik Data Catalyst and Attunity, we are fully automating this process. By making Qlik Data Catalyst the nexus where the business side and the IT side come together, the DataOps approach leverages that catalog and extends it with collaboration, allowing users to easily find their data.
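To make the reconstruction problem concrete, here is a minimal sketch of the general technique, not Qlik's or Attunity's actual implementation: fold a sequence of change files landed in a lake into the current queryable state, applying inserts, updates, and deletes in order with last-write-wins per primary key. The field names (`id`, `op`, `amount`) are hypothetical.

```python
from typing import Iterable

def reconstruct(change_files: Iterable[list[dict]]) -> dict[str, dict]:
    """Fold ordered change files into current row state, keyed by primary key."""
    state: dict[str, dict] = {}
    for batch in change_files:
        for record in batch:
            key = record["id"]          # hypothetical primary-key field
            if record.get("op") == "delete":
                state.pop(key, None)    # a delete removes the row if present
            else:                       # insert or update: last write wins
                state[key] = {k: v for k, v in record.items() if k != "op"}
    return state

# Example: two landed files, the second updating and deleting earlier rows.
batch1 = [{"op": "insert", "id": "t1", "amount": 100},
          {"op": "insert", "id": "t2", "amount": 250}]
batch2 = [{"op": "update", "id": "t1", "amount": 120},
          {"op": "delete", "id": "t2"}]
current = reconstruct([batch1, batch2])
```

Doing this by hand for every frequently updated feed is exactly the manual work the catalog-driven automation is meant to remove.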
Zurich Insurance, an Attunity client, is one of the early innovators in applying automation to its data warehouse initiatives. Zurich had been moving to a modern data warehouse to better meet its analytics requirements, but realized it needed a better way to do so than in the past. Traditional enterprise data warehousing employs a lot of people building a lot of ETL scripts. When source systems change, businesses don't know about it until the scripts break or until business users complain about missing data in their reports. Zurich turned to Attunity to automate data integration, move it to real time, and automatically structure its data warehouse. The time it takes to respond to business users is now a fraction of what it was: Zurich reduced 45-day cycles to two-day cycles for updating and building out new data marts, and can now better meet the needs of its business users through automation.
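The silent-failure mode described above, where source systems change and nobody knows until scripts break, is what automated schema monitoring is meant to catch. The following is a minimal sketch under assumed inputs, not Zurich's or Attunity's actual tooling: compare a source table's current columns against the last known schema and report what was added, removed, or retyped before downstream ETL runs.

```python
def schema_drift(expected: dict[str, str], actual: dict[str, str]) -> dict[str, list[str]]:
    """Return added, removed, and retyped columns between two schema snapshots."""
    return {
        "added": sorted(set(actual) - set(expected)),
        "removed": sorted(set(expected) - set(actual)),
        "retyped": sorted(c for c in expected.keys() & actual.keys()
                          if expected[c] != actual[c]),
    }

# Hypothetical snapshots of a source table: column name -> type.
known = {"policy_id": "int", "premium": "decimal", "region": "varchar"}
live = {"policy_id": "int", "premium": "float",
        "region": "varchar", "channel": "varchar"}
drift = schema_drift(known, live)
```

Running a check like this on every load is what lets an automated pipeline surface source changes proactively instead of waiting for a broken report.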
For more on how we are automating analytics and other topics, tune into my full discussion with Dana here.