2017 – Living in the Post-Fact Era

2016 was the year of the information activist, but data literacy has moved from a nice-to-have to a must-have.

Every year at Qlik, as Q4 winds down, we take stock of the year that was. 2016 saw numerous changes in the business intelligence, analytics, and data visualization communities, and we tapped one of our top internal experts to recap it all for you!

He is Dan Sommer, a former Gartner analyst and now Senior Director of Global Market Intelligence at Qlik. I sat down with him to ask a few questions about what we can expect to see in 2017.

Q: Dan, reflect for us on 2016: how does it set us up for 2017?

A: What we’ve seen is an explosion of data and an increase in processing power, which has created great opportunities for the chosen few who can handle it. 2016 was the year of the information activist. The growing number of employees who can actively work with huge amounts of information (data scientists, application developers, business analysts) are the new aristocracy in organizations. But it has turned out not to be enough.

As information explodes, many feel overwhelmed by it. Just when we should have been ushered into a new era of facts, the paradox is that the gap is widening between the data being created and the capability of the broader populace to consume it. Because we can’t consume that much information, many of us simply shut down and go with our gut. The irony is that some are worse at making data-driven decisions now than when we had less data.

Q: We’ve heard the term “post-fact era” enter the discussion this year. What exactly does it mean?

A: There are multiple examples in the personal and professional spheres, as well as in media and political discourse, that have prompted the debate about the “post-fact era”. Too many polluted data points and “facts” are spouted around, diluting real facts and making them lose their meaning. This leads to information bias, fatigue, and sometimes the most dangerous state of all: information ignorance.

In 2017, the counter-reaction to the “post-fact era” will be that societies and organizations wake up to the imperative that data literacy is needed for the broader populace in organizations, not just the information activists. 100 years ago, when we moved from rural areas to cities and from manufacturing to services, literacy moved from a “nice-to-have” to a “must-have”. With the advent of the digital age, organizations are waking up to the fact that data literacy has made the same move. It will be a perfect storm and an inflection point in the coming year.

Q: Interesting. Now, back to the influx of data you mentioned before: where will it all be coming from?

A: I would anticipate that in 2017, half of the data we analyze will be external and generated in the cloud. As such, fragmentation will increase, and looking at data without adding external context will diminish in value. Combinations of data will trump big data. Combinations of data are where new business ideas lie; they are also what lets us cross-check whether data is polluted or not.

Additionally, I would expect hybrid cloud and multi-platform to emerge as the dominant model in 2017, and pure-play cloud vendors will likely struggle in this environment. Because of data gravity, on-premise has long staying power, perhaps for decades to come. We’re currently seeing an accelerated move to the cloud, but one cloud will not be enough, because tomorrow’s data and workloads won’t exist in just one. A focus on hybrid deployment and multi-environment will mean a mix of cloud and on-premise for data and workloads, which will necessitate multiple clouds.


Q: With all of this change coming, can we finally call 2017 the year that traditional BI will be surpassed by modern BI?

A: Modern BI will become the new reference architecture. Data discovery has graduated to modern BI and has become the norm for many organizations. Archaic reporting-first platforms are no longer complementary and are increasingly being replaced. As a result, self-service will no longer be just for business analysts; it will scale more broadly. The death of legacy platforms will open the door to more self-service. All employees in the organization will need to consume data and have rich discovery capabilities on top of it. If more people are enabled with great self-service technology, it will facilitate data literacy. This will also put bigger requirements on scale, performance, governance, security, and so on. Self-service visualization vendors that can’t scale and don’t have powerful engines behind them won’t be able to keep up.

Q: Speaking of visualization, what growth do you see coming in that area in 2017?

A: Ideally, self-service visualization will become a commodity, accessible to all, and freemium will be the new normal. If more people can “get on the bike”, it will set many more of them on a personal analytics journey, increasing data literacy and ultimately information activism.

In 2017, the concept of visualization will move from “only visual analysis” to encompass the whole data supply chain. We’ll see visualization in unified hubs that take a visual approach to asset management, catalogues, and portals, as well as in visual self-service data preparation that underpins the actual visual analysis. Further, more progress will be made in making visualization a means of communicating our findings as well. The net effect is that more people can do more, with more data, across the data supply chain.

Q: Finally, the scope of advanced analytics has changed quite a bit over the past few years. Where do you see it headed next?

A: Advanced analytics will continue to proliferate, but it is dependent on experts, and it should remain dependent on experts. It’s not a good idea to give a Formula One car to someone who has just learned how to drive, so the algorithms (and not just the data) should be curated and governed by the experts. However, many more people should be able to benefit from those models once they are created, meaning they can be brought into self-service tools.

There will also be an increased emphasis on custom analytic apps and analytics in the app. Not everyone will, or can, be both a producer and a consumer of apps, but everyone should still have the ability to richly explore data. Hence, the significance of meeting people where they are, with more contextualized and customized analytic applications, as well as analytics that reach us in our “moments”, will increase significantly. As such, open, extensible tools that can be customized by application and web developers will make further headway.

We want to thank Dan Sommer for his insights. Make sure to catch his upcoming webinar on January 11 at 1pm ET.

 
