James Fisher recently wrote an excellent blog post on how we’re now in the third generation of BI and analytics. I’d like to zoom out and provide some further market context. The last quarter was intense for all of us who follow the data and analytics space, with a veritable cascade of industry news, mostly concerning M&A, but also an ongoing side-show of Hadoop distribution vendors showing signs of struggle. Intense as it was, I can’t say it came as a surprise.
One of my favorite quotes, attributed to various people, is “the only constant is change.” But while change is always happening, I’d like to add a nuance: the pace of change itself accelerates and decelerates. It comes at us in waves, or, for those of you who are fans of business and market strategy, S-curves. When a fundamental new technology is introduced, it’s often immature and initially less efficient than the incumbent. It starts slowly, but then, gradually and suddenly, innovation accelerates. Scale happens, and as the technology matures, the pace of change slows down again. A new S-curve forms after it. In this context, what we’ve just witnessed is the end of the second S-curve, leading to contraction and a wave of market consolidation, and setting us up for a third wave of analytics. What’s also interesting is what each new wave enables. It unlocks new possibilities across data, compute, consumption, and analytics. As such, it doesn’t necessarily replace the incumbent (mainframes, anyone?), but rather expands the possibilities. It may gradually replace it over time, though, or at least monopolize more of the spend as replacement cycles occur. Let me spell it out a bit more clearly.
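For readers who like the math behind the metaphor: the classic S-curve is commonly modeled with the logistic function (a standard textbook form, not something drawn from James’s post):

```latex
% Logistic (S-curve) growth of technology adoption over time t
f(t) = \frac{L}{1 + e^{-k\,(t - t_0)}}
```

Here \(L\) is the ceiling (the maturity level the technology eventually reaches), \(k\) is the steepness (how fast the pace of change picks up), and \(t_0\) is the inflection point where growth flips from accelerating to decelerating. In the market story above, the end of a wave corresponds to the flat top of one curve, just as a new curve begins its slow start.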
Below is a chart on the three S-curve waves, which you may have seen me present in various settings.
Wave 1: “Hardware Scale”: About 10 years ago, as technology quickly pivoted from 32-bit to 64-bit, more powerful compute and in-memory storage became possible. Departments, or “shadow IT,” could create their own analytics stacks, bringing analytics closer to the business and leading to ever more individualized self-service, culminating in more powerful tools for the “info activist,” such as the business analyst or application developer. Qlik was one of the key innovators accelerating this paradigm shift, which, at the end of the first S-curve, eventually led the incumbent, traditional BI market-share leaders to throw in the towel while they were ahead. Stack vendors, battling it out for data supremacy at the time, happily picked them up in market-share buys, driving market growth through cross-sell and up-sell, but innovation stalled as integration was prioritized.
Wave 2: “Infrastructure Scale”: The competitive landscape was redrawn, with data discovery and self-service BI increasingly dominant. They proliferated in many places, but could they perform, scale to the enterprise, and tap new data sources? At the same time, new technologies, in particular virtualization, enabled distributed data file systems like Hadoop’s, run on clusters of nodes that could query huge data sets. It was the emergence of data lakes and the age of big data. The data scientist was the new rock star. With the cloud, this gradually became easier, more manageable, and infinitely scalable. As data discovery matured into modern BI as the new reference architecture, and as CIOs finally took the leap to move mission-critical data into massively scalable data centers in the cloud, we had been expecting another consolidation at the end of the second wave.
Wave 3: “Workload Scale”: The next wave is about faster, more distributed compute. GPUs, 5G, IPv6, and eventually (potentially) quantum computing, mixed with microservices, container orchestration like Kubernetes, and eventually (potentially) blockchain, will provide a potent cocktail for workloads to run better, faster, and in a much more distributed fashion. Technology will uncover new data in various apps and at the edge. In an increasingly distributed world, metadata, ontologies, and catalogs will be the connective tissue of a fabric enabling just-in-time data delivery and dialogues. As this happens, data, and more importantly recommendations and insights, will come to us, contextualized and in our moments. Together with furthering data literacy, through both technology and education, this promises that the next wave will be the age of the right (small) data and of consumerization.
The innovator’s dilemma, and the bravery of crossing the chasm: Qlik has been anticipating this change, and therefore hasn’t always taken the easy path. It has occasionally been disruptive. But the architectural heavy lifting and up-front work of the past years to build a state-of-the-art microservices architecture that can scale workloads across multiple clouds, cutting across multiple ecosystems, fueled by innovations and acquisitions in data democratization (Attunity, Podium, ABDI), augmented intelligence for data literacy (Cognitive Engine, Data Literacy Initiative), and embedded everywhere (Qlik Core, Insight Bot), enables Qlik, as James says in his blog, to cross the chasm to third-wave BI. It also sets Qlik up nicely to ride the highly regulated, postmodern data and analytics age, which places entirely new requirements on data ownership and privacy in a very distributed world.