In this second part of my architectural series, we will look at Qlik Replicate's zero-footprint architecture and how the product is designed for scalability and flexibility. In my previous post, we began reviewing the architecture of Qlik Replicate by looking at its support for both Full Load and change data capture (CDC).
A fundamental requirement underlying information availability is getting data to end users as rapidly as possible. As a result, the approach that enterprises take for data replication needs to emphasize both performance and continuous availability.
High-performance data replication is one of Qlik Replicate’s distinctive features. Since the product’s inception as Attunity Replicate, our in-memory streaming technology has delivered replication speeds dramatically faster than other available solutions.
Qlik Replicate’s unique Zero-Footprint architecture is designed so that CDC processes can run without agents being placed on the source or target databases. This eliminates agent overhead on mission-critical systems.
Qlik Replicate is a very low-impact application, thanks to its log-based capture and delivery of transaction data. The transaction log reader plays a central role in Qlik Replicate’s approach to enterprise information availability.
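The core idea behind log-based capture can be sketched in a few lines: rather than querying source tables, a reader scans the database's transaction log for committed changes past a checkpoint. This is an illustrative sketch only; the record fields (`lsn`, `op`, `table`, `row`) are made-up names, not Qlik Replicate's internal format.

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class ChangeEvent:
    lsn: int   # log sequence number (checkpoint position)
    op: str    # "INSERT", "UPDATE", or "DELETE"
    table: str
    row: dict

def read_transaction_log(log: list[dict], from_lsn: int) -> Iterator[ChangeEvent]:
    """Yield changes recorded after a checkpoint, without ever
    querying the source tables themselves."""
    for rec in log:
        if rec["lsn"] > from_lsn:
            yield ChangeEvent(rec["lsn"], rec["op"], rec["table"], rec["row"])

# Simulated entries standing in for a database's transaction log.
log = [
    {"lsn": 101, "op": "INSERT", "table": "orders", "row": {"id": 1}},
    {"lsn": 102, "op": "UPDATE", "table": "orders", "row": {"id": 1}},
]
events = list(read_transaction_log(log, from_lsn=100))
```

Because only the log is read, the source database does no extra query work on behalf of replication, which is what keeps the impact low.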
Designed for Scalability & Flexibility
Qlik Replicate uses in-memory streaming technology which results in faster replication speeds than other products on the market. By removing the delays caused by reading and writing from storage, enterprises can achieve very low latency when replicating data.
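A minimal way to picture in-memory streaming is a producer/consumer pipeline where captured changes flow through a bounded in-memory queue straight to the apply side, with no intermediate disk staging. This sketch is my own illustration of the general pattern, not Qlik Replicate's implementation.

```python
import queue
import threading

buffer: queue.Queue = queue.Queue(maxsize=1000)  # in-memory hand-off
applied = []

def capture(changes):
    """Producer: push captured changes into the in-memory buffer."""
    for change in changes:
        buffer.put(change)
    buffer.put(None)  # sentinel marks end of stream

def apply_to_target():
    """Consumer: drain the buffer and apply changes to the target."""
    while True:
        change = buffer.get()
        if change is None:
            break
        applied.append(change)

worker = threading.Thread(target=apply_to_target)
worker.start()
capture([{"id": 1}, {"id": 2}, {"id": 3}])
worker.join()
```

Skipping the write-to-storage/read-from-storage round trip between capture and apply is what drives latency down.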
In addition, Qlik Replicate’s modular, multi-server, and multi-threaded architecture supports high-volume, rapidly changing environments. This approach makes it easy for IT teams to scale up and out as organizational data requirements grow. Multiple replication servers can be installed, each of which can replicate a set of tables and run many replication tasks.
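Scaling out by partitioning the workload can be illustrated with a simple round-robin assignment of tables to tasks, where each task could in turn run on a separate server. This is a conceptual sketch; the function and table names are hypothetical.

```python
def assign_tables(tables: list[str], num_tasks: int) -> list[list[str]]:
    """Distribute tables round-robin across replication tasks so each
    task (or server) replicates only its own subset."""
    tasks: list[list[str]] = [[] for _ in range(num_tasks)]
    for i, table in enumerate(tables):
        tasks[i % num_tasks].append(table)
    return tasks

assignment = assign_tables(["orders", "customers", "invoices", "shipments"], 2)
# → [["orders", "invoices"], ["customers", "shipments"]]
```

Adding capacity then means adding tasks or servers and re-partitioning the table set, rather than pushing more volume through a single process.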
Qlik Replicate is also designed for maximum flexibility. While the transaction log reader can be installed on the replication server to achieve a zero-footprint impact, it can also be installed on the source database server. As a result, filtering of the source rows can be done on either the source database or replication server.
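The filtering trade-off above comes down to where a row predicate runs: close to the source (less data crosses the network) or on the replication server (the source stays zero-footprint). The predicate and rows below are invented purely to show the idea.

```python
def filter_rows(rows: list[dict], predicate) -> list[dict]:
    """Apply a row filter; the same logic could run on the source
    database server or on the replication server."""
    return [row for row in rows if predicate(row)]

rows = [{"region": "EU", "id": 1}, {"region": "US", "id": 2}]
eu_only = filter_rows(rows, lambda r: r["region"] == "EU")
# → [{"region": "EU", "id": 1}]
```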
In my next post, I will wrap up our architectural review of Qlik Replicate by exploring how automation can dramatically simplify the user experience by automating the steps needed to replicate data, and how you can monitor and manage multiple replication servers through a ‘single pane of glass’ view.
If you'd like to learn more, read our Data Streaming (CDC) page, take a test drive of Qlik Replicate, or contact us to continue the conversation and discuss a proof of value.