Apache Kafka is an open source stream processing platform that has rapidly gained traction in the enterprise data management market. Running on a horizontally scalable cluster of commodity servers, Apache Kafka ingests real-time data from multiple "producer" systems and applications – such as logging systems, monitoring systems, sensors, and IoT applications – and makes that data available at very low latency to multiple "consumer" systems and applications. The consuming systems can range from analytics platforms, such as a Hadoop-based data lake, to applications that depend on real-time data, such as logistics applications or location-based micromarketing applications. Open source streaming analytics engines such as Spark Streaming, Storm, and Flink can also be applied to these message streams.
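
The key idea in the paragraph above is that producers append records to a shared stream while many consumers read that stream independently. The following is a minimal in-memory sketch of that pattern, with a plain Python class standing in for a Kafka topic partition; it is an illustration only, and a real deployment would use a Kafka client library against a running broker.

```python
# Toy "topic": an append-only log that each consumer group reads at its
# own offset. This stands in for a Kafka topic partition; it is purely
# illustrative and involves no actual Kafka broker.
class Topic:
    def __init__(self):
        self.log = []       # append-only record log
        self.offsets = {}   # consumer-group name -> next offset to read

    def produce(self, record):
        self.log.append(record)

    def consume(self, group):
        """Return records not yet seen by this group and advance its offset."""
        start = self.offsets.get(group, 0)
        records = self.log[start:]
        self.offsets[group] = len(self.log)
        return records

topic = Topic()
topic.produce({"sensor": "s1", "temp": 21.5})   # producer side
topic.produce({"sensor": "s2", "temp": 19.0})

# Two independent consumers each see the full stream:
analytics = topic.consume("analytics")   # e.g. a data-lake loader
alerts = topic.consume("alerting")       # e.g. a real-time monitor
```

Because each consumer group tracks its own offset, adding a new consuming system never disturbs the producers or the other consumers – the decoupling that makes the many-to-many integration described here possible.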

While Apache Kafka can be a powerful addition to enterprise data management infrastructures, it poses new challenges, including the need for IT teams to work with yet another set of APIs and the difficulties of pulling real-time data from diverse source systems without degrading the performance of those systems. Many organizations are finding that with Attunity Replicate they can leverage Apache Kafka capabilities more quickly and with less effort and risk.

Apache Kafka Automation with Attunity Replicate

Today, when IT managers are asked "What is data integration?" in the context of their own enterprise, the answers steer toward real-time integration between multiple source systems and multiple destination systems. For thousands of organizations worldwide, Attunity software is at the center of this many-to-many data integration – increasingly in combination with Apache Kafka.

With Attunity Replicate you can:

  • Use a graphical interface to create real-time data pipelines from producer systems into Apache Kafka, without any manual coding or scripting. This point-and-click automation lets you get started on Apache Kafka initiatives faster and maintain the agility to easily integrate additional source systems as business requirements evolve.
  • Ingest data into Apache Kafka from a wide range of source systems, including all major database and data warehouse platforms.
  • Leverage Attunity Replicate's agentless change data capture (CDC) technology to establish Kafka-Hadoop real-time data pipelines and other Apache Kafka-based pipelines without negatively impacting the performance of the source database systems.
  • Monitor all your Apache Kafka ingest flows through the Attunity console.
  • Configure Attunity to notify you of important events regarding your Apache Kafka ingest flows.
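
Change data capture, as described above, emits each committed change from a source database as a message on a Kafka topic. A hypothetical change record might look like the following sketch; the field names and layout here are illustrative assumptions, not Attunity Replicate's actual wire format.

```python
import json

# Hypothetical CDC event: one committed UPDATE captured from a source
# database, serialized as a Kafka message value. The schema below is an
# illustrative assumption, not Attunity Replicate's actual format.
change_event = {
    "operation": "UPDATE",                 # INSERT / UPDATE / DELETE
    "table": "SALES.ORDERS",
    "timestamp": "2017-03-01T12:00:00Z",   # commit time at the source
    "before": {"order_id": 42, "status": "PENDING"},
    "after":  {"order_id": 42, "status": "SHIPPED"},
}

# Using the row's primary key as the message key routes every change for
# that row to the same partition, preserving per-row ordering downstream.
key = str(change_event["after"]["order_id"]).encode("utf-8")
value = json.dumps(change_event).encode("utf-8")
```

Downstream consumers – a Hadoop loader, a streaming analytics job – can then replay these before/after records to reconstruct or react to the source system's state without ever querying it directly.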

Apache Kafka and More, Through a Unified Data Integration Platform

While making it far easier to work with Apache Kafka stream processing technology, Attunity Replicate delivers additional value to your enterprise as an all-purpose, unified data integration platform. The same Attunity Replicate software that you use to implement real-time Apache Kafka streams can serve as a database migration tool within or between any of the major relational database systems (Oracle, SQL Server, IBM Db2, MySQL, and so on); as a unified platform for replicating data from production systems into an enterprise data warehouse; as an easy and dependable way to move data from legacy mainframe systems into Hadoop; and much more. Attunity Replicate also supports high-performance, secure movement of on-premises data into the cloud, or across different cloud systems, using encrypted multi-pathing.

With Attunity Replicate you can move data where you want it, when you want it – easily, dependably, in real time, and at big data scale.