Apache Kafka is a distributed streaming platform with plenty to offer—from redundant storage of massive data volumes to a message bus capable of throughput reaching millions of messages each second. These capabilities and more make Kafka a solution that’s tailor-made for processing streaming data from real-time applications.
Despite its name’s suggestion of Kafkaesque complexity, Apache Kafka’s architecture actually delivers an easier-to-understand approach to application messaging than many of the alternatives. Kafka is essentially a commit log with a very simple data structure. It just happens to be an exceptionally fault-tolerant and horizontally scalable one.
The Kafka commit log provides a persistent ordered data structure. Records cannot be directly deleted or modified, only appended onto the log. The order of items in Kafka logs is guaranteed. The Kafka cluster creates and updates a partitioned commit log for each topic that exists. All messages sent to the same partition are stored in the order that they arrive. Because of this, the sequence of the records within this commit log structure is ordered and immutable. Kafka also assigns each record a unique sequential ID known as an “offset,” which is used to retrieve data.
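The append-only log and its offsets can be pictured with a minimal in-memory sketch. The class and method names below are illustrative only and are not part of any Kafka API; a real Kafka partition persists records to disk.

```python
class PartitionLog:
    """A toy append-only record log; each record gets a sequential offset."""

    def __init__(self):
        self._records = []

    def append(self, value):
        offset = len(self._records)   # next sequential offset
        self._records.append(value)
        return offset

    def read(self, offset):
        # Records are retrieved by offset; they are never modified in place.
        return self._records[offset]


log = PartitionLog()
first = log.append("order-created")
second = log.append("order-shipped")
print(first, second)    # 0 1
print(log.read(0))      # order-created
```

Note that there is no delete or update operation: as in Kafka, the only way to change the log is to append to it, and the offset uniquely identifies each record’s position.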
Kafka addresses common issues with distributed systems by providing set ordering and deterministic processing. Because Kafka stores message data on disk in an ordered manner, it benefits from sequential disk reads. Given the high resource cost of disk seeks, the facts that Kafka processes reads and writes at a consistent pace, and that reads and writes happen simultaneously without getting in each other’s way, combine to deliver tremendous performance advantages.
With Kafka, horizontal scaling is easy. This means that Kafka can achieve the same high performance when dealing with any sort of task you throw at it, from the small to the massive.
Apache Kafka Architecture – Component Overview
Kafka architecture is made up of topics, producers, consumers, consumer groups, clusters, brokers, partitions, replicas, leaders, and followers. The following diagram offers a simplified look at the interrelations between these components.
Kafka API Architecture
Apache Kafka offers four key APIs: the Producer API, Consumer API, Streams API, and Connector API.
Let’s take a brief look at how each of them can be used to enhance the capabilities of applications:
The Kafka Producer API enables an application to publish a stream of records to one or more Kafka topics.
The Kafka Consumer API enables an application to subscribe to one or more Kafka topics. It also makes it possible for the application to process streams of records that are produced to those topics.
The Kafka Streams API allows an application to process data in Kafka using a streams processing paradigm. With this API, an application can consume input streams from one or more topics, process them with streams operations, and produce output streams and send them to one or more topics. In this way, the Streams API makes it possible to transform input streams into output streams.
The Kafka Connector API connects applications or data systems to Kafka topics. This provides options for building and managing the running of producers and consumers, and achieving reusable connections among these solutions. For instance, a connector could capture all updates to a database and ensure those changes are made available within a Kafka topic.
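The consume-transform-produce pattern behind the Streams API can be illustrated in miniature. The actual Streams API is a Java library; the pure-Python sketch below only demonstrates the idea of turning an input stream of records into an output stream, and all names and the sample records are hypothetical.

```python
def transform_stream(input_records, operation):
    """Apply a streams-style operation to each input record,
    yielding records destined for an output topic."""
    for record in input_records:
        yield operation(record)


# Hypothetical input topic contents: raw temperature readings.
input_topic = [
    {"sensor": "a", "celsius": 21.5},
    {"sensor": "b", "celsius": 19.0},
]

def to_fahrenheit(record):
    # A simple per-record transformation (a "map" in streams terms).
    return {**record, "fahrenheit": record["celsius"] * 9 / 5 + 32}


output_topic = list(transform_stream(input_topic, to_fahrenheit))
print(output_topic[0]["fahrenheit"])  # 70.7
```

In a real Streams application, the input and output would be actual Kafka topics, and the library would also handle state, windowing, and fault tolerance.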
Kafka Cluster Architecture
Now let’s take a closer look at some of Kafka’s main architectural components:
A Kafka broker is a server running in a Kafka cluster (or, put another way: a Kafka cluster is made up of a number of brokers). Typically, multiple brokers work in concert to form the Kafka cluster and achieve load balancing and reliable redundancy and failover. Brokers utilize Apache ZooKeeper for the management and coordination of the cluster. Each broker instance is capable of handling hundreds of thousands of reads and writes each second (and terabytes of messages) without any impact on performance. Each broker has a unique ID and can be responsible for partitions of one or more topic logs. Kafka brokers also leverage ZooKeeper for leader elections, in which a broker is elected to handle client requests for an individual partition of a topic. Connecting to any broker will bootstrap a client to the full Kafka cluster. To achieve reliable failover, a minimum of three brokers should be utilized—with greater numbers of brokers comes increased reliability.
Apache ZooKeeper Architecture
Kafka brokers use ZooKeeper to manage and coordinate the Kafka cluster. ZooKeeper notifies all nodes when the topology of the Kafka cluster changes, including when brokers and topics are added or removed. For example, ZooKeeper informs the cluster if a new broker joins the cluster, or when a broker experiences a failure. ZooKeeper also enables leadership elections for topic/partition pairs, helping determine which broker will be the leader for a particular partition (and serve read and write operations from producers and consumers), and which brokers hold replicas of that same data. When ZooKeeper notifies the cluster of broker changes, the brokers immediately begin to coordinate with each other and elect any new partition leaders that are required. This protects against the event that a broker is suddenly absent.
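The re-election step can be sketched as a simple rule: when a broker disappears, each partition it led promotes a surviving replica. The broker IDs and the election rule below (first surviving replica in the list wins) are illustrative; real elections are coordinated through ZooKeeper and restricted to in-sync replicas.

```python
def elect_leaders(partition_replicas, failed_broker):
    """partition_replicas maps partition -> ordered list of broker IDs,
    where the first entry is the current leader. Returns the new leader
    for each partition after removing the failed broker."""
    new_leaders = {}
    for partition, replicas in partition_replicas.items():
        survivors = [b for b in replicas if b != failed_broker]
        # Promote the first surviving replica to leader.
        new_leaders[partition] = survivors[0]
    return new_leaders


replicas = {"topic-0": [1, 2, 3], "topic-1": [2, 3, 1]}
print(elect_leaders(replicas, failed_broker=2))
# {'topic-0': 1, 'topic-1': 3}
```

Here broker 2 led "topic-1", so its replica on broker 3 takes over, while "topic-0" keeps its existing leader.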
A Kafka producer serves as a data source that optimizes, writes, and publishes messages to one or more Kafka topics. Kafka producers also serialize, compress, and load balance data among brokers through partitioning.
Consumers read data by reading messages from the topics to which they subscribe. Each consumer belongs to a consumer group, and each consumer within a particular group is responsible for reading a subset of the partitions of each topic it is subscribed to.
Basic Kafka Architecture Concepts
The following concepts are the foundation to understanding Kafka architecture:
A Kafka topic defines a channel through which data is streamed. Producers publish messages to topics, and consumers read messages from the topic they subscribe to. Topics organize and structure messages, with particular types of messages published to particular topics. Topics are identified by unique names within a Kafka cluster, and there is no limit on the number of topics that can be created.
Within the Kafka cluster, topics are divided into partitions, and the partitions are replicated across brokers. Because each partition can be consumed independently, multiple consumers can read from a topic in parallel. It’s also possible to have producers add a key to a message—all messages with the same key will go to the same partition. While messages are added and stored within partitions in sequence, messages without keys are written to partitions in a round-robin fashion. By leveraging keys, you can guarantee the order of processing for messages in Kafka that share the same key. This is a particularly useful feature for applications that require strict ordering of related records. There is no limit on the number of Kafka partitions that can be created (subject to the processing capacity of a cluster).
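The two partition-selection behaviors described above can be sketched as follows. Kafka’s real default partitioner hashes keys with murmur2 inside the client; Python’s built-in `hash()` is only a stand-in here, and the partition count is an arbitrary example.

```python
import itertools

NUM_PARTITIONS = 3
_round_robin = itertools.cycle(range(NUM_PARTITIONS))

def choose_partition(key):
    """Keyed messages hash to a fixed partition; unkeyed messages
    rotate round-robin across all partitions."""
    if key is not None:
        return hash(key) % NUM_PARTITIONS  # same key -> same partition
    return next(_round_robin)              # no key -> spread evenly


# Messages sharing a key always land in the same partition,
# which is what preserves their relative order:
assert choose_partition("user-42") == choose_partition("user-42")
```

This is why keying by, say, a customer ID guarantees that all of that customer’s events are processed in order, while unkeyed traffic is balanced evenly.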
Topic Replication Factor
Topic replication is essential to designing resilient and highly available Kafka deployments. When a broker goes down, topic replicas on other brokers will remain available to ensure that data remains available and that the Kafka deployment avoids failures and downtime. The replication factor that is set defines how many copies of a topic are maintained across the Kafka cluster. It is defined at the topic level, and takes place at the partition level. For example, a replication factor of 2 will maintain two copies of every partition of a topic. As mentioned above, a certain broker serves as the elected leader for each partition, and other brokers keep a replica to be utilized if necessary. Logically, the replication factor cannot be greater than the total number of brokers available in the cluster. A replica that is up to date with the leader of a partition is said to be an In-Sync Replica (ISR).
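Replica placement can be sketched with a simplified round-robin scheme. Real Kafka placement also balances leadership and rack awareness; the broker IDs below are hypothetical, but the constraint shown (replication factor cannot exceed the broker count, and each copy lives on a different broker) matches the rule stated above.

```python
def assign_replicas(num_partitions, brokers, replication_factor):
    """Assign `replication_factor` copies of each partition, each copy
    on a different broker, wrapping round-robin across the cluster."""
    if replication_factor > len(brokers):
        raise ValueError("replication factor cannot exceed broker count")
    assignment = {}
    for p in range(num_partitions):
        # Start each partition on a different broker, then wrap around.
        assignment[p] = [brokers[(p + i) % len(brokers)]
                         for i in range(replication_factor)]
    return assignment


print(assign_replicas(num_partitions=3, brokers=[101, 102, 103],
                      replication_factor=2))
# {0: [101, 102], 1: [102, 103], 2: [103, 101]}
```

In this sketch the first broker in each list plays the role of the partition leader, and the rest hold follower replicas.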
A Kafka consumer group includes related consumers with a common task. Kafka sends messages from partitions of a topic to consumers in the consumer group. Each partition is read by only a single consumer within the group at any given time. A consumer group has a unique group-id, and can run multiple processes or instances at once. Multiple consumer groups can each have one consumer read from the same partition. If the quantity of consumers within a group is greater than the number of partitions, some consumers will be inactive.
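The one-consumer-per-partition rule can be sketched with a round-robin assignment. Real assignment is negotiated through Kafka’s group protocol and supports several strategies; the function and names below are illustrative only.

```python
def assign_partitions(partitions, consumers):
    """Deal partitions out to consumers round-robin; each partition
    goes to exactly one consumer in the group."""
    assignment = {c: [] for c in consumers}
    for i, partition in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment


# Four consumers, three partitions: consumer "D" sits idle.
print(assign_partitions([0, 1, 2], ["A", "B", "C", "D"]))
# {'A': [0], 'B': [1], 'C': [2], 'D': []}

# Two consumers, three partitions: consumer "A" handles two partitions.
print(assign_partitions([0, 1, 2], ["A", "B"]))
# {'A': [0, 2], 'B': [1]}
```

The two sample calls correspond directly to the over- and under-provisioned group scenarios discussed later in this article.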
Kafka Internal Architecture in Brief
Assembling the components detailed above, Kafka producers write to topics, while Kafka consumers read from topics. Topics represent commit log data structures stored on disk. Kafka adds records written by producers to the ends of those topic commit logs. Topic logs are also made up of multiple partitions, straddling multiple files and potentially multiple cluster nodes. Consumers can use offsets to read from certain locations within topic logs. Consumer groups each remember the offset that represents the place they last read from a topic.
Partitions of topic logs are distributed across cluster nodes, or brokers, to achieve horizontal scalability and high performance. Kafka architecture can be leveraged to improve upon these goals, simply by utilizing additional consumers as needed in a consumer group to access topic log partitions replicated across nodes. This enables Apache Kafka to provide greater failover and reliability while at the same time increasing processing speed.
Kafka Architecture Advantages
There are many beneficial reasons to utilize Kafka, each of which traces back to the solution’s architecture. Some of these key advantages include:
Scalability and Performance
Kafka offers high-performance sequential writes, and shards topics into partitions for highly scalable reads and writes. As a result, Kafka allows multiple producers and consumers to read and write simultaneously (and at extreme speeds). Additionally, topics divided across multiple partitions can leverage storage across multiple servers, which in turn can enable applications to utilize the combined power of multiple disks.
With multiple producers writing to the same topic via separate replicated partitions, and multiple consumers from multiple consumer groups reading from separate partitions as well, it’s possible to reach just about any level of desired scalability and performance through this efficient architecture.
Kafka architecture naturally achieves failover through its inherent use of replication. Topic partitions are replicated on multiple Kafka brokers, or nodes, with topics utilizing a set replication factor. The failure of any Kafka broker causes an ISR to take over the leadership role for its data, and continue serving it seamlessly and without interruption.
Beyond Kafka’s use of replication to provide failover, the Kafka utility MirrorMaker delivers a full-featured disaster recovery solution. MirrorMaker is designed to replicate your entire Kafka cluster, such as into another region of your cloud provider’s network or within another data center. In this way, Kafka MirrorMaker architecture enables your Kafka deployment to maintain seamless operations throughout even macro-scale disasters. This functionality is referred to as mirroring, as opposed to the standard failover replication performed within a Kafka cluster. For an example of how to utilize Kafka and MirrorMaker, an organization might place its full Kafka cluster in a single cloud provider region in order to take advantage of localized efficiencies and then mirror that cluster to another region with MirrorMaker to maintain a robust disaster recovery option.
Kafka Architecture – Component Relationship Examples
Let’s look at the relationships among the key components within Kafka architecture. Note the following when it comes to brokers, replicas, and partitions:
- Kafka clusters may include one or more brokers.
- Kafka brokers are able to host multiple partitions.
- Topics are able to include one or more partitions.
- Brokers are able to host either zero or one replica for each partition.
- Each partition includes one leader replica, and zero or greater follower replicas.
- Each of a partition’s replicas has to be on a different broker.
- Each partition replica has to fit completely on a broker, and cannot be split onto more than one broker.
- Each broker can be the leader for zero or more topic/partition pairs.
Now let’s look at a few examples of how producers, topics, and consumers relate to one another:
Here we see a simple example of a producer sending a message to a topic, and a consumer that is subscribed to that topic reading the message.
The following diagram demonstrates how producers can send messages to singular topics:
Consumers can subscribe to multiple topics at once and receive messages from them in a single poll (Consumer 3 in the diagram shows an example of this). Because each record carries its topic and partition metadata, the messages that consumers receive can be checked and filtered by topic when needed.
Now let’s look at a producer that is sending messages to multiple topics at once, in an asynchronous manner:
Technically, each send operation from a producer targets a single topic. However, by sending messages asynchronously, producers can functionally deliver multiple messages to multiple topics as needed.
Kafka architecture is built around emphasizing the performance and scalability of brokers. This leaves producers to handle the responsibility of controlling which partition receives which messages. A hashing function on the message key determines the default partition where a message will end up. If no key is defined, the message lands in partitions in a round-robin series.
These methods can lead to issues or suboptimal outcomes, however, in scenarios that involve message ordering or an even message distribution across consumers. To solve such issues, it’s possible to control the way producers send messages and direct those messages to specific partitions. Doing so requires using a custom partitioner, or the default partitioner along with available manual or hashing options.
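A custom partitioner can be as simple as a function mapping a key to a partition. The example below pins a hypothetical "priority" key to a dedicated partition and hashes everything else; the callable shape (key bytes, all partitions, available partitions) mirrors what some clients such as kafka-python accept, but check your client’s documentation before relying on that signature.

```python
PRIORITY_PARTITION = 0

def priority_partitioner(key_bytes, all_partitions, available_partitions):
    """Route messages keyed b"priority" to a dedicated partition;
    hash all other keys across the remaining partitions."""
    if key_bytes == b"priority":
        return PRIORITY_PARTITION
    others = [p for p in all_partitions if p != PRIORITY_PARTITION]
    return others[hash(key_bytes) % len(others)]


partitions = [0, 1, 2, 3]
print(priority_partitioner(b"priority", partitions, partitions))  # 0
assert priority_partitioner(b"user-7", partitions, partitions) != 0
```

This illustrates the trade-off described above: the producer, not the broker, decides placement, so ordering or load-distribution policies live entirely in producer-side code.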
The Value of Consumers in Kafka Architecture
Within Kafka architecture, each topic is associated with one or more partitions, and those are spread over one or more brokers. Each partition is replicated on those brokers based on the set replication factor. While the replication factor controls the number of replicas (and therefore reliability and availability), the number of partitions controls the parallelism of consumers (and therefore read scalability). This is because each partition can only be associated with one consumer instance out of each consumer group, and the total number of consumer instances for each group is less than or equal to the number of partitions. Adding more partitions enables more consumer instances, thereby enabling reads at an increased scale.
As a result of these aspects of Kafka architecture, events within a partition occur in a certain order. Inside a particular consumer group, each event is processed by a single consumer, as expected. When multiple consumer groups subscribe to the same topic, and each has a consumer ready to process the event, then all of those consumers receive every message broadcast by the topic. In practice, this broadcast capability is quite valuable.
The next examples show a few different techniques for beneficially leveraging a single topic along with multiple partitions, consumers, and consumer groups.
In this example, the Kafka deployment architecture uses an equal number of partitions and consumers within a consumer group:
As we’ve established, Kafka’s dynamic protocols assign a single consumer within a group to each partition. This is usually the best configuration, but it can be bypassed by directly linking a consumer to a specific topic/partition pair. Doing so essentially removes the consumer from participation in the consumer group system. While this is unusual, it may be useful in certain specialized situations.
Now let’s look at a case where we use more consumers in a group than we have partitions. This causes some consumers to stand idle. Kafka can make good use of these idle consumers by failing over to them in the event that an active consumer dies, or assigning them to work if a new partition comes into existence.
Next, let’s look at an example of a group that includes fewer consumers than partitions. The result in this example is that Consumer A2 is stuck with the responsibility of processing more messages than its counterpart, Consumer A1:
In our last example, multiple consumer groups receive every event from every Kafka partition, resulting in messages being fully broadcast to all groups:
Kafka’s dynamic protocol handles all the maintenance work required to ensure a consumer remains a member of its consumer group. When new consumer instances join a consumer group, they are also automatically and dynamically assigned partitions, taking them over from existing consumers in the consumer group as necessary. If and when a consumer instance dies, its partition will be reassigned to a remaining instance in the same manner.
Resourcing Consumers and Producers
In developing your understanding of how Kafka consumers operate within Kafka’s architecture and from a resource perspective, it’s crucial to recognize that consumers and producers do not run on Kafka brokers, and instead require their own CPU and IO resources. This resource independence is a boon when it comes to running consumers in whatever manner and quantity is ideal for the task at hand, providing full flexibility with no need to consider internal resource relationships while deploying consumers across brokers. That said, this flexibility comes with responsibility: it’s up to you to figure out the optimal deployment and resourcing methods for your consumers and producers. This is no small challenge and must be considered with care. Leveraging highly scalable and elastic microservices to fulfill this need is one suggested strategy.
Apache Kafka offers a uniquely versatile and powerful architecture for streaming workloads with extreme scalability, reliability, and performance. To learn more about how Instaclustr’s Managed Services can help your organization make the most of Kafka and all of the 100% open source technologies available on the Instaclustr Managed Platform, sign up for a free trial here.