Redis™ Pub/Sub vs Apache Kafka®: Redis Pub/Sub Extras, Use Cases and Comparison With Apache Kafka

This is Part 2 of the Redis Pub/Sub vs Apache Kafka Series. Here Paul Brebner dives into Redis Pub/Sub extras, use cases, and comparison with Kafka.

1. Pattern-Based Subscription With the PSUBSCRIBE Command

Another command allows clients to subscribe to all the channels that match a pattern. Here’s an example of the PSUBSCRIBE command:

PSUBSCRIBE jab*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "jab*"
3) (integer) 1

An important difference to note is the format of the results received by the subscriber: each delivery contains the type of subscription (“pmessage”), the pattern, the matched channel name, and finally the message itself.

For example, if we PSUBSCRIBE at the start of verse 2:

1) "pmessage"
2) "jab*"
3) "jabberwocky"
4) "Beware"

Note that a client can have both normal and pattern subscriptions, and may therefore receive duplicate messages if there is an overlap (this was impossible to test with the Redis CLI client, however).
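
If you'd rather use a client library than the CLI, here's a minimal sketch of the same pattern subscription using the Python redis-py client (the host, port, channel, and pattern are just the examples from above; in practice the publisher would normally be a separate process):

import redis

# Assumes a local Redis server on the default port
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

p = r.pubsub()
p.psubscribe("jab*")  # pattern subscription, equivalent to PSUBSCRIBE jab*

# Normally another client would do this; shown here so the sketch is self-contained
r.publish("jabberwocky", "Beware")

# Pattern matches arrive with type "pmessage" and include the pattern,
# the matched channel name, and the message itself
for msg in p.listen():
    if msg["type"] == "pmessage":
        print(msg["pattern"], msg["channel"], msg["data"])
        break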

2. The PUBSUB Command and Performance

Performance of the musical kind (live streaming replaced audiences during the pandemic, like in the heyday of “Radio Orchestras”). (Source: Shutterstock)

Finally, the PUBSUB command is useful for finding out about channels, e.g.:

  • To list channels:
    • pubsub channels
  • To show the number of subscribers on channels:
    • pubsub numsub [channel [channel ...]]
  • And to show the number of unique patterns that clients are subscribed to (this count is server-wide, not per channel):
    • pubsub numpat
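
As a rough sketch, the same introspection commands are available from the Python redis-py client (the names map one-to-one; the connection details and channel name are assumptions):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

print(r.pubsub_channels())             # PUBSUB CHANNELS: currently active channels
print(r.pubsub_numsub("jabberwocky"))  # PUBSUB NUMSUB: (channel, subscriber count) pairs
print(r.pubsub_numpat())               # PUBSUB NUMPAT: number of unique patterns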

Why does the number of subscribers and patterns matter? Well, because Redis Pub/Sub uses push-based message delivery, it becomes slower to deliver messages with increasing numbers of subscribers and patterns. The documentation says the time complexity of PUBLISH is O(Subscribers+Patterns) per channel (linear time proportional to the sum of subscribers and patterns).

Note that Pub/Sub has no way of increasing the scalability of message processing in a channel by enabling multiple subscribers to share the load. However, you could manually shard the messages by publishing them into different channels (e.g. LewisCarroll.Jabberwocky, LewisCarroll.TheHuntingOfTheSnark, etc.).
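
Here's a minimal sketch of that manual sharding idea (the author/work naming scheme and the redis-py client are illustrative assumptions, not anything Redis enforces):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def publish_line(author: str, work: str, line: str) -> int:
    # Derive a per-work channel so different subscribers can follow different works
    channel = f"{author}.{work}"
    return r.publish(channel, line)  # returns the number of receivers reached

publish_line("LewisCarroll", "Jabberwocky", "Beware the Jabberwock, my son!")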

I also wondered: what happens if subscribers can't keep up? This book indicates that this is indeed a problem if the backlog of messages to be delivered exceeds the available buffer length, but there are configuration options to terminate slow subscribers.

Finally, I noticed one slightly odd thing about the return value of PUBLISH: it was sometimes 0! This was even though I had non-zero subscribers and the message was being delivered to them.

It turns out that in a Redis cluster, only clients that are connected to the same node as the publishing client are included in the count; but the cluster will ensure that all published messages are forwarded to subscribers connected to other nodes. I checked the pubsub commands as well, and they are also limited to reporting on subscriptions and patterns on just the same node.

3. Use Cases for Redis Pub/Sub

Redis Pub/Sub channels can have multiple subscribers, but too many may have a performance impact (unlike real radio which works perfectly for unlimited receivers). (Source: Shutterstock)

What are appropriate use cases for the Redis Pub/Sub “connected” delivery semantics? 

  1. Real-time, low-latency, urgent messages: If messages are short-lived and age rapidly, and are therefore only relevant to subscribers for a short time window (basically “immediately”)
  2. Unreliable delivery/lossy messaging: If it doesn’t matter if some messages are simply discarded (e.g. redundant messages of low importance rather than uniquely critical “business” messages) due to unreliable delivery (failures in the network and subscribers, or failover from master to replicas, may all result in discarded messages)
  3. A requirement for at-most-once delivery per subscriber, i.e. subscribers are not capable of detecting duplicate messages and target systems are not idempotent
  4. If subscribers have short-lived, evolving, or dynamic interest in channels, and only want to receive messages from specific channels for finite periods of time (e.g. mobile IoT devices may only be intermittently connected, and only interested and able to respond to current messages at their location)
  5. If subscribers (and channels and publishers too!) are themselves potentially short-lived
  6. One or more subscribers per channel, but…
  7. There are only a small number of subscribers and patterns per channel

4. Redis Pub/Sub Compared With Apache Kafka


My original plan was to write a blog to compare Redis Streams with Apache Kafka, but having jumped into Redis Pub/Sub first, I thought it worth doing an initial comparison with Kafka here. I'm also not the first person to compare what look like dissimilar fruits (more an “Apples-to-Dragon Fruit” comparison), so I'm not creating a novel precedent. This isn't intended to be an exhaustive comparison; it's just a few things that come to mind, with a focus on whether Kafka can do something similar to Redis Pub/Sub, rather than the full power of Kafka.

So, can Kafka do:

  1. Replicated delivery to multiple subscribers?

Yes. This corresponds to multiple consumer groups in Kafka: a message sent to a Kafka topic with multiple consumer groups is received by one consumer in each group. So to mimic Redis Pub/Sub, where every subscriber receives every message, you would give each subscriber its own consumer group.
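
Here's a minimal sketch with the Python kafka-python client (the topic name, group ids, and broker address are assumptions): giving each subscriber its own group_id means each one receives every message, which is the closest analogue to a Redis Pub/Sub channel with several subscribers.

from kafka import KafkaConsumer

def make_subscriber(group_id: str) -> KafkaConsumer:
    # Each "subscriber" gets its own consumer group, so each receives every message
    return KafkaConsumer(
        "jabberwocky",                       # assumed topic name
        bootstrap_servers="localhost:9092",  # assumed broker address
        group_id=group_id,
        auto_offset_reset="latest",
    )

subscriber_a = make_subscriber("reader-a")
subscriber_b = make_subscriber("reader-b")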

  2. “No key” message delivery?

Redis Pub/Sub messages don’t have a key, just a value (although in Redis the channel is really the key). 

Yes, as keys are optional in Kafka. If there's no key, the producer distributes messages across the topic's partitions (round-robin, or in batches with the newer “sticky” partitioner), and the partitions are in turn shared among the consumers in each group. If there's only one consumer in a group, that consumer gets all the messages.
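
As a quick sketch with kafka-python (the broker address and topic name are assumptions), keyless publishing is just a send() with no key:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")  # assumed broker address

# No key: the producer spreads these messages across the topic's partitions
for line in ["Beware the Jabberwock, my son!", "The jaws that bite, the claws that catch!"]:
    producer.send("jabberwocky", value=line.encode("utf-8"))

producer.flush()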

  3. Unreliable message delivery?

Yes, Kafka can do this as well, as Kafka consumers can choose what offset (or alternatively time) to read from, enabling tricks like replaying the same messages, reliable disconnected delivery from the last read message, skipping messages, etc. Kafka consumers poll for messages, so each time they poll they can choose to read from the next (unread) offset, or alternatively they can skip the unread messages and start reading from the end offset (using seekToEnd()), and only read new messages.

This is certainly not the normal model of operation for Kafka, but it is logically possible and fits several use cases and operational requirements (e.g. if consumers are getting behind they can catch up by skipping messages etc.).
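
For instance, here's a rough kafka-python equivalent of the seekToEnd() trick (the topic, partition, group id, and broker are assumptions): assign the partition manually, jump to the end offset, and only read messages published from then on.

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    group_id="lossy-reader",             # assumed group id
    enable_auto_commit=False,
)

tp = TopicPartition("jabberwocky", 0)    # assumed topic and partition
consumer.assign([tp])                    # manual assignment so we can seek straight away
consumer.seek_to_end(tp)                 # skip the backlog and start at the end offset

for record in consumer:                  # only new messages arrive from here on
    print(record.offset, record.value)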

  4. Low-latency “instant” message delivery?

Redis Pub/Sub is designed for speed (low latency), but only with small numbers of subscribers. Subscribers don't poll, and while subscribed/connected they can receive push notifications from the Redis broker very quickly: in the low milliseconds, even under 1 ms, as confirmed by this benchmark.

Also note that some blogs report that Redis Pub/Sub performance is sensitive to message size—it works well with small messages, but not large ones!

Average Kafka latency is typically in the low tens of milliseconds (e.g. an average producer latency of 15 ms to 30 ms was reported in our partition benchmarking blog).

Kafka also wasn’t designed for large messages, but it can work with reasonably large messages (even up to 1GB), particularly if compression is enabled and potentially in conjunction with Kafka tiered storage.

  5. Throughput?

This blog also points out that to maximize Redis throughput you need to pipeline the producer publish operation, but this will push out the latency, so you can’t have both low latency and high throughput with Redis Pub/Sub. 
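
Here's a minimal sketch of that pipelining trick with redis-py: many PUBLISH commands are queued client-side and sent in one round trip, which raises throughput but means each message waits for its batch (connection details and payloads are illustrative assumptions).

import redis

r = redis.Redis(host="localhost", port=6379)

lines = [f"line {i}" for i in range(1000)]   # illustrative payloads

pipe = r.pipeline(transaction=False)         # plain pipeline, no MULTI/EXEC transaction
for line in lines:
    pipe.publish("jabberwocky", line)        # queued client-side, not yet sent
receiver_counts = pipe.execute()             # all 1,000 publishes sent in one round trip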

Redis is mostly single-threaded, so the only way to improve broker concurrency is by increasing the number of nodes in a cluster. 

On the other hand, Kafka consumers rely on polling (and potentially batching of messages), so latency will potentially be slightly higher, typically tens of milliseconds. However, scalability is better due to Kafka consumer groups and topic partitions, which enable very high consumer concurrency backed by broker concurrency (multiple nodes and partitions).

  6. Durability and Reliability?

And just a reminder that Redis Pub/Sub isn't durable (the channels are in-memory only), but Kafka is highly durable (it's disk-based and has configurable replication to multiple nodes).

Kafka also has automatic failover for consumers in groups: if consumers fail, others take over (but watch out for rebalancing storms). And Kafka Connect enables higher reliability by automatically restarting connector tasks (for some failure modes). Given that Redis Pub/Sub doesn't have the concept of subscriber groups, you are on your own here and would need to handle this differently, perhaps by running Redis subscriber clients in Kubernetes Pods with automatic restarts and scaling, etc.

So what can we do to improve things if we need guaranteed message delivery and better scalability? “Stay tuned” for the next blog in this series when we will take a look at Redis Streams vs. Apache Kafka! 

Get in touch today to learn how Instaclustr can streamline Kafka deployment for your application.
