- Popular
Improving Apache Kafka® Performance and Scalability With the Parallel Consumer: Part 2
In the second part of Improving Apache Kafka® Performance and Scalability With the Parallel Consumer, we continue our investigation with a trace of a “slow consumer” example, how to achieve 1 million TPS in theory, some experimental results, what else we know about the Kafka Parallel Consumer, and finally, whether you should use it in production. (A brief usage sketch follows this entry.)
Paul Brebner, May 04, 2023
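For readers new to the library the post discusses, here is a minimal sketch of what driving the Confluent Parallel Consumer looks like. The class and option names follow the library’s public API at the time of writing (they have shifted between versions), while the broker address, group ID, topic name, and concurrency value are illustrative assumptions, not taken from the post.

```java
import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelConsumerOptions.ProcessingOrder;
import io.confluent.parallelconsumer.ParallelStreamProcessor;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.List;
import java.util.Properties;

public class ParallelConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("group.id", "parallel-demo");           // hypothetical group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // The Parallel Consumer manages offsets itself, so auto-commit must be off
        props.put("enable.auto.commit", "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // KEY ordering gives serial processing per key with parallelism across keys
        ParallelConsumerOptions<String, String> options = ParallelConsumerOptions.<String, String>builder()
                .ordering(ProcessingOrder.KEY)
                .maxConcurrency(100) // arbitrary illustrative value
                .consumer(consumer)
                .build();

        ParallelStreamProcessor<String, String> processor =
                ParallelStreamProcessor.createEosStreamProcessor(options);
        processor.subscribe(List.of("demo-topic")); // hypothetical topic name
        processor.poll(context -> {
            // Each record is handled on the library's internal thread pool
            System.out.println("Processing: " + context.getSingleConsumerRecord().value());
        });
    }
}
```

The appeal, as the series explores, is that a single consumer instance can process many records concurrently rather than being limited to one thread per partition.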
- Popular
Improving Apache Kafka® Performance and Scalability With the Parallel Consumer: Part 1
Apache Kafka® is a high-throughput, low-latency distributed streaming platform. It enables messages to be sent from multiple distributed producers via the distributed Kafka cluster and topics, to multiple distributed consumers. Here’s a photo I took in Berlin of a very old machine that has a similar architecture; I’ll reveal what it does later.
Paul Brebner, April 20, 2023
- Technical
Exploring Karapace—the Open Source Schema Registry for Apache Kafka®: Part 6—Forward, Transitive, and Full Schema Compatibility
This is Part 6 of Exploring Karapace: how does Apache Kafka’s schema registry allow backward, forward, and transitive compatibility? (A small compatibility-configuration sketch follows this entry.)
Paul Brebner, March 23, 2023
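As a taste of what the post covers, here is a hedged sketch of setting a subject’s compatibility level through the Schema Registry REST API that Karapace implements, using Java’s built-in HTTP client. The registry URL and subject name are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CompatibilityConfigSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // PUT /config/{subject} sets the compatibility level for one subject;
        // "avro-test-value" is a hypothetical subject name
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/config/avro-test-value"))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .PUT(HttpRequest.BodyPublishers.ofString(
                        "{\"compatibility\": \"FORWARD_TRANSITIVE\"}"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Transitive levels (e.g. FORWARD_TRANSITIVE, FULL_TRANSITIVE) check a new schema against all registered versions rather than only the latest one.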
- Technical
Exploring Karapace—the Open Source Schema Registry for Apache Kafka®: Part 5—Schema Evolution and Backward Compatibility
So what happens when the unchangeable forms (schemas) meet the inevitability of change? Let’s dip our toes in the water and find out.
Paul Brebner, March 10, 2023
- Technical
Exploring Karapace—the Open Source Schema Registry for Apache Kafka®: Part 4—Auto Register Schemas
In the previous blog, we demonstrated that sending messages via Avro and Karapace from Kafka producers to consumers works seamlessly. What exactly is going on under Karapace’s exoskeleton is perhaps a bit opaque (e.g. the communication between producers, consumers, and Karapace isn’t visible at this level of the code, and the way the record value data is actually serialized and deserialized also isn’t obvious), but so far it just works, which is a good start. Let’s see what happens if we now try to introduce some exception conditions, as this may help us understand the “Kafka Crab’s” auto register settings. (A sketch of the relevant producer setting follows this entry.)
Paul Brebner, February 22, 2023
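For context, auto-registration is controlled on the serializer side. Below is a minimal sketch of the relevant producer properties, with hypothetical broker and Karapace addresses; `auto.register.schemas` is the Confluent serializer setting the post explores.

```java
import java.util.Properties;

public class AutoRegisterSketch {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");           // hypothetical broker address
        props.put("schema.registry.url", "http://localhost:8081");  // hypothetical Karapace endpoint
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        // With auto-registration off, sending a record whose schema is not already
        // registered for the subject fails, instead of silently creating a new version
        props.put("auto.register.schemas", false);
        return props;
    }
}
```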
- Technical
Exploring Karapace—the Open Source Schema Registry for Apache Kafka®: Part 3—Introduction, Kafka Avro Java Producer and Consumer Example
As we saw in Part 1 and Part 2 of this blog series, if you want to use a schema-based data serialization/deserialization approach such as Apache Avro, both the sender and receiver of the data need to have access to the schema that was used to serialize the data. This could work...
Paul Brebner, February 09, 2023
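To give a flavor of what Part 3 walks through, here is a minimal, self-contained sketch of a Kafka Avro producer talking to a schema registry such as Karapace. The schema, topic name, and addresses are invented for illustration; this is not the post’s actual example code.

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");           // hypothetical broker address
        props.put("schema.registry.url", "http://localhost:8081");  // Karapace serves the same API
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", KafkaAvroSerializer.class.getName());

        // A minimal invented schema; the serializer registers it on first send
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Solid\",\"fields\":"
                + "[{\"name\":\"name\",\"type\":\"string\"},{\"name\":\"faces\",\"type\":\"int\"}]}");
        GenericRecord record = new GenericData.Record(schema);
        record.put("name", "tetrahedron");
        record.put("faces", 4);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("solids", "tetrahedron", record)); // hypothetical topic
        }
    }
}
```

The consumer side mirrors this with the matching Avro deserializer, fetching the schema from the registry by ID rather than shipping it with every message.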
- Technical
Exploring Karapace—the Open Source Schema Registry for Apache Kafka®: Part 2—Apache Avro IDL, NOAA Tidal Example, POJOs, and Logical Types
This is the second part of our “Exploring Karapace—the Open Source Apache Kafka Schema Registry” blog series, where we continue to get up to speed with Platonic Forms (aka Schemas) in the form of Apache Avro, which is one of the Schema Types supported by Karapace. In this part we try out Avro IDL, come up with a Schema for some complex tidal data (and devise a scheme to generate a Schema from POJOs), and perfect our Schema with the addition of an Avro Logical Type for the Date field—thereby achieving close to Platonic perfection (but that’s just an idea). (A small logical-type sketch follows this entry.)
Paul Brebner, January 27, 2023
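As a small taste of the logical-type discussion, here is a sketch of building an Avro schema with a `date` logical type using Avro’s Java `SchemaBuilder`. The record and field names are hypothetical stand-ins for the NOAA tidal schema the post develops.

```java
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class TidalSchemaSketch {
    public static void main(String[] args) {
        // Avro's date logical type annotates an int (days since the Unix epoch)
        Schema dateType = LogicalTypes.date().addToSchema(Schema.create(Schema.Type.INT));

        // A hypothetical, cut-down tidal record; the real schema has more fields
        Schema tidal = SchemaBuilder.record("TidalReading").namespace("noaa")
                .fields()
                .requiredString("station")
                .name("date").type(dateType).noDefault()
                .requiredDouble("waterLevel")
                .endRecord();

        System.out.println(tidal.toString(true)); // pretty-print the resulting JSON schema
    }
}
```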
- Technical
Exploring Karapace—the Open Source Apache Kafka® Schema Registry: Part 1—Apache Avro Introduction with Platonic Solids
Although Kafka itself is a schemaless open source platform, Karapace adds support for schemas. The main schema type offered by Karapace is Avro, so in Part 1 we walk through using Apache Avro to serialize and deserialize a Platonic Solids example. (A minimal round-trip sketch follows this entry.)
Paul Brebner, January 18, 2023
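To illustrate the core point that serialization and deserialization both need the schema, here is a minimal Avro round-trip sketch in Java; the `Solid` record is an invented stand-in for the post’s Platonic Solids example.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

import java.io.ByteArrayOutputStream;

public class AvroRoundTripSketch {
    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Solid\",\"fields\":"
                + "[{\"name\":\"name\",\"type\":\"string\"},{\"name\":\"faces\",\"type\":\"int\"}]}");

        GenericRecord cube = new GenericData.Record(schema);
        cube.put("name", "cube");
        cube.put("faces", 6);

        // Serialize: the writer needs the schema...
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(cube, encoder);
        encoder.flush();

        // ...and so does the reader: this is exactly the gap a schema registry fills
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord decoded = new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
        System.out.println(decoded);
    }
}
```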
- Technical
Apache Kafka® KRaft Abandons the Zoo(Keeper): Part 3—Maximum Partitions and Conclusions
In this final part of the Kafka KRaft blog series we try to answer the final question (to Kafka, KRaft, Everything!) that has eluded us so far: Is there a limit to the number of partitions for a cluster? How many partitions can we create? And can we reach 1 million or more partitions?! ...
Paul Brebner, January 01, 2023
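As a rough illustration of the kind of experiment the post describes, here is a sketch that asks Kafka’s `AdminClient` to create a topic with a large number of partitions; the partition count, replication factor, topic name, and address are arbitrary assumptions, and cluster limits determine how far this can actually be pushed.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class ManyPartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical cluster address

        try (AdminClient admin = AdminClient.create(props)) {
            // One topic with 10,000 partitions; how cluster metadata is managed
            // (KRaft vs. ZooKeeper) is what bounds experiments like this
            admin.createTopics(List.of(new NewTopic("partition-stress", 10_000, (short) 3)))
                 .all().get(); // block until the controller accepts or rejects the request
        }
    }
}
```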