Recently, I needed to work my way through the details of how batch transactions are processed in Cassandra and also how they affect exposed metrics.  This article outlines the workflow of submitting a batch via the Java Cassandra driver and will hopefully be of use to others interested in this process.

This article summarises the process that Cassandra uses to action a BATCH statement (either logged or unlogged). It also details how BATCH transactions will affect exposed metrics (e.g. WriteLatency count).


  1. User submits a BATCH transaction to a coordinator node

    If performed using CQLSH, the coordinator is whichever node CQLSH connects to.  If using the Java driver with SimpleStatement objects to populate the batch, a routing key cannot be determined automatically and must be calculated and set manually.  If using PreparedStatements, however, then “the first non-null keyspace is used as the keyspace of the batch, and the first non-null routing key as its routing key” (i.e. the driver assumes all statements in the BATCH affect a single partition).
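    For illustration, a batch of the kind step 2 describes (two statements against the same partition, with id as the partition key and time as a clustering column — the table and column names here are assumptions) might look like:

```cql
BEGIN BATCH
  INSERT INTO test1.test1 (id, time, value) VALUES (1, '2016-01-01 00:00:00', 'a');
  INSERT INTO test1.test1 (id, time, value) VALUES (1, '2016-01-01 00:00:01', 'b');
APPLY BATCH;
```

    Both statements share the partition key id = 1, so they target the same partition.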

  2. Batch statements converted to Mutations
    Within a batch, queries for the same partition key are rolled up into a single Mutation (e.g. in the example above, assuming id is the primary key and time is the clustering key, both statements would be combined into a single Mutation).  This also provides row-level isolation for multiple queries affecting the same partition.  For metrics, this means that the replica nodes will record a single local write per partition Mutation (for the partitions that they manage).
  3. Logged batch
    Logged batches provide atomicity: all Mutations in the batch will be retried until the entire batch has completed successfully.  This comes at a performance cost on the coordinator (and potentially the backup batchlog nodes).

    1. Coordinator sends blob (group of Mutations) to up to 2 other nodes
      Once the query statements have been parsed into Mutations, the coordinator sends that blob as a batchlog record to up to 2 other nodes.  Cassandra will prefer that these nodes are in different racks to the coordinator, but within the same DC.  These batchlog records are written to the system.batches table (system.batchlog is the legacy name) on those nodes.  The system keyspace uses LocalStrategy, so its contents are individual to each node.  In my testing, for a given coordinator, Cassandra always picked the same nodes to store the batchlog backup.  The selection process is randomised (see EndpointFilter in BatchlogManager); however, out of 6 BATCHes, the same nodes were chosen each time.
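      While a logged batch is in flight, the backup records can be inspected directly on the nodes holding them.  The column names below are from the 3.x schema of system.batches as I remember it, so verify against your version:

```cql
-- Run against a node holding a batchlog backup; system uses LocalStrategy,
-- so the results differ per node and are usually empty once batches complete.
SELECT id, version, mutations FROM system.batches;
```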
    2. Coordinator processes each Mutation
      Once the backup nodes have received the batchlogs, the coordinator actually starts processing each Mutation.
    3. Coordinator deletes batchlog record
      Finally, the coordinator deletes the batchlog entry after all the Mutations have been successfully processed (either replicas have acknowledged or hints have been written).
    4. If a statement in the BATCH fails
      A hint is written, but the BATCH itself is not “failed”.
    5. If the coordinator goes down, after writing batchlog
      Because up to 2 other nodes have the batchlog, they will run the Mutations contained in the batchlog record 10 seconds after it was created (i.e. they wait long enough that the coordinator should have actioned the batch; if the coordinator hasn’t deleted the batchlog record within this time, then something must have gone wrong, so it is re-run).  This relies on the statements within the BATCH being idempotent (timestamps are critical to this).
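      One way to guarantee idempotent replay is to supply an explicit client-side timestamp, since a replayed mutation carrying the same timestamp simply overwrites itself.  A sketch (table and column names assumed, as above):

```cql
BEGIN BATCH
  INSERT INTO test1.test1 (id, time, value) VALUES (1, '2016-01-01 00:00:00', 'a')
    USING TIMESTAMP 1451606400000000;  -- microseconds since epoch
APPLY BATCH;
```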
  4. Unlogged batch
    1. Coordinator processes each Mutation
      Unlike with logged batches, the coordinator skips writing backup batchlogs to other nodes and moves directly to processing each Mutation.


As of version 3.7, the only batch-specific metric exposed is org.apache.cassandra.db:type=BatchlogManager/TotalBatchesReplayed/Count.

However, processing a BATCH will also affect the following metrics:

  • The coordinator node will increment org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency/Count by 1.  This is regardless of the number of Mutations actually generated by the BATCH or how many nodes the coordinator has to coordinate.
  • Each replica node (including the coordinator, if relevant) will increment org.apache.cassandra.metrics:type=Table,keyspace=<keyspace>,scope=<table>,name=WriteLatency for each Mutation that it processes (i.e. each partition that that replica is responsible for).
  • For a logged batch, there will also be corresponding local WriteLatency increments for the batches table, for each node that stores a copy of the batchlog.


Given the following schema:
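(The schema listing did not survive formatting.  A schema consistent with the metric names and the nodetool command below — keyspace test1, table test1, id as partition key and time as clustering key; the replication factors and value column are assumptions — would be:)

```cql
CREATE KEYSPACE test1
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 1};

CREATE TABLE test1.test1 (
  id    int,
  time  timestamp,
  value text,
  PRIMARY KEY (id, time)
);
```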

and the following cluster:

  • DC1
    • (will use this as the coordinator)
  • DC2

and the following token assignments (retrieved by running ccm node1 nodetool getendpoints test1 test1 <partition key>):

  • 1 =,
  • 2 =,
  • 3 =,
  • 4 =,
  • 5 =,

Running a batch (logged or unlogged) for a single partition key on the coordinator will increment the following metrics (one column per node, showing the change to each metric’s Count):

    org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency              +1   0
    org.apache.cassandra.metrics:type=Table,keyspace=test1,scope=test1,name=WriteLatency  +1  +1

Running a batch for multiple partition keys, managed by the coordinator and other nodes, will increment:

    org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency               0  +1   0   0
    org.apache.cassandra.metrics:type=Table,keyspace=test1,scope=test1,name=WriteLatency  +2  +1  +2  +1

Use Cases

The general consensus is that unlogged batches of multiple queries affecting the same partition, routed to a replica node as the coordinator, may provide performance gains over individual async queries.
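In practice that means something like the following (same assumed table as earlier), ideally executed with a token-aware routing policy so the coordinator is a replica for id = 1:

```cql
BEGIN UNLOGGED BATCH
  INSERT INTO test1.test1 (id, time, value) VALUES (1, '2016-01-01 00:00:00', 'a');
  INSERT INTO test1.test1 (id, time, value) VALUES (1, '2016-01-01 00:00:01', 'b');
APPLY BATCH;
```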

For this reason, Cassandra does not apply the BATCH size warning and failure thresholds (batch_size_warn_threshold_in_kb / batch_size_fail_threshold_in_kb) to batches that evaluate to a single mutation.
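These thresholds live in cassandra.yaml; the values below are the 3.x defaults as I remember them, so check your own configuration:

```yaml
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
```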

