Benchmarking Cassandra on the Google Cloud Platform (GCP)

Instaclustr’s GCP support has been available for several months now, and we have reached the point where we are confident in publishing benchmarks of the performance we are seeing on GCP.

Instaclustr’s availability on the Google Cloud Platform means that customers wanting to take advantage of the unique and compelling features of that platform now also have the option of using a proven, robust and secure Apache Cassandra Managed Service. By having Instaclustr take care of managing their Cassandra, Elassandra, Kibana and Spark clusters, GCP users can:

  • Automatically provision new Cassandra, Elassandra, Kibana and Spark clusters of any size in minutes and use automated processes to scale these clusters through adding nodes and data centres;
  • Be assured they are using well-tested and tuned configurations for the GCP environment;
  • Have access to the monitoring tools that application owners need to understand the performance of their managed environments; and
  • Most importantly, have their cluster looked after by Instaclustr’s world-class 24×7 technical operations team (including regular maintenance tasks such as backups and repairs, responding to monitoring alerts and answering your queries).

GCP testing followed our standard benchmark procedure:

  1. Insert data to fill disks to ~30% full.
  2. Wait for compactions to complete and any disk burst credits to regenerate. (The burst-credit step comes from our standard AWS/EBS procedure; GCP persistent disks provision throughput by disk size rather than through burst credits.)
  3. Run a 2-hour test with a mix of 10 inserts : 10 simple queries : 1 range query. Quorum consistency was used for all operations. (An illustrative cassandra-stress invocation for this mix is sketched below.)
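
To give a concrete picture of the workload, the sketch below shows how a 10:10:1 mix at QUORUM could be expressed with the cassandra-stress tool that ships with Cassandra. The schema, query definitions, node address and thread count are illustrative assumptions, not the exact profile used in our benchmark:

    # stress-profile.yaml -- illustrative profile only; schema and queries
    # are assumptions, not the definitions used in the published benchmark.
    keyspace: stresscql
    keyspace_definition: |
      CREATE KEYSPACE stresscql
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
    table: benchmark
    table_definition: |
      CREATE TABLE benchmark (
        id uuid,
        bucket int,
        payload blob,
        PRIMARY KEY (id, bucket)
      );
    queries:
      simple:
        cql: SELECT payload FROM benchmark WHERE id = ? AND bucket = ? LIMIT 1
        fields: samerow
      range:
        cql: SELECT payload FROM benchmark WHERE id = ?
        fields: samerow

    # Run the 10 insert : 10 simple : 1 range mix at QUORUM for 2 hours
    # (node address and thread count are placeholders).
    cassandra-stress user profile=stress-profile.yaml \
      "ops(insert=10,simple=10,range=1)" cl=QUORUM duration=2h \
      -node 10.0.0.1 -rate threads=200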

As with any generic benchmarking, results for different data models or applications may vary significantly from the benchmark. However, we have found this workload to be a good test that models a range of use cases and provides a sound basis for comparing relative performance.

The following table summarises the results we found in running this benchmark on GCP:

Instaclustr Offering Size   Ops/Sec   Read (single row)          Write
                                      mean (ms)    95%ile (ms)   mean (ms)    95%ile (ms)
n1-highmem-4-800            2,511     15.8         34.6          3.1          7.3
n1-standard-4-800           1,503     30.1         85.9          1.6          2.5

A few observations on these results:

  • The higher write latency on the n1-highmem results likely indicates that the instance was under relatively heavier write load; a write latency similar to the n1-standard’s could be achieved by backing off the ops/sec load slightly.
  • Conversely, the relatively high read latency of the n1-standard shows that it was under a relatively heavy read workload, which could likewise be improved by reducing ops/sec.
  • The same ratio of read/write operations was used in both cases; however, the n1-highmem sustained a relatively higher write load and lower read load. This demonstrates the impact of the higher available memory: the additional memory is used for O/S I/O caching, which reduces the overall processing cost of each read. The JVM heap size was configured identically in both tests (see the sketch after this list).
  • Several runs were undertaken for each instance size. The results presented are from the single run representing the median (ops/sec) result; variation in ops/sec between runs was on the order of ±10%.
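
On the heap point above: Cassandra’s JVM heap is set in conf/jvm.options (or cassandra-env.sh on older versions). A minimal sketch, assuming an illustrative fixed 8 GB heap rather than the tuned values used in these tests:

    # conf/jvm.options -- pin min and max heap to the same size so both
    # instance types run an identical JVM; 8G is an assumed figure.
    -Xms8G
    -Xmx8G

Memory above the heap is left to the operating system’s page cache, which is precisely the resource the n1-highmem instance has more of when serving reads.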

These results compare well to previous benchmarking we have published on AWS: