Instaclustr has updated its Cassandra on AWS infrastructure offerings to include EBS-backed m4.xlarge instances. Over 100 hours of testing and tuning has demonstrated that these nodes, using EBS, provide substantial price/performance benefits for many use cases.
It’s traditional wisdom that Cassandra and AWS EBS don’t mix. However, with the release of the latest generation of EBS-optimised instances, we started to hear that people were having success using these nodes to run Cassandra. In particular, we’d like to acknowledge a presentation from CrowdStrike at Cassandra Summit and Al Tobey’s Cassandra 2.1 Tuning Guide.
We first started investigating these instance types as a potential offering for customers with large amounts of data but relatively low throughput requirements. However, once we started testing we quickly realised that these new instance types offer better price/performance for many use cases.
We spent over 100 hours of engineering effort benchmarking and tuning these instance types, particularly in I/O intensive scenarios. We have also been running EBS-based nodes in our internal monitoring cluster for over a month and have conducted trials with several customers (one of which caused us to re-examine some approaches).
The FAQ that follows provides more detail on this new offering. Should you have follow-up questions, or if you’re an existing customer interested in investigating this offering, then contact email@example.com.
[COLLAPSE title=”Why has Instaclustr introduced EBS based nodes?”]AWS have made significant improvements in EBS performance in recent months. We conducted benchmarking and realised that EBS-based nodes would:
- Provide improved price/performance against most of our current node sizes for many use cases.
- Allow a wider range of storage to processing capacity ratios, so customers can choose a fit for their use case (resulting in large savings where there was not a good fit against current offerings).
[COLLAPSE title=”What is EBS and how is it different to Instaclustr’s current offering?”]EBS, or Elastic Block Store, is Amazon’s attached storage offering, which allows storage to be provisioned independently of the compute instance and then attached. Previously, this meant that storage traffic shared bandwidth with general network traffic. With the new AWS instance types, EBS uses dedicated interfaces to avoid contention. Amazon offers three types of EBS – General Purpose SSD, Provisioned IOPS and Magnetic. Instaclustr uses General Purpose SSD, as we have found it to be the best value for money for Cassandra usage and to provide performance equal to Provisioned IOPS in many cases.
[COLLAPSE title=”Why did Instaclustr choose the m4.xlarge instance type for this offering?”]We benchmarked a range of c4 and m4 instance sizes (the two current EBS-optimised classes) and determined that the m4 was the best fit for Cassandra usage. The m4s have slightly slower CPUs than the c4s but more memory, which can be used for caching, reducing the IO pressure that is the limiting performance factor in many use cases. We chose the m4.xlarge as it has a higher IO bandwidth to CPU ratio than the m4.2xlarge. We also believe that smaller nodes offer manageability advantages in many clusters (for example, a smaller percentage loss of processing power when a single node is down) and provide customers with a more fine-grained ability to add capacity when required.
[COLLAPSE title=”How do m4 instances compare to current offerings?”]For most use cases, two of the m4 “balanced” nodes will offer equal or better performance than an i2.2xlarge node, with around 30% cost saving. (See below for more detailed benchmarking.)
[COLLAPSE title=”Will Instaclustr continue to offer non-EBS instances?”]Yes. There are some use cases for which local, SSD-based instances are still the best value for money. See the table below for details.
[COLLAPSE title=”Does Instaclustr offer custom EBS sizes?”]No. EBS cost is a small proportion of the overall cost of a node. For manageability, we stick to a specified set of sizes whose performance characteristics we know well.
[COLLAPSE title=”Does Instaclustr offer provisioned IOPS instances?”]Not at this time. Our benchmarking does not demonstrate a value-for-money proposition for using provisioned IOPS with Cassandra. If you believe you have a scenario that warrants this then please contact us and we’d be happy to discuss.
[COLLAPSE title=”What is EBS bursting and what is its impact?”]General Purpose SSD EBS includes burst credits. These allow a volume to operate well above its baseline IO capacity for a short period. We have configured our nodes to take maximum advantage of this, and in many cases burst capacity will last for several hours. Once it is exhausted, it may take days of low activity to refill. When capacity planning, you should only use baseline capacity for normal load. Burst capacity should be reserved for unexpected load peaks and Cassandra background operations such as repairs and compactions. For more information see: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_gp2.
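As a back-of-envelope illustration of how burst credits behave, here is a minimal sketch in Python using the gp2 figures from the AWS documentation linked above (3 IOPS/GB baseline, 3,000 IOPS burst, a 5.4 million credit bucket). The model is a simplification for capacity-planning intuition, not a precise simulation:

```python
# Rough model of General Purpose SSD (gp2) burst credits, per the AWS docs:
# baseline of 3 IOPS per GB (minimum 100, capped at 3,000), burst rate of
# 3,000 IOPS, and a bucket of 5.4 million I/O credits refilled at baseline rate.

BURST_IOPS = 3000
BUCKET_CREDITS = 5_400_000

def baseline_iops(volume_gb):
    """Baseline performance: 3 IOPS/GB, floored at 100 and capped at 3,000."""
    return min(max(3 * volume_gb, 100), 3000)

def burst_seconds(volume_gb, demand_iops=BURST_IOPS):
    """How long a full credit bucket lasts under sustained demand above baseline."""
    base = baseline_iops(volume_gb)
    if demand_iops <= base:
        return float("inf")  # demand within baseline: credits never drain
    return BUCKET_CREDITS / (demand_iops - base)

# An 800GB volume (the "balanced" size) has a 2,400 IOPS baseline, so even
# bursting flat-out at 3,000 IOPS it only drains credits at 600 per second:
print(burst_seconds(800) / 3600)  # → 2.5 hours of full-rate burst
```

This is why the larger volumes can sustain bursts for hours: the closer the baseline is to the 3,000 IOPS burst ceiling, the slower the bucket drains.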
[COLLAPSE title=”What instances sizes does Instaclustr offer and what is the use case for each?”]
|Node size||Disk||Use case|
|m4.l – tiny||250GB||Smallest available production node. Use this when getting started. We recommend scaling up to m4.xl rather than scaling out with more m4.large instances.|
|m4.xl – small||400GB||Step up from m3.xlarge when more disk is required. Starting point for smaller users not ready for the m4 balanced offering (lower performance, as smaller disks provide fewer IOPS). Upgrade as you grow.|
|m4.xl – balanced||800GB||Best balance of space and performance. Suggested standard building block for most clusters.|
|m4.xl – bulk||1600GB||Lowest cost bulk storage for low read ops use cases.|
| || ||Proven performer – good balance of space and performance. Basis of most of our largest production clusters. Will provide better performance than m4 based nodes for very read-heavy use cases.|
| || ||Lowest cost read performance with relatively small data volumes. Build a cluster with these for extremely high performance to data ratios.|
| || ||May provide a low cost entry point for some use cases. Has higher throughput than an m4.l – tiny and lower cost (but a much smaller disk) than an m4.xl – small.|
Note: we have discontinued the i2.xlarge offering, as the m4-based nodes offer better value for money.
[COLLAPSE title=”Can you provide more detailed benchmarking results?”]The following tables provide a summary of key benchmarking results.
I/O Heavy Read and Write – Maximum Throughput
In these tests we aimed to achieve maximum throughput for read and writes with sufficient data to require significant reading from disk.
|Scenario||Operation||3 nodes i2.2xl||6 nodes m4.xlarge|
|Medium||Write||1,363 C* op/sec||1,331 C* op/sec|
|Medium||Read||1,818 C* op/sec||1,802 C* op/sec|
|Tiny||Write||36,343 C* op/sec||43,640 C* op/sec|
|Tiny||Read||24,001 C* op/sec||8,654 C* op/sec|
- Testing was done with a standard C* configuration
- Medium read/write based on table with 32 columns of 2kb each (~64kb per row)
- Tiny read/write is default cassandra-stress schema (< 0.5kb per row)
- Latency results were similar for both instance types (and quite high at these throughput levels)
- Better write performance can be achieved by increasing memtable_flush_writers and concurrent_writes (especially on the i2.2xlarges), and better read throughput by increasing concurrent_reads and setting HEAP_NEWSIZE to at least a quarter of the total heap.
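To make those knobs concrete, here is an illustrative fragment (the values are examples only, not our production configuration or a recommendation; memtable_flush_writers, concurrent_writes and concurrent_reads live in cassandra.yaml):

```yaml
# cassandra.yaml – illustrative values, to be tuned per node hardware
memtable_flush_writers: 4   # more writers keep flushes from backing up on fast storage
concurrent_writes: 64       # common rule of thumb is roughly 8 x CPU cores
concurrent_reads: 64        # raise when reads queue behind disk I/O
```

HEAP_NEWSIZE is set separately in conf/cassandra-env.sh (for example, HEAP_NEWSIZE="2G" against an 8G heap, matching the quarter-of-heap guideline above).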
I/O Heavy Read and Write – Latency
In these tests we aimed to read and write at throughput levels well within processing capacity, to test relative latency.
|Scenario||Operation||3 nodes i2.2xl||6 nodes m4.xlarge|
As the variance in these scenarios shows, actual performance will depend significantly on your data model and application. We highly recommend benchmarking your particular scenario to determine performance characteristics for capacity planning.
[COLLAPSE title=”What is the impact of using EBS on Cassandra’s availability?”]The EBS control plane is now split amongst availability zones, which matches the way we perform replica placement. This means EBS can suffer a complete outage in a single availability zone and your EBS-backed Cassandra cluster will still offer strongly consistent reads and writes.
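The arithmetic behind that claim can be sketched as follows. This is a hedged illustration assuming the common case of replication factor 3 with one replica placed per availability zone; the function names are ours, not Cassandra’s:

```python
# Why QUORUM survives a full availability-zone outage: with RF=3 and replicas
# spread one-per-AZ, a quorum needs floor(3/2) + 1 = 2 replicas, and two of
# the three AZs are still up after a single-AZ failure.

def quorum(rf):
    """Replicas that must respond for a QUORUM read or write."""
    return rf // 2 + 1

def quorum_survives(rf=3, zones=3, zones_down=1):
    """With rf replicas spread evenly across zones, is QUORUM still reachable?"""
    reachable = rf - zones_down * (rf // zones)
    return reachable >= quorum(rf)

print(quorum_survives())              # True: one AZ down, 2 of 3 replicas left
print(quorum_survives(zones_down=2))  # False: two AZs down leaves only 1 replica
```

In other words, quorum reads and writes tolerate the loss of any single zone, which is exactly the failure mode a zonal EBS outage produces.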