Apache Cassandra Deployed on Private and Public Networks


Since we started our managed Cassandra service, we have had a number of customers wanting to have some of their applications communicate with Cassandra using a private network, while other applications communicate using the public network. While this isn’t an uncommon request, the solution and its limitations are not well documented.

In this blog post we are going to quickly review how applications establish a connection to a Cassandra cluster, then look at the problem of using both the public and private networks, and finally present a solution that lets applications communicate with Cassandra efficiently using both private and public IPs.

In the following explanation, the example code, written in Java, was run against a 3-node Cassandra 2.2.5 cluster, using the Datastax driver 2.x. We will assume that each node has two network interfaces: a public one (public IP) and a private one (private IP), the latter typically used by applications in a VPC peered network, as can be done in our offering (see https://support.instaclustr.com/hc/en-us/articles/203559854-Using-VPC-Peering-AWS-).

Establishing a connection to Cassandra

An application that tries to connect to Cassandra is given a list of initial endpoints. In this example, we give it two endpoints:
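A minimal sketch of what this looks like with the Datastax driver 2.x (the driver must be on the classpath); the two IP addresses are hypothetical placeholders for the public IPs of two of the cluster's nodes:

```java
// Hypothetical public IPs of two cluster nodes, used as initial contact points
Cluster cluster = Cluster.builder()
        .addContactPoints("52.0.0.1", "52.0.0.2")
        .build();
Session session = cluster.connect();
```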


With the Cassandra Datastax driver, the application will try to establish an initial connection to each of the endpoints provided, one at a time, in a random order, until a successful connection is established or until all endpoints have been tried unsuccessfully.

When the driver successfully establishes a connection to one of the nodes, that node sends back the list of IPs of all the other nodes in the cluster. This is part of the node discovery process. As a result, when executing queries, the driver will use this list of IPs, as well as the IP used to initiate the connection (one of the IPs specified in addContactPoints). The list of IP addresses returned by the node is constructed from the cassandra.yaml parameter broadcast_rpc_address configured on each node of the cluster.

Private or Public IP?

broadcast_rpc_address can be set to the public IP of the node, or to its private IP. Unfortunately, there is currently no way to configure it to conditionally broadcast the list of public IPs when a connection is established via the public interface, and the list of private IPs when a connection is established via the private interface.
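For instance, a node whose private interface uses the (hypothetical) address 10.0.0.1 would carry, in its cassandra.yaml:

```yaml
# Clients will be told to reach this node at its private IP
broadcast_rpc_address: 10.0.0.1
```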

For example – and we are going to work with this hypothesis – when an application connects via the public network to a cluster whose nodes have broadcast_rpc_address set to their private IPs, the application will receive back a list of private IPs. This is not very useful (unless the application happens to be on the same private network, but then, why use a public IP in the first place?). The application will still work, but in a degraded mode: it will only be able to communicate with the cluster via a single node, the one used to establish the initial successful connection during node discovery. This single node will serve all read and write queries as the only coordinator, so there will be no benefit from the driver’s built-in load balancing and token aware policies. This single node will be under more stress than the rest of the cluster, network communication will be sub-optimal and latency higher, the application will not be able to discover new nodes and/or new data centers when the cluster changes topology, and this single node will be… a single point of failure for the app. Not so great….

Private and Public IP!

Thankfully, the Datastax Cassandra driver provides an interface: AddressTranslater (renamed AddressTranslator in driver versions 3.0 and later). Implementing this interface requires implementing the method:
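In driver 2.x, the interface (in com.datastax.driver.core.policies) exposes this single method:

```java
InetSocketAddress translate(InetSocketAddress address);
```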



which is meant to translate the InetSocketAddresses corresponding to the IP addresses found during node discovery. It then becomes easy to build a map that translates each private IP address to the corresponding public one, as is shown in the following code:
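Here is a sketch of such a translater. Since the driver classes are not needed to show the mapping logic itself, a minimal stand-in for the 2.x AddressTranslater interface is declared; with the real driver you would implement com.datastax.driver.core.policies.AddressTranslater and register it with Cluster.builder().withAddressTranslater(...). All IP addresses below are hypothetical placeholders for the cluster's private and public IPs:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for com.datastax.driver.core.policies.AddressTranslater,
// declared here only so the sketch is self-contained.
interface AddressTranslater {
    InetSocketAddress translate(InetSocketAddress address);
}

// Translates each node's private address to its public address.
// The mapping is hard coded and static; hypothetical IPs, port 9042 assumed.
class PrivateToPublicAddressTranslater implements AddressTranslater {
    private static final Map<InetSocketAddress, InetSocketAddress> PRIVATE_TO_PUBLIC = new HashMap<>();
    static {
        PRIVATE_TO_PUBLIC.put(new InetSocketAddress("10.0.0.1", 9042), new InetSocketAddress("52.0.0.1", 9042));
        PRIVATE_TO_PUBLIC.put(new InetSocketAddress("10.0.0.2", 9042), new InetSocketAddress("52.0.0.2", 9042));
        PRIVATE_TO_PUBLIC.put(new InetSocketAddress("10.0.0.3", 9042), new InetSocketAddress("52.0.0.3", 9042));
    }

    @Override
    public InetSocketAddress translate(InetSocketAddress address) {
        // Return the public address for known nodes; fall back to the
        // untranslated address for any node not in the map.
        InetSocketAddress publicAddress = PRIVATE_TO_PUBLIC.get(address);
        return publicAddress != null ? publicAddress : address;
    }
}
```

With the real driver, the translater would be passed to the builder at connection time, e.g. Cluster.builder().addContactPoints(...).withAddressTranslater(new PrivateToPublicAddressTranslater()).build().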



And that’s it! The application will now be able to communicate with all the nodes in the cluster: no more single point of failure, no more unbalanced communication. Note that in the example code above the mapping is hard coded and static. New nodes that join the cluster will not get their addresses translated, which is not ideal if the application is a very long running one. A simple improvement could be to load the mapping from a file that is updated when new nodes are added. Fancier solutions can rely on reverse DNS lookup, following a pattern similar to the EC2MultiRegionAddressTranslator implementation of AddressTranslator.

You can find the full code of a simple Java application using the AddressTranslater of the Datastax driver 2.x (some adjustments are necessary for 3.x) on our GitHub page: https://github.com/instaclustr/sample-CassandraDriverAddressTranslater