The last time I wrote on NoSQL databases, in February 2011, the technology was already booming. Today, these databases have changed the way developers think about building their applications, pushing them to look beyond RDBMS back-ends when handling data on a massive scale. Data models that were earlier impractical with conventional databases are now possible with NoSQL databases and clustering. One such NoSQL database is Cassandra, which Facebook donated to Apache in 2008.
Cassandra’s most enticing and central feature is that it is decentralised, with no single point of failure. It is a column-oriented database whose distributed design was inspired by Amazon’s Dynamo. The decentralised design makes it resilient to almost any outage that affects part of the cluster, while the column family-based design, which resembles Google’s BigTable, allows for richer, more complex data models. This amalgamation of features from Dynamo and BigTable has helped Cassandra evolve into a top-notch choice for production environments in various organisations, including the place where it was created: Facebook.
Another concept important to Cassandra is eventual consistency, which is increasingly discussed in the context of Brewer’s CAP theorem, covered in my earlier article. Eventual consistency, as its name suggests, offers huge performance benefits by assuming that consistency does not need to be guaranteed immediately at every point in the database, and can be relaxed to some extent. Cassandra achieves this through a tunable consistency model: a consistency level is specified with each operation, and the operation is deemed successful even if the data has not yet been written to all replicas.
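For example, in recent versions of cqlsh, the consistency level for subsequent operations can be set with the CONSISTENCY command; the table and key below are hypothetical:

CONSISTENCY QUORUM;
SELECT * FROM users WHERE user_id = 'jsmith';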
Architecture
Cassandra builds its architecture from so many components that it is difficult to go through all the bits and pieces without missing something; the terminology discussed here is what provides the clearest insight into the inner workings of this database. Cassandra’s architecture leans heavily towards avoiding any single point of failure in the cluster, so that the maximum amount of data remains accessible even when part of the cluster fails. It uses techniques resembling peer-to-peer networking to achieve a failure-tolerant data distribution model. Hence, no single node in a Cassandra cluster can be termed a master of the others, and coordination among nodes is achieved with the help of the Gossip failure-detection protocol, which determines the availability of each node in the cluster. Gossip is managed by the Gossiper present on each node, which periodically initiates gossip exchanges with random nodes to check their availability. As a result, each node performs the same functions as the others, and there are no designated roles for a particular function.
Each node in Cassandra is part of a ring, which is how the topology of a Cassandra cluster is represented. Each node in the cluster is assigned a token, which determines the portion of the data for which it is responsible. How data is assigned to each node is decided by the partitioner, which distributes row keys according to the chosen partitioning strategy. The default strategy is random partitioning, which uses consistent hashing to spread row keys evenly across the ring. Another available strategy is the byte-ordered partitioner, which sorts row keys according to their raw bytes. AntiEntropy is then used to synchronise the replicas of the data to the newest version by periodically comparing checksums. As in Dynamo, Merkle trees are used to implement AntiEntropy, though in a slightly different way; for more details, you could read the respective documentation.
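As an illustration, the partitioner and a node's ring position are configured in conf/cassandra.yaml; the following excerpt uses option names from the 1.x series, with example values:

# Maps row keys to tokens on the ring; RandomPartitioner is the
# consistent-hashing default, while ByteOrderedPartitioner preserves
# the raw-byte ordering of keys.
partitioner: org.apache.cassandra.dht.RandomPartitioner
# Pins this node's position on the ring; if left blank, a token is
# picked automatically when the node first starts.
initial_token: 0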
Reads and writes
When a node receives a read request from a client, it first checks the consistency level specified in the request, which determines the number of replicas that need to be contacted. Once the replicas respond with the requested data, the versions are compared to find the most recent one. If the specified consistency level is a weaker one, the latest revision is reported back immediately, and out-of-date replicas are updated afterwards. If it is one of the stronger consistency levels, a read-repair operation is performed first, after which the data is reported back.
In the case of a write operation, the consistency level again determines the number of nodes that must acknowledge success before the whole operation is deemed successful. If the nodes required for consistency are unavailable, mechanisms like hinted handoff ensure the write reaches them whenever they come back online. The complete flow of a write involves components like the commit log, memtables and SSTables. The commit log is the first failure-protection mechanism: the operation is written to it so that the data can be recovered in case of a crash. The memtables then act as an in-memory store where all the data is updated, until it is flushed to disk in the form of SSTables. Compaction is then performed periodically to merge SSTables into a single file and discard obsolete data.
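The tail end of this flow can also be triggered by hand with the standard nodetool utility; the keyspace name here is just an example:

# Flush the memtables of keyspace 'Test' to disk as SSTables
bin/nodetool -h localhost flush Test
# Merge that keyspace's SSTables into fewer, larger files
bin/nodetool -h localhost compact Test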
The data model
Cassandra’s data model is column-family-based, which superficially resembles the table-based design of relational databases. The model, however, works in a fundamentally different manner from tables with a fixed set of columns. In Cassandra, the database schema does not need to be fixed before the application is used, and it can actually be updated on the fly without any problems on the server. In most cases, the client can arbitrarily determine the number of columns and the metadata it wants to store in a particular column family. This gives the application far more flexibility, and results in rows that vary in size and in the number of columns.
Keyspaces define the namespace for each application’s column families; hence, they can be defined per application, or according to whatever schema design is required. Column families come in two types: static and dynamic. A static column family has predefined columns, in which a value may or may not be stored. Dynamic column families, on the other hand, allow the application to define columns whenever they are needed. A column is specified by its name, value and timestamp, and is the most basic unit in which a piece of information can be stored. Cassandra also supports structures like super columns and composite columns, which allow for further nesting.
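In CQL 3 terms, a static column family is simply a table with a fixed set of columns, while a dynamic one can be modelled with a compound primary key, where each distinct clustering value becomes a new column in the underlying storage row. A sketch, with hypothetical names:

-- Static: a fixed set of columns per row
CREATE TABLE users (
    user_id text PRIMARY KEY,
    name text,
    email text
);

-- Dynamic: one storage column appears per (posted_at, body) entry
CREATE TABLE timeline (
    user_id text,
    posted_at timestamp,
    body text,
    PRIMARY KEY (user_id, posted_at)
);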
Almost all RDBMSs require you to specify a datatype for each column. Cassandra has no such strict requirement, but it allows the specification of comparators and validators, which act like a datatype for the column name and the column’s value, respectively. Almost all datatypes, such as integer, float, double, text and Boolean, can be used for comparators and validators; counters are the exception, as they are handled specially.
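At the bin/cassandra-cli prompt, for instance, a comparator and validators can be attached to a column family along the following lines; the column family name is illustrative, and the cli’s built-in help documents the full syntax:

create column family users
    with comparator = UTF8Type
    and key_validation_class = UTF8Type
    and default_validation_class = UTF8Type;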
Installation and configuration
Binary releases of Cassandra can be found at cassandra.apache.org/download, in the form of a compressed file that can easily be extracted on any OS. Before installation, ensure that Java 1.6 is available on the system and that the JAVA_HOME environment variable has been set. Once the extracted files have been placed in the desired directory, commands along the following lines can be used (on Linux) to set up the environment (the version number and target directory below are illustrative):
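# Extract the release tarball and move it into place
tar -xzf apache-cassandra-1.2.0-bin.tar.gz
sudo mv apache-cassandra-1.2.0 /opt/cassandra
# Make the bin/ tools available on the PATH
export CASSANDRA_HOME=/opt/cassandra
export PATH=$PATH:$CASSANDRA_HOME/bin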
For Debian-based Linux distributions like Ubuntu, you can install it directly from the Apache repositories by adding the following lines to ‘Software Sources’ (here, 12x tracks the 1.2 release series):
deb http://www.apache.org/dist/cassandra/debian 12x main
deb-src http://www.apache.org/dist/cassandra/debian 12x main
Additionally, there are two GPG keys to be added, as follows:

gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D
gpg --export --armor F758CE318D77295D | sudo apt-key add -
gpg --keyserver pgp.mit.edu --recv-keys 2B5C1B00
gpg --export --armor 2B5C1B00 | sudo apt-key add -

Cassandra can now be installed with a simple sudo apt-get update followed by sudo apt-get install cassandra.

The most basic configuration starts with confirming that all the files and folders have the proper permissions. The file conf/cassandra.yaml lists the defaults for all the other important locations.
Cassandra can now be started by running the following command, which starts the server in the foreground and logs all the output on the terminal:
bin/cassandra -f
To run it in the background as a daemon, use the same command without the -f option. To check whether it is running properly, try accessing the command-line interface by running the bin/cassandra-cli command. If there are no errors and you are greeted with a prompt, your installation has been successful. You can also try the CQL prompt (bin/cqlsh). For detailed instructions on how to set up a cluster, consult the official documentation, although it does not cover much beyond basic configuration. In short, you install Cassandra on each node the same way, specify an earlier node as the seed, and set up the IP interfaces for Gossip and Thrift beforehand.
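As a hedged sketch, the relevant conf/cassandra.yaml settings on a second node might look like the following, with the first node acting as the seed; all the addresses are examples:

cluster_name: 'Test Cluster'
# Gossip traffic binds to listen_address; Thrift clients connect to rpc_address
listen_address: 192.168.1.11
rpc_address: 192.168.1.11
# Every node must agree on at least one seed through which it discovers the ring
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.10"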
Connecting to Cassandra and using client libraries
Cassandra provides several client APIs to access the database directly from your development environment, with client libraries available for many languages, including Python, Java, Ruby, Perl, PHP, .NET, C++ and Erlang. Even if a library is not available for your language, you can use the Thrift interface directly. The Thrift framework supports almost every language in common use, and allows for cross-platform development through simple Thrift files that define the interface between a client and a server application.
Cassandra also supports a newer interface known as CQL (Cassandra Query Language), which syntactically resembles SQL while being better suited to this data model: it works in terms of column families and omits features like joins, which are not required in the context of a NoSQL data model. CQL is essentially a wrapper that abstracts the internal Thrift API, making the developer’s job much easier. A CQL sample that creates a keyspace looks like what’s shown below:
CREATE KEYSPACE Test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
These commands can be run at the cqlsh prompt or using drivers available for languages like Java, Python, PHP, Ruby, etc. (These drivers require Thrift to be installed before use.)
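Continuing with the keyspace created above, a column family can be defined and queried from the cqlsh prompt as follows; the schema is hypothetical:

USE Test;
CREATE TABLE songs (
    title text PRIMARY KEY,
    artist text
);
INSERT INTO songs (title, artist) VALUES ('Imagine', 'John Lennon');
SELECT * FROM songs WHERE title = 'Imagine';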
The best part about using Cassandra is that it doesn’t miss out on the analytical power provided by Hadoop’s MapReduce. Setting this up is pretty easy, as Hadoop can be overlaid directly on top of a Cassandra cluster: the TaskTrackers run locally on the Cassandra analytics nodes, while Hadoop’s NameNode and JobTracker sit on a separate machine outside the cluster. Cassandra integrates well with Hadoop MapReduce jobs, providing easy access to the data stored in Cassandra so that it can also be used with higher-level tools like Pig and Hive. If you are unfamiliar with these Hadoop terms, you can read my previous article on the topic, published in an earlier issue of this magazine.
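As one example, Cassandra ships a CassandraStorage loader for Pig, so a column family can be pulled into a Pig relation with a one-liner; the keyspace and column family names below are assumptions carried over from the earlier examples:

-- Load every row of the 'songs' column family in keyspace 'Test'
rows = LOAD 'cassandra://Test/songs' USING org.apache.cassandra.hadoop.pig.CassandraStorage();
DUMP rows;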
So, we have covered enough about Cassandra for a basic understanding of the important terminology. I have, however, skipped a lot of details that would have been beyond the scope of this article, including the various methods by which we can work with Cassandra, and other theoretical portions like replication strategies, network topology management, security, maintenance and performance tuning.
References
[1] DataStax Cassandra Documentation: http://www.datastax.com/docs
[2] Official Documentation Wiki: http://wiki.apache.org/cassandra/
[3] Cassandra CQL 3 Documentation: http://cassandra.apache.org/doc/cql3/CQL.html#createTableStmt
[4] O’Reilly’s Cassandra: The Definitive Guide by Eben Hewitt