A handy pair of scripts that make it easy to create, move, and restore SSTable snapshots between Cassandra clusters. Some common use cases include:
- Copying keyspaces between clusters (QA environments, for example)
- Staging an upgrade (application, Cassandra, both) using a copy of live data
- Seeding data in a test environment
- Snapshotting a keyspace for offline analytics
- Backup and restore
- Generally moving a keyspace from one Cassandra cluster to another
Cassandra Snapshot Tools currently includes two Bash shell scripts, getSnapshot and putSnapshot, which leverage standard Cassandra utilities like nodetool, cqlsh, and sstableloader to simplify the process of creating snapshots and moving them between hosts. getSnapshot generates a convenient compressed tar archive that includes all of the SSTable snapshot files, metadata, and schema information necessary to restore the keyspace into another Cassandra cluster (or the same cluster). putSnapshot is then used to alter various attributes of the original snapshot and restore it into the destination Cassandra cluster.
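For example, a typical end-to-end move between clusters might look like the following sketch (hostnames and file names are placeholders, and by default the archive name generated by getSnapshot includes a timestamp):

$ ./getSnapshot -k <keyspace name>                          # on a node in the source cluster: snapshot and package the keyspace
$ scp <snapshot package file> <user>@<destination node>:/tmp/   # move the archive to a node in the destination cluster
$ ./putSnapshot -f /tmp/<snapshot package file>             # on the destination node: recreate the schema and stream the SSTables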
- Automatically generates a keyspace snapshot and packages it into an easy-to-move archive
- Copies snapshots out of the Cassandra data directory, allowing space to be reclaimed using nodetool clearsnapshot
- Archives a previously created snapshot (e.g. one created manually using nodetool snapshot)
- Schedulable using cron, e.g. for scheduled backups (see the example after this list)
- Change the keyspace name, datacenter name, and replication factor on restore
- Restore to local or remote clusters, either privately hosted or through hosted services like Datascale.io
- Easy to use with sane defaults
- Most Cassandra versions supported (tested against Cassandra 2.0, 2.1, 2.2, 3.0, and 3.7)
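As a sketch of the cron scheduling mentioned above (the install path, keyspace name, schedule, and log locations are assumptions, not project defaults), a nightly backup job might look like:

# Illustrative crontab entries: package the keyspace nightly at 02:00,
# then reclaim snapshot disk space (the packaged archive lives outside the data directory)
0 2 * * * /opt/cassandra-snapshot-tools/getSnapshot -k my_keyspace >> /var/log/getSnapshot.log 2>&1
30 2 * * * /usr/bin/nodetool clearsnapshot my_keyspace >> /var/log/clearsnapshot.log 2>&1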
Usage: ./getSnapshot -h
./getSnapshot -k <keyspace name> [-s <snapshot name>] [-y <cassandra.yaml file>] [--no-timestamp]
-h,--help Print usage and exit
-v,--version Print version information and exit
-k,--keyspace <keyspace name> REQUIRED: The name of the keyspace to snapshot
-s,--snapshot <snapshot name> The name of an existing snapshot to package
-y,--yaml <cassandra.yaml file> Alternate cassandra.yaml file
--no-timestamp Don't include a timestamp in the resulting filename
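For instance, to package a keyspace using a non-default configuration file and a stable, timestamp-free archive name (the yaml path shown is only an example):

$ ./getSnapshot -k <keyspace name> -y /etc/cassandra/conf/cassandra.yaml --no-timestamp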
Usage: ./putSnapshot -h
./putSnapshot -f <snapshot file> [-n <node address>] [-k <new ks name>] [-d <new dc name>] [-r <new rf>] [-y <cassandra.yaml file>]
-h,--help Print usage and exit
-v,--version Print version information and exit
-f,--file <snapshot file> REQUIRED: The snapshot file name (created using the
getSnapshot utility)
-n,--node <node address> Destination Cassandra node IP (defaults to the local
Cassandra IP if run on a Cassandra node; otherwise
required in order to connect to Cassandra; takes
precedence if provided when run on a Cassandra node)
-k,--keyspace <new ks name> Override the destination keyspace name (defaults to
the source keyspace name)
-d,--datacenter <new dc name> Override the destination datacenter name (defaults
to the source datacenter name)
-r,--replication <new rf> Override the destination replication factor (defaults
to source replication factor)
-y,--yaml <cassandra.yaml file> Alternate cassandra.yaml file
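For instance, restoring into a remote cluster whose datacenter has a different name and uses a different replication factor (the values are illustrative):

$ ./putSnapshot -f <snapshot package file> -n <destination node IP> -d <new dc name> -r 2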
- Copy a keyspace to the same Cassandra cluster using a different keyspace name:
  $ getSnapshot -k <keyspace name>
  $ putSnapshot -f <snapshot package file> -k <new keyspace name>
- Copy a keyspace to a remote Cassandra cluster using the same keyspace name:
  $ getSnapshot -k <keyspace name>
  $ putSnapshot -f <snapshot package file> -n <destination node IP>
- Copy a keyspace from a Cassandra cluster to a remote cluster using a different keyspace name and replication factor:
  $ getSnapshot -k <keyspace name>
  $ putSnapshot -f <snapshot package file> -n <destination node IP> -k <new keyspace name> -r 1
- Copy a snapshot previously created using nodetool snapshot to a new keyspace:
  $ nodetool snapshot <keyspace name> -t <custom snapshot name>
  $ getSnapshot -k <keyspace name> -s <custom snapshot name>
  $ putSnapshot -f <snapshot package file> -k <new keyspace name>
- Currently supports snapshots/restores between clusters running similar versions of Cassandra.
- Partitioner configuration (e.g. RandomPartitioner, Murmur3Partitioner, etc.) must be the same between the source and destination Cassandra clusters (a quick check is shown below).
- Local access to the source Cassandra node is required (to create/collect snapshot files). Fairly open network access is required to the destination Cassandra node (to create the schema and load SSTables).
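One quick way to compare partitioners is nodetool describecluster, which reports the partitioner class along with other cluster metadata (the output shown here is illustrative):

$ nodetool describecluster | grep -i partitioner
        Partitioner: org.apache.cassandra.dht.Murmur3Partitioner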
All contributions welcome! Please see How to Contribute.