Backing up and restoring data

About snapshots

Cassandra backs up data by taking a snapshot of all on-disk data files (SSTable files) stored in the data directory. You can take a snapshot of all keyspaces, a single keyspace, or a single table while the system is online.

Using a parallel ssh tool (such as pssh), you can snapshot an entire cluster. This provides an eventually consistent backup. Although no one node is guaranteed to be consistent with its replica nodes at the time a snapshot is taken, a restored snapshot resumes consistency using Cassandra's built-in consistency mechanisms.
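A cluster-wide snapshot along those lines can be sketched as a small shell script. The host addresses below are placeholders, and the loop echoes the per-node commands (a dry run) rather than executing them over ssh; with pssh the whole thing collapses to a single command:

```shell
# Sketch: cluster-wide snapshot under one shared name, so every node's
# snapshot belongs to the same logical backup. With pssh this is a
# one-liner:  pssh -h hosts.txt "nodetool snapshot -t $SNAPSHOT_NAME"
HOSTS="10.0.0.1 10.0.0.2 10.0.0.3"           # hypothetical node addresses
SNAPSHOT_NAME="backup-$(date +%Y%m%d%H%M%S)" # shared snapshot name
for host in $HOSTS; do
  # Dry run: print the command each node would run.
  echo "ssh $host nodetool snapshot -t $SNAPSHOT_NAME"
done
```

Using one timestamp-derived name for every node is what makes the per-node snapshots identifiable as a single eventually consistent backup.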

After a system-wide snapshot is performed, you can enable incremental backups on each node to back up data that has changed since the last snapshot: each time a memtable is flushed to disk and an SSTable is created, a hard link to the new SSTable is created in a backups subdirectory of the data directory (provided JNA is enabled). Compacted SSTables do not create hard links in backups because they contain no data that has not already been linked.

Taking a snapshot

Note: Cassandra can only restore data from a snapshot when the table schema exists. It is recommended that you also back up the schema. See DESCRIBE SCHEMA in DESCRIBE.

nodetool snapshot

Requested creating snapshot(s) for [all keyspaces] with snapshot name [1526578812109] and options {skipFlush=false}
Snapshot directory: 1526578812109
nodetool snapshot admatic

Requested creating snapshot(s) for [admatic] with snapshot name [1526579163881] and options {skipFlush=false}
Snapshot directory: 1526579163881

The snapshot is created in data_directory/keyspace_name/table_name-UUID/snapshots/snapshot_name directory. Each snapshot directory contains numerous .db files that contain the data at the time of the snapshot.
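To see which snapshots exist across all keyspaces, you can walk that directory layout; the sketch below assumes the default data_directory/keyspace_name/table_name-UUID/snapshots/snapshot_name structure described above (nodetool listsnapshots, where available, reports the same information):

```shell
# Sketch: list every snapshot directory under the data directory,
# assuming the default layout shown above. The guard keeps the command
# harmless when the data directory does not exist.
DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"
if [ -d "$DATA_DIR" ]; then
  # keyspace (depth 1) / table-UUID (2) / snapshots (3) / name (4)
  find "$DATA_DIR" -mindepth 4 -maxdepth 4 -type d -path '*/snapshots/*'
fi
```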

tree /var/lib/cassandra/data/admatic/

/var/lib/cassandra/data/admatic/
`-- emp-23ceeda059fa11e8adcac9f06c9329ff
    |-- backups
    |-- mc-1-big-CompressionInfo.db
    |-- mc-1-big-Data.db
    |-- mc-1-big-Digest.crc32
    |-- mc-1-big-Filter.db
    |-- mc-1-big-Index.db
    |-- mc-1-big-Statistics.db
    |-- mc-1-big-Summary.db
    |-- mc-1-big-TOC.txt
    `-- snapshots
        `-- 1526579163881
            |-- manifest.json
            |-- mc-1-big-CompressionInfo.db
            |-- mc-1-big-Data.db
            |-- mc-1-big-Digest.crc32
            |-- mc-1-big-Filter.db
            |-- mc-1-big-Index.db
            |-- mc-1-big-Statistics.db
            |-- mc-1-big-Summary.db
            |-- mc-1-big-TOC.txt
            `-- schema.cql

4 directories, 18 files

Deleting snapshot files

Taking a new snapshot does not automatically delete previous snapshot files. Remove old snapshots that are no longer needed.

The nodetool clearsnapshot command removes all existing snapshot files from the snapshot directory of each keyspace. Make clearing old snapshots part of your backup process, before taking a new one.

nodetool clearsnapshot -t <snapshot_name>
nodetool clearsnapshot -t 1526579163881

Requested clearing snapshot(s) for [all keyspaces] with snapshot name [1526579163881]
tree /var/lib/cassandra/data/admatic/

/var/lib/cassandra/data/admatic/
`-- emp-23ceeda059fa11e8adcac9f06c9329ff
    |-- backups
    |-- mc-1-big-CompressionInfo.db
    |-- mc-1-big-Data.db
    |-- mc-1-big-Digest.crc32
    |-- mc-1-big-Filter.db
    |-- mc-1-big-Index.db
    |-- mc-1-big-Statistics.db
    |-- mc-1-big-Summary.db
    |-- mc-1-big-TOC.txt
    `-- snapshots

3 directories, 8 files
nodetool clearsnapshot

Requested clearing snapshot(s) for [all keyspaces]

Enabling incremental backups

When incremental backups are enabled (they are disabled by default), Cassandra hard-links each flushed SSTable to a backups directory under the keyspace data directory. This allows storing backups offsite without transferring entire snapshots, and incremental backups combine with snapshots to provide a dependable, up-to-date backup mechanism. Compacted SSTables do not create hard links in backups because they contain no data that has not already been linked. A snapshot taken at a point in time, plus all incremental backups and commit logs created since that time, forms a complete backup.

As with snapshots, Cassandra does not automatically clear incremental backup files. DataStax recommends setting up a process to clear incremental backup hard-links each time a new snapshot is created.
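That cleanup can be scripted. The sketch below (function and argument names are illustrative) deletes the hard links under every table's backups directory and would be run immediately after a successful nodetool snapshot:

```shell
# Sketch: after taking a fresh snapshot, drop the incremental-backup
# hard links the snapshot has made redundant. Assumes the default
# layout data_dir/keyspace/table-UUID/backups/.
clear_incremental_backups() {
  data_dir="$1"
  # Delete only regular files inside backups/ directories, leaving the
  # directories themselves (Cassandra recreates links there on flush).
  find "$data_dir" -type f -path '*/backups/*' -delete
}
# Usage: nodetool snapshot && clear_incremental_backups /var/lib/cassandra/data
```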

vim /etc/cassandra/cassandra.yaml
# Set to true to have Cassandra create a hard link to each sstable
# flushed or streamed locally in a backups/ subdirectory of the
# keyspace data.  Removing these links is the operator's
# responsibility.
incremental_backups: true
service cassandra restart

Procedure

Edit the cassandra.yaml configuration file on each node in the cluster and change the value of incremental_backups to true.
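That edit can be scripted so it is repeatable on every node; this is a sketch using GNU sed, with the cassandra.yaml path matching the package layout shown above (adjust it for your install):

```shell
# Sketch: flip incremental_backups to true in cassandra.yaml, in place.
enable_incremental_backups() {
  yaml="$1"   # e.g. /etc/cassandra/cassandra.yaml
  sed -i 's/^incremental_backups:.*/incremental_backups: true/' "$yaml"
}
```

Restart Cassandra after the change. Depending on your Cassandra version, nodetool enablebackup can toggle the same setting at runtime without a restart.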

Restoring from a snapshot

Restoring a keyspace from a snapshot requires all snapshot files for the table and, if incremental backups are in use, any incremental backup files created after the snapshot was taken. Streamed SSTables (from repair, decommission, and so on) are also hard-linked and included.

Note: Restoring from snapshots and incremental backups temporarily causes intensive CPU and I/O activity on the node being restored.

Restoring from local nodes

This method copies the SSTables from the snapshots directory into the correct data directories.

  1. Make sure the table schema exists.
  2. If necessary, truncate the table.

     cqlsh
    
     cqlsh> use admatic;
     cqlsh:admatic> select * from emp;
    
     emp_id | emp_city  | emp_name | emp_phone  | emp_sal
     --------+-----------+----------+------------+---------
         1 | Hyderabad |      ram | 9848022338 |   50000
         2 | Hyderabad |    robin | 9848022339 |   40000
         3 |   Chennai |   rahman | 9848022330 |   45000
    
     (3 rows)
     cqlsh:admatic> truncate emp;
     cqlsh:admatic> select * from emp;
    
     emp_id | emp_city | emp_name | emp_phone | emp_sal
     --------+----------+----------+-----------+---------
    
     (0 rows)
    
  3. Locate the most recent snapshot folder.

     tree /var/lib/cassandra/data/admatic/
     /var/lib/cassandra/data/admatic/
     `-- emp-23ceeda059fa11e8adcac9f06c9329ff
         |-- backups
         `-- snapshots
             |-- 1526579723626
             |   |-- manifest.json
             |   |-- mc-1-big-CompressionInfo.db
             |   |-- mc-1-big-Data.db
             |   |-- mc-1-big-Digest.crc32
             |   |-- mc-1-big-Filter.db
             |   |-- mc-1-big-Index.db
             |   |-- mc-1-big-Statistics.db
             |   |-- mc-1-big-Summary.db
             |   |-- mc-1-big-TOC.txt
             |   `-- schema.cql
             |-- truncated-1526579778534-emp
             |   |-- manifest.json
             |   |-- mc-1-big-CompressionInfo.db
             |   |-- mc-1-big-Data.db
             |   |-- mc-1-big-Digest.crc32
             |   |-- mc-1-big-Filter.db
             |   |-- mc-1-big-Index.db
             |   |-- mc-1-big-Statistics.db
             |   |-- mc-1-big-Summary.db
             |   |-- mc-1-big-TOC.txt
             |   `-- schema.cql
             `-- truncated-1526579840410-emp
                 |-- manifest.json
                 `-- schema.cql
    
     6 directories, 22 files
    
  4. Copy the most recent snapshot SSTable directory to the data_directory/keyspace/table_name-UUID directory.

     cd /var/lib/cassandra/data/admatic/emp-23ceeda059fa11e8adcac9f06c9329ff/
    
     ls
     backups  snapshots
    
     cp snapshots/1526579723626/* .
    
     ls
     backups        mc-1-big-CompressionInfo.db  mc-1-big-Digest.crc32  mc-1-big-Index.db       mc-1-big-Summary.db  schema.cql
     manifest.json  mc-1-big-Data.db             mc-1-big-Filter.db     mc-1-big-Statistics.db  mc-1-big-TOC.txt     snapshots
    
  5. Run nodetool refresh.

     nodetool refresh admatic emp
    
     cqlsh
    
     cqlsh> use admatic;
     cqlsh:admatic> select * from emp;
    
     emp_id | emp_city  | emp_name | emp_phone  | emp_sal
     --------+-----------+----------+------------+---------
         1 | Hyderabad |      ram | 9848022338 |   50000
         2 | Hyderabad |    robin | 9848022339 |   40000
         3 |   Chennai |   rahman | 9848022330 |   45000
    
     (3 rows)
    
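Steps 3-5 can be collected into a small helper. The function below is a sketch (names are illustrative) that copies a named snapshot back into the live table directory; nodetool refresh then loads the files without a restart:

```shell
# Sketch: restore one snapshot into the live table directory, as in
# steps 3-4 above. The UUID-suffixed directory must belong to the
# current schema, not the one in effect when the snapshot was taken.
restore_snapshot() {
  table_dir="$1"      # e.g. /var/lib/cassandra/data/admatic/emp-<UUID>
  snapshot_name="$2"  # e.g. 1526579723626
  cp "$table_dir/snapshots/$snapshot_name/"* "$table_dir/"
}
# Afterwards, load the copied SSTables:
#   nodetool refresh <keyspace> <table>
```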

Restoring from centralized backups

This method uses sstableloader to restore snapshots.

  1. Make sure the table schema exists.
  2. If necessary, truncate the table.
  3. Restore the most recent snapshot using the sstableloader tool on the backed-up SSTables.
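A hedged sketch of step 3: sstableloader infers the keyspace and table from the last two components of the path it is given, so the backed-up files are first staged under a keyspace/table directory. The host address and staging path below are placeholders, and the final command is echoed rather than executed:

```shell
# Sketch: stage backed-up SSTables so sstableloader can infer keyspace
# (admatic) and table (emp) from the path, then stream them in.
stage="/tmp/restore/admatic/emp"
mkdir -p "$stage"
# cp /path/to/backup/snapshots/<snapshot_name>/* "$stage/"  # stage files
echo "sstableloader -d 10.0.0.1 $stage"   # dry run: command to execute
```

Unlike nodetool refresh, sstableloader streams the data through the cluster's normal write path, so it works even when token ownership differs from the cluster the backup came from.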

Restoring a snapshot into a new cluster

Old Cluster

cqlsh

select * FROM system.local

127.0.0.1 | Test Cluster | datacenter1 | rack1 |
(1 rows)


SELECT * FROM admatic.emp;

 emp_id | emp_city  | emp_name | emp_phone  | emp_sal
--------+-----------+----------+------------+---------
      1 | Hyderabad |      ram | 9848022338 |   50000
      2 | Hyderabad |    robin | 9848022339 |   40000
      3 |   Chennai |   rahman | 9848022330 |   45000

(3 rows)

New Cluster

cqlsh

select * FROM system.local

159.65.155.232 | New Test Cluster | datacenter1 | rack1 |
(1 rows)


SELECT * FROM admatic.emp;

 emp_id | emp_city | emp_name | emp_phone | emp_sal
--------+----------+----------+-----------+---------

(0 rows)

The token ranges will not match, because token assignment in the new cluster cannot be exactly the same. You must configure the new cluster's nodes with the tokens that were used in the old cluster.

  1. From the old cluster, retrieve the list of tokens associated with each node's IP:

     nodetool ring | grep 127.0.0.1 | awk '{print $NF ","}' | xargs
    
     -9208038465942525350, -8967241074233412321, -8868503624515852454, -8860316950152549135, -8851621386036506309, -8797654794267860071, -8777869883307206267, -8692491300668557937, -8272200150762539462, -8253041020028278212, -8167932174361813417, -8129819867749024636, -8071835327302551168, -8047951488164154439, -7972185523230997441, -7957182416475307995, -7948297650308222081, -7864464188421933274, -7764846035101529539, -7533281825137014623, -7436395994249606157, -7395064272121707762, -7344461368252797943, -7236898012511605033, -7234229198548945794, -7145419204250750788, -7115731567255846293, -6957767409494803327, -6954870555108695351, -6922735029126484255, -6891643370704901055, -6813870447585335932, -6693881779017603373, -6679435117242727708, -6590086400862086075, -6572031153050351244, -6516046722945376128, -6413241029516918187, -6402882007425201490, -6200296070328279261, -6199629337001849814, -5999017855855643252, -5789128007331268054, -5717612130527355903, -5632949362576807436, -5539051969359266969, -5489273691762945554, -5396273065949931944, -5332166968077236801, -5138692685120094544, -5138226559121535738, -5076582932587828375, -4911622782339960306, -4879452837375521979, -4769944532943966259, -4738203391182377814, -4732013768275132263, -4661305179578999989, -4597386849535732315, -4558246282761831491, -4553866058255304157, -4542662156659007089, -4511151947288821648, -4496920570026868769, -4232844350450336030, -4222285369175720076, -4196575412062465788, -4187108659813495149, -4069939478409182685, -4053045011581779736, -3729469888167683753, -3679159436182989746, -3554003223090582949, -3341405110375992748, -3283663867641199997, -3195325794902300507, -3177387086105421409, -3157695394313898725, -3137185990234709029, -3094375033495757540, -3036515668115950571, -3034391982924389124, -3029652547647826615, -2997918651957376254, -2995573760892202085, -2968803642050556732, -2912620125920886533, -2865242636374236251, -2844603113064922434, -2842448432432556073, 
-2738814433133189457, -2686787551977892099, -2671378580340244093, -2645226257719411835, -2486920945744783348, -2481057508375599914, -2446139755819866937, -2438317057113462007, -2408285290703864835, -2340099762731927078, -2339374620363707068, -2225873921583918986, -2192964547694785736, -2175201894777545659, -2153617716269468218, -2129613827220177297, -2078981698117656347, -1961987189992294035, -1950642679809507017, -1838801527791388597, -1774400400588061251, -1596452036452177462, -1511265807331299730, -1435646576663017052, -1427089994370065880, -1419527892120106945, -1394085064593863322, -1168215901117369382, -1126872132571729213, -1063159212330085827, -1056328728187345080, -995755030700827889, -888425506584247016, -879441159005430455, -837935764587264126, -717265121255283729, -678083952852941894, -651522325605596852, -611877160127822778, -585259520161952314, -448211873026541480, -414414193319460782, -399788211009311916, -174764330351124418, -161137032773571050, -106255885545307099, -41240965108999673, -24588843660664715, -18423460589374257, 20974828197528031, 100964984321176675, 135493047578681744, 323813923718930935, 838252791635154946, 876463684890735375, 878832547468047968, 938529385243908026, 959284415766428862, 1139942635832053129, 1299295239122174559, 1373162130972294415, 1396342508338306392, 1563321118732630428, 1643760285116968268, 1702270595806054972, 1734653071187008050, 1745246352827141460, 1854362490952243642, 2092607998496523908, 2119052557919909842, 2207578241488403510, 2234001254517435547, 2273836109364229914, 2317755225611039369, 2344203056428341173, 2791268486108263048, 2819768573052202078, 2930495866683084539, 3035449501126210315, 3040963230861712008, 3045351560125771058, 3232135826854513063, 3400297151307142951, 3488547981429863760, 3505986332257578238, 3564895472147053657, 3688170318458124772, 3848568725112324051, 3860937989579367644, 3914446134577390917, 3951861483404384770, 4048415173791575705, 4098602473334953655, 4208666501491489756, 
4220997662291790375, 4260046172736974968, 4294584007969632927, 4537584017354719278, 4633901881461143566, 4666967275241474894, 4699301652647451287, 4750665799908484681, 4754617483328527885, 4759034779043846276, 4824095268685506473, 4856027340985436778, 4932149227927249569, 4937647917448737129, 5022384159731376250, 5112555144486925869, 5135944180076546026, 5149678038194292255, 5163128287713373721, 5170374656602164124, 5189723753585312624, 5463030498424318252, 5493597741608304172, 5504495183844231275, 5577664060329465093, 5627903942613330618, 5736776236588171487, 5800712125648724557, 5836382699732124801, 5954013975072711157, 5969347223195223766, 6020434479331730907, 6102225773884555805, 6103981015934059762, 6144808888205228819, 6146847301853563520, 6361358107039819213, 6476074116369837027, 6483076172167342242, 6605611013954059632, 6634165403037100027, 6860108175319911199, 7020423023607291049, 7023472390246829997, 7081711239683399895, 7235760377617743542, 7263344051478928507, 7268978754921137586, 7532314205401467872, 7609838442752399830, 7640570758237440279, 7912754845816250654, 7972612351874348996, 7984649175800798734, 8026519180614803803, 8093168659010135851, 8201394081028743370, 8201934163699245250, 8232260876267449069, 8243720411816962119, 8353111772682237823, 8364137161954963752, 8725700551637339536, 8916141638842000444, 8998948253352903391, 9052322118530813325, 9067068127395821287, 9089206488770546116, 9103399622413116806, 9173733740075071202, 9176014200230188961, 9184598353673522203,
    
  2. In the cassandra.yaml file for each node in the new cluster, add the list of tokens you obtained in the previous step to the initial_token parameter using the same num_tokens setting as in the old cluster.

     initial_token: -9208038465942525350, -8967241074233412321, -8868503624515852454, -8860316950152549135, -8851621386036506309, -8797654794267860071, -8777869883307206267, -8692491300668557937, -8272200150762539462, -8253041020028278212, -8167932174361813417, -8129819867749024636, -8071835327302551168, -8047951488164154439, -7972185523230997441, -7957182416475307995, -7948297650308222081, -7864464188421933274, -7764846035101529539, -7533281825137014623, -7436395994249606157, -7395064272121707762, -7344461368252797943, -7236898012511605033, -7234229198548945794, -7145419204250750788, -7115731567255846293, -6957767409494803327, -6954870555108695351, -6922735029126484255, -6891643370704901055, -6813870447585335932, -6693881779017603373, -6679435117242727708, -6590086400862086075, -6572031153050351244, -6516046722945376128, -6413241029516918187, -6402882007425201490, -6200296070328279261, -6199629337001849814, -5999017855855643252, -5789128007331268054, -5717612130527355903, -5632949362576807436, -5539051969359266969, -5489273691762945554, -5396273065949931944, -5332166968077236801, -5138692685120094544, -5138226559121535738, -5076582932587828375, -4911622782339960306, -4879452837375521979, -4769944532943966259, -4738203391182377814, -4732013768275132263, -4661305179578999989, -4597386849535732315, -4558246282761831491, -4553866058255304157, -4542662156659007089, -4511151947288821648, -4496920570026868769, -4232844350450336030, -4222285369175720076, -4196575412062465788, -4187108659813495149, -4069939478409182685, -4053045011581779736, -3729469888167683753, -3679159436182989746, -3554003223090582949, -3341405110375992748, -3283663867641199997, -3195325794902300507, -3177387086105421409, -3157695394313898725, -3137185990234709029, -3094375033495757540, -3036515668115950571, -3034391982924389124, -3029652547647826615, -2997918651957376254, -2995573760892202085, -2968803642050556732, -2912620125920886533, -2865242636374236251, -2844603113064922434, -2842448432432556073, 
-2738814433133189457, -2686787551977892099, -2671378580340244093, -2645226257719411835, -2486920945744783348, -2481057508375599914, -2446139755819866937, -2438317057113462007, -2408285290703864835, -2340099762731927078, -2339374620363707068, -2225873921583918986, -2192964547694785736, -2175201894777545659, -2153617716269468218, -2129613827220177297, -2078981698117656347, -1961987189992294035, -1950642679809507017, -1838801527791388597, -1774400400588061251, -1596452036452177462, -1511265807331299730, -1435646576663017052, -1427089994370065880, -1419527892120106945, -1394085064593863322, -1168215901117369382, -1126872132571729213, -1063159212330085827, -1056328728187345080, -995755030700827889, -888425506584247016, -879441159005430455, -837935764587264126, -717265121255283729, -678083952852941894, -651522325605596852, -611877160127822778, -585259520161952314, -448211873026541480, -414414193319460782, -399788211009311916, -174764330351124418, -161137032773571050, -106255885545307099, -41240965108999673, -24588843660664715, -18423460589374257, 20974828197528031, 100964984321176675, 135493047578681744, 323813923718930935, 838252791635154946, 876463684890735375, 878832547468047968, 938529385243908026, 959284415766428862, 1139942635832053129, 1299295239122174559, 1373162130972294415, 1396342508338306392, 1563321118732630428, 1643760285116968268, 1702270595806054972, 1734653071187008050, 1745246352827141460, 1854362490952243642, 2092607998496523908, 2119052557919909842, 2207578241488403510, 2234001254517435547, 2273836109364229914, 2317755225611039369, 2344203056428341173, 2791268486108263048, 2819768573052202078, 2930495866683084539, 3035449501126210315, 3040963230861712008, 3045351560125771058, 3232135826854513063, 3400297151307142951, 3488547981429863760, 3505986332257578238, 3564895472147053657, 3688170318458124772, 3848568725112324051, 3860937989579367644, 3914446134577390917, 3951861483404384770, 4048415173791575705, 4098602473334953655, 4208666501491489756, 
4220997662291790375, 4260046172736974968, 4294584007969632927, 4537584017354719278, 4633901881461143566, 4666967275241474894, 4699301652647451287, 4750665799908484681, 4754617483328527885, 4759034779043846276, 4824095268685506473, 4856027340985436778, 4932149227927249569, 4937647917448737129, 5022384159731376250, 5112555144486925869, 5135944180076546026, 5149678038194292255, 5163128287713373721, 5170374656602164124, 5189723753585312624, 5463030498424318252, 5493597741608304172, 5504495183844231275, 5577664060329465093, 5627903942613330618, 5736776236588171487, 5800712125648724557, 5836382699732124801, 5954013975072711157, 5969347223195223766, 6020434479331730907, 6102225773884555805, 6103981015934059762, 6144808888205228819, 6146847301853563520, 6361358107039819213, 6476074116369837027, 6483076172167342242, 6605611013954059632, 6634165403037100027, 6860108175319911199, 7020423023607291049, 7023472390246829997, 7081711239683399895, 7235760377617743542, 7263344051478928507, 7268978754921137586, 7532314205401467872, 7609838442752399830, 7640570758237440279, 7912754845816250654, 7972612351874348996, 7984649175800798734, 8026519180614803803, 8093168659010135851, 8201394081028743370, 8201934163699245250, 8232260876267449069, 8243720411816962119, 8353111772682237823, 8364137161954963752, 8725700551637339536, 8916141638842000444, 8998948253352903391, 9052322118530813325, 9067068127395821287, 9089206488770546116, 9103399622413116806, 9173733740075071202, 9176014200230188961, 9184598353673522203
    
  3. Make any other necessary changes in the new cluster's cassandra.yaml and property files so that the new nodes match the old cluster settings. Make sure the seed nodes are set for the new cluster.

  4. Clear the system table data from each new node:

     sudo rm -rf /var/lib/cassandra/data/system/*
    
  5. Start each node using the token list specified in the new cluster's cassandra.yaml.

  6. Create the schema in the new cluster. Every schema from the old cluster must be reproduced in the new cluster.

  7. Stop the node. Running nodetool refresh against a live node is unsafe because files in the data directory can be silently overwritten by identically named, just-flushed SSTables from memtable flushes or compaction; copying files into the data directory of a running node and then restarting it fails for the same reason.

  8. Restore the SSTable files snapshotted from the old cluster into the same directories on the new cluster, noting that the UUID component of the target directory names has changed. Without this restoration, the new cluster has no data to read when it restarts.

     scp hadoop@139.59.88.15:/var/lib/cassandra/data/admatic/emp-840cda105a6d11e8ad8413cfa69d3d49/snapshots.zip ~
     unzip snapshots.zip
     tree .
     .
     ├── snapshots
     │   └── 1526629063874
     │       ├── manifest.json
     │       ├── mc-1-big-CompressionInfo.db
     │       ├── mc-1-big-Data.db
     │       ├── mc-1-big-Digest.crc32
     │       ├── mc-1-big-Filter.db
     │       ├── mc-1-big-Index.db
     │       ├── mc-1-big-Statistics.db
     │       ├── mc-1-big-Summary.db
     │       ├── mc-1-big-TOC.txt
     │       └── schema.cql
     └── snapshots.zip
    
     service cassandra stop
     mv snapshots/1526629063874/* /var/lib/cassandra/data/admatic/emp-da1f9c805a6d11e8a3e17d54cccb05b7/
    
  9. Restart the node.

     service cassandra start
    
     nodetool refresh admatic emp
    
     cqlsh
    
     SELECT * from admatic.emp ;
    
     emp_id | emp_city  | emp_name | emp_phone  | emp_sal
     --------+-----------+----------+------------+---------
         1 | Hyderabad |      ram | 9848022338 |   50000
         2 | Hyderabad |    robin | 9848022339 |   40000
         3 |   Chennai |   rahman | 9848022330 |   45000
    
     (3 rows)
    
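The token-gathering in step 1 can be wrapped in a helper that emits a ready-to-paste initial_token line. The function below is a sketch that mirrors the nodetool ring | grep | awk pipeline shown above, reading ring output on stdin:

```shell
# Sketch: build "initial_token: t1,t2,..." from `nodetool ring` output.
# $1 is the node's address as it appears in the ring output; the token
# is the last field of each matching line.
ring_to_initial_token() {
  grep "$1" | awk '{ toks = toks (toks ? "," : "") $NF }
                   END { print "initial_token: " toks }'
}
# Usage, once per old-cluster node:
#   nodetool ring | ring_to_initial_token 127.0.0.1
```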

Old Cluster

nodetool ring

Datacenter: datacenter1
==========
Address    Rack        Status State   Load            Owns                Token
                                                                          9184598353673522203
127.0.0.1  rack1       Up     Normal  80.31 KiB       100.00%             -9208038465942525350
127.0.0.1  rack1       Up     Normal  80.31 KiB       100.00%             -8967241074233412321
127.0.0.1  rack1       Up     Normal  80.31 KiB       100.00%             -8868503624515852454
...
...
...
127.0.0.1  rack1       Up     Normal  80.31 KiB       100.00%             9173733740075071202
127.0.0.1  rack1       Up     Normal  80.31 KiB       100.00%             9176014200230188961
127.0.0.1  rack1       Up     Normal  80.31 KiB       100.00%             9184598353673522203

New Cluster

nodetool ring

Datacenter: datacenter1
==========
Address         Rack        Status State   Load            Owns                Token
                                                                               9184598353673522203
159.65.155.232  rack1       Up     Normal  257.09 KiB      100.00%             -9208038465942525350
159.65.155.232  rack1       Up     Normal  257.09 KiB      100.00%             -8967241074233412321
159.65.155.232  rack1       Up     Normal  257.09 KiB      100.00%             -8868503624515852454
...
...
...
159.65.155.232  rack1       Up     Normal  257.09 KiB      100.00%             9173733740075071202
159.65.155.232  rack1       Up     Normal  257.09 KiB      100.00%             9176014200230188961
159.65.155.232  rack1       Up     Normal  257.09 KiB      100.00%             9184598353673522203

Recovering from a single disk failure using JBOD

Steps for recovering from a single disk failure in a disk array using JBOD (just a bunch of disks).

Cassandra might not fail from the loss of one disk in a JBOD array, but some reads and writes may fail when:

  • The operation's consistency level is ALL.
  • The data being requested or written is stored on the defective disk.
  • The data to be compacted is on the defective disk.

You might be able to simply replace the disk, restart Cassandra, and run nodetool repair. However, if the disk crash corrupted the Cassandra system table, you must remove the incomplete data from the other disks in the array. The procedure for doing this depends on whether the cluster uses vnodes or single-token architecture.
