Cloudera Enterprise 6.3.x

Performing Disk Hot Swap for DataNodes

This section describes how to replace HDFS disks without shutting down a DataNode. This is referred to as hot swap.
  Warning: Requirements and Limitations
  • Hot swap can add only disks with empty data directories.
  • Removing a disk does not move its data off the disk first, which can result in data loss.
  • Do not perform hot swap on multiple hosts at the same time.
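Because hot swap can add only empty data directories, it can help to verify the replacement disk's directory before adding it to the configuration. A minimal sketch, assuming a hypothetical mount point /data/3/dfs/dn (substitute one of your own DataNode Data Directory entries):

```shell
# Hypothetical mount point for the replacement disk; substitute one of
# your DataNode Data Directory (dfs.datanode.data.dir) entries.
DATA_DIR="${DATA_DIR:-/data/3/dfs/dn}"

# Hot swap can only add empty data directories: no output from 'find'
# means the directory is empty and safe to add.
if [ -d "$DATA_DIR" ]; then
  find "$DATA_DIR" -mindepth 1
else
  echo "mount point $DATA_DIR does not exist yet" >&2
fi
```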

Performing Disk Hot Swap for DataNodes Using Cloudera Manager

Minimum Required Role: Cluster Administrator (also provided by Full Administrator)

  1. Configure data directories to remove the disk you are swapping out:
    1. Go to the HDFS service.
    2. Click the Instances tab.
    3. In the Role Type column, click the affected DataNode.
    4. Click the Configuration tab.
    5. Select Scope > DataNode.
    6. Select Category > Main.
    7. Change the value of the DataNode Data Directory property to remove the directories that are mount points for the disk you are removing.
        Warning: Change the value of this property only for the specific DataNode instance where you are planning to hot swap the disk. Do not edit the role group value for this property. Doing so will cause data loss.
  2. Enter a Reason for change, and then click Save Changes to commit the changes.
  3. Refresh the affected DataNode. Select Actions > Refresh DataNode configuration.
  4. Remove the old disk and add the replacement disk.
  5. Change the value of the DataNode Data Directory property to add back the directories that are mount points for the disk you added.
  6. Enter a Reason for change, and then click Save Changes to commit the changes.
  7. Refresh the affected DataNode. Select Actions > Refresh DataNode configuration.
  8. Run the hdfs fsck / command to validate the health of HDFS.
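The health check in step 8 can be scripted into a post-swap routine. A minimal sketch, not Cloudera tooling: check_fsck is a hypothetical helper that reads the fsck report on stdin and fails unless the summary reports the filesystem as healthy (a clean run of hdfs fsck / ends with a line containing "is HEALTHY").

```shell
# Hypothetical helper, not part of Hadoop: reads an 'hdfs fsck' report
# on stdin and succeeds only when HDFS reports the path as healthy.
check_fsck() {
  grep -q "is HEALTHY" || { echo "HDFS fsck reported problems" >&2; return 1; }
}

# Typical usage after refreshing the DataNode configuration:
#   hdfs fsck / | tee fsck-report.txt | check_fsck
```

If the check fails, inspect the saved report for corrupt or under-replicated blocks before hot swapping disks on any other host.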
Page generated August 29, 2019.