Cloudera Enterprise 6.3.x

Configuring Other CDH Components to Use HDFS HA

Configuring HBase to Use HDFS HA

If you configure HBase to use an HA-enabled HDFS instance, Cloudera Manager automatically handles HA configuration for you.

Configuring the Hive Metastore to Use HDFS HA

You can configure the Hive metastore to use HDFS high availability through Cloudera Manager or, for unmanaged clusters, from the command line. The following steps use Cloudera Manager.

  1. In the Cloudera Manager Admin Console, go to the Hive service.
  2. Select Actions > Stop.
      Note: You may want to stop the Hue and Impala services first, if present, as they depend on the Hive service.
    Click Stop again to confirm the command.
  3. Back up the Hive metastore database.
  4. Select Actions > Update Hive Metastore NameNodes and confirm the command.
  5. Select Actions > Start and click Start to confirm the command.
  6. Restart the Hue and Impala services if you stopped them prior to updating the metastore.
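Step 3 and, on unmanaged clusters, the command-line equivalent of step 4 can be sketched as follows. This assumes a MySQL-backed metastore; the host, database, and nameservice names are examples only, not values from your cluster.

```shell
# Step 3: back up the metastore database (assuming a MySQL backend;
# database name "metastore" and credentials are examples).
mysqldump -u root -p metastore > metastore_backup_$(date +%F).sql

# On unmanaged clusters, the "Update Hive Metastore NameNodes" action
# corresponds to the Hive metatool; the nameservice ID and old NameNode
# address below are examples.
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://old-namenode-host:8020
```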

Configuring Hue to Work with HDFS HA Using Cloudera Manager

  1. Add the HttpFS role.
  2. After the command has completed, go to the Hue service.
  3. Click the Configuration tab.
  4. Locate the HDFS Web Interface Role property or search for it by typing its name in the Search box.
  5. Select the HttpFS role you just created instead of the NameNode role, and save your changes.
  6. Restart the Hue service.
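Behind the scenes, selecting the HttpFS role points Hue's webhdfs_url at the HttpFS endpoint instead of a single NameNode's WebHDFS endpoint, so file browsing keeps working after a failover. A sketch of the resulting hue.ini fragment, with an example host name:

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # HttpFS endpoint (example host). HttpFS proxies WebHDFS requests
      # and follows NameNode failover, unlike a URL pinned to one NameNode.
      webhdfs_url=http://httpfs-host.example.com:14000/webhdfs/v1
```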

Configuring Impala to Work with HDFS HA

  1. Complete the steps to reconfigure the Hive metastore database, as described in the preceding section. Impala shares the same underlying database with Hive to manage metadata for databases, tables, and so on.
  2. Issue the INVALIDATE METADATA statement from an Impala shell. This one-time operation makes all Impala daemons across the cluster aware of the latest settings for the Hive metastore database. Alternatively, restart the Impala service.
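Step 2 can be run non-interactively from any node with the Impala shell installed; the daemon host name below is an example.

```shell
# One-time metadata refresh after the metastore update. Connecting to any
# one impalad propagates the change to all daemons in the cluster.
impala-shell -i impalad-host.example.com -q "INVALIDATE METADATA"
```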

Configuring Oozie to Use HDFS HA

To configure an Oozie workflow to use HDFS HA, use the HDFS nameservice instead of the NameNode URI in the <name-node> element of the workflow.

Example:

<action name="mr-node">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>hdfs://ha-nn</name-node>
    ...
  </map-reduce>
  ...
</action>

where ha-nn is the value of dfs.nameservices in hdfs-site.xml.
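The nameservice referenced in the workflow must match the value defined in hdfs-site.xml. A minimal fragment, using ha-nn as the example nameservice ID from above:

```xml
<property>
  <name>dfs.nameservices</name>
  <value>ha-nn</value>
</property>
```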

Page generated August 29, 2019.