Cloudera Enterprise 6.3.x

Managing ZooKeeper

  Note: This page contains references to CDH 5 components or features that have been removed from CDH 6. These references are only applicable if you are managing a CDH 5 cluster with Cloudera Manager 6. For more information, see Deprecated Items.

This topic describes how to add, remove, and replace ZooKeeper roles.


Using Multiple ZooKeeper Services

Cloudera Manager requires dependent services within CDH to use the same ZooKeeper service. If you configure dependent CDH services to use different ZooKeeper services, Cloudera Manager reports the following error:

com.cloudera.cmf.command.CmdExecException:java.lang.RuntimeException: java.lang.IllegalStateException: Assumption violated:
getAllDependencies returned multiple distinct services of the same type
at SeqFlowCmd.java line 120
in com.cloudera.cmf.command.flow.SeqFlowCmd run()

CDH services that are not dependent can use different ZooKeeper services. For example, Kafka does not depend on any services other than ZooKeeper. You might have one ZooKeeper service for Kafka, and one ZooKeeper service for the rest of your CDH services.
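For example, outside Cloudera Manager a Kafka broker points at its dedicated ensemble through the zookeeper.connect property in server.properties. A minimal sketch, with hypothetical hostnames:

# Kafka uses its own three-server ensemble, separate from the rest of CDH
zookeeper.connect=zk-kafka1.example.org:2181,zk-kafka2.example.org:2181,zk-kafka3.example.org:2181

Under Cloudera Manager, this is normally controlled through the Kafka service's ZooKeeper Service dependency rather than by editing server.properties directly.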

Adding a ZooKeeper Service Using Cloudera Manager

Minimum Required Role: Full Administrator

When adding the ZooKeeper service, the Add Service wizard automatically initializes the data directories.

When you add ZooKeeper servers to an existing ensemble, a rolling restart of all ZooKeeper servers is required so that every server has the same configuration.

If you quit the Add Service wizard or it does not finish successfully, you can initialize the directories outside the wizard by following these steps:
  1. Go to the ZooKeeper service.
  2. Select Actions > Initialize.
  3. Click Initialize again to confirm.
  Note: If the data directories are not initialized, the ZooKeeper servers cannot be started.

In a production environment, you should deploy ZooKeeper as an ensemble with an odd number of servers. As long as a majority of the servers in the ensemble are available, the ZooKeeper service will be available. The minimum recommended ensemble size is three ZooKeeper servers, and Cloudera recommends that each server run on a separate machine. In addition, the ZooKeeper server process should have its own dedicated disk storage if possible.
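For example, a three-server ensemble stays available if any one server fails, because the remaining two still form a majority; a five-server ensemble tolerates two failures. A minimal sketch of the server entries in zoo.cfg for a three-server ensemble, assuming hypothetical hostnames and the default quorum (2888) and election (3888) ports:

server.1=zk1.example.org:2888:3888
server.2=zk2.example.org:2888:3888
server.3=zk3.example.org:2888:3888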

Replacing a ZooKeeper Disk Using Cloudera Manager

Minimum Required Role: Full Administrator

  1. In Cloudera Manager, update the Data Directory and Transaction Log Directory settings.
  2. Stop a single ZooKeeper role.
  3. Move the contents to the new disk location (modifying mounts as needed) and make sure the permissions and ownership are correct, as shown in the example after these steps.
  4. Start the ZooKeeper role.
  5. Repeat steps 2-4 for any remaining ZooKeeper roles.
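A sketch of steps 2 through 4 for a single role, assuming the default zookeeper service user and hypothetical old and new data locations (/data/old/zookeeper and /data/new/zookeeper); the new paths must match the directories you configured in step 1:

# Stop the ZooKeeper role for this host in Cloudera Manager, then:
mkdir -p /data/new/zookeeper
cp -a /data/old/zookeeper/. /data/new/zookeeper/   # copy data, preserving permissions and ownership
chown -R zookeeper:zookeeper /data/new/zookeeper   # confirm ownership is correct
# Start the ZooKeeper role again in Cloudera Manager before moving on to the next role.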

Replacing a ZooKeeper Role Using Cloudera Manager with ZooKeeper Service Downtime

Minimum Required Role: Full Administrator

  1. Go to ZooKeeper Instances.
  2. Stop the ZooKeeper role on the old host.
  3. Remove the ZooKeeper role from the old host on the ZooKeeper Instances page.
  4. Add a new ZooKeeper role on the new host.
  5. Restart the old ZooKeeper servers that have an outdated configuration.
  6. Confirm that the ZooKeeper service has elected one of the restarted hosts as a leader on the ZooKeeper Status page. See Confirming the Election Status of a ZooKeeper Service.
  7. Restart the newly added ZooKeeper server.
  8. Restart (or rolling restart) any dependent services, such as HBase, HDFS, YARN, Hive, or any other services marked as having a stale configuration.

Replacing a ZooKeeper Role Using Cloudera Manager without ZooKeeper Service Downtime

Minimum Required Role: Full Administrator

  Note: This process is valid only if SASL authentication is not enabled between the ZooKeeper servers. You can check this in Cloudera Manager > ZooKeeper > Configuration > Enable Server to Server SASL Authentication.
  1. Go to ZooKeeper Instances.
  2. Stop the ZooKeeper role on the old host.
  3. Confirm the ZooKeeper service has elected one of the remaining hosts as a leader on the ZooKeeper Status page. See Confirming the Election Status of a ZooKeeper Service.
  4. On the ZooKeeper Instances page, remove the ZooKeeper role from the old host.
  5. Add a new ZooKeeper role on the new host.
  6. Change the individual configuration of the newly added ZooKeeper role so that it has the highest ZooKeeper Server ID in the cluster (see the example after these steps).
  7. Go to ZooKeeper > Instances and click the newly added Server instance.
  8. On the individual Server page, select Start this Server from the Actions dropdown menu to start the new ZooKeeper role.
      Note: If you try to start the role from anywhere else, you may see an error message.
  9. On the ZooKeeper Status page, confirm that there is a leader and all other hosts are followers.
  10. Restart the ZooKeeper server that has an outdated configuration and is a follower.
  11. Restart the leader ZooKeeper server that has an outdated configuration.
  12. Confirm that a leader has been elected after the restart and that the whole ZooKeeper service is in a green state.
  13. Restart (or rolling restart) any dependent services, such as HBase, HDFS, YARN, Hive, or any other services marked as having a stale configuration.
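To choose the highest ZooKeeper Server ID in step 6, you can read the myid file on each existing ZooKeeper host. A sketch assuming the default dataDir of /var/lib/zookeeper and hypothetical hostnames:

for h in zk1.example.org zk2.example.org zk3.example.org; do
  echo -n "$h: "; ssh "$h" cat /var/lib/zookeeper/myid
done

Set the new role's Server ID higher than every value returned.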

Adding or Deleting a ZooKeeper Role on an Unmanaged Cluster

Minimum Required Role: Full Administrator

For information on administering ZooKeeper from the command line, see the ZooKeeper Getting Started Guide.

Replacing a ZooKeeper Role on an Unmanaged Cluster

Minimum Required Role: Full Administrator

These instructions assume you are using ZooKeeper from the command line. For more information, see the ZooKeeper Getting Started Guide.

  1. Stop the ZooKeeper role on the old host.
  2. Confirm the ZooKeeper Quorum has elected a leader. See Confirming the Election Status of a ZooKeeper Service.
  3. Add a new ZooKeeper role on the new server (see the myid example after these steps).
  4. Identify the dataDir location from the zoo.cfg file. This defaults to /var/lib/zookeeper.
  5. Identify the ID number for the ZooKeeper Server from the myid file in the configuration: cat /var/lib/zookeeper/myid
  6. On all the ZooKeeper hosts, edit the zoo.cfg file so that the server list references the new server's ID and hostname. For example:
    server.1=zk1.example.org:3181:4181
    server.2=zk2.example.org:3181:4181
    server.4=zk4.example.org:3181:4181
  7. Restart the ZooKeeper hosts.
  8. Confirm the ZooKeeper Quorum has elected a leader and the other hosts are followers. See Confirming the Election Status of a ZooKeeper Service.
  9. Restart any dependent services, such as HBase, the HDFS Failover Controllers (with HDFS High Availability), or YARN or MapReduce v1 (with High Availability).
  10. Perform a failover to make one HDFS NameNode active. See Manually Failing Over to the Standby NameNode.
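Before starting ZooKeeper on the new server, its myid file must contain the new Server ID. Continuing the zoo.cfg example above, where the new host is server.4, a sketch assuming the default dataDir and the default zookeeper service user:

mkdir -p /var/lib/zookeeper
echo 4 > /var/lib/zookeeper/myid
chown -R zookeeper:zookeeper /var/lib/zookeeper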

Confirming the Election Status of a ZooKeeper Service

Determining the election status of a ZooKeeper host requires that telnet or nc (netcat) be installed on a host with network access to the ZooKeeper host. The default ZooKeeper client port is 2181. Run the following command against each ZooKeeper host:
echo "stat" | nc server.example.org 2181 | grep Mode

For example, a follower host would return the message:

Mode: follower

You can use telnet if you prefer:
$ telnet server.example.org 2181
Sample output would be similar to the following.
Trying 10.1.2.154...
Connected to server.example.org.
Escape character is '^]'.
stat
Zookeeper version: 3.4.5-cdh5.4.4--1, built on 07/06/2015 23:54 GMT
...

Latency min/avg/max: 0/1/40
Received: 631
Sent: 677
Connections: 7
Outstanding: 0
Zxid: 0x30000011a
Mode: follower               <----
Node count: 40
Connection closed by foreign host.
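To check every ensemble member in one pass, you can wrap the nc command above in a loop. A sketch with hypothetical hostnames:

for h in zk1.example.org zk2.example.org zk3.example.org; do
  echo -n "$h: "; echo "stat" | nc "$h" 2181 | grep Mode
done

In a healthy ensemble, exactly one host reports Mode: leader and the rest report Mode: follower.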