
Linux Control Groups (cgroups)

Minimum Required Role: Full Administrator

Cloudera Manager supports the Linux control groups (cgroups) kernel feature. With cgroups, administrators can impose per-resource restrictions and limits on services and roles, which makes it possible to isolate compute frameworks from one another. Resource allocation is implemented by setting properties for the services and roles.

Linux Distribution Support

Cgroups are a feature of the Linux kernel, and as such, support depends on the host's Linux distribution and version as shown in the following tables. If a distribution lacks support for a given parameter, changes to the parameter have no effect.

Table 1. RHEL-compatible
Distribution CPU Shares I/O Weight Memory Soft Limit Memory Hard Limit
Red Hat Enterprise Linux, CentOS, and Oracle Enterprise Linux 7
Red Hat Enterprise Linux, CentOS, and Oracle Enterprise Linux 6
Table 2. SLES
Distribution CPU Shares I/O Weight Memory Soft Limit Memory Hard Limit
SUSE Linux Enterprise Server 12
SUSE Linux Enterprise Server 11
Table 3. Ubuntu
Distribution CPU Shares I/O Weight Memory Soft Limit Memory Hard Limit
Ubuntu 16.04 LTS
Ubuntu 14.04 LTS
Ubuntu 12.04 LTS
Table 4. Debian
Distribution CPU Shares I/O Weight Memory Soft Limit Memory Hard Limit
Debian 7.1
Debian 7.0
Debian 6.0
Table 5. Oracle Linux (OL)
Distribution CPU Shares I/O Weight Memory Soft Limit Memory Hard Limit
Oracle Linux 7
Oracle Linux 6

The exact level of support can be found in the Cloudera Manager Agent log file, shortly after the Agent has started. See Viewing the Cloudera Manager Server Log to find the Agent log. In the log file, look for an entry like this:

Found cgroups capabilities: {
'has_memory': True, 
'default_memory_limit_in_bytes': 9223372036854775807, 
'writable_cgroup_dot_procs': True, 
'has_cpu': True, 
'default_blkio_weight': 1000, 
'default_cpu_shares': 1024, 
'has_blkio': True}

The has_cpu and similar entries correspond directly to support for the CPU, I/O, and memory parameters.
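
To confirm kernel-level support directly on a host, you can also inspect /proc/cgroups, which lists each cgroup controller and whether it is enabled. This is a generic Linux check rather than a Cloudera Manager feature; the cpu, blkio, and memory controllers back the CPU Shares, I/O Weight, and memory limit parameters, respectively:

  # List the cgroup controllers known to the kernel. The last column
  # ("enabled") is 1 when the controller is available.
  cat /proc/cgroups
  # Output columns: subsys_name  hierarchy  num_cgroups  enabled
  # Look for the cpu, blkio, and memory rows.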

Resource Management with Control Groups

To use cgroups, you must enable cgroup-based resource management in the host resource management configuration properties. If you configure static service pools, this property is set automatically as part of that process.

Enabling Resource Management

Cgroups-based resource management can be enabled for all hosts, or on a per-host basis.
  1. If you have upgraded from a version of Cloudera Manager older than Cloudera Manager 4.5, restart every Cloudera Manager Agent before using cgroups-based resource management:
    1. Stop all services, including the Cloudera Management Service.
    2. On each cluster host, run as root:
      • RHEL-compatible 7 and higher:
        sudo service cloudera-scm-agent next_stop_hard
        sudo service cloudera-scm-agent restart
      • All other Linux distributions:
        sudo service cloudera-scm-agent hard_restart
        
    3. Start all services.
  2. Click the Hosts tab.
  3. Optionally click the link of the host where you want to enable cgroups.
  4. Click the Configuration tab.
  5. Select Category > Resource Management.
  6. Select Enable Cgroup-based Resource Management.
  7. Restart all roles on the host or hosts.
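
After the roles restart, you can verify that the Agent detected cgroup support by searching its log for the capabilities entry shown earlier. This sketch assumes the default Agent log location; adjust the path if your installation logs elsewhere:

  # Print the cgroups capabilities line from the Agent log (default path).
  grep "Found cgroups capabilities" /var/log/cloudera-scm-agent/cloudera-scm-agent.log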

Limitations

  • Cgroup-based resource management parameters in role group and role instance overrides must be saved one at a time; otherwise, some of the changes that should be reflected dynamically are ignored.
  • The role group abstraction is an imperfect fit for resource management parameters, where the goal is often to take a numeric value for a host resource and distribute it among running roles. The role group represents a "horizontal" slice: the same role across a set of hosts. However, the cluster is often viewed in terms of "vertical" slices, each being a combination of worker roles (such as TaskTracker, DataNode, RegionServer, and Impala Daemon). Nothing in Cloudera Manager guarantees that these disparate horizontal slices are "aligned" (meaning that the role assignment is identical across hosts). If they are unaligned, some of the role group values will be incorrect on the unaligned hosts. For example, a host whose role groups have been configured with memory limits but that is missing a role will probably have unassigned memory.

Configuring Resource Parameters

After enabling cgroups, you can restrict and limit the resource consumption of roles (or role groups) on a per-resource basis. All of these parameters can be found in the Cloudera Manager Admin Console, under the Resource Management category (a sketch of the underlying cgroup control files follows this list):
  • CPU Shares - The more CPU shares given to a role, the larger its share of the CPU when under contention. Until processes on the host (including both roles managed by Cloudera Manager and other system processes) are contending for all of the CPUs, this will have no effect. When there is contention, those processes with higher CPU shares will be given more CPU time. The effect is linear: a process with 4 CPU shares will be given roughly twice as much CPU time as a process with 2 CPU shares.

    Updates to this parameter are dynamically reflected in the running role.

  • I/O Weight - The greater the I/O weight, the higher priority will be given to I/O requests made by the role when I/O is under contention (either by roles managed by Cloudera Manager or by other system processes).

    This only affects read requests; write requests remain unprioritized. The Linux I/O scheduler controls when buffered writes are flushed to disk, based on time and quantity thresholds, and it continually flushes buffered writes from multiple sources rather than flushing a given process's writes ahead of others according to priority.

    Updates to this parameter are dynamically reflected in the running role.

  • Memory Soft Limit - When the limit is reached, the kernel reclaims pages charged to the process, but only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous pages and page cache pages count toward the limit.

    After updating this parameter, you must restart the role for changes to take effect.

  • Memory Hard Limit - When a role's resident set size (RSS) exceeds the value of this parameter, the kernel swaps out some of the role's memory. If it is unable to do so, it kills the process. The kernel measures memory consumption in a manner that does not necessarily match what top or ps report for RSS, so treat this limit as a rough approximation.

    After updating this parameter, you must restart the role for changes to take effect.
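
These four parameters map onto standard cgroup v1 control files, which can be useful when debugging what a role is actually being granted. The sketch below is illustrative only: the "example" cgroup name is hypothetical, the paths assume a conventional cgroup v1 mount layout, and Cloudera Manager maintains its own cgroup hierarchy for managed roles, so do not hand-edit the cgroups it creates:

  # Conventional cgroup v1 control files behind each parameter
  # (the "example" cgroup name is hypothetical):
  cat /sys/fs/cgroup/cpu/example/cpu.shares                     # CPU Shares
  cat /sys/fs/cgroup/blkio/example/blkio.weight                 # I/O Weight
  cat /sys/fs/cgroup/memory/example/memory.soft_limit_in_bytes  # Memory Soft Limit
  cat /sys/fs/cgroup/memory/example/memory.limit_in_bytes       # Memory Hard Limit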

Example: Protecting Production MapReduce Jobs from Impala Queries

Suppose you have MapReduce deployed in production and want to roll out Impala without affecting production MapReduce jobs. For simplicity, we will make the following assumptions:
  • The cluster is using homogeneous hardware
  • Each worker host has two cores
  • Each worker host has 8 GB of RAM
  • Each worker host is running a DataNode, TaskTracker, and an Impala Daemon
  • Each role type is in a single role group
  • Cgroups-based resource management has been enabled on all hosts
Configure each resource as follows:
CPU
  1. Leave DataNode and TaskTracker role group CPU shares at 1024.
  2. Set Impala Daemon role group's CPU shares to 256.
  3. Set the TaskTracker role group's Maximum Number of Simultaneous Map Tasks to 2 and its Maximum Number of Simultaneous Reduce Tasks to 1. This yields an upper bound of three MapReduce tasks at any given time, which is an important detail for memory sizing.
Memory
  1. Set Impala Daemon role group memory limit to 1024 MB.
  2. Leave DataNode maximum Java heap size at 1 GB.
  3. Leave TaskTracker maximum Java heap size at 1 GB.
  4. Leave MapReduce Child Java Maximum Heap Size for Gateway at 1 GB.
  5. Leave cgroups hard memory limits alone. We'll rely on "cooperative" memory limits exclusively, as they yield a nicer user experience than the cgroups-based hard memory limits.
I/O
  1. Leave DataNode and TaskTracker role group I/O weight at 500.
  2. Set the Impala Daemon role group's I/O weight to 125.
When you're done with configuration, restart all services for these changes to take effect. The results are:
  1. When MapReduce jobs are running, all Impala queries together will consume up to a fifth of the cluster's CPU resources.
  2. Individual Impala Daemons will not consume more than 1 GB of RAM. If this figure is exceeded, new queries will be cancelled.
  3. DataNodes and TaskTrackers can consume up to 1 GB of RAM each.
  4. We expect up to 3 MapReduce tasks at a given time, each with a maximum heap size of 1 GB of RAM. That's up to 3 GB for MapReduce tasks.
  5. The remainder of each host's available RAM (2 GB) is reserved for other host processes.
  6. When MapReduce jobs are running, read requests issued by Impala queries will receive a fifth of the priority of either HDFS read requests or MapReduce read requests.
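
The arithmetic behind these results can be checked with a couple of quick shell expressions; this is just a sanity check of the numbers above:

  # CPU under contention: Impala shares / (Impala shares + TaskTracker shares)
  echo $(( 256 * 100 / (256 + 1024) ))   # prints 20, i.e. one fifth

  # Memory per 8 GB host: Impala (1) + DataNode (1) + TaskTracker (1)
  # + three MapReduce tasks (3) = 6 GB used, leaving 2 GB for other processes.
  echo $(( 8 - (1 + 1 + 1 + 3) ))        # prints 2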