Exposing Hadoop Metrics to Graphite
Core Hadoop services and HBase can write their metrics to Graphite, a real-time graphing system.
HDFS, YARN, and HBase use the Metrics2 framework; MapReduce1 and HBase also support the older Metrics framework. For background, see the Cloudera blog post, What is Hadoop Metrics2?
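The two frameworks use different configuration formats, which the tables below illustrate: Metrics2 properties are keyed by a daemon prefix and a sink name, while Metrics properties are keyed by a metrics context. A minimal sketch contrasting the two, using placeholder host and port values:
# Metrics2 (HDFS, YARN, HBase): <daemon>.sink.<sink name>.<property>
*.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink
datanode.sink.graphite.server_host=<hostname>
datanode.sink.graphite.server_port=<port>
# Metrics (MapReduce1, HBase): <context>.<property>
mapred.class=org.apache.hadoop.metrics.graphite.GraphiteContext
mapred.servers=<graphite hostname>:<port>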
Configure Hadoop Metrics for Graphite Using Cloudera Manager
Minimum Required Role: Configurator (also provided by Cluster Administrator, Full Administrator)
- Go to the Home page by clicking the Cloudera Manager logo.
- Click Configuration.
- Search for the term Metrics.
- To configure HDFS, YARN, or HBase, use the Hadoop Metrics2 Advanced Configuration Snippet (Safety Valve). To configure MapReduce1 (or HBase through the older Metrics framework), use the Hadoop Metrics Advanced Configuration Snippet (Safety Valve).
- Click Edit Individual Values to see the supported daemons and their default groups.
- Configure each default group with a metrics class, sampling period, and Graphite server. See the tables below.
- To add optional parameters for socket connection retry, modify this example as necessary:
  *.sink.graphite.retry_socket_interval=60000 #in milliseconds
  *.sink.graphite.socket_connection_retries=10 #set to 0 to disable retries
- Click Save Changes.
- Restart the cluster or the affected service, depending on the scope of your changes.
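For reference, a complete Metrics2 safety-valve snippet for a single daemon (the NameNode Default Group in this sketch) might look like the following. It combines the settings from the table below with the optional retry parameters; the hostname, port, and prefix values are placeholders:
# Send Metrics2 output to Graphite, reporting every 10 seconds.
*.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink
*.period=10
# Graphite connection details for the NameNode (placeholders).
namenode.sink.graphite.server_host=<hostname>
namenode.sink.graphite.server_port=<port>
namenode.sink.graphite.metrics_prefix=<prefix>
# Optional socket connection retry behavior.
*.sink.graphite.retry_socket_interval=60000 #in milliseconds
*.sink.graphite.socket_connection_retries=10 #set to 0 to disable retries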
Graphite Configuration Settings Per Daemon
Service | Daemon Default Group | Graphite Configuration Settings |
---|---|---|
HBase | Master and RegionServer | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 hbase.sink.graphite.server_host=<hostname> hbase.sink.graphite.server_port=<port> hbase.sink.graphite.metrics_prefix=<prefix> |
HDFS | DataNode | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 datanode.sink.graphite.server_host=<hostname> datanode.sink.graphite.server_port=<port> datanode.sink.graphite.metrics_prefix=<prefix> |
HDFS | NameNode | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 namenode.sink.graphite.server_host=<hostname> namenode.sink.graphite.server_port=<port> namenode.sink.graphite.metrics_prefix=<prefix> |
HDFS | SecondaryNameNode | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 secondarynamenode.sink.graphite.server_host=<hostname> secondarynamenode.sink.graphite.server_port=<port> secondarynamenode.sink.graphite.metrics_prefix=<prefix> |
YARN | NodeManager | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 nodemanager.sink.graphite.server_host=<hostname> nodemanager.sink.graphite.server_port=<port> nodemanager.sink.graphite.metrics_prefix=<prefix> |
YARN | ResourceManager | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 resourcemanager.sink.graphite.server_host=<hostname> resourcemanager.sink.graphite.server_port=<port> resourcemanager.sink.graphite.metrics_prefix=<prefix> |
YARN | JobHistory Server | *.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink *.period=10 jobhistoryserver.sink.graphite.server_host=<hostname> jobhistoryserver.sink.graphite.server_port=<port> jobhistoryserver.sink.graphite.metrics_prefix=<prefix> |
Note: The following table lists settings for the Metrics framework (MapReduce1 and HBase). With this framework, set values for each daemon default group. For example, for MapReduce1, add values to both the JobTracker Default Group and the TaskTracker Default Group.
Service | Daemon Default Group | Graphite Configuration Settings |
---|---|---|
HBase | Master | hbase.class=org.apache.hadoop.metrics.graphite.GraphiteContext hbase.period=10 hbase.servers=<graphite hostname>:<port> jvm.class=org.apache.hadoop.metrics.graphite.GraphiteContext jvm.period=10 jvm.servers=<graphite hostname>:<port> rpc.class=org.apache.hadoop.metrics.graphite.GraphiteContext rpc.period=10 rpc.servers=<graphite hostname>:<port> |
HBase | RegionServer | Same settings as the Master Default Group |
MapReduce1 | JobTracker | dfs.class=org.apache.hadoop.metrics.graphite.GraphiteContext dfs.period=10 dfs.servers=<graphite hostname>:<port> mapred.class=org.apache.hadoop.metrics.graphite.GraphiteContext mapred.period=10 mapred.servers=<graphite hostname>:<port> jvm.class=org.apache.hadoop.metrics.graphite.GraphiteContext jvm.period=10 jvm.servers=<graphite hostname>:<port> rpc.class=org.apache.hadoop.metrics.graphite.GraphiteContext rpc.period=10 rpc.servers=<graphite hostname>:<port> |
MapReduce1 | TaskTracker | Same settings as the JobTracker Default Group |
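Because the Metrics framework requires the same context settings in each daemon's default group, a paste-ready snippet can help. As a sketch, the TaskTracker Default Group might contain the following; the Graphite hostname and port are placeholders:
# Route the dfs, mapred, jvm, and rpc metrics contexts to Graphite, reporting every 10 seconds.
dfs.class=org.apache.hadoop.metrics.graphite.GraphiteContext
dfs.period=10
dfs.servers=<graphite hostname>:<port>
mapred.class=org.apache.hadoop.metrics.graphite.GraphiteContext
mapred.period=10
mapred.servers=<graphite hostname>:<port>
jvm.class=org.apache.hadoop.metrics.graphite.GraphiteContext
jvm.period=10
jvm.servers=<graphite hostname>:<port>
rpc.class=org.apache.hadoop.metrics.graphite.GraphiteContext
rpc.period=10
rpc.servers=<graphite hostname>:<port>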