Contents

- 7.1 Configuring the yarn-site.xml File
- 7.2 Testing YARN Automatic Failover
The ResourceManager (RM) is responsible for tracking the resources in the cluster and for scheduling applications (for example, MapReduce jobs). Before Hadoop 2.4 a cluster had only a single ResourceManager, and when it went down the whole cluster was affected. The high-availability feature adds redundancy in the form of an active/standby ResourceManager pair, so that failover is possible.
The YARN HA architecture is shown in the figure below.
In this example, the roles are assigned to the nodes as follows:

| Node     | Roles                        |
|----------|------------------------------|
| centos01 | ResourceManager, NodeManager |
| centos02 | ResourceManager, NodeManager |
| centos03 | NodeManager                  |
The following walks through the YARN HA configuration step by step.
7.1 Configuring the yarn-site.xml File
(1) Modify the yarn-site.xml file and add the following properties (inside the existing `<configuration>` element):
```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>centos01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>centos02</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>centos01:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>centos02:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>centos01:2181,centos02:2181,centos03:2181</value>
</property>
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
```
Explanation of the configuration parameters above:

- `yarn.resourcemanager.ha.enabled`: enables the RM HA feature.
- `yarn.resourcemanager.cluster-id`: identifies the cluster the RMs belong to. If this option is set, make sure every RM has its own id in the configuration.
- `yarn.resourcemanager.ha.rm-ids`: the list of logical ids for the RMs. The ids can be chosen freely; here they are set to `rm1,rm2`, and the settings that follow refer to these ids.
- `yarn.resourcemanager.hostname.rm1`: the hostname of the RM with the corresponding id. Alternatively, each individual service address of an RM can be configured.
- `yarn.resourcemanager.webapp.address.rm1`: the web UI address of the RM with the corresponding id.
- `yarn.resourcemanager.zk-address`: the address of the integrated ZooKeeper ensemble.
- `yarn.resourcemanager.recovery.enabled`: enables RM recovery on restart; defaults to false.
- `yarn.resourcemanager.store.class`: the class used for state storage. The default is `org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore`, an implementation backed by the Hadoop file system. It can instead be set to `org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore`, a ZooKeeper-based implementation, which is the class used here.
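Because `ZKRMStateStore` is selected here, the RM persists its state under a znode in ZooKeeper, `/rmstore` by default (configurable via `yarn.resourcemanager.zk-state-store.parent-path`). Once the cluster is up, one way to confirm the store is being written is to look at that znode with ZooKeeper's own shell; a minimal sketch, assuming `zkCli.sh` from the ZooKeeper installation is on the PATH:

```bash
# Open a ZooKeeper shell against the ensemble configured in yarn.resourcemanager.zk-address
zkCli.sh -server centos01:2181,centos02:2181,centos03:2181

# Inside the shell: /rmstore is the default parent znode of ZKRMStateStore;
# its ZKRMStateRoot child holds the persisted application and token state
ls /rmstore
ls /rmstore/ZKRMStateRoot
```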
(2) Once yarn-site.xml is configured, distribute it to the other nodes in the cluster.
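For example, from centos01 the file might be pushed out with scp. A minimal sketch, assuming Hadoop is installed at the same path `/home/hadoop/hadoop-2.7.1` on every node (consistent with the shell prompts used in this article):

```bash
# Distribute the updated yarn-site.xml from centos01 to centos02 and centos03
cd /home/hadoop/hadoop-2.7.1
scp etc/hadoop/yarn-site.xml hadoop@centos02:/home/hadoop/hadoop-2.7.1/etc/hadoop/
scp etc/hadoop/yarn-site.xml hadoop@centos03:/home/hadoop/hadoop-2.7.1/etc/hadoop/
```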
(3) With HDFS from the previous chapter already running, proceed to start YARN.
Run the following command on both centos01 and centos02 to start a ResourceManager:
```
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start resourcemanager
```
Run the following command on centos01, centos02, and centos03 to start the NodeManager:
```
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start nodemanager
```
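Note that in Hadoop 2.7, `sbin/start-yarn.sh` starts a ResourceManager only on the node where it is invoked, which is why the standby RM is started by hand above. To drive everything from centos01 instead, a sketch over SSH (passwordless SSH between the nodes is assumed, as Hadoop's own scripts already require):

```bash
# Start the second ResourceManager on centos02 remotely
ssh centos02 '/home/hadoop/hadoop-2.7.1/sbin/yarn-daemon.sh start resourcemanager'

# Start a NodeManager on all three nodes
for host in centos01 centos02 centos03; do
  ssh "$host" '/home/hadoop/hadoop-2.7.1/sbin/yarn-daemon.sh start nodemanager'
done
```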
(4) After YARN has started, check the Java processes on each node:
```
[hadoop@centos01 hadoop-2.7.1]$ jps
3360 QuorumPeerMain
4080 DFSZKFailoverController
4321 NodeManager
4834 Jps
3908 JournalNode
3702 DataNode
4541 ResourceManager
3582 NameNode

[hadoop@centos02 hadoop-2.7.1]$ jps
4486 Jps
3815 DFSZKFailoverController
4071 NodeManager
4359 ResourceManager
3480 NameNode
3353 QuorumPeerMain
3657 JournalNode
3563 DataNode

[hadoop@centos03 hadoop-2.7.1]$ jps
3496 JournalNode
4104 Jps
3836 NodeManager
3293 QuorumPeerMain
3390 DataNode
```
Now open http://centos01:8088 in a browser to reach the active ResourceManager and check that YARN is up, as shown in the figure below.
If you instead visit the standby ResourceManager at http://centos02:8088, you are automatically redirected to http://centos01:8088. This is because the active ResourceManager is currently on the centos01 node; accessing the standby node's ResourceManager redirects to the active one.
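Which RM is active can also be verified from the command line with the `yarn rmadmin` tool, passing the logical ids configured in yarn-site.xml:

```bash
# Query the HA state of each ResourceManager by its logical id
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm1
active
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
standby
```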
7.2 Testing YARN Automatic Failover
Run the stock MapReduce WordCount example on centos01. While the job is in its map phase, open a new SSH shell window, kill the ResourceManager process on centos01, and observe how the job behaves. The command to run the WordCount example is as follows (the kill step is sketched right after it):
```
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
```
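While the job is still in its map phase, the ResourceManager on centos01 can be killed from the second SSH window. A minimal sketch (the pid is whatever jps reports at that moment; 4541 below matches the jps listing shown earlier):

```bash
# Look up the ResourceManager's pid on centos01 and kill it mid-job
jps | grep ResourceManager
# 4541 ResourceManager
kill -9 4541
```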
The full output of the run is as follows:
```
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
18/03/16 10:48:22 INFO input.FileInputFormat: Total input paths to process : 1
18/03/16 10:48:22 INFO mapreduce.JobSubmitter: number of splits:1
18/03/16 10:48:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1521168402181_0001
18/03/16 10:48:23 INFO impl.YarnClientImpl: Submitted application application_1521168402181_0001
18/03/16 10:48:23 INFO mapreduce.Job: The url to track the job: http://centos01:8088/proxy/application_1521168402181_0001/
18/03/16 10:48:23 INFO mapreduce.Job: Running job: job_1521168402181_0001
18/03/16 10:48:56 INFO mapreduce.Job: Job job_1521168402181_0001 running in uber mode : false
18/03/16 10:48:57 INFO mapreduce.Job: map 0% reduce 0%
18/03/16 10:50:21 INFO mapreduce.Job: map 100% reduce 0%
18/03/16 10:50:32 INFO mapreduce.Job: map 100% reduce 100%
18/03/16 10:50:36 INFO mapreduce.Job: Job job_1521168402181_0001 completed successfully
18/03/16 10:50:37 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=1321
        FILE: Number of bytes written=239335
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1094
        HDFS: Number of bytes written=971
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=14130
        Total time spent by all reduces in occupied slots (ms)=7851
        Total time spent by all map tasks (ms)=14130
        Total time spent by all reduce tasks (ms)=7851
        Total vcore-seconds taken by all map tasks=14130
        Total vcore-seconds taken by all reduce tasks=7851
        Total megabyte-seconds taken by all map tasks=14469120
        Total megabyte-seconds taken by all reduce tasks=8039424
    Map-Reduce Framework
        Map input records=29
        Map output records=109
        Map output bytes=1368
        Map output materialized bytes=1321
        Input split bytes=101
        Combine input records=109
        Combine output records=86
        Reduce input groups=86
        Reduce shuffle bytes=1321
        Reduce input records=86
        Reduce output records=86
        Spilled Records=172
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=188
        CPU time spent (ms)=1560
        Physical memory (bytes) snapshot=278478848
        Virtual memory (bytes) snapshot=4195344384
        Total committed heap usage (bytes)=140480512
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=993
    File Output Format Counters
        Bytes Written=971
```
As the output above shows, even though the ResourceManager process was killed, the YARN job kept running smoothly, which means automatic failover took effect: when the active ResourceManager failed, control switched to the centos02 node and execution continued there. If you now visit the formerly standby ResourceManager's web address http://centos02:8088 in a browser, it is reachable directly and shows that the job completed successfully.
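To restore redundancy after the test, the killed ResourceManager on centos01 can simply be started again. With automatic failover there is no automatic fail-back, so it rejoins the pair as the standby, which can be confirmed with the same rmadmin command as before:

```bash
# Bring the killed RM back; it rejoins as standby rather than reclaiming the active role
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start resourcemanager
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm1
standby
```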
This completes the YARN HA cluster setup.