spark.dynamicAllocation.enabled:Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. For more detail, see the description here.
This property controls whether dynamic resource allocation is used, i.e. whether the number of executors registered with the application is scaled up and down according to the workload. It defaults to false (at least in Spark 2.2 through 2.4), but defaults to true in the CDH distribution.
Related properties include spark.dynamicAllocation.minExecutors, spark.dynamicAllocation.maxExecutors, spark.dynamicAllocation.initialExecutors and spark.dynamicAllocation.executorAllocationRatio.
With dynamic allocation enabled, an executor is released after it has been idle for spark.dynamicAllocation.executorIdleTimeout (60s by default).
spark.dynamicAllocation.minExecutors and spark.dynamicAllocation.maxExecutors set the lower and upper bounds on the number of executors, while spark.dynamicAllocation.initialExecutors sets the initial number and defaults to minExecutors. If --num-executors is given, that value is used instead as the initial number of executors under dynamic allocation.
In Spark 1.6, setting both dynamic allocation and --num-executors produced the following warning at startup:
WARN spark.SparkContext: Dynamic Allocation and num executors both set, thus dynamic allocation disabled.
In Spark 2 dynamic allocation is no longer disabled in this case; if --num-executors is also set, it merely serves as the initial number of executors.
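As a rough sketch (not from the original post), these properties could also be set in Scala when building the SparkSession; the numeric values below are arbitrary placeholders, not recommendations:

import org.apache.spark.sql.SparkSession

// Sketch: enable dynamic allocation with explicit bounds.
// All numbers here are illustrative placeholders.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-demo")                          // hypothetical app name
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "2")
  .config("spark.dynamicAllocation.maxExecutors", "10")
  .config("spark.dynamicAllocation.initialExecutors", "4")     // defaults to minExecutors if unset
  .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
  .config("spark.shuffle.service.enabled", "true")             // required on YARN, see below
  .getOrCreate()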
If dynamic allocation is enabled, spark.shuffle.service.enabled must be set to true. This property enables the external shuffle service, which prevents shuffle data from being lost when executors are removed. The external shuffle service runs inside each NodeManager, independently of Spark's executors; its port is set with spark.shuffle.service.port in the Spark configuration and defaults to 7337.
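A hedged sketch of the Spark-side shuffle-service settings (the YARN NodeManager itself must additionally be configured to host Spark's shuffle service as an auxiliary service, a cluster-side step not shown here):

import org.apache.spark.SparkConf

// Spark-side settings that pair with the external shuffle service on YARN.
// The NodeManager must separately be configured to run the shuffle service;
// that is a cluster configuration step, not a Spark application setting.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")  // mandatory when dynamic allocation is on
  .set("spark.shuffle.service.port", "7337")     // 7337 is the default port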
A simple experiment with dynamic allocation: running "spark-shell --conf spark.dynamicAllocation.minExecutors=3 --num-executors 6" sets minExecutors to 3 and num-executors to 6. Right after startup there are 1 AM and 6 executors; a minute later only 1 AM and 3 executors remain:
19/03/18 18:51:44 INFO yarn.YarnRMClient: Registering the ApplicationMaster
19/03/18 18:51:44 INFO util.Utils: Using initial executors = 6, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/03/18 18:51:44 INFO yarn.YarnAllocator: Will request 6 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
19/03/18 18:51:44 INFO yarn.YarnAllocator: Submitted 6 unlocalized container requests.
19/03/18 18:51:44 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
19/03/18 18:51:45 INFO yarn.YarnAllocator: Launching container container_1549688715051_0631_01_000002 on host zjh-master
19/03/18 18:51:45 INFO yarn.YarnAllocator: Launching container container_1549688715051_0631_01_000003 on host zjh-master
19/03/18 18:51:45 INFO yarn.YarnAllocator: Launching container container_1549688715051_0631_01_000004 on host zjh-master
19/03/18 18:51:45 INFO yarn.YarnAllocator: Launching container container_1549688715051_0631_01_000005 on host zjh-master
19/03/18 18:51:45 INFO yarn.YarnAllocator: Launching container container_1549688715051_0631_01_000006 on host zjh-master
19/03/18 18:51:45 INFO yarn.YarnAllocator: Launching container container_1549688715051_0631_01_000007 on host zjh-master
19/03/18 18:51:45 INFO yarn.YarnAllocator: Received 6 containers from YARN, launching executors on 6 of them.
19/03/18 18:52:49 INFO yarn.YarnAllocator: Driver requested a total number of 5 executor(s).
19/03/18 18:52:49 INFO yarn.ApplicationMaster$AMEndpoint: Driver requested to kill executor(s) 2.
19/03/18 18:52:49 INFO yarn.YarnAllocator: Driver requested a total number of 3 executor(s).
19/03/18 18:52:49 INFO yarn.ApplicationMaster$AMEndpoint: Driver requested to kill executor(s) 5, 1.