Table of Contents
- 13. MapReduce Framework Principles
- 13.3 The Shuffle Mechanism
- 13.3.2 Partition
- 13.3.2.3 Steps to Customize a Partitioner
- 13.3.2.3.1 Create a class that extends Partitioner and override getPartition()
- 13.3.2.3.2 Set the custom Partitioner in the Job driver
- 13.3.2.3.3 After customizing the Partitioner, set a matching number of ReduceTasks
- 13.3.2.4 Partitioning Summary
- 13.3.2.5 Case Analysis
- 13.3.3 Partition Hands-On Case
- 13.3.3.1 Requirement
- 13.3.3.2 Requirement Analysis
- 13.3.3.3 Partition Case Walkthrough
13. MapReduce Framework Principles
13.3 The Shuffle Mechanism
13.3.2 Partition
13.3.2.3 Steps to Customize a Partitioner
13.3.2.3.1 Create a class that extends Partitioner and override getPartition()
public class CustomPartitioner extends Partitioner<Text, FlowBean> {
    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {
        // ... compute the partition number from the key and/or value ...
        return partition;
    }
}
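For comparison, the default partitioner that ships with Hadoop, org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, implements the same method in a single line:

package org.apache.hadoop.mapreduce.lib.partition;

import org.apache.hadoop.mapreduce.Partitioner;

// Hadoop's default: spread keys across ReduceTasks by hash code.
// The & Integer.MAX_VALUE mask keeps the result non-negative
// even when hashCode() returns a negative value.
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}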
13.3.2.3.2 Set the custom Partitioner in the Job driver
job.setPartitionerClass(CustomPartitioner.class);
13.3.2.3.3 After customizing the Partitioner, set a matching number of ReduceTasks
job.setNumReduceTasks(5);
13.3.2.4 Partitioning Summary
(1) If the number of ReduceTasks > the number of partitions returned by getPartition(), several extra empty output files part-r-000xx are produced;
(2) If 1 < the number of ReduceTasks < the number of partitions returned by getPartition(), some partitioned data has no ReduceTask to go to, and an Exception is thrown;
(3) If the number of ReduceTasks = 1, then no matter how many partition files the MapTask side outputs, everything goes to that single ReduceTask, and only one result file, part-r-00000, is produced;
(4) Partition numbers must start at zero and increase one by one.
13.3.2.5 Case Analysis
For example, suppose the number of custom partitions is 5. Then:
(1) job.setNumReduceTasks(1); runs normally, but produces only one output file
(2) job.setNumReduceTasks(2); throws an exception, since 2 falls between 1 and 5 and some partitions would have no ReduceTask to receive them
(3) job.setNumReduceTasks(6); is greater than 5, so the program runs normally but produces an empty file
13.3.3 Partition Hands-On Case
13.3.3.1 Requirement
Output the statistics to different files (partitions) according to the province the phone number belongs to
(1) Input data (fields: id, phone number, network IP, visited domain (optional), upstream traffic, downstream traffic, status code)
1 13736230513 192.196.100.1 www.baidu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.baidu.com 1527 2106 200
6 84188413 192.168.100.3 www.baidu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.baidu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200
(2) Expected output
Phone numbers starting with 136, 137, 138, and 139 each go into their own file (four separate files), and numbers with any other prefix go into one shared file
13.3.3.2 Requirement Analysis
1. Requirement: output the statistics to different files (partitions) according to the province the phone number belongs to
2. Input data
13630577991 6960 690
13736230513 2481 24681
13846544121 264 0
13956435636 132 1512
13560439638 918 4938
3. Expected output
File 1
File 2
File 3
File 4
File 5
4. Add a ProvincePartitioner partitioner
136 → partition 0
137 → partition 1
138 → partition 2
139 → partition 3
others → partition 4
5. Driver class
// specify the custom partitioner
job.setPartitionerClass(ProvincePartitioner.class);
// and set a matching number of ReduceTasks
job.setNumReduceTasks(5);
13.3.3.3 Partition Case Walkthrough
Create a partitioner2 folder and copy the four Java source files from the writable case into partitioner2.
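The partitioner and driver below reference the FlowBean class from the writable case. As a reminder of its shape, here is a minimal sketch, assuming the usual upFlow/downFlow/sumFlow fields from that case; the version you copied is the authoritative one:

package com.summer.mapreduce.partitioner;

import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Custom Writable value type carrying upstream, downstream, and total traffic.
public class FlowBean implements Writable {
    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // no-arg constructor, required so Hadoop can instantiate the bean by reflection
    public FlowBean() {}

    public long getUpFlow() { return upFlow; }
    public long getDownFlow() { return downFlow; }
    public void setUpFlow(long upFlow) { this.upFlow = upFlow; }
    public void setDownFlow(long downFlow) { this.downFlow = downFlow; }
    public void setSumFlow() { this.sumFlow = this.upFlow + this.downFlow; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // fields must be read in the same order they were written
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }
}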
Building on the writable case, add a partitioner class:
package com.summer.mapreduce.partitioner;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class ProvincePartitioner extends Partitioner<Text, FlowBean> {

    @Override
    public int getPartition(Text text, FlowBean flowBean, int numPartitions) {
        // take the first three digits of the phone number
        String phone = text.toString();
        String prePhone = phone.substring(0, 3);

        int partition;
        if ("136".equals(prePhone)) {
            partition = 0;
        } else if ("137".equals(prePhone)) {
            partition = 1;
        } else if ("138".equals(prePhone)) {
            partition = 2;
        } else if ("139".equals(prePhone)) {
            partition = 3;
        } else {
            partition = 4;
        }
        return partition;
    }
}
Add the custom partitioner setting and the ReduceTask setting in the driver:
package com.summer.mapreduce.partitioner;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;

public class FlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // get the job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // set the jar, mapper, and reducer classes
        job.setJarByClass(FlowDriver.class);
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        // set the map output and final output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        // specify the custom partitioner and a matching number of ReduceTasks
        job.setPartitionerClass(ProvincePartitioner.class);
        job.setNumReduceTasks(5);

        // set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("D:\\inputflow"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\partitionout"));

        // submit the job
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
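The driver also references FlowMapper and FlowReducer, copied over from the writable case. For completeness, a minimal sketch of both, assuming tab-separated input lines in the format shown above (the phone number is the second field; since the domain field is optional, the traffic fields are indexed from the end of the line):

// FlowMapper.java
package com.summer.mapreduce.partitioner;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
    private final Text outK = new Text();
    private final FlowBean outV = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        outK.set(fields[1]);                                          // phone number
        outV.setUpFlow(Long.parseLong(fields[fields.length - 3]));    // upstream traffic
        outV.setDownFlow(Long.parseLong(fields[fields.length - 2]));  // downstream traffic
        outV.setSumFlow();
        context.write(outK, outV);
    }
}

// FlowReducer.java
package com.summer.mapreduce.partitioner;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    private final FlowBean outV = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context)
            throws IOException, InterruptedException {
        // sum up the traffic of all records for this phone number
        long totalUp = 0;
        long totalDown = 0;
        for (FlowBean value : values) {
            totalUp += value.getUpFlow();
            totalDown += value.getDownFlow();
        }
        outV.setUpFlow(totalUp);
        outV.setDownFlow(totalDown);
        outV.setSumFlow();
        context.write(key, outV);
    }
}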
After the job finishes there are five partition files, matching the expected result. Done!