
Usage and Code Examples of the org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob Class


This article collects Java code examples for the org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob class, showing how the class is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the ValueAggregatorJob class:
Package: org.apache.hadoop.mapred.lib.aggregate
Class name: ValueAggregatorJob

About ValueAggregatorJob

This is the main class for creating a map/reduce job using the Aggregate framework. Aggregate is a specialization of the map/reduce framework for performing various simple aggregations. Generally speaking, to implement an application using the Map/Reduce model, the developer has to implement Map and Reduce functions (and possibly a combine function). However, many applications related to counting and statistics have very similar characteristics. Aggregate abstracts out the general patterns of these functions and implements them. In particular, the package provides generic mapper/reducer/combiner classes, a set of built-in value aggregators, and a generic utility class that helps the user create map/reduce jobs using those generic classes. The built-in aggregators include: sum over numeric values; count of the number of distinct values; histogram of values; and the minimum, maximum, median, average, and standard deviation of numeric values. A developer using Aggregate need only provide a plugin class conforming to the following interface:

public interface ValueAggregatorDescriptor {
  public ArrayList generateKeyValPairs(Object key, Object value);
  public void configure(JobConf job);
}

The package also provides a base class, ValueAggregatorBaseDescriptor, implementing the above interface. The user can extend the base class and implement generateKeyValPairs accordingly. The primary job of generateKeyValPairs is to emit one or more key/value pairs based on the input key/value pair. The key of an output key/value pair encodes two pieces of information: the aggregation type and the aggregation id. The value is aggregated onto the aggregation id according to the aggregation type. This class offers a function to generate a map/reduce job using the Aggregate framework. The function takes the following parameters: the input directory spec, the input format (text or sequence file), the output directory, and a file specifying the user plugin class.
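The key encoding described above ("aggregation type" plus "aggregation id" packed into one key) can be illustrated with a self-contained sketch. The class below is a hypothetical stand-in with no Hadoop dependency: it only mimics the shape of a generateKeyValPairs implementation, pairing the built-in LongValueSum aggregator type with a per-word aggregation id, as a ValueAggregatorBaseDescriptor subclass for word counting might.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical, Hadoop-free stand-in mirroring the shape of
// ValueAggregatorDescriptor.generateKeyValPairs: each emitted key packs
// an aggregation type ("LongValueSum") and an aggregation id (the word).
public class WordCountDescriptorSketch {
    public static List<Map.Entry<String, String>> generateKeyValPairs(
            Object key, Object value) {
        List<Map.Entry<String, String>> pairs = new ArrayList<>();
        for (String word : value.toString().split("\\s+")) {
            // values emitted under the same "LongValueSum:<word>" key
            // would be summed by the framework's built-in aggregator
            pairs.add(new SimpleEntry<>("LongValueSum:" + word, "1"));
        }
        return pairs;
    }

    public static void main(String[] args) {
        for (Map.Entry<String, String> e
                : generateKeyValPairs(1L, "hadoop aggregate hadoop")) {
            System.out.println(e.getKey() + "\t" + e.getValue());
        }
    }
}
```

The class and method names here are illustrative only; a real plugin would extend ValueAggregatorBaseDescriptor and return the framework's own entry type.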

Code Examples

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors) throws IOException {
  JobConf job = createValueAggregatorJob(args);
  setAggregatorDescriptors(job, descriptors);
  return job;
}

Example source: ch.cern.hadoop/hadoop-mapreduce-client-core

public static JobControl createValueAggregatorJobs(String args[]) throws IOException {
  return createValueAggregatorJobs(args, null);
}

Example source: io.hops/hadoop-mapreduce-client-core

/**
 * Create an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation. Generic hadoop
 *             arguments are accepted.
 * @return a JobConf object ready for submission.
 *
 * @throws IOException
 * @see GenericOptionsParser
 */
public static JobConf createValueAggregatorJob(String args[])
    throws IOException {
  return createValueAggregatorJob(args, ValueAggregator.class);
}

Example source: io.prestosql.hadoop/hadoop-apache

/**
 * Create an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation. Generic hadoop
 *             arguments are accepted.
 * @return a JobConf object ready for submission.
 *
 * @throws IOException
 * @see GenericOptionsParser
 */
public static JobConf createValueAggregatorJob(String args[])
    throws IOException {
  return createValueAggregatorJob(args, ValueAggregator.class);
}

Example source: com.facebook.hadoop/hadoop-core

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors) throws IOException {
  JobConf job = createValueAggregatorJob(args);
  setAggregatorDescriptors(job, descriptors);
  return job;
}

Example source: ch.cern.hadoop/hadoop-mapreduce-client-core

/**
 * Create an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation. Generic hadoop
 *             arguments are accepted.
 * @return a JobConf object ready for submission.
 *
 * @throws IOException
 * @see GenericOptionsParser
 */
public static JobConf createValueAggregatorJob(String args[])
    throws IOException {
  return createValueAggregatorJob(args, ValueAggregator.class);
}

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core

public static JobControl createValueAggregatorJobs(String args[]) throws IOException {
  return createValueAggregatorJobs(args, null);
}

Example source: org.apache.hadoop/hadoop-mapred

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors) throws IOException {
  JobConf job = createValueAggregatorJob(args);
  setAggregatorDescriptors(job, descriptors);
  return job;
}

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core

/**
 * Create an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation. Generic hadoop
 *             arguments are accepted.
 * @return a JobConf object ready for submission.
 *
 * @throws IOException
 * @see GenericOptionsParser
 */
public static JobConf createValueAggregatorJob(String args[])
    throws IOException {
  return createValueAggregatorJob(args, ValueAggregator.class);
}

Example source: org.apache.hadoop/hadoop-mapred

public static JobControl createValueAggregatorJobs(String args[]) throws IOException {
  return createValueAggregatorJobs(args, null);
}

Example source: io.hops/hadoop-mapreduce-client-core

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors, Class caller) throws IOException {
  JobConf job = createValueAggregatorJob(args, caller);
  setAggregatorDescriptors(job, descriptors);
  return job;
}

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core

/**
 * Create and run an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation
 * @throws IOException
 */
public static void main(String args[]) throws IOException {
  JobConf job = ValueAggregatorJob.createValueAggregatorJob(args);
  JobClient.runJob(job);
}
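The main() drivers above wire everything into a runnable job. On the reduce side, the built-in LongValueSum aggregator conceptually just sums the values arriving under the same "LongValueSum:<id>" key. The class below is a hypothetical, Hadoop-free stand-in sketching that behavior with a plain Map:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, Hadoop-free stand-in: sums string-encoded long values per
// key, mimicking what the LongValueSum aggregator does at reduce time.
public class LongValueSumSketch {
    public static Map<String, Long> aggregate(
            Iterable<Map.Entry<String, String>> pairs) {
        Map<String, Long> sums = new HashMap<>();
        for (Map.Entry<String, String> e : pairs) {
            // merge() adds the parsed value onto any running total for the key
            sums.merge(e.getKey(), Long.parseLong(e.getValue()), Long::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        Map<String, Long> sums = aggregate(java.util.List.of(
            java.util.Map.entry("LongValueSum:word", "1"),
            java.util.Map.entry("LongValueSum:word", "1")));
        System.out.println(sums);
    }
}
```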

Example source: io.hops/hadoop-mapreduce-client-core

public static JobControl createValueAggregatorJobs(String args[]) throws IOException {
  return createValueAggregatorJobs(args, null);
}

Example source: ch.cern.hadoop/hadoop-mapreduce-client-core

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors, Class caller) throws IOException {
  JobConf job = createValueAggregatorJob(args, caller);
  setAggregatorDescriptors(job, descriptors);
  return job;
}

Example source: io.prestosql.hadoop/hadoop-apache

/**
 * Create and run an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation
 * @throws IOException
 */
public static void main(String args[]) throws IOException {
  JobConf job = ValueAggregatorJob.createValueAggregatorJob(args);
  JobClient.runJob(job);
}

Example source: io.prestosql.hadoop/hadoop-apache

public static JobControl createValueAggregatorJobs(String args[]) throws IOException {
  return createValueAggregatorJobs(args, null);
}

Example source: org.jvnet.hudson.hadoop/hadoop-core

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors) throws IOException {
  JobConf job = createValueAggregatorJob(args);
  setAggregatorDescriptors(job, descriptors);
  return job;
}

Example source: com.facebook.hadoop/hadoop-core

/**
 * Create and run an Aggregate based map/reduce job.
 *
 * @param args the arguments used for job creation
 * @throws IOException
 */
public static void main(String args[]) throws IOException {
  JobConf job = ValueAggregatorJob.createValueAggregatorJob(args);
  JobClient.runJob(job);
}

Example source: org.jvnet.hudson.hadoop/hadoop-core

public static JobControl createValueAggregatorJobs(String args[]) throws IOException {
  return createValueAggregatorJobs(args, null);
}

Example source: io.hops/hadoop-mapreduce-client-core

public static JobConf createValueAggregatorJob(String args[],
    Class[] descriptors) throws IOException {
  JobConf job = createValueAggregatorJob(args);
  setAggregatorDescriptors(job, descriptors);
  return job;
}
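For context on what setAggregatorDescriptors(job, descriptors) does in the overloads above: it records the descriptor classes in the job configuration so the framework's generic mapper/reducer can instantiate them later. The sketch below is a hypothetical, Hadoop-free approximation using a plain Map in place of JobConf; the property names are assumptions for illustration, not the framework's exact configuration keys.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for JobConf: descriptor classes are recorded under
// numbered properties (key names assumed, not Hadoop's actual keys) so a
// generic mapper/reducer could look them up and instantiate them later.
public class DescriptorRegistrySketch {
    public static Map<String, String> setAggregatorDescriptors(Class<?>[] descriptors) {
        Map<String, String> conf = new LinkedHashMap<>();
        // record how many plugin classes were registered
        conf.put("aggregator.descriptor.num", Integer.toString(descriptors.length));
        for (int i = 0; i < descriptors.length; i++) {
            // one entry per plugin class, keyed by its position
            conf.put("aggregator.descriptor." + i, descriptors[i].getName());
        }
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(setAggregatorDescriptors(new Class<?>[] { String.class }));
    }
}
```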
