
org.apache.hadoop.mapred.FileOutputCommitter: usage and code examples


This article collects Java code examples of the org.apache.hadoop.mapred.FileOutputCommitter class and shows how the class is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they are reasonably representative and should serve as useful references. Details of the FileOutputCommitter class follow:
Package path: org.apache.hadoop.mapred.FileOutputCommitter
Class name: FileOutputCommitter

About FileOutputCommitter

An OutputCommitter that commits files specified in the job output directory, i.e. ${mapreduce.output.fileoutputformat.outputdir}.
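Before the individual examples, here is a minimal sketch of the commit protocol that the snippets below exercise. The class and method names in this sketch are illustrative only (they are not part of Hadoop's API), and the job and task-attempt contexts are assumed to be supplied by the framework, or constructed as in the test snippets that follow.

import java.io.IOException;
import org.apache.hadoop.mapred.FileOutputCommitter;
import org.apache.hadoop.mapred.JobContext;
import org.apache.hadoop.mapred.TaskAttemptContext;

// Illustrative sketch of the setupJob/setupTask/commitTask/commitJob sequence.
public class CommitLifecycleSketch {
  static void runCommitProtocol(JobContext jContext, TaskAttemptContext tContext)
      throws IOException {
    FileOutputCommitter committer = new FileOutputCommitter();

    committer.setupJob(jContext);        // creates the job's _temporary directory
    committer.setupTask(tContext);       // no-op here; task dirs are created on demand

    // ... the task writes its files under committer.getWorkPath(tContext, outputPath) ...

    if (committer.needsTaskCommit(tContext)) {
      committer.commitTask(tContext);    // promote the task's temporary output
    } else {
      committer.abortTask(tContext);     // discard the task's temporary output
    }

    committer.commitJob(jContext);       // finalize the job output directory
    // on failure, abortJob(...) cleans up instead, as in the first example below
  }
}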

Code Examples

Code example source: org.apache.hadoop/hadoop-mapreduce-client-core

JobContext jContext = new JobContextImpl(conf, taskID.getJobID());
TaskAttemptContext tContext = new TaskAttemptContextImpl(conf, taskID);
FileOutputCommitter committer = new FileOutputCommitter();
committer.setupJob(jContext);
committer.setupTask(tContext);
committer.abortTask(tContext);
File out = new File(outDir.toUri().getPath());
Path workPath = committer.getWorkPath(tContext, outDir);
File wp = new File(workPath.toUri().getPath());
File expectedFile = new File(wp, partFile);
assertFalse("task temp dir still exists", expectedFile.exists());
committer.abortJob(jContext, JobStatus.State.FAILED);
expectedFile = new File(out, FileOutputCommitter.TEMP_DIR_NAME);
assertFalse("job temp dir still exists", expectedFile.exists());

Code example source: org.apache.hadoop/hadoop-mapreduce-client-core

private void testMapOnlyNoOutputInternal(int version) throws Exception {
  JobConf conf = new JobConf();
  // This is not set on purpose. FileOutputFormat.setOutputPath(conf, outDir);
  conf.set(JobContext.TASK_ATTEMPT_ID, attempt);
  conf.setInt(org.apache.hadoop.mapreduce.lib.output.
      FileOutputCommitter.FILEOUTPUTCOMMITTER_ALGORITHM_VERSION, version);
  JobContext jContext = new JobContextImpl(conf, taskID.getJobID());
  TaskAttemptContext tContext = new TaskAttemptContextImpl(conf, taskID);
  FileOutputCommitter committer = new FileOutputCommitter();

  // setup
  committer.setupJob(jContext);
  committer.setupTask(tContext);

  if (committer.needsTaskCommit(tContext)) {
    // do commit
    committer.commitTask(tContext);
  }
  committer.commitJob(jContext);
  // validate output
  FileUtil.fullyDelete(new File(outDir.toString()));
}
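The snippet above parameterizes the test over FILEOUTPUTCOMMITTER_ALGORITHM_VERSION. Roughly speaking, algorithm version 1 commits each task into a per-task committed directory and merges everything into the final output directory during job commit, while version 2 moves task output directly into the final output directory at task-commit time, which makes job commit much cheaper. A minimal sketch of pinning the version on a JobConf is shown below; the helper class and method names are illustrative, not part of Hadoop.

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

// Illustrative helper: pins the commit algorithm version on a JobConf.
// The constant resolves to "mapreduce.fileoutputcommitter.algorithm.version".
public class CommitterVersionConfig {
  static JobConf withAlgorithmVersion(int version) {
    JobConf conf = new JobConf();
    conf.setInt(
        FileOutputCommitter.FILEOUTPUTCOMMITTER_ALGORITHM_VERSION, version);
    return conf;
  }
}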

Code example source: ch.cern.hadoop/hadoop-mapreduce-client-core

@Override
public boolean needsTaskCommit(TaskAttemptContext context)
    throws IOException {
  return getWrapped(context).needsTaskCommit(context, getTaskAttemptPath(context));
}

Code example source: org.apache.hadoop/hadoop-mapreduce-client-core

JobContext jContext = new JobContextImpl(conf, taskID.getJobID());
TaskAttemptContext tContext = new TaskAttemptContextImpl(conf, taskID);
FileOutputCommitter committer = new FileOutputCommitter();
committer.setupJob(jContext);
committer.setupTask(tContext);
if (committer.needsTaskCommit(tContext)) {
  committer.commitTask(tContext);
  Path jobTempDir1 = committer.getCommittedTaskPath(tContext);
  File jtd1 = new File(jobTempDir1.toUri().getPath());
  if (commitVersion == 1) {
    JobContext jContext2 = new JobContextImpl(conf2, taskID.getJobID());
    TaskAttemptContext tContext2 = new TaskAttemptContextImpl(conf2, taskID);
    FileOutputCommitter committer2 = new FileOutputCommitter();
    committer2.setupJob(jContext2);
    committer2.recoverTask(tContext2);
    Path jobTempDir2 = committer2.getCommittedTaskPath(tContext2);
    File jtd2 = new File(jobTempDir2.toUri().getPath());
    if (recoveryVersion == 1) {
      committer2.commitJob(jContext2);
      validateContent(outDir);
      FileUtil.fullyDelete(new File(outDir.toString()));

Code example source: ch.cern.hadoop/hadoop-mapreduce-client-jobclient

JobContext jContext = new JobContextImpl(job, taskID.getJobID());
TaskAttemptContext tContext = new TaskAttemptContextImpl(job, taskID);
FileOutputCommitter committer = new FileOutputCommitter();
FileOutputFormat.setWorkOutputPath(job,
    committer.getTaskAttemptPath(tContext));
committer.setupJob(jContext);
committer.setupTask(tContext);
String file = "test.txt";
committer.commitTask(tContext);
committer.commitJob(jContext);

Code example source: org.apache.hadoop/hadoop-mapred-test

JobContext jContext = new JobContextImpl(job, taskID.getJobID());
TaskAttemptContext tContext = new TaskAttemptContextImpl(job, taskID);
FileOutputCommitter committer = new FileOutputCommitter();
FileOutputFormat.setWorkOutputPath(job,
    committer.getTempTaskOutputPath(tContext));
committer.setupJob(jContext);
committer.setupTask(tContext);
String file = "test.txt";
committer.commitTask(tContext);
committer.commitJob(jContext);

Code example source: com.facebook.hadoop/hadoop-core

/**
 * Helper function to create the task's temporary output directory and
 * return the path to the task's output file.
 *
 * @param conf job-configuration
 * @param name temporary task-output filename
 * @return path to the task's temporary output file
 * @throws IOException
 */
public static Path getTaskOutputPath(JobConf conf, String name)
    throws IOException {
  // ${mapred.out.dir}
  Path outputPath = getOutputPath(conf);
  if (outputPath == null) {
    throw new IOException("Undefined job output-path");
  }
  OutputCommitter committer = conf.getOutputCommitter();
  Path workPath = outputPath;
  TaskAttemptContext context = new TaskAttemptContext(conf,
      TaskAttemptID.forName(conf.get("mapred.task.id")));
  if (committer instanceof FileOutputCommitter) {
    workPath = ((FileOutputCommitter)committer).getWorkPath(context,
        outputPath);
  }

  // ${mapred.out.dir}/_temporary/_${taskid}/${name}
  return new Path(workPath, name);
}
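To show where this helper typically sits, the sketch below is a simplified old-API OutputFormat whose RecordWriter opens its file at the path returned by getTaskOutputPath, so the output lands in the committer's temporary work directory and is later promoted by FileOutputCommitter. The class name, key/value types, and record format are illustrative assumptions, not taken from the article.

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Progressable;

// Illustrative OutputFormat: writes tab-separated lines into the task's
// temporary output file resolved by getTaskOutputPath().
public class LineOutputFormat extends FileOutputFormat<Text, Text> {
  @Override
  public RecordWriter<Text, Text> getRecordWriter(FileSystem ignored,
      JobConf job, String name, Progressable progress) throws IOException {
    // Resolves to ${mapred.out.dir}/_temporary/_${taskid}/${name}
    Path file = FileOutputFormat.getTaskOutputPath(job, name);
    FileSystem fs = file.getFileSystem(job);
    final FSDataOutputStream out = fs.create(file, progress);
    return new RecordWriter<Text, Text>() {
      public void write(Text key, Text value) throws IOException {
        out.writeBytes(key + "\t" + value + "\n");
      }
      public void close(Reporter reporter) throws IOException {
        out.close();
      }
    };
  }
}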

Code example source: io.hops/hadoop-mapreduce-client-core

@Private
public Path getTaskAttemptPath(TaskAttemptContext context) throws IOException {
  Path out = getOutputPath(context);
  return out == null ? null : getTaskAttemptPath(context, out);
}

Code example source: org.apache.hadoop/hadoop-mapred-test

@Override
public void commitJob(JobContext context) throws IOException {
  Configuration conf = context.getConfiguration();
  Path share = new Path(conf.get("share"));
  FileSystem fs = FileSystem.get(conf);

  while (true) {
    if (fs.exists(share)) {
      break;
    }
    UtilsForTests.waitFor(100);
  }
  super.commitJob(context);
}
}

Code example source: org.apache.hadoop/hadoop-mapred

context.getProgressible().progress();
if (fs.isFile(taskOutput)) {
  Path finalOutputPath = getFinalPath(jobOutputDir, taskOutput,
      getTempTaskOutputPath(context));
  if (!fs.rename(taskOutput, finalOutputPath)) {
    if (!fs.delete(finalOutputPath, true)) {
      // ...
    }
    // ...
  }
} else if (fs.getFileStatus(taskOutput).isDirectory()) {
  FileStatus[] paths = fs.listStatus(taskOutput);
  Path finalOutputPath = getFinalPath(jobOutputDir, taskOutput,
      getTempTaskOutputPath(context));
  fs.mkdirs(finalOutputPath);
  if (paths != null) {
    for (FileStatus path : paths) {
      moveTaskOutputs(context, fs, jobOutputDir, path.getPath());

Code example source: io.hops/hadoop-mapreduce-client-core

private org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
    getWrapped(JobContext context) throws IOException {
  if (wrapped == null) {
    wrapped = new org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter(
        getOutputPath(context), context);
  }
  return wrapped;
}

Code example source: io.hops/hadoop-mapreduce-client-core

public Path getWorkPath(TaskAttemptContext context, Path outputPath)
    throws IOException {
  return outputPath == null ? null : getTaskAttemptPath(context, outputPath);
}

Code example source: io.hops/hadoop-mapreduce-client-core

@Override
@Deprecated
public void cleanupJob(JobContext context) throws IOException {
  getWrapped(context).cleanupJob(context);
}

Code example source: org.apache.hadoop/hadoop-mapred

public void commitTask(TaskAttemptContext context)
    throws IOException {
  Path taskOutputPath = getTempTaskOutputPath(context);
  TaskAttemptID attemptId = context.getTaskAttemptID();
  JobConf job = context.getJobConf();
  if (taskOutputPath != null) {
    FileSystem fs = taskOutputPath.getFileSystem(job);
    context.getProgressible().progress();
    if (fs.exists(taskOutputPath)) {
      Path jobOutputPath = taskOutputPath.getParent().getParent();
      // Move the task outputs to their final place
      moveTaskOutputs(context, fs, jobOutputPath, taskOutputPath);
      // Delete the temporary task-specific output directory
      if (!fs.delete(taskOutputPath, true)) {
        LOG.info("Failed to delete the temporary output" +
            " directory of task: " + attemptId + " - " + taskOutputPath);
      }
      LOG.info("Saved output of task '" + attemptId + "' to " +
          jobOutputPath);
    }
  }
}

Code example source: com.facebook.hadoop/hadoop-core

public void abortTask(TaskAttemptContext context) throws IOException {
  Path taskOutputPath = getTempTaskOutputPath(context);
  try {
    if (taskOutputPath != null) {
      FileSystem fs = taskOutputPath.getFileSystem(context.getJobConf());
      context.getProgressible().progress();
      if (!fs.delete(taskOutputPath, true)) {
        LOG.warn("Deleting output in " + taskOutputPath + " returns false");
      }
    }
  } catch (IOException ie) {
    LOG.warn("Error discarding output in " + taskOutputPath, ie);
    throw ie;
  }
}

Code example source: LiveRamp/hank

public void setupJob(JobContext context) throws IOException {
  // Finally, set up FileOutputCommitter
  super.setupJob(context);
}

Code example source: org.apache.hive.hcatalog/hive-hcatalog-hbase-storage-handler

public HBaseBulkOutputCommitter() {
  baseOutputCommitter = new FileOutputCommitter();
}
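HBaseBulkOutputCommitter above keeps a FileOutputCommitter as a field and forwards work to it. A generic sketch of that wrapping pattern might look like the following; the class name is made up for illustration, and only the abstract OutputCommitter callbacks are shown.

import java.io.IOException;
import org.apache.hadoop.mapred.FileOutputCommitter;
import org.apache.hadoop.mapred.JobContext;
import org.apache.hadoop.mapred.OutputCommitter;
import org.apache.hadoop.mapred.TaskAttemptContext;

// Illustrative delegating committer: every callback is forwarded to an
// inner FileOutputCommitter, leaving room to add extra work around it.
public class DelegatingOutputCommitter extends OutputCommitter {
  private final FileOutputCommitter delegate = new FileOutputCommitter();

  @Override
  public void setupJob(JobContext context) throws IOException {
    delegate.setupJob(context);
  }

  @Override
  public void setupTask(TaskAttemptContext context) throws IOException {
    delegate.setupTask(context);
  }

  @Override
  public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
    return delegate.needsTaskCommit(context);
  }

  @Override
  public void commitTask(TaskAttemptContext context) throws IOException {
    delegate.commitTask(context);
  }

  @Override
  public void abortTask(TaskAttemptContext context) throws IOException {
    delegate.abortTask(context);
  }
}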

Code example source: org.apache.hadoop/hadoop-mapreduce-client-app

@Override
public void abortJob(JobContext context, int runState) throws IOException {
  super.abortJob(context, runState);
  this.abortJobCalled = true;
}

Code example source: org.apache.hadoop/hadoop-mapred-test

public void abortTask(TaskAttemptContext context) throws IOException {
  System.err.println(cleanupLog);
  String attemptId = System.getProperty("hadoop.tasklog.taskid");
  assertNotNull(attemptId);
  if (attemptId.endsWith("_0")) {
    assertFalse(Boolean.getBoolean(System
        .getProperty("hadoop.tasklog.iscleanup")));
  } else {
    assertTrue(Boolean.getBoolean(System
        .getProperty("hadoop.tasklog.iscleanup")));
  }
  super.abortTask(context);
}
}

Code example source: ch.cern.hadoop/hadoop-mapreduce-client-core

/**
 * Helper function to create the task's temporary output directory and
 * return the path to the task's output file.
 *
 * @param conf job-configuration
 * @param name temporary task-output filename
 * @return path to the task's temporary output file
 * @throws IOException
 */
public static Path getTaskOutputPath(JobConf conf, String name)
    throws IOException {
  // ${mapred.out.dir}
  Path outputPath = getOutputPath(conf);
  if (outputPath == null) {
    throw new IOException("Undefined job output-path");
  }
  OutputCommitter committer = conf.getOutputCommitter();
  Path workPath = outputPath;
  TaskAttemptContext context =
      new TaskAttemptContextImpl(conf,
          TaskAttemptID.forName(conf.get(
              JobContext.TASK_ATTEMPT_ID)));
  if (committer instanceof FileOutputCommitter) {
    workPath = ((FileOutputCommitter)committer).getWorkPath(context,
        outputPath);
  }

  // ${mapred.out.dir}/_temporary/_${taskid}/${name}
  return new Path(workPath, name);
}
