Usage and Code Examples of the org.apache.hadoop.mapred.JobStatus Class

This article collects Java code examples for the org.apache.hadoop.mapred.JobStatus class, showing how it is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the JobStatus class:
Package: org.apache.hadoop.mapred
Class name: JobStatus

About JobStatus

Describes the current status of a job. This is not intended to be a comprehensive piece of data. For that, look at JobProfile.
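
Before the project snippets, here is a minimal, self-contained sketch (not taken from any of the projects below) of the typical way JobStatus objects are obtained and inspected through the old-style mapred API; it assumes the cluster configuration (core-site.xml, mapred-site.xml) is available on the classpath.

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

public class JobStatusDump {
  public static void main(String[] args) throws Exception {
    // JobConf picks up the cluster configuration from the classpath.
    JobClient client = new JobClient(new JobConf());
    try {
      // Every job the cluster still remembers; may be null on some versions.
      JobStatus[] jobs = client.getAllJobs();
      if (jobs != null) {
        for (JobStatus js : jobs) {
          System.out.printf("%s user=%s state=%s map=%.0f%% reduce=%.0f%%%n",
              js.getJobID(), js.getUsername(),
              // Maps the numeric run state to "RUNNING", "SUCCEEDED", etc.
              JobStatus.getJobRunState(js.getRunState()),
              js.mapProgress() * 100, js.reduceProgress() * 100);
        }
      }
    } finally {
      client.close();
    }
  }
}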

Code Examples

Example source: apache/hive

public List<String> listJobs(String user, boolean showall, String jobId,
                             int numRecords, boolean showDetails)
    throws NotAuthorizedException, BadParam, IOException, InterruptedException {
  UserGroupInformation ugi = null;
  WebHCatJTShim tracker = null;
  ArrayList<String> ids = new ArrayList<String>();
  try {
    ugi = UgiFactory.getUgi(user);
    tracker = ShimLoader.getHadoopShims().getWebHCatShim(appConf, ugi);
    JobStatus[] jobs = tracker.getAllJobs();
    if (jobs != null) {
      for (JobStatus job : jobs) {
        String id = job.getJobID().toString();
        if (showall || user.equals(job.getUsername()))
          ids.add(id);
      }
    }
  } catch (IllegalStateException e) {
    throw new BadParam(e.getMessage());
  } finally {
    if (tracker != null)
      tracker.close();
    if (ugi != null)
      FileSystem.closeAllForUGI(ugi);
  }
  return getJobStatus(ids, user, showall, jobId, numRecords, showDetails);
}

Example source: org.apache.hadoop/hadoop-mapreduce-client-core

private static JobStatus createTestJobStatus(String jobId, int state) {
  return new JobStatus(
      JobID.forName(jobId), 0.5f, 0.0f,
      state, "root", "TestJobEndNotifier", null, null);
}
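
For orientation (based on the JobStatus constructor overloads in this Hadoop line; verify against your exact version): the eight arguments are the job ID, map progress, reduce progress, run state, user name, job name, job file, and tracking URL, with the two progress values ranging from 0.0f to 1.0f.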

Example source: apache/hive

private void logJob(String logDir, String jobID, PrintWriter listWriter)
    throws IOException {
  RunningJob rj = jobClient.getJob(JobID.forName(jobID));
  String jobURLString = rj.getTrackingURL();
  fs.mkdirs(jobDir);
  listWriter.println("job: " + jobID + "(" + "name=" + rj.getJobName() + ","
      + "status=" + JobStatus.getJobRunState(rj.getJobState()) + ")");

Example source: apache/hive

/**
 * Grab a handle to a job that is already known to the JobTracker.
 *
 * @return Profile of the job, or null if not found.
 */
public JobProfile getJobProfile(JobID jobid) throws IOException {
  RunningJob rj = getJob(jobid);
  if (rj == null) {
    return null;
  }
  JobStatus jobStatus = rj.getJobStatus();
  return new JobProfile(jobStatus.getUsername(), jobStatus.getJobID(),
      jobStatus.getJobFile(), jobStatus.getTrackingUrl(), jobStatus.getJobName());
}

Example source: apache/hive

org.apache.hadoop.mapred.JobID.forName(childJobIdString);
LOG.info(String.format("Reconnecting to an existing job %s", childJobIdString));
if (jobStatus.isJobComplete()) {
  LOG.info(String.format("Child job %s completed", childJobIdString));
  int exitCode = 0;
  if (jobStatus.getRunState() != org.apache.hadoop.mapred.JobStatus.SUCCEEDED) {
    exitCode = 1;
jobStatus.mapProgress() * 100, jobStatus.reduceProgress() * 100);
updateJobStatePercentAndChildId(conf, context.getJobID().toString(), percent, null);

Example source: ch.cern.hadoop/hadoop-mapreduce-client-common

this.systemJobFile = new Path(systemJobDir, "job.xml");
this.id = jobid;
JobConf conf = new JobConf(systemJobFile);
this.localFs = FileSystem.getLocal(conf);
String user = UserGroupInformation.getCurrentUser().getShortUserName();
this.localJobDir = localFs.makeQualified(new Path(
    new Path(conf.getLocalPath(jobDir), user), jobid.toString()));
this.localJobFile = new Path(this.localJobDir, id + ".xml");
OutputStream out = localFs.create(localJobFile);
try {
  conf.writeXml(out);
} finally {
  out.close();
}
profile = new JobProfile(job.getUser(), id, systemJobFile.toString(),
    "http://localhost:8080/", job.getJobName());
status = new JobStatus(id, 0.0f, 0.0f, JobStatus.RUNNING,
    profile.getUser(), profile.getJobName(), profile.getJobFile(),
    profile.getURL().toString());

Example source: ch.cern.hadoop/hadoop-mapreduce-client-jobclient

mr = createMiniClusterWithCapacityScheduler();
JobConf job = new JobConf(mr.getConfig());
FileOutputFormat.setOutputPath(job, outDir);
job.setInputFormat(TextInputFormat.class);
job.setOutputFormat(TextOutputFormat.class);
JobClient client = new JobClient(mr.getConfig());
assertFalse(runningJob.isRetired());
assertEquals(runningJob.getFailureInfo(), "");
assertEquals(runningJob.getJobStatus().getJobName(), "N/A");
assertEquals(status.getActiveTrackerNames().size(), 2);
assertEquals(status.getBlacklistedTrackers(), 0);
assertEquals(status.getBlacklistedTrackerNames().size(), 0);
assertTrue(client.getJobsFromQueue("default")[0].getJobFile().endsWith(
    "/job.xml"));
assertEquals("Expected matching JobIDs", jobId, client.getJob(jobId)
    .getJobStatus().getJobID());
assertEquals("Expected matching startTimes", rj.getJobStatus()
    .getStartTime(), client.getJob(jobId).getJobStatus().getStartTime());
} finally {
  if (fileSys != null) {

Example source: ch.cern.hadoop/hadoop-mapreduce-client-jobclient

mr = createMiniClusterWithCapacityScheduler();
JobConf job = new JobConf(mr.getConfig());
FileOutputFormat.setOutputPath(job, outDir);
job.setInputFormat(TextInputFormat.class);
job.setOutputFormat(TextOutputFormat.class);
job.setMapperClass(IdentityMapper.class);
job.setNumReduceTasks(0);
JobClient client = new JobClient(mr.getConfig());
RunningJob rj = client.submitJob(job);
JobID jobId = rj.getID();
assertEquals("Expected matching JobIDs", jobId, client.getJob(jobId)
    .getJobStatus().getJobID());
assertEquals("Expected matching startTimes", rj.getJobStatus()
    .getStartTime(), client.getJob(jobId).getJobStatus()
    .getStartTime());
} finally {
  if (fileSys != null) {

Example source: org.apache.hadoop/hadoop-mapred-test

public void testCurrentJHParser() throws Exception {
  final Configuration conf = new Configuration();
  final FileSystem lfs = FileSystem.getLocal(conf);
  lfs.getUri(), lfs.getWorkingDirectory());
  conf.setInt(TTConfig.TT_REDUCE_SLOTS, 1);
  MiniMRCluster mrCluster = new MiniMRCluster(1, "file:///", 1, null, null,
      new JobConf(conf));
  JobClient jc = new JobClient(jConf);
  String user = jc.getAllJobs()[0].getUsername();

Example source: com.facebook.hadoop/hadoop-core

JobID jobId = job.getStatus().getJobID();
Path jobStatusFile = getInfoFilePath(jobId);
try {
  FSDataOutputStream dataOut = fs.create(jobStatusFile);
  job.getStatus().write(dataOut);
ex.getMessage(), ex);
try {
  fs.delete(jobStatusFile, true);

Example source: org.apache.hadoop/hadoop-mapred-test

clock.advance(600);
TaskAttemptID[] tid = new TaskAttemptID[2];
JobConf conf = new JobConf();
conf.setNumMapTasks(1);
conf.setNumReduceTasks(0);
conf.set(MRJobConfig.MAX_TASK_FAILURES_PER_TRACKER, "1");
conf.set(MRJobConfig.SETUP_CLEANUP_NEEDED, "false");
JobStatus.SUCCEEDED, job.getStatus().getRunState());
1, jobTracker.getClusterStatus(false).getBlacklistedTrackers());
jobTracker.getClusterStatus(false).getActiveTrackerNames()
    .contains(trackers[0]));
0, jobTracker.getClusterStatus(false).getBlacklistedTrackers());

Example source: org.jvnet.hudson.hadoop/hadoop-core

+ jobtracker.getInfoPort() + "/jobdetails.jsp?jobid=" + jobid;
this.jobtracker = jobtracker;
this.status = new JobStatus(jobid, 0.0f, 0.0f, JobStatus.PREP);
this.startTime = System.currentTimeMillis();
status.setStartTime(startTime);
this.localFs = FileSystem.getLocal(default_conf);
JobConf default_job_conf = new JobConf(default_conf);
this.localJobFile = default_job_conf.getLocalPath(JobTracker.SUBDIR
    + "/" + jobid + ".xml");
this.localJarFile = default_job_conf.getLocalPath(JobTracker.SUBDIR
    + "/" + jobid + ".jar");
Path sysDir = new Path(this.jobtracker.getSystemDir());
FileSystem fs = sysDir.getFileSystem(default_conf);
jobFile = new Path(sysDir, jobid + "/job.xml");
fs.copyToLocalFile(jobFile, localJobFile);
conf = new JobConf(localJobFile);
this.priority = conf.getJobPriority();
this.status.setJobPriority(this.priority);
this.profile = new JobProfile(conf.getUser(), jobid,
    jobFile.toString(), url, conf.getJobName(),
String jarFile = conf.getJar();
if (jarFile != null) {
  fs.copyToLocalFile(new Path(jarFile), localJarFile);
  conf.setJar(localJarFile.toString());

Example source: com.facebook.hadoop/hadoop-core

public Job(JobID jobid, JobConf conf) throws IOException {
  this.doSequential =
      conf.getBoolean("mapred.localrunner.sequential", true);
  this.id = jobid;
  this.mapoutputFile = new MapOutputFile(jobid);
  this.mapoutputFile.setConf(conf);
  this.localFile = new JobConf(conf).getLocalPath(jobDir + id + ".xml");
  this.localFs = FileSystem.getLocal(conf);
  persistConf(this.localFs, this.localFile, conf);
  this.job = new JobConf(localFile);
  profile = new JobProfile(job.getUser(), id, localFile.toString(),
      "http://localhost:8080/", job.getJobName());
  status = new JobStatus(id, 0.0f, 0.0f, JobStatus.RUNNING);
  jobs.put(id, this);
  numSlots = conf.getInt(LOCAL_RUNNER_SLOTS, DEFAULT_LOCAL_RUNNER_SLOTS);
  executor = Executors.newFixedThreadPool(numSlots);
  int handlerCount = numSlots;
  umbilicalServer =
      RPC.getServer(this, LOCALHOST, 0, handlerCount, false, conf);
  umbilicalServer.start();
  umbilicalPort = umbilicalServer.getListenerAddress().getPort();
  this.start();
}

Example source: org.jvnet.hudson.hadoop/hadoop-core

public Job(JobID jobid, JobConf conf) throws IOException {
  this.file = new Path(getSystemDir(), jobid + "/job.xml");
  this.id = jobid;
  this.mapoutputFile = new MapOutputFile(jobid);
  this.mapoutputFile.setConf(conf);
  this.localFile = new JobConf(conf).getLocalPath(jobDir + id + ".xml");
  this.localFs = FileSystem.getLocal(conf);
  fs.copyToLocalFile(file, localFile);
  this.job = new JobConf(localFile);
  profile = new JobProfile(job.getUser(), id, file.toString(),
      "http://localhost:8080/", job.getJobName());
  status = new JobStatus(id, 0.0f, 0.0f, JobStatus.RUNNING);
  jobs.put(id, this);
  this.start();
}

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core

/**
 * We store a JobProfile and a timestamp for when we last
 * acquired the job profile. If the job is null, then we cannot
 * perform any of the tasks. The job might be null if the cluster
 * has completely forgotten about the job. (eg, 24 hours after the
 * job completes.)
 */
public NetworkedJob(JobStatus status, Cluster cluster) throws IOException {
  this(status, cluster, new JobConf(status.getJobFile()));
}
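
NetworkedJob instances are normally created for you by JobClient.getJob(...), which returns null once the cluster has forgotten the job; the apache/hive getJobProfile example above shows the null check callers need.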

Example source: org.apache.hadoop/hadoop-mapred-test

JobConf conf = new JobConf(miniMRCluster.createJobConf());
if (doPhysicalMemory) {
  conf.setLong(MRJobConfig.MAP_MEMORY_PHYSICAL_MB, PER_TASK_LIMIT);
  conf.setLong(MRJobConfig.REDUCE_MEMORY_PHYSICAL_MB, PER_TASK_LIMIT);
} else {
  conf.setMemoryForMapTask(PER_TASK_LIMIT);
JobClient jClient = new JobClient(conf);
JobStatus[] jStatus = jClient.getAllJobs();
JobStatus js = jStatus[0]; // Our only job
RunningJob rj = jClient.getJob(js.getJobID());
TaskCompletionEvent[] taskComplEvents = rj.getTaskCompletionEvents(0);
rj.getTaskDiagnostics(tce.getTaskAttemptId());

Example source: cdapio/cdap

/**
 * @param runId for which information will be returned.
 * @return a {@link MRJobInfo} containing information about a particular MapReduce program run.
 * @throws IOException if there is failure to communicate through the JobClient.
 * @throws NotFoundException if a Job with the given runId is not found.
 */
@Override
public MRJobInfo getMRJobInfo(Id.Run runId) throws IOException, NotFoundException {
  Preconditions.checkArgument(ProgramType.MAPREDUCE.equals(runId.getProgram().getType()));
  JobClient jobClient = new JobClient(hConf);
  JobStatus[] jobs = jobClient.getAllJobs();
  JobStatus thisJob = findJobForRunId(jobs, runId.toEntityId());
  RunningJob runningJob = jobClient.getJob(thisJob.getJobID());
  if (runningJob == null) {
    throw new IllegalStateException(String.format("JobClient returned null for RunId: '%s', JobId: '%s'",
        runId, thisJob.getJobID()));
  }
  Counters counters = runningJob.getCounters();
  TaskReport[] mapTaskReports = jobClient.getMapTaskReports(thisJob.getJobID());
  TaskReport[] reduceTaskReports = jobClient.getReduceTaskReports(thisJob.getJobID());
  return new MRJobInfo(runningJob.mapProgress(), runningJob.reduceProgress(),
      groupToMap(counters.getGroup(TaskCounter.class.getName())),
      toMRTaskInfos(mapTaskReports), toMRTaskInfos(reduceTaskReports), true);
}

Example source: org.apache.hadoop/hadoop-mapred-test

@SuppressWarnings("deprecation")
public void testKillCompletedJob() throws IOException, InterruptedException {
  job = new MyFakeJobInProgress(new JobConf(), jobTracker);
  jobTracker.addJob(job.getJobID(), (JobInProgress) job);
  job.status.setRunState(JobStatus.SUCCEEDED);
  jobTracker.killJob(job.getJobID());
  assertTrue("Run state changed when killing completed job",
      job.status.getRunState() == JobStatus.SUCCEEDED);
}

Example source: LiveRamp/cascading_ext

JobClient jobClient = new JobClient(new InetSocketAddress(jobTracker, port), new Configuration());
for (JobStatus status : jobClient.getAllJobs()) {
  if (status.getRunState() == JobStatus.RUNNING) {
    RunningJob job = jobClient.getJob(status.getJobID());
    if (job.getJobName().contains(jobsToTarget) || job.getID().toString().contains(jobsToTarget)) {
      JobID jobid = status.getJobID();
RunningJob runningJob = jobClient.getJob(entry.getKey().getJobID());
runningJob.killTask(entry.getKey(), false);

Example source: org.apache.hadoop/hadoop-mapred-test

JobClient jClient = new JobClient(conf);
JobStatus[] jStatus = jClient.getAllJobs();
JobStatus js = jStatus[0]; // Our only job
RunningJob rj = jClient.getJob(js.getJobID());
TaskCompletionEvent[] taskComplEvents = rj.getTaskCompletionEvents(0);
rj.getTaskDiagnostics(tce.getTaskAttemptId());
