
Usage of the org.apache.hadoop.yarn.api.records.timeline.TimelineEvent Class, with Code Examples

This article collects Java code examples for org.apache.hadoop.yarn.api.records.timeline.TimelineEvent and shows how the class is used in practice. The examples are taken from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the TimelineEvent class follow.

Package: org.apache.hadoop.yarn.api.records.timeline
Class: TimelineEvent

About TimelineEvent

The class that contains the information of an event that is related to some conceptual entity of an application. Users are free to define what the event means, such as starting an application, getting allocated a container, and so on.
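To make that description concrete, here is a minimal sketch of publishing one user-defined event to the Application Timeline Server (ATS v1) through TimelineClient. The entity and event type strings ("MY_APP", "APP_STARTED"), the ids, and the event-info keys are hypothetical placeholders chosen for illustration, not names taken from the projects quoted below; the TimelineClient, TimelineEntity, and TimelineEvent calls are the same ones used in the examples that follow.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
import org.apache.hadoop.yarn.api.records.timeline.TimelinePutResponse;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelineEventSketch {
  public static void main(String[] args) throws Exception {
    // Assumes yarn.timeline-service.enabled=true and a reachable timeline server.
    Configuration conf = new YarnConfiguration();
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(conf);
    client.start();
    try {
      // The entity groups all events that belong to one conceptual object.
      TimelineEntity entity = new TimelineEntity();
      entity.setEntityId("app-0001");     // hypothetical id
      entity.setEntityType("MY_APP");     // user-defined entity type

      // The event: a user-defined type, a timestamp, and arbitrary key/value info.
      TimelineEvent event = new TimelineEvent();
      event.setEventType("APP_STARTED"); // user-defined event meaning
      event.setTimestamp(System.currentTimeMillis());
      event.addEventInfo("node", "worker-01:45454"); // hypothetical detail
      entity.addEvent(event);

      TimelinePutResponse response = client.putEntities(entity);
      System.out.println("Put errors: " + response.getErrors().size());
    } finally {
      client.stop();
    }
  }
}

The pattern in every example below is the same: build a TimelineEvent, attach it to a TimelineEntity with addEvent, and push the entity through TimelineClient.putEntities.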

Code Examples

Code example source: alibaba/jstorm

private static void publishContainerStartEvent(
    final TimelineClient timelineClient, Container container, String domainId,
    UserGroupInformation ugi) {
  final TimelineEntity entity = new TimelineEntity();
  entity.setEntityId(container.getId().toString());
  entity.setEntityType(DSEntity.DS_CONTAINER.toString());
  entity.setDomainId(domainId);
  entity.addPrimaryFilter(JOYConstants.USER, ugi.getShortUserName());
  TimelineEvent event = new TimelineEvent();
  event.setTimestamp(System.currentTimeMillis());
  event.setEventType(DSEvent.DS_CONTAINER_START.toString());
  event.addEventInfo(JOYConstants.NODE, container.getNodeId().toString());
  event.addEventInfo(JOYConstants.RESOURCES, container.getResource().toString());
  entity.addEvent(event);
  try {
    // Publish the entity to the timeline server as the job's user.
    ugi.doAs(new PrivilegedExceptionAction<TimelinePutResponse>() {
      @Override
      public TimelinePutResponse run() throws Exception {
        return timelineClient.putEntities(entity);
      }
    });
  } catch (Exception e) {
    LOG.error("Container start event could not be published for "
        + container.getId().toString(),
        e instanceof UndeclaredThrowableException ? e.getCause() : e);
  }
}

Code example source: alibaba/jstorm

private static void publishApplicationAttemptEvent(
    final TimelineClient timelineClient, String appAttemptId,
    DSEvent appEvent, String domainId, UserGroupInformation ugi) {
  final TimelineEntity entity = new TimelineEntity();
  entity.setEntityId(appAttemptId);
  entity.setEntityType(DSEntity.DS_APP_ATTEMPT.toString());
  entity.setDomainId(domainId);
  entity.addPrimaryFilter(JOYConstants.USER, ugi.getShortUserName());
  TimelineEvent event = new TimelineEvent();
  event.setEventType(appEvent.toString());
  event.setTimestamp(System.currentTimeMillis());
  entity.addEvent(event);
  try {
    timelineClient.putEntities(entity);
  } catch (YarnException e) {
    LOG.error("App Attempt "
        + (appEvent.equals(DSEvent.DS_APP_ATTEMPT_START) ? JOYConstants.START : JOYConstants.END)
        + " event could not be published for " + appAttemptId, e);
  } catch (IOException e) {
    LOG.error("App Attempt "
        + (appEvent.equals(DSEvent.DS_APP_ATTEMPT_START) ? JOYConstants.START : JOYConstants.END)
        + " event could not be published for " + appAttemptId, e);
  }
}

Code example source: org.apache.hadoop/hadoop-yarn-server-timeline-pluginstorage

/**
 * Create a test event.
 */
static TimelineEvent createEvent(long timestamp, String type,
    Map<String, Object> info) {
  TimelineEvent event = new TimelineEvent();
  event.setTimestamp(timestamp);
  event.setEventType(type);
  event.setEventInfo(info);
  return event;
}

Code example source: org.apache.hadoop/hadoop-yarn-common

// Excerpt from a TimelineEvents test: each EventsOfOneEntity group gets two
// events, and the assertions verify the groups read back via getAllEvents().
partEvents.setEntityType("entity type " + j);
for (int i = 0; i < 2; ++i) {
  TimelineEvent event = new TimelineEvent();
  event.setTimestamp(System.currentTimeMillis());
  event.setEventType("event type " + i);
  event.addEventInfo("key1", "val1");
  event.addEventInfo("key2", "val2");
  partEvents.addEvent(event);
}
// ... partEvents1 is the first group returned by events.getAllEvents() ...
Assert.assertEquals(2, partEvents1.getEvents().size());
TimelineEvent event11 = partEvents1.getEvents().get(0);
Assert.assertEquals("event type 0", event11.getEventType());
Assert.assertEquals(2, event11.getEventInfo().size());
TimelineEvent event12 = partEvents1.getEvents().get(1);
Assert.assertEquals("event type 1", event12.getEventType());
Assert.assertEquals(2, event12.getEventInfo().size());
TimelineEvents.EventsOfOneEntity partEvents2 = events.getAllEvents().get(1);
Assert.assertEquals("entity id 1", partEvents2.getEntityId());
Assert.assertEquals(2, partEvents2.getEvents().size());
TimelineEvent event21 = partEvents2.getEvents().get(0);
Assert.assertEquals("event type 0", event21.getEventType());
Assert.assertEquals(2, event21.getEventInfo().size());
TimelineEvent event22 = partEvents2.getEvents().get(1);
Assert.assertEquals("event type 1", event22.getEventType());
Assert.assertEquals(2, event22.getEventInfo().size());

Code example source: ch.cern.hadoop/hadoop-yarn-server-applicationhistoryservice

// Excerpt: derive an application's created/finished times from its timeline events.
if (events != null) {
  for (TimelineEvent event : events) {
    if (event.getEventType().equals(
        ApplicationMetricsConstants.CREATED_EVENT_TYPE)) {
      createdTime = event.getTimestamp();
    } else if (event.getEventType().equals(
        ApplicationMetricsConstants.FINISHED_EVENT_TYPE)) {
      finishedTime = event.getTimestamp();
      Map<String, Object> eventInfo = event.getEventInfo();
      if (eventInfo == null) {
        continue;

Code example source: org.apache.hadoop/hadoop-yarn-server-applicationhistoryservice

private static void createAppModifiedEvent(ApplicationId appId,
    TimelineEvent tEvent, long updatedTimeIndex, String queue, int priority) {
  tEvent.setEventType(ApplicationMetricsConstants.UPDATED_EVENT_TYPE);
  tEvent.setTimestamp(Integer.MAX_VALUE + updatedTimeIndex + appId.getId());
  Map<String, Object> eventInfo = new HashMap<String, Object>();
  eventInfo.put(ApplicationMetricsConstants.QUEUE_ENTITY_INFO, queue);
  eventInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO,
      priority);
  tEvent.setEventInfo(eventInfo);
}

Code example source: org.apache.hadoop/hadoop-yarn-common

// Excerpt from a test of setEventInfo/addEventInfo: the event info map is set
// from several candidate maps and then checked by assertEventInfo().
TimelineEvent event = new TimelineEvent();
List<Map<String, Object>> eventInfoList =
    new ArrayList<Map<String, Object>>();
eventInfo.put("ekey2", "eval2");
eventInfoList.add(eventInfo);
event.setEventInfo(null);
for (Map<String, Object> eventInfoToSet : eventInfoList) {
  event.setEventInfo(eventInfoToSet);
  assertEventInfo(event);
  event.addEventInfo(eventInfoToAdd);
  assertEventInfo(event);

Code example source: ch.cern.hadoop/hadoop-yarn-server-applicationhistoryservice

// Excerpt: read application-attempt details out of REGISTERED/FINISHED timeline events.
if (events != null) {
  for (TimelineEvent event : events) {
    if (event.getEventType().equals(
        AppAttemptMetricsConstants.REGISTERED_EVENT_TYPE)) {
      Map<String, Object> eventInfo = event.getEventInfo();
      if (eventInfo == null) {
        continue;
      }
      // ... values are read from eventInfo here (omitted in the excerpt) ...
    } else if (event.getEventType().equals(
        AppAttemptMetricsConstants.FINISHED_EVENT_TYPE)) {
      Map<String, Object> eventInfo = event.getEventInfo();
      if (eventInfo == null) {
        continue;

Code example source: com.github.jiayuhan-it/hadoop-yarn-server-applicationhistoryservice

// Excerpt of an event-filtering loop: events outside the requested time window
// or not in the requested set of event types are skipped.
break;  // exits the loop from a preceding check (elided in this excerpt)
if (event.getTimestamp() <= windowStart) {
  continue;
}
if (event.getTimestamp() > windowEnd) {
  continue;
}
if (eventTypes != null && !eventTypes.contains(event.getEventType())) {
  continue;
}

Code example source: io.hops/hadoop-yarn-common

private static void assertEventInfo(TimelineEvent event) {
  Assert.assertNotNull(event);
  Assert.assertNotNull(event.getEventInfoJAXB());
  Assert.assertTrue(event.getEventInfo() instanceof HashMap);
  Assert.assertTrue(event.getEventInfoJAXB() instanceof HashMap);
  Assert.assertEquals(event.getEventInfo(), event.getEventInfoJAXB());
}

Code example source: org.apache.hadoop/hadoop-yarn-server-applicationhistoryservice

// Excerpt: keep the earliest event timestamp seen so far as startTime.
if (e.getTimestamp() < startTime) {
  startTime = e.getTimestamp();
}

Code example source: ch.cern.hadoop/hadoop-mapreduce-client-jobclient

// Excerpt from a test asserting the newest (index 0) and oldest (last index)
// events on a job's timeline entity, once for a finished job and once for a failed one.
Assert.assertEquals(EventType.AM_STARTED.toString(),
    tEntity.getEvents().get(tEntity.getEvents().size() - 1)
        .getEventType());
Assert.assertEquals(EventType.JOB_FINISHED.toString(),
    tEntity.getEvents().get(0).getEventType());
Assert.assertEquals(EventType.AM_STARTED.toString(),
    tEntity.getEvents().get(tEntity.getEvents().size() - 1)
        .getEventType());
Assert.assertEquals(EventType.JOB_FAILED.toString(),
    tEntity.getEvents().get(0).getEventType());
} finally {
  if (cluster != null) {

Code example source: org.apache.hadoop/hadoop-yarn-server-applicationhistoryservice

/**
 * Create a test event.
 */
private static TimelineEvent createEvent(long timestamp, String type,
    Map<String, Object> info) {
  TimelineEvent event = new TimelineEvent();
  event.setTimestamp(timestamp);
  event.setEventType(type);
  event.setEventInfo(info);
  return event;
}

Code example source: apache/hive

TimelineEntity createPostHookEvent(String queryId, long stopTime, String user,
    String requestuser, boolean success, String opId,
    Map<String, Long> durations, String domainId) throws Exception {
  LOG.info("Received post-hook notification for :" + queryId);
  TimelineEntity atsEntity = new TimelineEntity();
  atsEntity.setEntityId(queryId);
  atsEntity.setEntityType(EntityTypes.HIVE_QUERY_ID.name());
  atsEntity.addPrimaryFilter(PrimaryFilterTypes.user.name(), user);
  atsEntity.addPrimaryFilter(PrimaryFilterTypes.requestuser.name(), requestuser);
  if (opId != null) {
    atsEntity.addPrimaryFilter(PrimaryFilterTypes.operationid.name(), opId);
  }
  TimelineEvent stopEvt = new TimelineEvent();
  stopEvt.setEventType(EventTypes.QUERY_COMPLETED.name());
  stopEvt.setTimestamp(stopTime);
  atsEntity.addEvent(stopEvt);
  atsEntity.addOtherInfo(OtherInfoTypes.STATUS.name(), success);
  // Perf times
  JSONObject perfObj = new JSONObject(new LinkedHashMap<>());
  for (Map.Entry<String, Long> entry : durations.entrySet()) {
    perfObj.put(entry.getKey(), entry.getValue());
  }
  atsEntity.addOtherInfo(OtherInfoTypes.PERF.name(), perfObj.toString());
  atsEntity.setDomainId(domainId);
  return atsEntity;
}

Code example source: io.hops/hadoop-yarn-common

// Excerpt from a TimelineEvents test: each EventsOfOneEntity group gets two
// events, and the assertions verify the groups read back via getAllEvents().
partEvents.setEntityType("entity type " + j);
for (int i = 0; i < 2; ++i) {
  TimelineEvent event = new TimelineEvent();
  event.setTimestamp(System.currentTimeMillis());
  event.setEventType("event type " + i);
  event.addEventInfo("key1", "val1");
  event.addEventInfo("key2", "val2");
  partEvents.addEvent(event);
}
// ... partEvents1 is the first group returned by events.getAllEvents() ...
Assert.assertEquals(2, partEvents1.getEvents().size());
TimelineEvent event11 = partEvents1.getEvents().get(0);
Assert.assertEquals("event type 0", event11.getEventType());
Assert.assertEquals(2, event11.getEventInfo().size());
TimelineEvent event12 = partEvents1.getEvents().get(1);
Assert.assertEquals("event type 1", event12.getEventType());
Assert.assertEquals(2, event12.getEventInfo().size());
TimelineEvents.EventsOfOneEntity partEvents2 = events.getAllEvents().get(1);
Assert.assertEquals("entity id 1", partEvents2.getEntityId());
Assert.assertEquals(2, partEvents2.getEvents().size());
TimelineEvent event21 = partEvents2.getEvents().get(0);
Assert.assertEquals("event type 0", event21.getEventType());
Assert.assertEquals(2, event21.getEventInfo().size());
TimelineEvent event22 = partEvents2.getEvents().get(1);
Assert.assertEquals("event type 1", event22.getEventType());
Assert.assertEquals(2, event22.getEventInfo().size());

Code example source: com.github.jiayuhan-it/hadoop-yarn-server-applicationhistoryservice

// Excerpt: derive an application's created/finished times from its timeline events.
if (events != null) {
  for (TimelineEvent event : events) {
    if (event.getEventType().equals(
        ApplicationMetricsConstants.CREATED_EVENT_TYPE)) {
      createdTime = event.getTimestamp();
    } else if (event.getEventType().equals(
        ApplicationMetricsConstants.FINISHED_EVENT_TYPE)) {
      finishedTime = event.getTimestamp();
      Map<String, Object> eventInfo = event.getEventInfo();
      if (eventInfo == null) {
        continue;

Code example source: io.hops/hadoop-yarn-common

// Excerpt from a test of setEventInfo/addEventInfo: the event info map is set
// from several candidate maps and then checked by assertEventInfo().
TimelineEvent event = new TimelineEvent();
List<Map<String, Object>> eventInfoList =
    new ArrayList<Map<String, Object>>();
eventInfo.put("ekey2", "eval2");
eventInfoList.add(eventInfo);
event.setEventInfo(null);
for (Map<String, Object> eventInfoToSet : eventInfoList) {
  event.setEventInfo(eventInfoToSet);
  assertEventInfo(event);
  event.addEventInfo(eventInfoToAdd);
  assertEventInfo(event);

Code example source: com.github.jiayuhan-it/hadoop-yarn-server-applicationhistoryservice

// Excerpt: read application-attempt details out of REGISTERED/FINISHED timeline events.
if (events != null) {
  for (TimelineEvent event : events) {
    if (event.getEventType().equals(
        AppAttemptMetricsConstants.REGISTERED_EVENT_TYPE)) {
      Map<String, Object> eventInfo = event.getEventInfo();
      if (eventInfo == null) {
        continue;
      }
      // ... values are read from eventInfo here (omitted in the excerpt) ...
    } else if (event.getEventType().equals(
        AppAttemptMetricsConstants.FINISHED_EVENT_TYPE)) {
      Map<String, Object> eventInfo = event.getEventInfo();
      if (eventInfo == null) {
        continue;

Code example source: ch.cern.hadoop/hadoop-yarn-server-applicationhistoryservice

// Excerpt of an event-filtering loop: events outside the requested time window
// or not in the requested set of event types are skipped.
break;  // exits the loop from a preceding check (elided in this excerpt)
if (event.getTimestamp() <= windowStart) {
  continue;
}
if (event.getTimestamp() > windowEnd) {
  continue;
}
if (eventTypes != null && !eventTypes.contains(event.getEventType())) {
  continue;
}

Code example source: org.apache.hadoop/hadoop-yarn-common

private static void assertEventInfo(TimelineEvent event) {
  Assert.assertNotNull(event);
  Assert.assertNotNull(event.getEventInfoJAXB());
  Assert.assertTrue(event.getEventInfo() instanceof HashMap);
  Assert.assertTrue(event.getEventInfoJAXB() instanceof HashMap);
  Assert.assertEquals(event.getEventInfo(), event.getEventInfoJAXB());
}
