
Usage and Code Examples of the org.apache.flink.api.common.JobID Class


This article collects code examples for the Java class org.apache.flink.api.common.JobID and shows how the JobID class is used in practice. The examples come mainly from selected projects on GitHub, Stack Overflow, and Maven, and should serve as practical references. Details of the JobID class are as follows:
Package: org.apache.flink.api.common.JobID
Class name: JobID

JobID Overview

Unique (at least statistically unique) identifier for a Flink job. Jobs in Flink correspond to dataflow graphs.

Jobs simultaneously act as sessions, because jobs can be created and submitted incrementally in different parts: newer fragments of a graph can be attached to existing graphs, thereby extending the current dataflow graph.
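
To make the API concrete, here is a minimal, self-contained sketch (written for this overview, not taken from any of the projects quoted below) of the JobID operations that recur in the examples: random generation, construction from two long values, parsing from a hex string, and the hex round-trip via toString().

import org.apache.flink.api.common.JobID;

public class JobIdSketch {
    public static void main(String[] args) {
        // Statistically unique, randomly generated ID (this is what ExecutionEnvironment uses).
        JobID generated = JobID.generate();

        // The no-argument constructor also produces a random ID.
        JobID random = new JobID();

        // Deterministic ID built from its lower and upper 64-bit parts.
        JobID fixed = new JobID(1L, 2L);

        // Round-trip through the 32-character hex string representation.
        String hex = generated.toString();
        JobID parsed = JobID.fromHexString(hex);

        System.out.println(hex);                      // prints a 32-character lowercase hex string
        System.out.println(parsed.equals(generated)); // true
    }
}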

Code Examples

Code example source: apache/flink

private static ClusterClient createClusterClient() throws Exception {
final ClusterClient clusterClient = mock(ClusterClient.class);
when(clusterClient.listJobs()).thenReturn(CompletableFuture.completedFuture(Arrays.asList(
new JobStatusMessage(new JobID(), "job1", JobStatus.RUNNING, 1L),
new JobStatusMessage(new JobID(), "job2", JobStatus.CREATED, 1L),
new JobStatusMessage(new JobID(), "job3", JobStatus.SUSPENDING, 3L),
new JobStatusMessage(new JobID(), "job4", JobStatus.SUSPENDING, 2L),
new JobStatusMessage(new JobID(), "job5", JobStatus.FINISHED, 3L)
)));
return clusterClient;
}

Code example source: apache/flink

@Test
public void testStop() throws Exception {
// test stop properly
JobID jid = new JobID();
String jidString = jid.toString();
String[] parameters = { jidString };
final ClusterClient clusterClient = createClusterClient(null);
MockedCliFrontend testFrontend = new MockedCliFrontend(clusterClient);
testFrontend.stop(parameters);
Mockito.verify(clusterClient, times(1)).stop(any(JobID.class));
}

Code example source: apache/flink

/**
* Creates a new Execution Environment.
*/
protected ExecutionEnvironment() {
jobID = JobID.generate();
}

Code example source: apache/flink

@Test
public void testMissingParallelism() throws Exception {
final JobID jobId = new JobID();
final String[] args = {jobId.toString()};
try {
callModify(args);
fail("Expected CliArgsException");
} catch (CliArgsException expected) {
// expected
}
}

Code example source: apache/flink

/**
* Tests cancelling with the savepoint option.
*/
@Test
public void testCancelWithSavepoint() throws Exception {
{
// Cancel with savepoint (no target directory)
JobID jid = new JobID();
String[] parameters = { "-s", jid.toString() };
final ClusterClient clusterClient = createClusterClient();
MockedCliFrontend testFrontend = new MockedCliFrontend(clusterClient);
testFrontend.cancel(parameters);
Mockito.verify(clusterClient, times(1))
.cancelWithSavepoint(any(JobID.class), isNull(String.class));
}
{
// Cancel with savepoint (with target directory)
JobID jid = new JobID();
String[] parameters = { "-s", "targetDirectory", jid.toString() };
final ClusterClient clusterClient = createClusterClient();
MockedCliFrontend testFrontend = new MockedCliFrontend(clusterClient);
testFrontend.cancel(parameters);
Mockito.verify(clusterClient, times(1))
.cancelWithSavepoint(any(JobID.class), notNull(String.class));
}
}

Code example source: apache/flink

StreamTask streamTask = spy(new EmptyStreamTask(mockEnvironment));
CheckpointMetaData checkpointMetaData = new CheckpointMetaData(checkpointId, timestamp);
StreamOperator streamOperator1 = mock(StreamOperator.class);
StreamOperator streamOperator2 = mock(StreamOperator.class);
StreamOperator streamOperator3 = mock(StreamOperator.class);
RunnableFuture<SnapshotResult<OperatorStateHandle>> failingFuture = mock(RunnableFuture.class);
when(failingFuture.get()).thenThrow(new ExecutionException(new Exception("Test exception")));
when(operatorSnapshotResult3.getOperatorStateRawFuture()).thenReturn(failingFuture);
when(streamOperator1.snapshotState(anyLong(), anyLong(), any(CheckpointOptions.class), any(CheckpointStreamFactory.class))).thenReturn(operatorSnapshotResult1);
when(streamOperator2.snapshotState(anyLong(), anyLong(), any(CheckpointOptions.class), any(CheckpointStreamFactory.class))).thenReturn(operatorSnapshotResult2);
when(streamOperator3.snapshotState(anyLong(), anyLong(), any(CheckpointOptions.class), any(CheckpointStreamFactory.class))).thenReturn(operatorSnapshotResult3);
Whitebox.setInternalState(streamTask, "cancelables", new CloseableRegistry());
Whitebox.setInternalState(streamTask, "asyncOperationsThreadPool", newDirectExecutorService());
Whitebox.setInternalState(streamTask, "configuration", new StreamConfig(new Configuration()));
Whitebox.setInternalState(streamTask, "checkpointStorage", new MemoryBackendCheckpointStorage(new JobID(), null, null, Integer.MAX_VALUE));

Code example source: apache/flink

final OneShotLatch completeSubtask = new OneShotLatch();
Environment mockEnvironment = spy(new MockEnvironmentBuilder().build());
thenAnswer((Answer) invocation -> {
createSubtask.trigger();
completeSubtask.await();
CheckpointMetaData checkpointMetaData = new CheckpointMetaData(checkpointId, timestamp);
final StreamOperator streamOperator = mock(StreamOperator.class);
final OperatorID operatorID = new OperatorID();
when(streamOperator.getOperatorID()).thenReturn(operatorID);
KeyedStateHandle managedKeyedStateHandle = mock(KeyedStateHandle.class);
DoneFuture.of(SnapshotResult.of(rawOperatorStateHandle)));
when(streamOperator.snapshotState(anyLong(), anyLong(), any(CheckpointOptions.class), any(CheckpointStreamFactory.class))).thenReturn(operatorSnapshotResult);
when(operatorChain.getAllOperators()).thenReturn(streamOperators);
CheckpointStorage checkpointStorage = new MemoryBackendCheckpointStorage(new JobID(), null, null, Integer.MAX_VALUE);
Whitebox.setInternalState(streamTask, "cancelables", new CloseableRegistry());
Whitebox.setInternalState(streamTask, "asyncOperationsThreadPool", executor);
Whitebox.setInternalState(streamTask, "configuration", new StreamConfig(new Configuration()));
Whitebox.setInternalState(streamTask, "checkpointStorage", checkpointStorage);

Code example source: apache/flink

final OneShotLatch completeAcknowledge = new OneShotLatch();
CheckpointResponder checkpointResponder = mock(CheckpointResponder.class);
doAnswer(new Answer() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
new JobID(1L, 2L),
new ExecutionAttemptID(1L, 2L),
mock(TaskLocalStateStoreImpl.class),
null,
checkpointResponder);
StreamOperator streamOperator = mock(StreamOperator.class);
when(streamOperator.getOperatorID()).thenReturn(new OperatorID(42, 42));
KeyedStateHandle managedKeyedStateHandle = mock(KeyedStateHandle.class);
DoneFuture.of(SnapshotResult.of(rawOperatorStateHandle)));
when(streamOperator.snapshotState(anyLong(), anyLong(), any(CheckpointOptions.class), any(CheckpointStreamFactory.class))).thenReturn(operatorSnapshotResult);
OperatorChain operatorChain = mock(OperatorChain.class);
when(operatorChain.getAllOperators()).thenReturn(streamOperators);
CheckpointStorage checkpointStorage = new MemoryBackendCheckpointStorage(new JobID(), null, null, Integer.MAX_VALUE);

Code example source: apache/flink

new BlobCacheService(mock(PermanentBlobCache.class), mock(TransientBlobCache.class));
LibraryCacheManager libCache = mock(LibraryCacheManager.class);
when(libCache.getClassLoader(any(JobID.class))).thenReturn(StreamTaskTest.class.getClassLoader());
ResultPartitionManager partitionManager = mock(ResultPartitionManager.class);
NetworkEnvironment network = mock(NetworkEnvironment.class);
when(network.getResultPartitionManager()).thenReturn(partitionManager);
when(network.getDefaultIOMode()).thenReturn(IOManager.IOMode.SYNC);
when(network.createKvStateTaskRegistry(any(JobID.class), any(JobVertexID.class)))
.thenReturn(mock(TaskKvStateRegistry.class));
when(network.getTaskEventDispatcher()).thenReturn(taskEventDispatcher);
new JobID(),
"Job Name",
new SerializedValue<>(new ExecutionConfig()),
new Configuration(),
Collections.emptyList(),
Collections.emptyList());

Code example source: apache/flink

final long timestamp = 1L;
Environment mockEnvironment = spy(new MockEnvironmentBuilder().build());
final List checkpointResult = new ArrayList<>(1);
new JobID(1L, 2L),
new ExecutionAttemptID(1L, 2L),
mock(TaskLocalStateStoreImpl.class),
checkpointResponder);
when(mockEnvironment.getTaskStateManager()).thenReturn(taskStateManager);
when(statelessOperator.getOperatorID()).thenReturn(operatorID);
when(statelessOperator.snapshotState(anyLong(), anyLong(), any(CheckpointOptions.class), any(CheckpointStreamFactory.class)))
.thenReturn(statelessOperatorSnapshotResult);
Whitebox.setInternalState(streamTask, "operatorChain", operatorChain);
Whitebox.setInternalState(streamTask, "cancelables", new CloseableRegistry());
Whitebox.setInternalState(streamTask, "configuration", new StreamConfig(new Configuration()));
Whitebox.setInternalState(streamTask, "asyncOperationsThreadPool", Executors.newCachedThreadPool());
Whitebox.setInternalState(streamTask, "checkpointStorage", new MemoryBackendCheckpointStorage(new JobID(), null, null, Integer.MAX_VALUE));

Code example source: apache/flink

final Configuration taskConfiguration = new Configuration();
final StreamConfig streamConfig = new StreamConfig(taskConfiguration);
final NoOpStreamOperator noOpStreamOperator = new NoOpStreamOperator<>();
new JobID(),
"Test Job",
new SerializedValue<>(new ExecutionConfig()),
new Configuration(),
Collections.emptyList(),
Collections.emptyList());
final NetworkEnvironment networkEnv = mock(NetworkEnvironment.class);
when(networkEnv.createKvStateTaskRegistry(any(JobID.class), any(JobVertexID.class))).thenReturn(mock(TaskKvStateRegistry.class));
when(networkEnv.getTaskEventDispatcher()).thenReturn(taskEventDispatcher);
Executors.directExecutor());
CompletableFuture taskRun = CompletableFuture.runAsync(
() -> task.run(),
TestingUtils.defaultExecutor());
taskRun.get();

Code example source: apache/flink

NetworkEnvironment networkEnvironment = mock(NetworkEnvironment.class);
when(networkEnvironment.createKvStateTaskRegistry(any(JobID.class), any(JobVertexID.class)))
.thenReturn(mock(TaskKvStateRegistry.class));
when(networkEnvironment.getTaskEventDispatcher()).thenReturn(taskEventDispatcher);
new JobID(),
"test job name",
new SerializedValue<>(new ExecutionConfig()),
new Configuration(),
Collections.emptyList(),
Collections.emptyList());

Code example source: apache/flink

new SlotRequest(new JobID(), new AllocationID(), resourceProfile1, taskHost));
return null;
});
registerSlotRequestFuture.get();
doReturn(Collections.singletonList(Collections.singletonList(resourceManager.getContainerRequest())))
.when(mockResourceManagerClient).getMatchingRequests(any(Priority.class), anyString(), any(Resource.class));
verify(mockResourceManagerClient).addContainerRequest(any(AMRMClient.ContainerRequest.class));
verify(mockNMClient).startContainer(eq(testingContainer), any(ContainerLaunchContext.class));
hardwareDescription,
Time.seconds(10L))
.thenCompose(
(RegistrationResponse response) -> {
assertThat(response, instanceOf(TaskExecutorRegistrationSuccess.class));
Time.seconds(10L));
})
.handleAsync(
(Acknowledge ignored, Throwable throwable) -> rmServices.slotManager.getNumberRegisteredSlots(),
resourceManager.getMainThreadExecutorForTesting());

Code example source: apache/flink

CompletableFuture registerSlotRequestFuture = resourceManager.runInMainThread(() -> {
rmServices.slotManager.registerSlotRequest(
new SlotRequest(new JobID(), new AllocationID(), resourceProfile1, taskHost));
return null;
});
registerSlotRequestFuture.get();
doReturn(Collections.singletonList(Collections.singletonList(resourceManager.getContainerRequest())))
.when(mockResourceManagerClient).getMatchingRequests(any(Priority.class), anyString(), any(Resource.class));
verify(mockResourceManagerClient).addContainerRequest(any(AMRMClient.ContainerRequest.class));
verify(mockResourceManagerClient).removeContainerRequest(any(AMRMClient.ContainerRequest.class));
verify(mockNMClient).startContainer(eq(testingContainer), any(ContainerLaunchContext.class));

Code example source: uber/AthenaX

@Test
public void testDeployerWithIsolatedConfiguration() throws Exception {
YarnClusterConfiguration clusterConf = mock(YarnClusterConfiguration.class);
doReturn(new YarnConfiguration()).when(clusterConf).conf();
ScheduledExecutorService executor = mock(ScheduledExecutorService.class);
Configuration flinkConf = new Configuration();
YarnClient client = mock(YarnClient.class);
JobDeployer deploy = new JobDeployer(clusterConf, client, executor, flinkConf);
AthenaXYarnClusterDescriptor desc = mock(AthenaXYarnClusterDescriptor.class);
YarnClusterClient clusterClient = mock(YarnClusterClient.class);
doReturn(clusterClient).when(desc).deploy();
ActorGateway actorGateway = mock(ActorGateway.class);
doReturn(actorGateway).when(clusterClient).getJobManagerGateway();
doReturn(Future$.MODULE$.successful(null)).when(actorGateway).ask(any(), any());
JobGraph jobGraph = mock(JobGraph.class);
doReturn(JobID.generate()).when(jobGraph).getJobID();
deploy.start(desc, jobGraph);
verify(clusterClient).runDetached(jobGraph, null);
}

Code example source: apache/flink

@Test
public void testTriggerSavepointSuccess() throws Exception {
replaceStdOutAndStdErr();
JobID jobId = new JobID();
String savepointPath = "expectedSavepointPath";
final ClusterClient clusterClient = createClusterClient(savepointPath);
try {
MockedCliFrontend frontend = new MockedCliFrontend(clusterClient);
String[] parameters = { jobId.toString() };
frontend.savepoint(parameters);
verify(clusterClient, times(1))
.triggerSavepoint(eq(jobId), isNull(String.class));
assertTrue(buffer.toString().contains(savepointPath));
}
finally {
clusterClient.shutdown();
restoreStdOutAndStdErr();
}
}

Code example source: apache/flink

@Test
public void testNonExistingJobRetrieval() throws Exception {
final JobID jobID = new JobID();
try {
client.requestJobResult(jobID).get();
fail();
} catch (Exception exception) {
Optional expectedCause = ExceptionUtils.findThrowable(exception,
candidate -> candidate.getMessage() != null && candidate.getMessage().contains("Could not find Flink job"));
if (!expectedCause.isPresent()) {
throw exception;
}
}
}

Code example source: apache/flink

@Test
public void testClusterClientSavepoint() throws Exception {
Configuration config = new Configuration();
config.setString(JobManagerOptions.ADDRESS, "localhost");
JobID jobID = new JobID();
String savepointDirectory = "/test/directory";
String savepointPath = "/test/path";
TestSavepointActorGateway gateway = new TestSavepointActorGateway(jobID, savepointDirectory, savepointPath);
TestClusterClient clusterClient = new TestClusterClient(config, gateway);
try {
CompletableFuture pathFuture = clusterClient.triggerSavepoint(jobID, savepointDirectory);
Assert.assertTrue(gateway.messageArrived);
Assert.assertEquals(savepointPath, pathFuture.get());
} finally {
clusterClient.shutdown();
}
}

Code example source: apache/flink

private static MapState getMapState(
String jobId,
QueryableStateClient client,
MapStateDescriptor stateDescriptor) throws InterruptedException, ExecutionException {
CompletableFuture resultFuture =
client.getKvState(
JobID.fromHexString(jobId),
QsConstants.QUERY_NAME,
QsConstants.KEY, // which key of the keyed state to access
BasicTypeInfo.STRING_TYPE_INFO,
stateDescriptor);
return resultFuture.get();
}

Code example source: apache/flink

final JobID jobId = new JobID();
final String jobName = "Semi-Rebalance Test Job";
final Configuration cfg = new Configuration();
new NoRestartStrategy(),
new RestartAllStrategy.Factory(),
new TestingSlotProvider(ignored -> new CompletableFuture<>()),
ExecutionGraph.class.getClassLoader(),
VoidBlobWriter.getInstance(),
