Author: 壹花壹浄土 | Source: Internet | 2023-07-31 13:19
This article collects code examples of the Java method org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits() and shows how HRegion.replayRecoveredEdits() is used in practice. The examples are extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should make useful references. Details of the HRegion.replayRecoveredEdits() method follow:
Package: org.apache.hadoop.hbase.regionserver
Class: HRegion
Method: replayRecoveredEdits
Introduction to HRegion.replayRecoveredEdits
Read the edits placed under this region by the WAL splitting process, and put the recovered edits back into this region.
We can ignore any WAL message that has a sequence ID equal to or lower than minSeqId, because we know such messages are already reflected in the HFiles.
While this runs we are putting pressure on memory, yet we are outside our usual accounting, because we are not yet an onlined region (this work runs as part of region initialization). This means that if we are up against global memory limits, we will not be flagged to flush, because we are not online. We cannot be flushed by the usual mechanisms anyway; we are not yet online, so our relative sequence IDs are not yet aligned with the WAL sequence IDs -- not until we come online, after processing of the split edits.
But to help relieve memory pressure, we at least manage our own heap-size flushing if we are in excess of per-region limits. When flushing, though, we have to be careful to avoid using the regionserver/WAL sequence ID. It runs on a different track from what is going on in this region's context, so if we crashed while replaying these edits but in the midst had a flush that used a regionserver WAL sequence ID in excess of what this region and its split edit logs had reached, then we could miss edits the next time we recover. So we have to flush inline, using sequence IDs that make sense in this single-region context only -- until we come online.
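To make the flow above concrete, here is a minimal sketch of the skip-and-replay loop the description implies. It is not the actual HRegion implementation: the class name and the three abstract helpers at the bottom are hypothetical stand-ins, and only the WAL reader types (org.apache.hadoop.hbase.wal.WAL, as in HBase 2.x) are real.

import java.io.IOException;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALEdit;

// Sketch only -- not the real HRegion code. Demonstrates the contract from the
// description above: ignore entries whose sequence ID is <= minSeqId (already
// in the HFiles), replay the rest, and flush inline under memory pressure
// using region-local sequence IDs, because the region is not online yet.
abstract class ReplayRecoveredEditsSketch {

  long replay(WAL.Reader reader, long minSeqId) throws IOException {
    long maxReplayedSeqId = minSeqId;
    WAL.Entry entry;
    while ((entry = reader.next()) != null) {
      long seqId = entry.getKey().getSequenceId();
      if (seqId <= minSeqId) {
        continue; // already reflected in the HFiles; safe to skip
      }
      maxReplayedSeqId = Math.max(maxReplayedSeqId, seqId);
      applyToMemstore(entry.getEdit());
      if (overRegionMemstoreLimit()) {
        // Must not use the regionserver/WAL sequence ID here: a crash after a
        // flush stamped with a too-high seqid could hide edits on re-recovery.
        flushWithRegionLocalSeqIds();
      }
    }
    return maxReplayedSeqId;
  }

  // Hypothetical helpers standing in for HRegion internals:
  abstract void applyToMemstore(WALEdit edit);
  abstract boolean overRegionMemstoreLimit();
  abstract void flushWithRegionLocalSeqIds() throws IOException;
}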
Code examples
Code example source: apache/hbase
try {
  seqid = Math.max(seqid, replayRecoveredEdits(edits, maxSeqIdInStores, reporter, fs));
} catch (IOException e) {
  boolean skipErrors = conf.getBoolean(HConstants.HREGION_EDITS_REPLAY_SKIP_ERRORS,
      conf.getBoolean("hbase.skip.errors", HConstants.DEFAULT_HREGION_EDITS_REPLAY_SKIP_ERRORS));
  // ... error handling elided in this excerpt
Code example source: co.cask.hbase/hbase
try {
  seqid = replayRecoveredEdits(edits, seqid, reporter);
} catch (IOException e) {
  boolean skipErrors = conf.getBoolean("hbase.skip.errors", false);
  // ... error handling elided in this excerpt
Code example source: harbby/presto-connectors
try {
  seqid = Math.max(seqid, replayRecoveredEdits(edits, maxSeqIdInStores, reporter));
} catch (IOException e) {
  boolean skipErrors = conf.getBoolean(HConstants.HREGION_EDITS_REPLAY_SKIP_ERRORS,
      conf.getBoolean("hbase.skip.errors", HConstants.DEFAULT_HREGION_EDITS_REPLAY_SKIP_ERRORS));
  // ... error handling elided in this excerpt
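Across the three sources you can see the signature evolving: the 0.94-era co.cask fork takes (edits, seqid, reporter) and reads the legacy "hbase.skip.errors" key, while the later versions take a per-store map of max sequence IDs (and, in current apache/hbase, a FileSystem for the WAL directory) and consult hbase.hregion.edits.replay.skip.errors with the legacy key as a fallback. A minimal, standalone sketch of checking that flag follows; the class name is made up, but the HConstants constant is real.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

// Reads the flag that decides whether a corrupt recovered-edits file is moved
// aside and skipped (true) or aborts the region open (false, the default).
public class ReplaySkipErrorsCheck {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    boolean skipErrors = conf.getBoolean(
        HConstants.HREGION_EDITS_REPLAY_SKIP_ERRORS,   // "hbase.hregion.edits.replay.skip.errors"
        conf.getBoolean("hbase.skip.errors", false));  // legacy key as fallback
    System.out.println("Skip errors while replaying recovered edits: " + skipErrors);
  }
}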