
Hudi with Flink 1.14 Verification


Hudi with Flink DataStream API (Java)

Versions

  • Flink 1.14

  • Hudi 0.12.1


POM dependencies
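
The dependency list below references ${hudi.version} and ${flink.version} but does not show where they are defined. A properties block consistent with the versions above might look like the following sketch (the exact Flink patch release is an assumption; any 1.14.x should work):

<properties>
    <!-- assumed version properties; the Flink patch version is illustrative -->
    <hudi.version>0.12.1</hudi.version>
    <flink.version>1.14.6</flink.version>
</properties>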

<dependencies>
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-flink1.14-bundle</artifactId>
        <version>${hudi.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0</version>
        <scope>provided</scope>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-runtime-web_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-java-bridge_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-runtime_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-simple</artifactId>
        <version>1.7.25</version>
        <scope>compile</scope>
    </dependency>
</dependencies>



Write

Consume Kafka and continuously write into Hudi with the Flink DataStream API. The JSON message fields must match the RowType declared in the job (uuid, name, age, ts, partition); a matching producer example follows the job code.

package com.liangji.hudi0121.flink114;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.formats.common.TimestampFormat;
import org.apache.flink.formats.json.JsonRowDataDeserializationSchema;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.RowType;
import org.apache.hudi.common.model.HoodieTableType;
import org.apache.hudi.configuration.FlinkOptions;
import org.apache.hudi.util.HoodiePipeline;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.config.SaslConfigs;

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class JavaTestInsert {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        String targetTable = "t1";
        String basePath = "/project/ccr_ai_upgrade/ccr_ai_upgrade/aiui/flink/demo/t1";
        String checkpointDir = "/project/ccr_ai_upgrade/ccr_ai_upgrade/aiui/flink/chk";

        // Kafka source parameters, overridable from the command line
        ParameterTool parameterTool = ParameterTool.fromArgs(args);
        String sourceTopic = parameterTool.get("sourceTopic", "odeon_test_liangji_114_test");
        String cg = parameterTool.get("cg", "CG_odeon_test_liangji_114_test");
        String cgpwd = parameterTool.get("cgPwd", "252287");

        // RocksDB state backend with incremental checkpoints; checkpointing is required for Hudi commits
        EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend(true);
        env.setStateBackend(backend);
        env.enableCheckpointing(60000);
        CheckpointConfig config = env.getCheckpointConfig();
        config.setCheckpointStorage(checkpointDir);
        config.enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        config.setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE);
        config.setCheckpointTimeout(60000);

        // Kafka consumer properties (SASL/SCRAM authentication)
        Properties prop = new Properties();
        prop.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "172.31.162.72:9093,172.31.162.73:9093,172.31.162.74:9093");
        prop.setProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        prop.setProperty(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        prop.setProperty(ConsumerConfig.GROUP_ID_CONFIG, cg);
        prop.setProperty(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"" + cg + "\"\n" +
                "password=\"" + cgpwd + "\";");
        prop.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        prop.setProperty(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        prop.setProperty(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");

        // Deserialize the JSON payload directly into Flink's internal RowData
        RowType rowType = createJsonRowType();
        KafkaSource<RowData> source = KafkaSource.<RowData>builder()
                .setBootstrapServers("bootstrap server") // placeholder; the broker list is also set in prop above
                .setTopics(sourceTopic)
                .setGroupId(cg)
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(
                        new JsonRowDataDeserializationSchema(rowType, InternalTypeInfo.of(rowType), false, false, TimestampFormat.ISO_8601))
                .setProperties(prop)
                .build();
        DataStream<RowData> dataStream = env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");

        // Hudi write options: MERGE_ON_READ table with "ts" as the precombine field
        Map<String, String> options = new HashMap<>();
        options.put(FlinkOptions.PATH.key(), basePath);
        options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());
        options.put(FlinkOptions.PRECOMBINE_FIELD.key(), "ts");

        HoodiePipeline.Builder builder = HoodiePipeline.builder(targetTable)
                .column("uuid VARCHAR(200)")
                .column("name VARCHAR(100)")
                .column("age INT")
                .column("ts TIMESTAMP(0)")
                .column("`partition` VARCHAR(20)")
                .pk("uuid")
                .partition("partition")
                .options(options);
        builder.sink(dataStream, false); // the second parameter indicates whether the input data stream is bounded
        env.execute("Api_Sink");
    }

    private static RowType createJsonRowType() {
        return (RowType) DataTypes.ROW(
                DataTypes.FIELD("uuid", DataTypes.STRING()),
                DataTypes.FIELD("name", DataTypes.STRING()),
                DataTypes.FIELD("age", DataTypes.INT()),
                DataTypes.FIELD("ts", DataTypes.TIMESTAMP(0)),
                DataTypes.FIELD("partition", DataTypes.STRING())
        ).getLogicalType();
    }
}


A sample Kafka producer that sends matching messages:

package com.liangji.kafka;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.config.SaslConfigs;

import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

/**
 * Sends one JSON record per second to the source topic.
 *
 * @author liangji
 * @date 2021/5/7 19:17
 */
public class KafkaProducerFrom1 {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "bootstrap server");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"PG_odeon_test_liangji_114_test\" password=\"889448\";");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        while (true) {
            String uuid = UUID.randomUUID().toString().replace("-", "").toLowerCase();
            // JSON payload matching the RowType of the write job: uuid, name, age, ts, partition
            ProducerRecord<String, String> producerRecord = new ProducerRecord<>("odeon_test_liangji_114_test", "k", "{" +
                    "\"uuid\":" + "\"" + uuid + "\"," +
                    "\"name\":\"liangji-" + uuid + "\"," +
                    "\"age\":\"18\"" + "," +
                    "\"ts\":\"2022-11-17T14:00:00\"" + "," +
                    "\"partition\":\"2022-11-17\"" +
                    "}");
            Future<RecordMetadata> future = producer.send(producerRecord);
            future.get();
            System.out.println("success");
            Thread.sleep(1000);
        }
    }
}



Query

Continuously read from Hudi with the Flink DataStream API (streaming read starting from a given commit).

package com.liangji.hudi0121.flink114;

import org.apache.commons.lang3.StringUtils;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;
import org.apache.hudi.common.model.HoodieTableType;
import org.apache.hudi.common.table.HoodieTableMetaClient;
import org.apache.hudi.common.table.timeline.HoodieInstant;
import org.apache.hudi.configuration.FlinkOptions;
import org.apache.hudi.configuration.HadoopConfigurations;
import org.apache.hudi.util.HoodiePipeline;

import java.util.HashMap;
import java.util.Map;

public class JavaTestQuery {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        String targetTable = "t1";
        String basePath = "/project/ccr_ai_upgrade/ccr_ai_upgrade/aiui/flink/demo/t1";
        String startCommit = "20221117164814122";

        // If no start commit is given, fall back to the first completed instant on the timeline
        if (StringUtils.isEmpty(startCommit)) {
            HoodieTableMetaClient metaClient = HoodieTableMetaClient.builder()
                    .setConf(HadoopConfigurations.getHadoopConf(new Configuration()))
                    .setBasePath(basePath)
                    .build();
            startCommit = metaClient.getCommitsAndCompactionTimeline().filterCompletedInstants().firstInstant()
                    .map(HoodieInstant::getTimestamp).orElse(null);
        }

        // Streaming read of the MERGE_ON_READ table starting from startCommit
        Map<String, String> options = new HashMap<>();
        options.put(FlinkOptions.PATH.key(), basePath);
        options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());
        options.put(FlinkOptions.READ_AS_STREAMING.key(), "true");
        options.put(FlinkOptions.READ_START_COMMIT.key(), startCommit);

        HoodiePipeline.Builder builder = HoodiePipeline.builder(targetTable)
                .column("uuid VARCHAR(200)")
                .column("name VARCHAR(100)")
                .column("age INT")
                .column("ts TIMESTAMP(0)")
                .column("`partition` VARCHAR(20)")
                .pk("uuid")
                .partition("partition")
                .options(options);

        DataStream<RowData> rowDataDataStream = builder.source(env);
        rowDataDataStream.print();
        env.execute("Api_Source");
    }
}



Hudi with Flink SQL & Hudi on Hive Metastore (Scala)

POM dependencies

<dependencies>
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-flink1.14-bundle</artifactId>
        <version>${hudi.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.6.0</version>
        <scope>provided</scope>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-math3</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-runtime-web_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-simple</artifactId>
        <version>1.7.25</version>
        <scope>compile</scope>
    </dependency>
</dependencies>



DDL

Create the table with Flink SQL and manage the Hudi table metadata in the Hive metastore (catalog mode 'hms').
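
The directory referenced by 'hive.conf.dir' is expected to contain the Hive client configuration (hive-site.xml) for the target metastore. For reference only, a minimal hive-site.xml might look like the sketch below; the metastore address is a placeholder, not taken from the original setup:

<configuration>
    <!-- placeholder HMS endpoint; replace with your own metastore address -->
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://metastore-host:9083</value>
    </property>
</configuration>

With the catalog configuration in place, the catalog, database, and table are created as follows: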

package com.liangji.hudi0121.flink114.sql.ddl

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object CreateTable {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hueuser")
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    val createCatalogSql =
      s"""
         |CREATE CATALOG hudi_catalog WITH (
         |  'type' = 'hudi',
         |  'mode' = 'hms',
         |  'default-database' = 'default',
         |  'hive.conf.dir' = 'E:\\mine\\hudi0121-demo-maven\\flink114-scala\\src\\main\\resources'
         |)
         |""".stripMargin

    val createDbSql = """create database if not exists hudi_catalog.ccr_ai_upgrade""".stripMargin
    val changeDbSql = """use hudi_catalog.ccr_ai_upgrade""".stripMargin
    val createTbSql =
      """
        |CREATE TABLE if not exists test_hudi_flink_mor (
        |  uuid VARCHAR(200) PRIMARY KEY NOT ENFORCED,
        |  name VARCHAR(100),
        |  age INT,
        |  ts TIMESTAMP(0),
        |  `partition` VARCHAR(20)
        |)
        |PARTITIONED BY (`partition`)
        |WITH (
        |  'connector' = 'hudi',
        |  'path' = '/project/ccr_ai_upgrade/ccr_ai_upgrade/aiui/flink/demo/test_hudi_flink_mor',
        |  'table.type' = 'MERGE_ON_READ',
        |  'hoodie.datasource.write.keygenerator.class' = 'org.apache.hudi.keygen.ComplexAvroKeyGenerator',
        |  'hoodie.datasource.write.recordkey.field' = 'uuid',
        |  'hoodie.datasource.write.hive_style_partitioning' = 'true'
        |)
        |""".stripMargin

    tableEnv.executeSql(createCatalogSql)
    tableEnv.executeSql(createDbSql)
    tableEnv.executeSql(changeDbSql)
    tableEnv.executeSql(createTbSql).print()
  }
}



Batch


Write

Write to the Hudi table with Flink SQL. The example below performs a batch insert.

package com.liangji.hudi0121.flink114.sql.write

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object BatchWrite {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hueuser")
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    val createCatalogSql =
      s"""
         |CREATE CATALOG hudi_catalog WITH (
         |  'type' = 'hudi',
         |  'mode' = 'hms',
         |  'default-database' = 'default',
         |  'hive.conf.dir' = 'E:\\mine\\hudi0121-demo-maven\\flink114-scala\\src\\main\\resources'
         |)
         |""".stripMargin

    val changeDbSql = """use hudi_catalog.ccr_ai_upgrade""".stripMargin
    tableEnv.executeSql(createCatalogSql)
    tableEnv.executeSql(changeDbSql)
    tableEnv.executeSql(
      "insert into test_hudi_flink_mor values ('b','liangji-1',19,TIMESTAMP '2022-11-18 18:00:00','2022-11-18')").print()
  }
}



Query

Query the Hudi table with Flink SQL. The example below runs a batch query.

package com.liangji.hudi0121.flink114.sql.query

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object BatchQuery {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hueuser")
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    val createCatalogSql =
      s"""
         |CREATE CATALOG hudi_catalog WITH (
         |  'type' = 'hudi',
         |  'mode' = 'hms',
         |  'default-database' = 'default',
         |  'hive.conf.dir' = 'E:\\mine\\hudi0121-demo-maven\\flink114-scala\\src\\main\\resources'
         |)
         |""".stripMargin

    val changeDbSql = """use hudi_catalog.ccr_ai_upgrade""".stripMargin
    tableEnv.executeSql(createCatalogSql)
    tableEnv.executeSql(changeDbSql)
    tableEnv.executeSql("select * from test_hudi_flink_mor").print()
  }
}



Streaming


Write

Consume the Hudi table with Flink SQL and write the stream into a second Hudi table, test_hudi_flink_mor_streaming (assumed to have been created beforehand with a DDL like the one above). The example below writes in streaming mode.

package com.liangji.hudi0121.flink114.sql.streaming.write

import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.environment.CheckpointConfig
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object StreamingWrite {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hueuser")
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    // Checkpointing is required, otherwise the Hudi write never commits
    val checkpointDir = "hdfs://ns-tmp/project/ccr_ai_upgrade/ccr_ai_upgrade/aiui/flink/chk"
    val backend = new EmbeddedRocksDBStateBackend(true)
    env.setStateBackend(backend)
    env.enableCheckpointing(60000)
    val config = env.getCheckpointConfig
    config.setCheckpointStorage(checkpointDir)
    config.enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
    config.setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE)
    config.setCheckpointTimeout(60000)

    val createCatalogSql =
      s"""
         |CREATE CATALOG hudi_catalog WITH (
         |  'type' = 'hudi',
         |  'mode' = 'hms',
         |  'default-database' = 'default',
         |  'hive.conf.dir' = 'E:\\mine\\hudi0121-demo-maven\\flink114-scala\\src\\main\\resources'
         |)
         |""".stripMargin

    val changeDbSql = """use hudi_catalog.ccr_ai_upgrade""".stripMargin
    tableEnv.executeSql(createCatalogSql)
    tableEnv.executeSql(changeDbSql)

    val table = tableEnv.sqlQuery("select * from test_hudi_flink_mor/*+ OPTIONS('read.streaming.enabled'='true')*/")
    // table => datastream
    val stream = tableEnv.toDataStream(table)
    // apply any DataStream transformations here
    // datastream => table
    val inputTable = tableEnv.fromDataStream(stream)
    // register a temporary view so it can be referenced from SQL
    tableEnv.createTemporaryView("InputTable", inputTable)
    tableEnv.executeSql("insert into test_hudi_flink_mor_streaming select * from InputTable")
  }
}


Notes:

  1. Checkpointing must be enabled, otherwise the data is never committed to Hudi.

  2. Table and DataStream can be converted back and forth freely to apply custom processing logic.


Query

Read the Hudi table with Flink SQL in streaming mode, as shown below.

package com.liangji.hudi0121.flink114.sql.streaming.query

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object StreamingQuery {
  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "hueuser")
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    val createCatalogSql =
      s"""
         |CREATE CATALOG hudi_catalog WITH (
         |  'type' = 'hudi',
         |  'mode' = 'hms',
         |  'default-database' = 'default',
         |  'hive.conf.dir' = 'E:\\mine\\hudi0121-demo-maven\\flink114-scala\\src\\main\\resources'
         |)
         |""".stripMargin

    val changeDbSql = """use hudi_catalog.ccr_ai_upgrade""".stripMargin
    tableEnv.executeSql(createCatalogSql)
    tableEnv.executeSql(changeDbSql)
    tableEnv.executeSql("select * from test_hudi_flink_mor/*+ OPTIONS('read.streaming.enabled'='true')*/").print()
  }
}


 

Note: the SQL statement must carry the hint 'read.streaming.enabled'='true'. Other read options, such as 'read.start-commit' (used in the DataStream query above), can also be passed through the same OPTIONS hint.


References

  1. https://github.com/apache/hudi/blob/master/hudi-examples/hudi-examples-flink/src/main/java/org/apache/hudi/examples/quickstart/HoodieFlinkQuickstart.java

  2. Reading and writing Hudi with Flink SQL via the Hudi HMS Catalog and syncing Hive tables (the approach strongly recommended there)

  3. "Static methods in interface require -target:jvm-1.8" (CSDN blog)

  4. Working with Hudi tables in Flink

  5. Building a streaming data lake with Flink and Hudi (Alibaba Cloud developer community)
