Hadoop 2.10 + Flink Environment Setup
Hadoop Cluster Setup
1. Install and configure the JDK
wget https://file.zhanghail.cn/jdk-9.0.1_linux-x64_bin.tar.gz
tar xf jdk-9.0.1_linux-x64_bin.tar.gz -C /usr/local
ln -s /usr/local/jdk-9.0.1 /usr/local/java
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/local/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
EOF
2. Configure passwordless SSH between all nodes
ssh-keygen    # press Enter through every prompt
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
......
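For more than a couple of machines, copying the key by hand gets tedious; the steps above can be sketched as a loop over the node list. The commands are echoed rather than executed here, since they need live hosts and an interactive password prompt:

```shell
# Sketch: distribute root's public key to every node in the cluster.
# Drop the leading 'echo' on a real cluster; ssh-copy-id will ask
# for each node's password once, then logins become passwordless.
nodes="hadoop-master slave1 slave2 slave3"
for host in $nodes; do
  echo "ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host"
done
```

Afterwards, `ssh slave1 hostname` from the master should return without a password prompt.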
3. Set the hostname and /etc/hosts mappings on every node
vim /etc/hosts
10.0.0.10 hadoop-master
10.10.0.7 slave1
10.10.0.8 slave2
10.10.0.9 slave3
4. Hadoop setup
- 1. Download the package
Download Hadoop onto the master node:
wget https://file.zhanghail.cn/hadoop-2.10.0.tar.gz
mkdir /hadoop
tar -zxvf hadoop-2.10.0.tar.gz -C /hadoop/
cd /hadoop
- 2. Configure the environment variables on hadoop-master
cat >> /etc/profile <<'EOF'
export HADOOP_HOME=/hadoop/hadoop-2.10.0
export PATH=$PATH:$HADOOP_HOME/bin
EOF
- 3. Reload the environment variables
source /etc/profile
- 4. Edit the configuration files
Configure core-site.xml (run vim etc/hadoop/core-site.xml) and point fs.default.name (the legacy alias of fs.defaultFS) at the NameNode:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/hadoop/data/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
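If the cluster setup is being scripted rather than done in vim, the same file can be written with a heredoc; a minimal sketch that writes to a temporary path instead of $HADOOP_HOME/etc/hadoop/core-site.xml:

```shell
# Write core-site.xml non-interactively; on a real node the target
# would be $HADOOP_HOME/etc/hadoop/core-site.xml instead of a temp file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/hadoop/data/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>
EOF
grep -q 'hdfs://hadoop-master:9000' "$conf" && echo "core-site ok"
```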
- 5. Configure hdfs-site.xml (run vim etc/hadoop/hdfs-site.xml):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>  <!-- matches the number of slave nodes -->
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/data/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data/hdfs/data</value>
  </property>
</configuration>
- 6. Configure mapred-site.xml
First copy mapred-site.xml.template to mapred-site.xml, then edit it:
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
vim etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>http://hadoop-master:9001</value>
  </property>
</configuration>
- 7. Configure the masters and slaves files
vim etc/hadoop/masters
hadoop-master
vim etc/hadoop/slaves
slave1
slave2
slave3
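Since the slaves file is just one hostname per line, it can also be generated from the node list; a small sketch using a temporary file in place of etc/hadoop/slaves:

```shell
# Generate the slaves file from a list instead of editing by hand
# (temp file here; the real target is $HADOOP_HOME/etc/hadoop/slaves).
slaves_file=$(mktemp)
printf '%s\n' slave1 slave2 slave3 > "$slaves_file"
grep -c . "$slaves_file"   # → 3
```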
5. Configure all slave nodes
- 1. Copy the hadoop directory from the master to the same path on every slave:
scp -r /hadoop slave1:/
scp -r /hadoop slave2:/
scp -r /hadoop slave3:/
- 2. Append the same variables to /etc/profile on each slave, then source it:
export HADOOP_HOME=/hadoop/hadoop-2.10.0
export PATH=$PATH:$HADOOP_HOME/bin
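The three scp commands follow one pattern, so they can be collapsed into a loop; sketched here with the command echoed, since it assumes live slave hosts:

```shell
# Sketch: push the Hadoop install to every slave in one loop.
# Drop the leading 'echo' on a real cluster.
workers="slave1 slave2 slave3"
for host in $workers; do
  echo "scp -r /hadoop $host:/"
done
```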
6. Start the cluster
On the master node:
- 1. Format the NameNode (do this only once; repeating it wipes the HDFS metadata):
hadoop namenode -format
- 2. Start all daemons:
$HADOOP_HOME/sbin/start-all.sh
- 3. If startup reports no errors, the cluster web UI is reachable at http://hadoop-master:50070/
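A quick way to confirm the master daemons is `jps`: after start-all.sh it should list at least NameNode, SecondaryNameNode, and ResourceManager. A small check along those lines, run against sample jps output here since it assumes a live cluster:

```shell
# Check that every required daemon name appears in jps output.
check_daemons() {
  # $1: jps output; remaining args: required daemon names
  out=$1; shift
  for d in "$@"; do
    echo "$out" | grep -q "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons up"
}

# Sample output standing in for `jps` on the master node.
sample='1234 NameNode
2345 SecondaryNameNode
3456 ResourceManager
4567 Jps'

check_daemons "$sample" NameNode SecondaryNameNode ResourceManager
# → all daemons up
```

On the slave nodes the names to check would be DataNode and NodeManager instead.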
Flink Standalone Cluster HA Setup
1. Download Flink (the build matching your Hadoop version)
wget https://file.zhanghail.cn/flink-1.10.0-bin-scala_2.11.tgz
2. Install Flink
- 1. Extract flink-1.10.0-bin-scala_2.11.tgz into /opt
sudo tar -zxvf flink-1.10.0-bin-scala_2.11.tgz -C /opt
After extraction, rename the directory to flink:
mv /opt/flink-1.10.0 /opt/flink
- 2. Add Flink to /etc/profile:
export FLINK_HOME=/opt/flink
export PATH=$FLINK_HOME/bin:$PATH
- 3. Reload the environment variables
source /etc/profile
3. Configure Flink
vim /opt/flink/conf/flink-conf.yaml
jobmanager.rpc.address: hadoop-master
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: hadoop-master:2181,slave1:2181,slave2:2181,slave3:2181
List the JobManager candidates in /opt/flink/conf/masters:
hadoop-master:8081
slave1:8081
List the TaskManager nodes in /opt/flink/conf/slaves:
slave1
slave2
slave3
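A quick sanity check on the HA settings is to count the servers in the ZooKeeper quorum line; sketched here against a copy of the settings in a temporary file:

```shell
# Count ZooKeeper servers in the high-availability quorum setting
# (reading from a temp copy; on a node this would be
# /opt/flink/conf/flink-conf.yaml).
conf=$(mktemp)
cat > "$conf" <<'EOF'
jobmanager.rpc.address: hadoop-master
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: hadoop-master:2181,slave1:2181,slave2:2181,slave3:2181
EOF
awk -F': ' '/zookeeper.quorum/ {n = split($2, a, ","); print n}' "$conf"   # → 4
```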
4. Copy the Flink directory to every slave node:
scp -r /opt/flink slave1:/opt/
scp -r /opt/flink slave2:/opt/
scp -r /opt/flink slave3:/opt/
5. Start the cluster from the master:
/opt/flink/bin/start-cluster.sh
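Once start-cluster.sh returns, each JobManager candidate serves the Flink web UI and REST API on the port given in the masters file (8081 here); a liveness probe can hit the REST endpoint of each candidate. The commands are echoed rather than executed, since they assume the cluster above is actually running:

```shell
# Probe each HA JobManager candidate's REST endpoint on port 8081.
# Drop the leading 'echo' on a live cluster; /overview returns
# cluster-wide stats as JSON when the JobManager is up.
for jm in hadoop-master slave1; do
  echo "curl -s http://$jm:8081/overview"
done
```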