All commands below are run as the hadoop user (prompt: [hadoop@master ~]$), on the master node unless noted otherwise.
1. Hadoop: start with start-all.sh, stop with stop-all.sh.
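To confirm the daemons came up, jps can be checked on each node (a quick sketch; the exact process list depends on the deployment, the names below assume a standard HDFS + YARN setup):
jps
# master, typically: NameNode, SecondaryNameNode, ResourceManager
# slaves, typically: DataNode, NodeManager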
2. Hive: launch the CLI with the hive command.
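A quick smoke test for the CLI, running one statement non-interactively:
hive -e "show databases;"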
3. ZooKeeper
cd /home/hadoop
Start command (run on every node):
zookeeper-3.4.14/bin/zkServer.sh start
The log is written to zookeeper.out in the directory the command is started from.
Check the status:
zookeeper-3.4.14/bin/zkServer.sh status
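On a healthy ensemble, one node reports Mode: leader and the rest Mode: follower. An extra liveness check is the ruok four-letter command (assuming nc is installed and the 4lw whitelist allows ruok, which is the 3.4 default):
echo ruok | nc localhost 2181
# expected reply: imok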
4. Sqoop
Verify that Sqoop works:
sqoop import --connect jdbc:mysql://192.168.91.112:3306/bgdmysqldb --username root --password '2019_Mysql' --table dh_call_info2 --fields-terminated-by '\t' --num-mappers 1 --hive-import --hive-database rdw --hive-table dh_call_info2 --delete-target-dir
Verify in Hive: select * from rdw.dh_call_info2;
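Before the full import, basic connectivity to MySQL can be verified with list-databases (same connection string and credentials as above):
sqoop list-databases --connect jdbc:mysql://192.168.91.112:3306 --username root --password '2019_Mysql'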
5. Azkaban
6. Spark cluster
Start the Hive metastore service first:
hive --service metastore
Note: jps will then show a process named "RunJar".
cd /home/hadoop/spark-244
bin/spark-sql --master yarn
use rdw;
select * from dh_call_info2;
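From a separate shell, the cluster itself can also be verified with the bundled SparkPi example (the examples jar path assumes a stock Spark 2.4.4 distribution built for Scala 2.11):
cd /home/hadoop/spark-244
bin/spark-submit --master yarn --class org.apache.spark.examples.SparkPi examples/jars/spark-examples_2.11-2.4.4.jar 10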
7. HBase cluster
/home/hadoop/hbase-222/bin/start-hbase.sh
jps
39596 HMaster
39766 HRegionServer
cd /home/hadoop/hbase-222
bin/hbase shell
Run: get 't1','rowkey001', {COLUMN=>'f1:col1'}
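If t1 does not exist yet, it can be created and populated in the same shell first (column family and value are illustrative):
create 't1', 'f1'
put 't1', 'rowkey001', 'f1:col1', 'value01'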
8. Kafka cluster
Start Kafka (run on every node):
cd /home/hadoop/kafka-212-230
bin/kafka-server-start.sh config/server.properties
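If topic_test1 does not exist yet and auto-creation is disabled, create it before starting the console clients below (kafka-topics.sh accepts --bootstrap-server from Kafka 2.2 onward; this assumes the kafka-212-230 directory is Kafka 2.3.0):
bin/kafka-topics.sh --create --bootstrap-server 192.168.91.112:9092 --replication-factor 1 --partitions 1 --topic topic_test1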
Start a console producer:
bin/kafka-console-producer.sh --broker-list 192.168.91.112:9092 --topic topic_test1
Start a console consumer:
bin/kafka-console-consumer.sh --bootstrap-server 192.168.91.113:9092 --topic topic_test1 --from-beginning
Type "hello world" in the producer console; the same message should appear in the consumer console.
9. Flink cluster
Start the Flink cluster:
start-cluster.sh
jps
30436 StandaloneSessionClusterEntrypoint (master node)
29516 TaskManagerRunner (worker node)
cd /home/hadoop/flink-191
Run the example job. Note: run it from the /home/hadoop/flink-191 directory, otherwise the jar file may not be found.
flink run examples/streaming/WordCount.jar
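Without an --output argument, WordCount writes its result to the TaskManager's stdout, which lands in the .out file under log/ (the file name pattern assumes the default Flink log layout):
tail log/flink-*-taskexecutor-*.out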
Open the web UI:
http://master:8091
Stop the Flink cluster:
stop-cluster.sh
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Unified start/stop of the whole cluster via SSH
More commands: https://www.cnblogs.com/youngerger/p/9104144.html
On the master server, create a file (vi bigdata-start.sh) with the following content:
start-all.sh
cd /home/hadoop
zookeeper-3.4.14/bin/zkServer.sh start
ssh hadoop@slave1 "cd /home/hadoop;zookeeper-3.4.14/bin/zkServer.sh start;exit"
ssh hadoop@slave2 "cd /home/hadoop;zookeeper-3.4.14/bin/zkServer.sh start;exit"
/home/hadoop/hbase-222/bin/start-hbase.sh
cd /home/hadoop/kafka-212-230
nohup bin/kafka-server-start.sh config/server.properties >>kafka-nohup.out 2>&1 &
ssh hadoop@slave1 "cd /home/hadoop/kafka-212-230;nohup bin/kafka-server-start.sh config/server.properties >>kafka-nohup.out 2>&1 &"
ssh hadoop@slave2 "cd /home/hadoop/kafka-212-230;nohup bin/kafka-server-start.sh config/server.properties >>kafka-nohup.out 2>&1 &"
start-cluster.sh
exit 0
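Make the script executable once, then run it:
chmod +x /home/hadoop/bigdata-start.sh
/home/hadoop/bigdata-start.sh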
On the master server, create a file (vi bigdata-stop.sh) with the following content:
###master stop
stop-all.sh
/home/hadoop/zookeeper-3.4.14/bin/zkServer.sh stop
/home/hadoop/kafka-212-230/bin/kafka-server-stop.sh
kill -9 $(ps -ef|grep HMaster|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
kill -9 $(ps -ef|grep HRegionServer|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
kill -9 $(ps -ef|grep Kafka|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
stop-cluster.sh
#### slaves stop
# Execute the stop script on each remote machine (the script is created below):
ssh hadoop@slave1 "/home/hadoop/bigdata-stop-slave.sh"
ssh hadoop@slave2 "/home/hadoop/bigdata-stop-slave.sh"
exit 0
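Both scripts assume passwordless SSH from the master to the slaves; if that is not set up yet, a typical one-time setup is:
ssh-keygen -t rsa
ssh-copy-id hadoop@slave1
ssh-copy-id hadoop@slave2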
On each slave server, create the file /home/hadoop/bigdata-stop-slave.sh (it is invoked over SSH from the master) with the following content:
kill -9 $(ps -ef|grep Kafka|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
kill -9 $(ps -ef|grep HMaster|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
kill -9 $(ps -ef|grep HRegionServer|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
kill -9 $(ps -ef|grep QuorumPeerMain|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
kill -9 $(ps -ef|grep ZKMainServer|gawk '$0 !~/grep/ {print $2}' |tr -s '\n' ' ')
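As a gentler alternative to kill -9, the same JVMs can be matched via jps and sent a normal SIGTERM first (a sketch, assuming jps is on the hadoop user's PATH):
kill $(jps | gawk '/Kafka|HMaster|HRegionServer|QuorumPeerMain/ {print $1}')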
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++