Quick Start/Stop Script for a Hadoop Cluster
A vanilla Hadoop cluster has no unified management tool, so as more and more components are deployed, managing it becomes tedious: starting and stopping everything takes many separate commands. This shell script starts or stops the whole cluster in one step.
Notes:
- This script is meant for playing with a self-built pseudo-distributed cluster; use it with great caution on a production cluster (though production environments rarely run vanilla Hadoop anyway)!
- The machine hosting the script must have passwordless SSH login to every machine referenced in the script.
- Components currently covered: ZooKeeper, HDFS, YARN (HA), JobHistoryServer, HBase, Hive Metastore, and HiveServer2.
- Read the comments at the top of the script before running it.
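Setting up the passwordless SSH login mentioned above usually means generating a key pair on the machine that will run the script and copying the public key to every node. A minimal sketch, assuming the placeholder host names master1/master2/worker1 and the root user used later in the script:

```shell
#!/bin/bash
# Generate a key pair once, if one does not already exist (no passphrase).
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/id_rsa"
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY" -q

# Push the public key to every node the script will SSH into.
# Host names are placeholders; replace them with your own.
for host in master1 master2 worker1; do
    ssh-copy-id "root@$host" || echo "could not copy key to $host"
done
```

After this, `ssh root@master1` from the script host should log in without a password prompt.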
Script contents
#!/bin/bash
# Quick start/stop script for a vanilla Hadoop cluster
# Components: ZooKeeper, HDFS, YARN (HA), JobHistoryServer, HBase, Hive Metastore, HiveServer2
# Passwordless SSH login must be configured between the nodes
# Environment variables for ZooKeeper, Hadoop, HBase, and Hive must be set (/etc/profile), e.g.:
######################################################################################################
# export ZOOKEEPER_HOME=/usr/hadoop/zookeeper-3.4.10
# export HADOOP_PREFIX=/usr/hadoop/hadoop-2.7.4
# export HBASE_HOME=/usr/hadoop/hbase-1.2.6
# export HIVE_HOME=/usr/hadoop/hive-1.2.1
# export PATH=$PATH:$ZOOKEEPER_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin
######################################################################################################
# If YARN is not configured with ResourceManager HA, comment out the two
# "yarn-daemon.sh ... resourcemanager" lines in the start and stop branches below
# If the JobHistoryServer service is not configured, comment out the two
# "mr-jobhistory-daemon.sh ..." lines in the start and stop branches as well
# ZooKeeper node addresses: wrap each address in double quotes, separated by spaces
# (bash cannot export arrays, so ZK_HOST is only visible inside this script)
ZK_HOST=("master1" "master2" "worker1")
# HDFS master (NameNode) address; with HA configured, either NameNode will do
export HDFS_HOST=master1
# YARN master (ResourceManager) address
export YARN_HOST=master1
# YARN standby ResourceManager address
export YARN_BAK_HOST=master2
# JobHistoryServer node address
export JOB_HOST=master2
# HBase master address; with HA configured, either master will do
export HBASE_HOST=master1
# Hive Metastore service node address
export HMETA_HOST=master1
# HiveServer2 service node address
export HSERVER_HOST=master2
# User that starts the cluster
export CLUSTER_USER=root
if [ $# -ne 1 ];then
echo -e "\n\tUsage: $0 {start|stop}\n"
exit 1;
fi
case "$1" in
start)
echo "----------------------- Starting ZooKeeper ----------------------"
for zk_host in "${ZK_HOST[@]}"
do
echo -e "\nStart Zk_Server On Host [$zk_host]..."
ssh $CLUSTER_USER@$zk_host "source /etc/profile;zkServer.sh start"
done
echo "--------------------------- Starting HDFS -----------------------"
ssh $CLUSTER_USER@$HDFS_HOST "source /etc/profile;start-dfs.sh"
echo "--------------------------- Starting YARN -----------------------"
ssh $CLUSTER_USER@$YARN_HOST "source /etc/profile;start-yarn.sh"
ssh $CLUSTER_USER@$YARN_BAK_HOST "source /etc/profile;yarn-daemon.sh start resourcemanager"
echo "--------------------- Starting JobHistoryServer -----------------"
ssh $CLUSTER_USER@$JOB_HOST "source /etc/profile;mr-jobhistory-daemon.sh start historyserver"
echo "-------------------------- Starting HBase -----------------------"
ssh $CLUSTER_USER@$HBASE_HOST "source /etc/profile;start-hbase.sh"
echo "---------------------- Starting Hive Metastore ------------------"
echo "Start HiveMetaStore On Host [$HMETA_HOST]..."
ssh $CLUSTER_USER@$HMETA_HOST "source /etc/profile;nohup hive --service metastore >> /var/hivelog.log 2>&1 &"
echo "------------------------ Starting HiveServer2 -------------------"
echo "Start HiveServer2 On Host [$HSERVER_HOST]..."
ssh $CLUSTER_USER@$HSERVER_HOST "source /etc/profile;nohup hiveserver2 >> /var/hivelog.log 2>&1 &"
echo -e "\n---------------------- Cluster startup complete -----------------\n"
;;
stop)
echo "---------------------- Stopping Hive Metastore ------------------"
echo "Stop HiveMetaStore On Host [$HMETA_HOST]..."
ssh $CLUSTER_USER@$HMETA_HOST "pkill -f hive.metastore.HiveMetaStore"
echo "------------------------ Stopping HiveServer2 -------------------"
echo "Stop HiveServer2 On Host [$HSERVER_HOST]..."
ssh $CLUSTER_USER@$HSERVER_HOST "pkill -f hive.service.server.HiveServer2"
echo "-------------------------- Stopping HBase -----------------------"
ssh $CLUSTER_USER@$HBASE_HOST "source /etc/profile;stop-hbase.sh"
echo "--------------------- Stopping JobHistoryServer -----------------"
ssh $CLUSTER_USER@$JOB_HOST "source /etc/profile;mr-jobhistory-daemon.sh stop historyserver"
echo "--------------------------- Stopping YARN -----------------------"
ssh $CLUSTER_USER@$YARN_HOST "source /etc/profile;stop-yarn.sh"
ssh $CLUSTER_USER@$YARN_BAK_HOST "source /etc/profile;yarn-daemon.sh stop resourcemanager"
echo "--------------------------- Stopping HDFS -----------------------"
ssh $CLUSTER_USER@$HDFS_HOST "source /etc/profile;stop-dfs.sh"
echo "----------------------- Stopping ZooKeeper ----------------------"
for zk_host in "${ZK_HOST[@]}"
do
echo -e "\nStop Zk_Server On Host [$zk_host]..."
ssh $CLUSTER_USER@$zk_host "source /etc/profile;zkServer.sh stop"
done
echo -e "\n------------------------- Cluster stopped -----------------------\n"
;;
*)
echo -e "\n\tUsage: $0 {start|stop}\n"
exit 1
;;
esac
exit 0
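The repeated separator lines make the script longer than it needs to be; they could be factored into a small helper. A minimal sketch (the helper name `banner` is my own, not part of the script above):

```shell
#!/bin/bash
# Print a section banner like the separators used in the script,
# centering the message inside a fixed-width line of dashes.
banner() {
    local msg="$1" width=66
    # Dashes on each side, leaving room for the message and two spaces.
    local pad=$(( (width - ${#msg} - 2) / 2 ))
    printf '%*s' "$pad" '' | tr ' ' '-'
    printf ' %s ' "$msg"
    printf '%*s\n' "$pad" '' | tr ' ' '-'
}

banner "Starting HDFS"
```

Each start/stop step would then call `banner "Starting HDFS"` and so on, instead of hand-counting dashes.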
Usage
Copy the script contents and save them as a file named cluster.sh. Upload the script to any node in the cluster (a management node is recommended), then make it executable:
# chmod +x cluster.sh
- Start the cluster
# ./cluster.sh start
- Stop the cluster
# ./cluster.sh stop
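The script has no restart action, so restarting the cluster means running stop followed by start. A hypothetical wrapper, assuming the script was saved as ./cluster.sh as described above (the path and function name are my own):

```shell
#!/bin/bash
# Hypothetical restart helper around cluster.sh.
# CLUSTER_SH is an assumed path; point it at wherever you saved the script.
CLUSTER_SH=./cluster.sh

restart_cluster() {
    # Stop first; only start again if the stop succeeded.
    "$CLUSTER_SH" stop && "$CLUSTER_SH" start
}
```

This could also be folded into the script itself as a third `restart)` branch in the case statement.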