Cluster Planning
| node01 | node02 | node03 |
| --- | --- | --- |
| NameNode | NameNode | NameNode |
| ZKFC | ZKFC | ZKFC |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| ZK | ZK | ZK |
| ResourceManager | ResourceManager | |
| NodeManager | NodeManager | NodeManager |
Prepare the Template VM
Stop the firewall and disable it from starting on boot
systemctl stop firewalld
systemctl disable firewalld
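A quick check that the firewall is really off (assuming CentOS 7 / systemd):
systemctl is-active firewalld    # should print "inactive"
systemctl is-enabled firewalld   # should print "disabled"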
Create a regular user and set its password
useradd lixuan
passwd lixuan
Grant the lixuan user root privileges
vim /etc/sudoers
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
lixuan ALL=(ALL) NOPASSWD:ALL
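To confirm the sudo rule took effect:
su - lixuan
sudo ls /root   # should list /root without asking for a password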
Create directories under /opt, then change their owner and group
mkdir /opt/module
mkdir /opt/software
chown lixuan:lixuan /opt/module
chown lixuan:lixuan /opt/software
Remove the OpenJDK that ships with the VM
rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
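The same query should now come back empty:
rpm -qa | grep -i java   # no output means the bundled JDK is gone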
Reboot the VM
reboot
Clone the VM to create node01 (and likewise node02, node03)
Set a static IP on each clone (all three machines)
vim /etc/sysconfig/network-scripts/ifcfg-ens33
DEVICE=ens33
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
NAME="ens33"
IPADDR=192.168.50.100
PREFIX=24
GATEWAY=192.168.50.2
DNS1=192.168.50.2
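After editing, restart the network service and verify the address (assuming CentOS 7):
systemctl restart network
ip addr show ens33   # should show 192.168.50.100/24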
Check the Virtual Network Editor (in VMware) to confirm the subnet and gateway
Change each clone's hostname
vim /etc/hostname
Configure the hosts file
vim /etc/hosts
192.168.50.100 node01
192.168.50.110 node02
192.168.50.120 node03
Reboot
Update the hosts file on the Windows host as well
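With the hosts files in place, a quick connectivity check from node01:
ping -c 1 node02
ping -c 1 node03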
Install the JDK
ls /opt/software/
hadoop-3.1.3.tar.gz jdk-8u212-linux-x64.tar.gz
Extract the JDK
tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module/
Configure the JDK environment variables
sudo vim /etc/profile.d/my_env.sh
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
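Verify the JDK installation:
java -version   # should report java version "1.8.0_212"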
Install Hadoop
tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
Add Hadoop to the environment variables
sudo vim /etc/profile.d/my_env.sh
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
source /etc/profile
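Verify the Hadoop installation:
hadoop version   # should report Hadoop 3.1.3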
Configure passwordless SSH
ssh-keygen -t rsa   # then press Enter three times
# run the same commands on the other two machines as well
ssh-copy-id node01
ssh-copy-id node02
ssh-copy-id node03
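Once keys have been distributed from all three machines, this loop should print each hostname without any password prompt:
for h in node01 node02 node03; do ssh $h hostname; done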
Install ZooKeeper
Extract and distribute
tar -zxvf zookeeper-3.5.7.tar.gz -C /opt/module/
xsync.sh zookeeper-3.5.7/
Configure the server ID
# create a zkData directory under /opt/module/zookeeper-3.5.7/
mkdir -p zkData
# create a file named myid under /opt/module/zookeeper-3.5.7/zkData
touch myid
# edit myid and add this node's ID (1, 2, or 3)
xsync.sh zookeeper-3.5.7/
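Note that xsync.sh copies node01's myid to the other nodes, so each node still has to be given its own ID afterwards. A one-line sketch, assuming the hostname suffix (node01..node03) matches the intended ID:
for i in 1 2 3; do ssh node0$i "echo $i > /opt/module/zookeeper-3.5.7/zkData/myid"; done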
Configure zoo.cfg
# rename zoo_sample.cfg in /opt/module/zookeeper-3.5.7/conf to zoo.cfg
mv zoo_sample.cfg zoo.cfg
vim zoo.cfg
# change the data storage path
dataDir=/opt/module/zookeeper-3.5.7/zkData
# add the following
# 2888 is the port Followers use to exchange information with the Leader
# 3888 is the port servers use to talk to each other during leader election after the Leader goes down
#######################cluster##########################
server.1=node01:2888:3888
server.2=node02:2888:3888
server.3=node03:2888:3888
# sync zoo.cfg to the other nodes
xsync.sh zoo.cfg
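Optionally, bring ZooKeeper up everywhere and confirm the ensemble formed; full paths are used here in case zkServer.sh is not on the remote PATH:
for h in node01 node02 node03; do ssh $h "/opt/module/zookeeper-3.5.7/bin/zkServer.sh start"; done
for h in node01 node02 node03; do ssh $h "/opt/module/zookeeper-3.5.7/bin/zkServer.sh status"; done
# one node should report "Mode: leader", the other two "Mode: follower"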
Modify the Hadoop Configuration Files
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Address of the NameNode (the HA nameservice) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Hadoop data storage directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-3.1.3/data</value>
</property>
<!-- Static user for HDFS web UI login, set to lixuan -->
<property>
<name>hadoop.http.staticuser.user</name>
<value>lixuan</value>
</property>
<!-- Hosts from which lixuan (superuser) is allowed to impersonate users -->
<property>
<name>hadoop.proxyuser.lixuan.hosts</name>
<value>*</value>
</property>
<!-- Groups whose users lixuan (superuser) is allowed to impersonate -->
<property>
<name>hadoop.proxyuser.lixuan.groups</name>
<value>*</value>
</property>
<!-- Users that lixuan (superuser) is allowed to impersonate -->
<property>
<name>hadoop.proxyuser.lixuan.users</name>
<value>*</value>
</property>
<!-- ZooKeeper servers that ZKFC connects to -->
<property>
<name>ha.zookeeper.quorum</name>
<value>node01:2181,node02:2181,node03:2181</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- In HA mode there is no SecondaryNameNode and the per-NameNode HTTP addresses
below are used instead, so dfs.namenode.http-address and
dfs.namenode.secondary.http-address are intentionally not set. -->
<!-- NameNode data storage directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file://${hadoop.tmp.dir}/name</value>
</property>
<!-- DataNode data storage directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file://${hadoop.tmp.dir}/data</value>
</property>
<!-- JournalNode edits storage directory -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>${hadoop.tmp.dir}/jn</value>
</property>
<!-- Name of the fully distributed cluster (nameservice) -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- NameNodes in the cluster -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2,nn3</value>
</property>
<!-- RPC addresses of the NameNodes -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>node01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>node02:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn3</name>
<value>node03:8020</value>
</property>
<!-- HTTP addresses of the NameNodes -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>node01:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>node02:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn3</name>
<value>node03:9870</value>
</property>
<!-- Where NameNode metadata (shared edits) is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
</property>
<!-- Proxy provider the client uses to determine the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method, so only one NameNode serves requests at a time -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- sshfence requires key-based SSH login -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/lixuan/.ssh/id_rsa</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Logical ID of the YARN HA cluster -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster-yarn1</value>
</property>
<!-- Logical IDs of the ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- ========== rm1 configuration ========== -->
<!-- rm1 hostname -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>node01</value>
</property>
<!-- rm1 web UI address -->
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>node01:8088</value>
</property>
<!-- rm1 internal RPC address -->
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>node01:8032</value>
</property>
<!-- Address AMs use to request resources from rm1 -->
<property>
<name>yarn.resourcemanager.scheduler.address.rm1</name>
<value>node01:8030</value>
</property>
<!-- Address NodeManagers connect to on rm1 -->
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
<value>node01:8031</value>
</property>
<!-- ========== rm2 configuration ========== -->
<!-- rm2 hostname -->
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>node02</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>node02:8088</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>node02:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm2</name>
<value>node02:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
<value>node02:8031</value>
</property>
<!-- ZooKeeper cluster address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>node01:2181,node02:2181,node03:2181</value>
</property>
<!-- Enable automatic recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store ResourceManager state in the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- Environment variable inheritance -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Log aggregation server URL -->
<property>
<name>yarn.log.server.url</name>
<value>http://node01:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>node01:10020</value>
</property>
<!-- JobHistory web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>node01:19888</value>
</property>
</configuration>
workers
vim /opt/module/hadoop-3.1.3/etc/hadoop/workers
# add the following; no trailing spaces and no blank lines allowed
node01
node02
node03
# sync the configuration to all nodes
xsync.sh /opt/module/hadoop-3.1.3/etc
Start the Cluster
First start
On each node, start the JournalNode and ZooKeeper services:
hdfs --daemon start journalnode
zkServer.sh start
Delete the data and logs directories on every node
On nn1, format HDFS and start the NameNode:
hdfs namenode -format
hdfs --daemon start namenode
On nn2 and nn3, sync nn1's metadata:
hdfs namenode -bootstrapStandby
Stop all HDFS services:
stop-dfs.sh
Start the ZooKeeper cluster (on every node)
zkServer.sh start
Initialize the HA state in ZooKeeper
hdfs zkfc -formatZK
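As a sanity check (path assumed from the ZooKeeper install above), formatZK should have created the HA parent znode:
/opt/module/zookeeper-3.5.7/bin/zkCli.sh -server node01:2181 ls /hadoop-ha
# expected output: [mycluster]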
Start HDFS and YARN:
start-dfs.sh
start-yarn.sh
Check the web UIs
# NameNode
node01:9870
# ResourceManager
node02:8088
# JobHistory
node01:19888
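The HA state can also be checked from the command line:
hdfs haadmin -getServiceState nn1   # repeat for nn2/nn3; exactly one should be active
yarn rmadmin -getServiceState rm1   # repeat for rm2; exactly one should be active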
Common Scripts
Cluster sync script: xsync.sh
cd /home/lixuan
mkdir bin
cd bin
vim xsync.sh
#!/bin/bash
#1. check the argument count
if [ $# -lt 1 ]
then
echo "Usage: xsync.sh <path> [path ...]"
exit
fi
#2. loop over every machine in the cluster
for host in node01 node02 node03
do
echo ==================== $host ====================
#3. loop over every path given and send each one
for file in "$@"
do
#4. check that the file exists
if [ -e "$file" ]
then
#5. resolve the parent directory (following symlinks)
pdir=$(cd -P "$(dirname "$file")"; pwd)
#6. get the file name
fname=$(basename "$file")
ssh $host "mkdir -p $pdir"
rsync -av "$pdir/$fname" "$host:$pdir"
else
echo "$file does not exist!"
fi
done
done
chmod +x xsync.sh
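Example usage, distributing the script directory itself so every node has it (on CentOS the default ~/.bash_profile already puts ~/bin on the PATH):
xsync.sh /home/lixuan/bin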
jpsall.sh
#!/bin/bash
for host in node01 node02 node03
do
echo =============== $host ===============
ssh $host jps $@ | grep -v Jps
done
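Going by the planning table, the output should look roughly like this once everything is running (process names as reported by jps; JobHistoryServer appears only after the history server is started):
=============== node01 ===============
NameNode
DataNode
JournalNode
DFSZKFailoverController
QuorumPeerMain
ResourceManager
NodeManager
JobHistoryServer
=============== node02 ===============
(same as node01, without JobHistoryServer)
=============== node03 ===============
(same as node02, without ResourceManager)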
HA one-click start/stop script
#!/bin/bash
if [ $# -lt 1 ]
then
echo "please input start | stop"
exit ;
fi
case $1 in
"start")
echo " =================== 启动 HA-hadoop集群 ==================="
echo " ======== 启动 ZK ========"
for i in node01 node02 node03
do
ssh $i "cd /opt/module/zookeeper-3.5.7/;
bin/zkServer.sh start;"
# hdfs --daemon start journalnode
done
echo " ======== 启动 Hadoop ========"
ssh node01 "cd /opt/module/hadoop-3.1.3/;
sbin/start-dfs.sh;
sbin/start-yarn.sh;
bin/mapred --daemon start historyserver"
;;
"stop")
echo " =================== 关闭 HA-hadoop集群 ==================="
echo " ======== 停止 ZK ========"
for i in node01 node02 node03
do
ssh $i "cd /opt/module/zookeeper-3.5.7/;
bin/zkServer.sh stop;"
#hdfs --daemon stop journalnode
done
echo " ======== 停止 Hadoop ========"
ssh node01 "cd /opt/module/hadoop-3.1.3/;
sbin/stop-dfs.sh;
sbin/stop-yarn.sh;
bin/mapred --daemon stop historyserver"
;;
*)
echo "Input Error..."
;;
esac
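To use it, save the script under ~/bin and make it executable; the file name ha-hadoop.sh below is just an example:
chmod +x ~/bin/ha-hadoop.sh
ha-hadoop.sh start
ha-hadoop.sh stop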
Common Problems
Fix slow Xshell connections to the VM
Method 1
vim /etc/ssh/sshd_config
# uncomment the UseDNS line and set it to no
systemctl restart sshd
Method 2
In Xshell: Properties -> SSH -> Tunneling -> uncheck the Forward X11 option
Fix connection-refused errors on port 8485
When the whole cluster starts at once, the NameNodes may try to reach the JournalNodes (port 8485) before those have finished starting; raising the IPC client retry count and interval gives them time to come up. Add the following to core-site.xml:
<property>
<name>ipc.client.connect.max.retries</name>
<value>100</value>
</property>
<property>
<name>ipc.client.connect.retry.interval</name>
<value>10000</value>
</property>