Load Balancing with LVS
Implementation steps:
# In a KVM/libvirt environment, run this step to create two new virtual machines; if you are on VMware, skip it
Physical host:
cd /var/lib/libvirt/images/
ls
qemu-img create -f qcow2 -b rhel7.6.qcow2 server3
qemu-img create -f qcow2 -b rhel7.6.qcow2 server4
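Optionally, confirm that each overlay was created on top of the shared backing image:
qemu-img info server3    # "backing file" should point at rhel7.6.qcow2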
server1:
pcs cluster disable --all
pcs cluster stop --all
systemctl status pcsd
systemctl disable --now pcsd
ssh server2 systemctl disable --now pcsd
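Optionally verify that pcsd is really down on both nodes before moving on:
systemctl is-active pcsd
ssh server2 systemctl is-active pcsd    # both should print "inactive"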
server3:
hostnamectl set-hostname server3
cd /etc/yum.repos.d/
vim dvd.repo
yum install -y httpd
systemctl enable --now httpd    # --now enables and starts the service in one step
cd /var/www/html/
echo vm3 > index.html
ip addr add 172.25.19.100/24 dev eth0    # put the VIP on the RS (required for DR mode)
yum install -y arptables
arptables -A INPUT -d 172.25.19.100 -j DROP    # never answer ARP requests for the VIP
arptables -A OUTPUT -s 172.25.19.100 -j mangle --mangle-ip-s 172.25.19.3    # use the real IP as the ARP source
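These two rules solve the ARP problem of DR mode: the RS must hold the VIP to accept packets forwarded by the director, but must never advertise it via ARP, otherwise clients would reach an RS directly and bypass the scheduler. To inspect the rules, and (assuming the arptables service unit shipped with the RHEL 7 arptables package is available) persist them across reboots:
arptables -L -n    # list the current rules
arptables-save > /etc/sysconfig/arptables    # assumes arptables-save is installed
systemctl enable --now arptables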
server4:
hostnamectl set-hostname server4
cd /etc/yum.repos.d/
vim dvd.repo
yum install -y httpd
systemctl enable --now httpd
cd /var/www/html/
echo vm4 > index.html    # a distinct page per RS, so the round-robin pattern is visible
ip addr add 172.25.19.100/24 dev eth0
yum install -y arptables
arptables -A INPUT -d 172.25.19.100 -j DROP
arptables -A OUTPUT -s 172.25.19.100 -j mangle --mangle-ip-s 172.25.19.4
server2:
curl server3    # confirm both backends respond before configuring LVS
curl server4
yum install ipvsadm -y
ip addr add 172.25.19.100/24 dev eth0    # put the VIP on the director
ipvsadm -A -t 172.25.19.100:80 -s rr    # -A: add virtual service; -s rr: round-robin scheduler
ipvsadm -a -t 172.25.19.100:80 -r 172.25.19.3:80 -g    # -a: add real server; -g: direct routing (DR)
ipvsadm -a -t 172.25.19.100:80 -r 172.25.19.4:80 -g
ipvsadm -ln    # list the resulting rules
Physical host:
curl 172.25.19.100
Repeated requests alternate between vm3 and vm4: round-robin load balancing is working.
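A short loop makes the scheduling pattern easy to see; the responses should alternate between the two pages:
for i in 1 2 3 4; do curl -s 172.25.19.100; done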
LVS + keepalived: Load Balancing with Health Checks
Purpose
The previous experiment implemented plain load balancing. In this one we add keepalived to give the load-balancing cluster health checks: when Apache is stopped on a real server, the director detects the failure and no longer schedules requests to that server.
Environment
Five machines: home is the physical host; server1 and server2 are the directors (VS); server3 and server4 are the real servers (RS).
Steps
1. Environment setup
server1 and server2:
ipvsadm -C    # flush the manually configured IPVS rules; they would conflict with keepalived
ipvsadm -ln
yum install keepalived -y
ip addr del 172.25.19.100/24 dev eth0    # keepalived will manage the VIP from now on
vim /etc/keepalived/keepalived.conf    # this file expresses the whole load-balancing setup as configuration
server1 keepalived.conf:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.19.100
    }
}

virtual_server 172.25.19.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.19.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 172.25.19.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
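The server2 file differs only in two lines: state becomes BACKUP and priority drops to 50. One way to avoid retyping it is to copy server1's file across and edit those two lines:
scp /etc/keepalived/keepalived.conf server2:/etc/keepalived/keepalived.conf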
server2 keepalived.conf:
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    #vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.19.100
    }
}

virtual_server 172.25.19.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP

    real_server 172.25.19.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 172.25.19.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
systemctl restart keepalived.service
tail -f /var/log/messages
ipvsadm -ln    # the virtual service and both real servers now appear, created by keepalived
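You can also check that the VIP is held by the MASTER only:
ip addr show eth0 | grep 172.25.19.100    # present on server1 (MASTER), absent on server2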
2. Health checks
1) Real server (RS) health check
On server3, stop the httpd service:
systemctl stop httpd
Then, on the physical host, run:
curl 172.25.19.100
Only vm4 comes back: the director has detected that server3's service is down, and server3 no longer appears in the IPVS rule set.
A notification email is also sent to root on server1 and server2.
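The mail goes to root@localhost as set in global_defs; assuming a local MTA (e.g. postfix) is listening on 127.0.0.1:25, it can be read with the mail command:
mail    # look for the keepalived notification in root's mailbox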
Start httpd on server3 again and test from the host: the round-robin pattern is restored.
2) Director (DS) health check
Stop the keepalived service on server1:
[root@server1 ~]# systemctl stop keepalived.service
Then, on server2:
tail -f /var/log/messages
The log shows that server2 has taken over from server1 as MASTER.
Probing the RS from the host shows no impact at all: keepalived has eliminated the director as a single point of failure.
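As a final check, start keepalived on server1 again. Its higher priority (100 vs 50) plus VRRP's default preemption mean it should reclaim the MASTER role and the VIP:
[root@server1 ~]# systemctl start keepalived.service
[root@server2 ~]# tail -f /var/log/messages    # server2 should fall back to BACKUP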