I. Bonding Technology
Bonding is a Linux NIC-bonding technology that aggregates n physical NICs on a server into a single logical interface inside the operating system. It can increase network throughput and provide network redundancy and load balancing, among other benefits.
Bonding is implemented at the Linux kernel level as a kernel module (driver). To use it, the system must have this module available; virtually all modern distributions ship it. You can inspect the module with the modinfo command:
modinfo bonding
filename: /lib/modules/3.10.0-957.1.3.el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1
license: GPL
alias: rtnl-link-bond
retpoline: Y
rhelversion: 7.6
srcversion: 120C91D145D649655185C69
depends:
intree: Y
vermagic: 3.10.0-957.1.3.el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
sig_key: E7:CE:F3:61:3A:9B:8B:D0:12:FA:E7:49:82:72:15:9B:B1:87:9C:65
sig_hashalgo: sha256
parm: max_bonds:Max number of bonded devices (int)
parm: tx_queues:Max number of transmit queues (default = 16) (int)
parm: num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm: num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm: miimon:Link check interval in milliseconds (int)
parm: updelay:Delay before considering link up, in milliseconds (int)
parm: downdelay:Delay before considering link down, in milliseconds (int)
parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm: mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm: primary:Primary network device to use (charp)
parm: primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm: ad_select:802.3ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm: min_links:Minimum number of available links before turning on carrier (int)
parm: xmit_hash_policy:balance-alb, balance-tlb, balance-xor, 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3, 3 for encap layer 2+3, 4 for encap layer 3+4 (charp)
parm: arp_interval:arp interval in milliseconds (int)
parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm: arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm: arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm: all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm: resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm: packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm: lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
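If you only care about the tunable parameters rather than the full module header, modinfo can print just those. A small convenience, assuming the same module name as above:
modinfo -p bonding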
The seven bonding modes:
Bonding provides seven operating modes; you must choose one when configuring it, and each has its own trade-offs (a quick way to check which mode a running bond is using is shown after the list).
- balance-rr (mode=0): the default; provides both high availability (fault tolerance) and load balancing; requires switch-side configuration; packets are sent round-robin across the NICs (traffic is distributed fairly evenly).
- active-backup (mode=1): high availability (fault tolerance) only; no switch configuration needed; only one NIC is active at a time and the bond presents a single MAC address. The drawback is low port utilization.
- balance-xor (mode=2): rarely used.
- broadcast (mode=3): rarely used.
- 802.3ad (mode=4): IEEE 802.3ad dynamic link aggregation; requires switch configuration.
- balance-tlb (mode=5): rarely used.
- balance-alb (mode=6): provides both high availability (fault tolerance) and load balancing; no switch configuration needed (traffic is not distributed perfectly evenly across the interfaces).
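Once a bond is up, you can confirm which mode it is actually running in via sysfs. A minimal check, assuming the bond interface is named bond0 as in the configuration below:
cat /sys/class/net/bond0/bonding/mode
# prints the mode name and number, e.g.: 802.3ad 4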
II. Configuring Bonding on CentOS 7
OS: CentOS 7.5
NICs: ifcfg-eno49, ifcfg-eno50
bond0: 10.162.97.41
Bonding mode: mode 4 (802.3ad dynamic link aggregation)
1. Stop and disable the NetworkManager service
systemctl stop NetworkManager.service # stop the NetworkManager service
systemctl disable NetworkManager.service # prevent NetworkManager from starting at boot
Note: be sure to do this; leaving NetworkManager running will interfere with the bonding setup.
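As an optional sanity check, you can verify the service state; a small sketch, where the expected outputs assume the service was stopped and disabled as above:
systemctl is-active NetworkManager.service # should print "inactive"
systemctl is-enabled NetworkManager.service # should print "disabled"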
2. Load the bonding module
modprobe bonding
No output means the module loaded successfully. If you see modprobe: ERROR: could not insert 'bonding': Module already in kernel, the module is already loaded and you can ignore the message.
You can also check whether the module is loaded with lsmod | grep bonding:
lsmod | grep bonding
bonding 136705 0
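If you want the module to load automatically after a reboot, one common approach is systemd's modules-load mechanism; a sketch, where the file name bonding.conf is our own choice:
echo "bonding" > /etc/modules-load.d/bonding.conf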
3. Create the configuration file for the bond0 interface
vim /etc/sysconfig/network-scripts/ifcfg-bond0
Edit it as follows, adjusting the values for your environment:
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.162.97.41
NETMASK=255.255.255.0
GATEWAY=10.162.97.253
DNS1=10.1.0.62
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100"
The line BONDING_OPTS="mode=4 miimon=100" above sets the operating mode to 802.3ad dynamic link aggregation; miimon is the link-monitoring interval in milliseconds, set to 100 ms here. You can set mode to any of the other load modes to suit your needs.
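For example, if you wanted active-backup instead of 802.3ad, only the BONDING_OPTS line would change; a hypothetical variant keeping the same 100 ms monitoring interval:
BONDING_OPTS="mode=1 miimon=100"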
4. Edit the configuration file for the eno49 interface
vim /etc/sysconfig/network-scripts/ifcfg-eno49
Edit it as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno49
UUID=29d2526a-2eec-4a5e-8190-3d1fe5e04f57
DEVICE=eno49.97
ONBOOT=yes
MASTER=bond0
SLAVE=yes
VLAN=yes // VLAN is configured here because the switch port is a trunk port
TYPE=Vlan
VLAN_ID=97
5. Edit the configuration file for the eno50 interface
vim /etc/sysconfig/network-scripts/ifcfg-eno50
Edit it as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno50
UUID=dae63958-841f-4666-9308-28bda92dc66f
DEVICE=eno50.97
ONBOOT=yes
MASTER=bond0
SLAVE=yes
VLAN=yes
TYPE=Vlan
VLAN_ID=97
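After the network is restarted in the next step, you can quickly confirm that both interfaces joined the bond via sysfs; a sketch assuming the interface names used above:
cat /sys/class/net/bond0/bonding/slaves
# should list both slaves, e.g.: eno49.97 eno50.97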
6. Testing
Restart the network service:
systemctl restart network
Check the status of the bond0 interface (if this command fails, the setup did not succeed; most likely the bond0 interface did not come up):
# cat /proc/net/bonding/bond0
[root@bogon ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation // bonding mode: 802.3ad (mode 4), i.e. dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up // link status: up (MII is short for Media Independent Interface)
MII Polling Interval (ms): 100 // link polling interval (100 ms here)
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info // 802.3ad information
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 20:67:7c:1f:15:f0
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 15
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eno49.97 // slave interface: eno49.97
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 20:67:7c:1f:15:f0
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 20:67:7c:1f:15:f0
    port key: 15
    port priority: 255
    port number: 1
    port state: 197
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 3

Slave Interface: eno50.97 // slave interface: eno50.97
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 20:67:7c:1f:15:f8
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 20:67:7c:1f:15:f0
    port key: 15
    port priority: 255
    port number: 2
    port state: 197
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 3
Check the network interfaces with the ifconfig command:
# ifconfig
[root@bogon ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 10.162.97.41  netmask 255.255.255.0  broadcast 10.162.97.255
        ether 20:67:7c:1f:15:f0  txqueuelen 1000  (Ethernet)
        RX packets 22039  bytes 1436892 (1.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6687  bytes 678240 (662.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno49: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 20:67:7c:1f:15:f0  txqueuelen 1000  (Ethernet)
        RX packets 16645  bytes 1894648 (1.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7228  bytes 833488 (813.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0x96000000-967fffff

eno50: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 20:67:7c:1f:15:f8  txqueuelen 1000  (Ethernet)
        RX packets 11163  bytes 1107408 (1.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 791  bytes 119264 (116.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 17  memory 0x95000000-957fffff

eno51: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 20:67:7c:1f:15:f1  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 17  memory 0x94000000-947fffff

eno52: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 20:67:7c:1f:15:f9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 18  memory 0x93000000-937fffff

eno49.97: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 20:67:7c:1f:15:f0  txqueuelen 1000  (Ethernet)
        RX packets 13004  bytes 1017228 (993.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6404  bytes 658552 (643.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eno50.97: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 20:67:7c:1f:15:f0  txqueuelen 1000  (Ethernet)
        RX packets 7632  bytes 351072 (342.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 180 (180.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens2f0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 48:df:37:36:a9:24  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xc8300000-c83fffff

ens2f1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 48:df:37:36:a9:25  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xc8200000-c82fffff

ens2f2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 48:df:37:36:a9:26  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xc8100000-c81fffff

ens2f3: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 48:df:37:36:a9:27  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xc8000000-c80fffff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 16  bytes 1356 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 1356 (1.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
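On systems without ifconfig, the ip tool gives similar information; a minimal equivalent check, assuming the bond0 name used throughout:
ip addr show bond0 # addresses and state of the bond
ip -d link show bond0 # -d (details) also shows the bonding mode and miimon value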
To test high availability, we unplugged one of the network cables (a simple ping-based way to run this test is sketched after the list). Conclusions:
- With mode=6 in this test, one packet was lost on failover; when the network was restored (cable plugged back in), roughly 5-6 packets were lost. High availability works, but recovery drops noticeably more packets.
- With mode=1, one packet was lost on failover and essentially none when the network was restored, so both failover and recovery behave well.
- Mode 6 performs well overall apart from the packet loss during recovery; if that is acceptable, it is a reasonable choice. Mode 1 fails over and recovers quickly, with essentially no packet loss or latency, but its port utilization is low because only one NIC is active in this active-backup arrangement.
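One simple way to run such a test is to ping the bond's address from another host while pulling a cable and counting lost replies; a sketch, where the address is the bond0 IP from this example and the 0.2 s interval is our own choice:
ping -i 0.2 10.162.97.41
# unplug one cable, then plug it back in, and note how many replies go missing;
# stop with Ctrl+C to see the packet-loss summary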