Internal Load Balancing
Ingress network: load balancing for external access. A request to the published port on any node's address gets a response, whether or not that node runs the service. Under the hood, LVS (IPVS) forwards the request to a node that actually runs a task.
For example, if you request port 8080 on docker3 and docker3 has no service listening there, the request is forwarded to a host that does run the service (say docker2), and the response is relayed back.
Run docker service ps whoami on the Manager node:
[root@docker-host ~]# docker service ps whoami
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
11tlqjmfdj2a whoami.1 jwilder/whoami:latest docker-host Running Running 5 hours ago
bmgogzuzmu4l _ whoami.1 jwilder/whoami:latest docker-host Shutdown Rejected 5 hours ago "No such image: jwilder/whoami…"
kck0nb10xndl whoami.2 jwilder/whoami:latest docker-node3 Running Running 4 hours ago
You can see that whoami has two running tasks, one on the Manager (docker-host) and one on node3.
[root@docker-host ~]# curl 127.0.0.1:8000
I'm 03a295109ae3
[root@docker-host ~]# curl 127.0.0.1:8000
I'm 5b2b7f0d72ce
[root@docker-host ~]# curl 127.0.0.1:8000
I'm 03a295109ae3
[root@docker-host ~]# curl 127.0.0.1:8000
I'm 5b2b7f0d72ce
[root@docker-host ~]# curl 127.0.0.1:8000
I'm 03a295109ae3
Successive requests return different hostnames, alternating between the two containers.
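The alternating responses are classic round-robin behavior. As a toy model (the real balancing is done in-kernel by IPVS, not in user space), the service's virtual IP is doing something like this, with the two container IDs from the output above as backends:

```python
from itertools import cycle

# The two whoami task containers observed above (a hypothetical model,
# not the real IPVS implementation).
backends = cycle(["03a295109ae3", "5b2b7f0d72ce"])

def handle_request():
    """Pick the next backend in round-robin order, as IPVS 'rr' does."""
    return next(backends)

responses = [handle_request() for _ in range(4)]
print(responses)  # alternates between the two container IDs
```

Every node in the swarm applies the same scheduling, which is why the curl results below look identical from a node that runs no task at all.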
On node2, which runs no whoami task, curling 127.0.0.1:8000 returns the same alternating results.
Why does node2 answer at all, given that it has no whoami service?
Take a look at the NAT firewall rules on node2:
[root@docker-node2 ~]# iptables -nL -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match src-type LOCAL
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
MASQUERADE all -- 172.18.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 to:172.18.0.2:8000
RETURN all -- 0.0.0.0/0 0.0.0.0/0
As the DOCKER-INGRESS chain shows, all traffic to port 8000 is DNATed to 172.18.0.2:8000.
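The effect of that DOCKER-INGRESS rule can be modeled in a few lines of Python (a sketch of the rewrite, not real netfilter code):

```python
def dnat_ingress(dst_ip, dst_port):
    """Model the DOCKER-INGRESS chain: any TCP packet to port 8000,
    whatever its destination IP, is rewritten to the ingress sandbox;
    everything else falls through (the RETURN rule)."""
    if dst_port == 8000:
        return ("172.18.0.2", 8000)  # DNAT target from the iptables output
    return (dst_ip, dst_port)

print(dnat_ingress("127.0.0.1", 8000))       # ('172.18.0.2', 8000)
print(dnat_ingress("192.168.205.20", 8000))  # ('172.18.0.2', 8000)
print(dnat_ingress("127.0.0.1", 22))         # ('127.0.0.1', 22)
```

The rule matches on the destination port only, which is why both 127.0.0.1:8000 and the node's external address end up at the same place.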
Check whether node2 has an interface with 172.18.0.2:
[root@docker-node2 ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:6c:3e:95 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 65232sec preferred_lft 65232sec
inet6 fe80::a00:27ff:fe6c:3e95/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:00:4b:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.205.20/24 brd 192.168.205.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe00:4b85/64 scope link
valid_lft forever preferred_lft forever
4: docker0: mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:99:c8:5e:ac brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:99ff:fec8:5eac/64 scope link
valid_lft forever preferred_lft forever
5: docker_gwbridge: mtu 1500 qdisc noqueue state UP
link/ether 02:42:72:92:41:85 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:72ff:fe92:4185/64 scope link
valid_lft forever preferred_lft forever
11: vethc3c8221@if10: mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 8a:3b:e1:61:51:6b brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::883b:e1ff:fe61:516b/64 scope link
valid_lft forever preferred_lft forever
38: vethabc1bf7@if37: mtu 1500 qdisc noqueue master docker_gwbridge state UP
link/ether 4a:47:93:c0:6c:a1 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::4847:93ff:fec0:6ca1/64 scope link
valid_lft forever preferred_lft forever
Note this interface:
5: docker_gwbridge: mtu 1500 qdisc noqueue state UP
link/ether 02:42:72:92:41:85 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:72ff:fe92:4185/64 scope link
valid_lft forever preferred_lft forever
It carries 172.18.0.1. We are close: 172.18.0.2 must live on this same bridge network, with 172.18.0.1 as its gateway.
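Python's ipaddress module confirms that the DNAT target sits on the docker_gwbridge subnet whose gateway address is 172.18.0.1:

```python
import ipaddress

subnet = ipaddress.ip_network("172.18.0.0/16")    # docker_gwbridge subnet
gateway = ipaddress.ip_address("172.18.0.1")      # address on the bridge itself
dnat_target = ipaddress.ip_address("172.18.0.2")  # from the DOCKER-INGRESS DNAT rule

print(gateway in subnet, dnat_target in subnet)  # True True
```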
yum install bridge-utils -y
Use brctl show to inspect the bridge:
[root@docker-node2 ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.024299c85eac no
docker_gwbridge 8000.024272924185 no vethabc1bf7
vethc3c8221
[root@docker-node2 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
f82c9355f4a7 bridge bridge local
zy22adwah6yi demo overlay swarm
e4d36d217a0c docker_gwbridge bridge local
04e15809d178 host host local
pjbn7946dtoi ingress overlay swarm
eef5555525fd none null local
docker network inspect docker_gwbridge
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"d8bf026e9d87a559908cbd543571573f262aacece1288742339e5038bb4c6ab9": {
"Name": "gateway_7cf14610e7c6",
"EndpointID": "4a3dd75b512d7b787be31a996e0e0509373cdee3a60f421699d6e70fc9028308",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "gateway_ingress-sbox",
"EndpointID": "533d63336f584e9dd7d8702dbb83dad3a25ca9c2aae50af33716d1825372a4d6",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
And there is the endpoint holding the DNAT target address:
"ingress-sbox": {
"Name": "gateway_ingress-sbox",
"EndpointID": "533d63336f584e9dd7d8702dbb83dad3a25ca9c2aae50af33716d1825372a4d6",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
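A quick way to find which endpoint owns 172.18.0.2 is to filter the inspect output programmatically. The sketch below runs against the "Containers" section shown above, trimmed to the relevant fields (the container ID key is shortened here):

```python
import json

# "Containers" section from `docker network inspect docker_gwbridge`,
# trimmed to the fields we need.
containers = json.loads("""
{
  "d8bf026e9d87": {"Name": "gateway_7cf14610e7c6", "IPv4Address": "172.18.0.3/16"},
  "ingress-sbox": {"Name": "gateway_ingress-sbox", "IPv4Address": "172.18.0.2/16"}
}
""")

def endpoint_for(ip):
    """Return the endpoint name whose IPv4Address matches the given IP."""
    for info in containers.values():
        if info["IPv4Address"].split("/")[0] == ip:
            return info["Name"]

print(endpoint_for("172.18.0.2"))  # gateway_ingress-sbox
```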
So all ingress traffic enters the ingress-sbox network namespace. Let's enter it:
[root@docker-node2 ~]# ls /var/run/docker/netns/
1-pjbn7946dt 1-zy22adwah6 3866254dd020 7cf14610e7c6 ingress_sbox
[root@docker-node2 ~]# nsenter --net=/var/run/docker/netns/ingress_sbox
After nsenter, the shell no longer sees the host's interfaces but the ingress_sbox namespace's. Run ip a:
[root@docker-node2 ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:ff:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.3/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.7/32 brd 10.255.0.7 scope global eth0
valid_lft forever preferred_lft forever
10: eth1@if11: mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
To look at LVS, first exit the namespace we entered above,
install ipvsadm on the host (yum install ipvsadm),
then enter the namespace again:
[root@docker-node2 ~]# nsenter --net=/var/run/docker/netns/ingress_sbox
[root@docker-node2 ~]# iptables -nL -t mangle
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
MARK tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 MARK set 0x103
Chain INPUT (policy ACCEPT)
target prot opt source destination
MARK all -- 0.0.0.0/0 10.255.0.7 MARK set 0x103
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
[root@docker-node2 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 259 rr
-> 10.255.0.9:0 Masq 1 0 0
-> 10.255.0.10:0 Masq 1 0 0
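Note that the mangle MARK value 0x103 is 259 in decimal, which is exactly the FWM 259 virtual service in the ipvsadm listing: marked packets are round-robined across the two real servers. A small sketch of that step (a model of the observed behavior, not kernel code):

```python
from itertools import cycle

# The mangle table marks dport-8000 traffic with 0x103, i.e. 259 decimal,
# matching the "FWM 259 rr" virtual service in the ipvsadm listing.
FWMARK = 0x103
print(FWMARK)  # 259

# Real servers from `ipvsadm -l`, scheduled round-robin ("rr"), Masq forwarding.
real_servers = cycle(["10.255.0.9", "10.255.0.10"])

def ipvs_schedule(mark):
    """Send a marked packet to the next real server; unmarked packets skip IPVS."""
    if mark != FWMARK:
        return None
    return next(real_servers)

print([ipvs_schedule(0x103) for _ in range(4)])  # alternates between the two
```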
As shown earlier, the whoami tasks run on the Manager and node3.
On the Manager:
[root@docker-host ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03a295109ae3 jwilder/whoami:latest "/app/http" 6 hours ago Up 6 hours 8000/tcp whoami.1.11tlqjmfdj2ajhvn5fz9wfl7u
[root@docker-host ~]# docker exec -it 03a ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:00:00:0d brd ff:ff:ff:ff:ff:ff
inet 10.0.0.13/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
25: eth2@if26: mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:16:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.22.0.3/16 brd 172.22.255.255 scope global eth2
valid_lft forever preferred_lft forever
27: eth1@if28: mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:ff:00:09 brd ff:ff:ff:ff:ff:ff
inet 10.255.0.9/16 brd 10.255.255.255 scope global eth1
valid_lft forever preferred_lft forever
On node3:
[root@docker-node3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b2b7f0d72ce jwilder/whoami:latest "/app/http" 4 hours ago Up 4 hours 8000/tcp whoami.2.kck0nb10xndlmz2nl7c9itcce
[root@docker-node3 ~]# docker exec -it 5b ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
27: eth1@if28: mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:ff:00:0a brd ff:ff:ff:ff:ff:ff
inet 10.255.0.10/16 brd 10.255.255.255 scope global eth1
valid_lft forever preferred_lft forever
29: eth2@if30: mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth2
valid_lft forever preferred_lft forever
31: eth0@if32: mtu 1450 qdisc noqueue state UP
link/ether 02:42:0a:00:00:12 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.18/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
Notice 10.255.0.9 and 10.255.0.10 in both outputs? They are exactly the IPVS real servers.
So traffic arriving on port 8000 enters LVS, which forwards it to these two container addresses over the ingress overlay network.
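Putting the whole path together: an external request to any node's port 8000 is DNATed into ingress-sbox, marked 0x103, and load-balanced by IPVS to one of the task containers. A compact sketch of that end-to-end flow, under the same assumptions as the earlier snippets:

```python
from itertools import cycle

# whoami task addresses on the ingress overlay (from the `ip a` output above).
tasks = cycle(["10.255.0.9", "10.255.0.10"])

def ingress_path(dst_port):
    """Trace one request through the routing mesh on any swarm node
    (a model of the observed behavior, not real kernel code)."""
    if dst_port != 8000:
        return None                  # not a published service port
    sbox = ("172.18.0.2", 8000)      # PREROUTING DNAT into ingress-sbox
    fwmark = 0x103                   # mangle MARK for dport 8000
    backend = next(tasks)            # IPVS FWM 259, round-robin
    return {"sbox": sbox, "fwmark": fwmark, "backend": backend}

print(ingress_path(8000))
print(ingress_path(8000))
```

Each hop in the sketch corresponds to one piece of evidence gathered above: the DNAT rule, the mangle mark, and the ipvsadm real-server list.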