```
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
cd97bb997b84   bridge    bridge    local
0a04824fc9b6   host      host      local
4dcb8fbdb599   none      null      local
```
Docker uses a Linux bridge: on the host it creates a virtual bridge for containers (docker0). When Docker starts a container, it assigns it an IP address from the bridge's subnet, called the Container-IP, and the bridge acts as each container's default gateway. Because all containers on the same host attach to the same bridge, they can communicate with each other directly via their Container-IPs.
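As a quick check of this, the bridge network's subnet and gateway, and the default route a container actually receives, can be inspected as follows. This is a minimal sketch; the throwaway busybox container and the exact values printed are not taken from the host above.

```
# print the docker0 subnet and gateway managed by the bridge network
docker network inspect bridge \
  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
# inside a throwaway container, the default route should point at that gateway
docker run --rm busybox ip route
```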
Docker's four network modes
| Network mode | Configuration | Description |
| --- | --- | --- |
| bridge | --network bridge (the default) | The container gets its own Network namespace and an IP, and is attached to the docker0 bridge via a veth pair |
| host | --network host | The container shares the host's Network namespace |
| container | --network container:NAME_OR_ID | The container shares another container's Network namespace |
| none | --network none | The container gets its own Network namespace, but no networking is configured: no veth pair, no bridge attachment, no IP address |
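The modes in the table above can be tried directly from the CLI. The following one-liners are a hedged sketch; the container name web and the busybox image are assumptions used only for illustration.

```
# bridge (default): own namespace, attached to docker0
docker run -d --name web --network bridge busybox sleep 3600
# host: shares the host's interfaces, so "ip addr" shows the host NICs
docker run --rm --network host busybox ip addr
# container: joins web's Network namespace and sees the same interfaces
docker run --rm --network container:web busybox ip addr
# none: only an unconfigured loopback interface is present
docker run --rm --network none busybox ip addr
```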
Network Namespaces can be managed with the ip netns command; run ip netns help to see its usage:
```
[root@localhost ~]# ip netns help
Usage:	ip netns list
	ip netns add NAME
	ip netns set NAME NETNSID
	ip [-all] netns delete [NAME]
	ip netns identify [PID]
	ip netns pids NAME
	ip [-all] netns exec [NAME] cmd ...
	ip netns monitor
	ip netns list-id
```
By default, a Linux system has no Network Namespaces, so ip netns list returns nothing.
Creating a Network Namespace
Create a namespace named ns0:
```
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0
```
The ip command provides the ip netns exec subcommand to run a command inside a given Network Namespace.
Viewing the NIC information of the newly created Network Namespace
```
[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
```
As shown above, a newly created Network Namespace contains only a lo loopback interface, and it is down. Pinging the loopback address at this point fails with Network is unreachable:
```
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
```
127.0.0.1 is the default loopback address.
Enable the lo loopback interface with the following command:
```
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms
^C
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1036ms
rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
```
Next, create a veth pair on the host:
```
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
4: veth0@veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff
```
Create a second namespace, ns1:
```
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0
```
Then move veth0 into ns0 and veth1 into ns1:
```
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1
```
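At this point the two interfaces disappear from the host's own namespace. A quick hedged check (output not reproduced from the host above):

```
# veth0 is no longer visible on the host ...
ip link show veth0 || echo "veth0 has left the host namespace"
# ... and now shows up inside ns0 instead
ip netns exec ns0 ip link show veth0
```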
Next, assign IP addresses to the veth pair and bring both interfaces up:
```
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.0.0.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 192.0.0.2/24 dev veth1
```
Check the status of the veth pair:
```
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8f4:e2ff:fe2d:37fb/64 scope link
       valid_lft forever preferred_lft forever
```

```
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5c7e:f6ff:fe59:f04f/64 scope link
       valid_lft forever preferred_lft forever
```
As shown above, the veth pair is up and each veth interface has its own IP address. Now try to reach the ns0 address from ns1, and vice versa:
```
[root@localhost ~]# ip netns exec ns1 ping 192.0.0.1
PING 192.0.0.1 (192.0.0.1) 56(84) bytes of data.
64 bytes from 192.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 192.0.0.1: icmp_seq=2 ttl=64 time=0.041 ms
^C
--- 192.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.033/0.037/0.041/0.004 ms
[root@localhost ~]# ip netns exec ns0 ping 192.0.0.2
PING 192.0.0.2 (192.0.0.2) 56(84) bytes of data.
64 bytes from 192.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.0.0.2: icmp_seq=2 ttl=64 time=0.025 ms
^C
--- 192.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1038ms
rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
```
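When the experiment is finished, the namespaces can be removed again. A minimal sketch, assuming ns0 and ns1 are no longer needed:

```
# deleting a namespace also removes the veth end that lives inside it,
# which destroys its peer as well
ip netns delete ns1
ip netns delete ns0
ip netns list   # should print nothing again
```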
The Docker daemon can also be managed remotely when it listens on a TCP socket; here the client talks to a daemon at 192.168.203.138:2375:
```
[root@localhost ~]# docker -H 192.168.203.138:2375 ps
CONTAINER ID   IMAGE     COMMAND              CREATED             STATUS          PORTS                           NAMES
e97bc1774e40   httpd     "httpd-foreground"   30 minutes ago      Up 11 seconds   192.168.203.138:49153->80/tcp   web1
af5ba32f990e   busybox   "sh"                 About an hour ago   Up 14 seconds                                   b3
```
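For reference, one common way to make dockerd accept such remote connections is to add a TCP listener next to the local Unix socket. This is a hedged sketch, not the configuration of the host above, and port 2375 carries no TLS or authentication, so it belongs in a lab only:

```
# start the daemon with both the local socket and a TCP listener
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
# any client can then target it explicitly:
docker -H 192.168.203.138:2375 ps
```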
Creating a new network
```
[root@localhost ~]# docker network create ljl -d bridge
883eda50812bb214c04986ca110dbbcb7600eba8b033f2084cd4d750b0436e12
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0c5f4f114c27   bridge    bridge    local
8c2d14f1fb82   host      host      local
883eda50812b   ljl       bridge    local
85ed12d38815   none      null      local
```
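A user-defined network that is no longer needed can be removed again. A hedged aside, not run here, which is why ljl still appears in the listings below:

```
# remove a user-defined network (fails while containers are still attached to it)
docker network rm ljl
docker network ls
```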
Create an additional custom bridge, distinct from docker0:
```
[root@localhost ~]# docker network create -d bridge --subnet "192.168.2.0/24" --gateway "192.168.2.1" br0
af9ba80deb619de3167939ec5b6d6136a45dce90907695a5bc5ed4608d188b99
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
af9ba80deb61   br0       bridge    local
0c5f4f114c27   bridge    bridge    local
8c2d14f1fb82   host      host      local
883eda50812b   ljl       bridge    local
85ed12d38815   none      null      local
```
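To confirm that the subnet and gateway were applied, the new network can be inspected. A hedged sketch using the same --format expression as earlier for the default bridge:

```
# expected output: 192.168.2.0/24 192.168.2.1
docker network inspect br0 \
  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
```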
Create a container using the newly created custom bridge:
```
[root@localhost ~]# docker run -it --name b1 --network br0 busybox
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:02
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:962 (962.0 B)  TX bytes:0 (0.0 B)
```
Create another container, this one using the default bridge:
```
[root@localhost ~]# docker run --name b2 -it busybox
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:01:03
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)
```
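b1 (on br0, 192.168.2.0/24) and b2 (on the default bridge, 192.168.1.0/24) sit on different bridges and subnets, so they cannot reach each other directly. One way to bridge that gap is docker network connect; this is a hedged sketch using the container names above, and the exact address b2 receives is whatever br0's IPAM hands out:

```
# attach the running container b2 to br0 as a second network
docker network connect br0 b2
# b2 now has an extra interface with an address in 192.168.2.0/24,
# so it can reach b1 at 192.168.2.2
docker exec b2 ping -c 2 192.168.2.2
```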