Posted by 天空闲话 on 2024-9-9 07:04:56

Linux NIC configuration: vlan/bond/bridge/macvlan/ipvlan/macvtap modes

Linux NIC modes

Linux NICs support non-VLAN mode, VLAN mode, bond mode, bridge mode, macvlan mode, ipvlan mode, and more. The sections below walk through configuration examples on both the switch side and the server side.
Prerequisites:


[*]One physical switch; an H3C S5130 layer-3 switch is used as the example
[*]One physical server; Ubuntu 22.04 LTS is used as the example operating system
On the switch, create two example VLANs, vlan 10 and vlan 20, together with their VLAN interfaces.
<H3C>system-view

vlan 10 20

interface Vlan-interface 10
ip address 172.16.10.1 24
undo shutdown
exit


interface Vlan-interface 20
ip address 172.16.20.1 24
undo shutdown
exit

NIC non-VLAN mode

In non-VLAN mode, the IP address is usually configured directly on the NIC, and the uplink switch port is configured as an access port. Access ports are typically used to connect bare-metal physical servers or office terminal devices.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/8da847d5341e65e6f61f1d3cb2027697.png
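Before making the change persistent with netplan, the same addressing can be applied temporarily with plain iproute2 commands for a quick test (a minimal sketch, assuming the interface is named enp1s0 as in the examples below; these settings are lost on reboot):
ip link set enp1s0 up
ip addr add 172.16.10.10/24 dev enp1s0       # server1 address from the example below
ip route add default via 172.16.10.1         # VLAN 10 gateway on the switch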
Switch configuration: configure the switch ports as access ports and add them to the corresponding VLANs
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type access
port access vlan 10
exit

interface GigabitEthernet 1/0/2
port link-type access
port access vlan 20
exit

Server 1 configuration: the IP address is configured directly on the server NIC
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
Server 2 configuration: the IP address is configured directly on the server NIC
root@server2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.20.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.20.1
  version: 2
Apply the NIC configuration
netplan apply
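When reconfiguring a server remotely, netplan try is a safer alternative to netplan apply for a simple configuration like this one: it applies the change and rolls it back automatically unless you confirm it.
netplan try          # press ENTER to keep the new config; it reverts automatically on timeout (120 s by default)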



Check the server's interface information
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
Test connectivity by pinging server2 from server1. The layer-3 switch provides routing, so it can connect VLAN subnets that are isolated at layer 2.
root@server1:~# ping 172.16.20.10 -c 4
PING 172.16.20.10 (172.16.20.10) 56(84) bytes of data.
64 bytes from 172.16.20.10: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.10: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
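To confirm the traffic really is routed by the switch rather than switched within a single VLAN, ip route get shows the next hop chosen for server2's address; it should be the VLAN 10 gateway:
ip route get 172.16.20.10    # expected next hop: via 172.16.10.1 dev enp1s0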
NIC VLAN mode

In VLAN mode, the uplink switch port must be configured as a trunk port that permits multiple VLANs.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/2fadb48c1f16e9f880de8ad8344d3ac4.png
Switch configuration: the port must be configured as a trunk that permits the required VLANs
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type trunk
port trunk permit vlan 10 20
exit

Server configuration: VLAN sub-interfaces must be configured on the server
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: true
  vlans:
    vlan10:
      id: 10
      link: enp1s0
      addresses: [ "172.16.10.10/24" ]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 200
    vlan20:
      id: 20
      link: enp1s0
      addresses: [ "172.16.20.10/24" ]
      routes:
      - to: default
        via: 172.16.20.1
        metric: 300
  version: 2
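For reference, the same VLAN sub-interfaces can also be created by hand with iproute2 (a non-persistent sketch of what netplan sets up for vlan10; vlan20 is analogous):
ip link add link enp1s0 name vlan10 type vlan id 10
ip link set vlan10 up
ip addr add 172.16.10.10/24 dev vlan10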
Check the interface information: two VLAN sub-interfaces, vlan10 and vlan20, have been created
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
10: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
11: vlan20@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
Test connectivity to the gateways through vlan10 and vlan20
root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
root@server1:~#
root@server1:~# ping 172.16.20.1 -c 4
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
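To verify that the frames leaving enp1s0 actually carry 802.1Q tags, the parent interface can be watched with tcpdump while one of the pings above runs (assuming tcpdump is installed; -e prints the Ethernet header, which includes the VLAN tag):
tcpdump -e -nn -i enp1s0 -c 4 vlan and icmp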
NIC bond mode

In bond mode, a link-aggregation group must be configured on the peer switch.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/de4fe77ff8c113dfd54ebd74173b6492.png
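The example below uses 802.3ad (LACP), which requires the dynamic aggregation group configured on the switch as shown next. If the peer switch cannot run LACP, bonding mode active-backup needs no switch-side aggregation at all; a non-persistent sketch for comparison (member interfaces must be down before they are enslaved):
ip link add bond0 type bond mode active-backup miimon 100
ip link set enp1s0 down && ip link set enp1s0 master bond0
ip link set enp2s0 down && ip link set enp2s0 master bond0
ip link set bond0 up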
Switch configuration: configure dynamic link aggregation, add ports g1/0/1 and g1/0/3 to the aggregation group, then configure the aggregate port as a trunk.
<H3C>system-view
interface Bridge-Aggregation 1
link-aggregation mode dynamic
quit

interface GigabitEthernet 1/0/1
port link-aggregation group 1
exit

interface GigabitEthernet 1/0/3
port link-aggregation group 1
exit

interface Bridge-Aggregation 1
port link-type trunk
port trunk permit vlan 10 20
exit
Server configuration
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
    enp2s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
      - enp1s0
      - enp2s0
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  vlans:
    vlan10:
      id: 10
      link: bond0
      addresses: [ "172.16.10.10/24" ]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 200
    vlan20:
      id: 20
      link: bond0
      addresses: [ "172.16.20.10/24" ]
      routes:
      - to: default
        via: 172.16.20.1
        metric: 300
Check the NIC information: a bond0 interface has been created, with two VLAN sub-interfaces vlan10 and vlan20 on top of it. enp1s0 and enp2s0 show master bond0, indicating that both NICs are member interfaces of bond0.
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link
       valid_lft forever preferred_lft forever
8: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link
       valid_lft forever preferred_lft forever
9: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link
       valid_lft forever preferred_lft forever
Check the bond status: Bonding Mode shows IEEE 802.3ad Dynamic link aggregation, and the Slave Interface sections below show the details of the two member interfaces.
root@server1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0-60-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: ae:fd:60:48:84:1a
Active Aggregator Info:
      Aggregator ID: 1
      Number of ports: 2
      Actor Key: 9
      Partner Key: 1
      Partner Mac Address: fc:60:9b:35:ad:18

Slave Interface: enp1s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 7c:b5:9b:59:0a:71
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 2
    port state: 61

Slave Interface: enp2s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: e4:54:e8:dc:e5:88
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 1
    port state: 61
Test connectivity to the switch gateway address:
root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.59 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.95 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.93 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.589/1.776/1.953/0.165 ms
root@server1:~#
Shut down one member interface and test connectivity again; the gateway can still be pinged
root@server1:~# ip link set dev enp2s0 down
root@server1:~# ip link show enp2s0
3: enp2s0: <BROADCAST,MULTICAST,SLAVE> mtu 1500 qdisc fq_codel master bond0 state DOWN mode DEFAULT group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
root@server1:~#
root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.54 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.73 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.47 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.470/1.844/2.732/0.516 ms
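Bringing the member interface back up lets the bond re-aggregate; after a moment the corresponding slave section of /proc/net/bonding/bond0 should report MII Status: up again:
ip link set dev enp2s0 up
grep -A 3 "Slave Interface: enp2s0" /proc/net/bonding/bond0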
NIC bridge mode

In bridge mode, the peer switch port can be configured in either access or trunk mode.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/6349b26d66b545d2f5bb06aff5c39601.png
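The same bridge can also be brought up temporarily with iproute2 before writing the netplan file (a sketch; not persistent across reboots):
ip link add br0 type bridge
ip link set enp1s0 master br0
ip link set br0 up
ip addr add 172.16.10.10/24 dev br0
ip route add default via 172.16.10.1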
Switch configuration (using access mode as the example): configure the port as an access port and add it to the corresponding VLAN
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type access
port access vlan 10
exit

Server configuration: the physical NIC is added to the bridge, and the IP address is configured on the bridge interface br0.
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [ enp1s0 ]
      addresses: [ "172.16.10.10/24" ]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 100
        on-link: true
      mtu: 1500
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      parameters:
        stp: true
        forward-delay: 4
Check the NIC information
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:d0:7e:31:9c:74 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::cd0:7eff:fe31:9c74/64 scope link
       valid_lft forever preferred_lft forever
Check the bridge and its ports: the bridge currently has only one physical port, enp1s0.
root@server1:~# apt install -y bridge-utils
root@ubuntu:~# brctl show
bridge name   bridge id               STP enabled   interfaces
br0             8000.0ed07e319c74       yes             enp1s0
root@server1:~#
In a KVM virtualization environment, once virtual machine instances are attached to this bridge, the VMs can be given IP addresses on the same subnet as the physical NIC, and they can be reached just as conveniently as physical machines.
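For example, a VM can be attached to br0 with virt-install's bridge network type (a sketch that reuses the cloud image path from the macvtap example later in this article; adjust the name and paths to your environment):
virt-install \
--name vm-br0 \
--vcpus 1 \
--memory 2048 \
--disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
--os-variant ubuntu22.04 \
--import \
--noautoconsole \
--network bridge=br0,model=virtio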
NIC macvlan mode

macvlan (MAC Virtual LAN) is a network virtualization technology provided by the Linux kernel. It allows multiple virtual NIC interfaces to be created on top of a single physical NIC; each virtual interface has its own independent MAC address and can be assigned an IP address for communication. With macvlan, the VM or container network sits on the same subnet as the host and shares the same broadcast domain.
In macvlan mode, the peer switch port can be configured in access or trunk mode; with a trunk port, macvlan combines well with VLAN sub-interfaces.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/077458893be6b20965eef4a5b37fc25b.png
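A common use of macvlan is to give a network namespace (standing in for a container) its own MAC and IP address on the physical LAN. A minimal sketch, where the names demo/mv-demo and the address 172.16.10.50 are placeholders:
ip netns add demo
ip link add mv-demo link enp1s0 type macvlan mode bridge
ip link set mv-demo netns demo
ip netns exec demo ip link set lo up
ip netns exec demo ip link set mv-demo up
ip netns exec demo ip addr add 172.16.10.50/24 dev mv-demo
ip netns exec demo ip route add default via 172.16.10.1
One macvlan caveat to keep in mind: the host generally cannot reach its own macvlan children through the parent interface, although other hosts on the LAN can reach both.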
macvlan IP mode

In this mode, the uplink switch port is configured as an access port, and the macvlan parent NIC and its sub-interfaces on the server are directly assigned IP addresses on the same subnet.
Switch configuration
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type access
port access vlan 10
exit
Server configuration: macvlan supports several modes; bridge mode is used here, and the interfaces are made persistent
cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh
Configure netplan
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.10.1
    macvlan0:
      addresses:
      - 172.16.10.11/24
    macvlan1:
      addresses:
      - 172.16.10.12/24
  version: 2
Apply the NIC configuration
netplan apply



Check the NIC information: two macvlan interfaces have been created, their IP addresses are on the same subnet as the parent NIC, and each interface has its own independent MAC address.
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
13: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link
       valid_lft forever preferred_lft forever
14: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global macvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link
       valid_lft forever preferred_lft forever
Test connectivity to the gateway
root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~#
macvlan VLAN mode

In this mode, the uplink switch port is configured as a trunk. The macvlan parent NIC on the server carries no IP address, and a different VLAN sub-interface is configured on top of each macvlan interface.
Switch configuration
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type trunk
port trunk permit vlan 10 20
exit

Server configuration: macvlan supports several modes; bridge mode is used here, and the interfaces are made persistent
cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh
Configure netplan: the two macvlan interfaces macvlan0 and macvlan1 get the VLAN sub-interfaces vlan10 and vlan20 respectively.
root@ubuntu:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
    macvlan0:
      dhcp4: false
    macvlan1:
      dhcp4: false
  vlans:
    vlan10:
      id: 10
      link: macvlan0
      addresses: [ "172.16.10.10/24" ]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 200
    vlan20:
      id: 20
      link: macvlan1
      addresses: [ "172.16.20.10/24" ]
      routes:
      - to: default
        via: 172.16.20.1
        metric: 300
  version: 2
Apply the NIC configuration
netplan apply



Check the NIC information: two macvlan interfaces and their two corresponding VLAN sub-interfaces have been created.
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
11: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link
       valid_lft forever preferred_lft forever
12: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d073:75ff:fe14:b204/64 scope link
       valid_lft forever preferred_lft forever
13: vlan10@macvlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link
       valid_lft forever preferred_lft forever
14: vlan20@macvlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link
       valid_lft forever preferred_lft forever
Test connectivity from both VLAN interfaces to their external gateways
root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~#
root@server1:~# ping -c 3 172.16.20.1
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.35 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=1.48 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=1.46 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.353/1.429/1.477/0.054 ms
root@server1:~#
NIC ipvlan mode

ipvlan (IP Virtual LAN) is a network virtualization technology provided by the Linux kernel. It can create multiple virtual NIC interfaces on top of a single physical NIC, each with its own independent IP address.
ipvlan is similar to macvlan: both carve multiple virtual network interfaces out of one host interface. The main difference is that all ipvlan sub-interfaces share the same MAC address (that of the physical interface) while carrying different IP addresses.
In ipvlan mode, the peer switch port can likewise be configured in access or trunk mode; with a trunk port, ipvlan combines well with VLAN sub-interfaces.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/620390def582970d029ba479d4542bd0.png
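The namespace example from the macvlan section works the same way with ipvlan; in l2 mode the namespace shares the parent's MAC address but can still use the external gateway directly. A minimal sketch with placeholder names and the spare address 172.16.10.60 (l2/l3/l3s differ mainly in how ARP, broadcast, and routing are handled; l2 is used here simply because it behaves most like the macvlan example):
ip netns add demo2
ip link add ipv-demo link enp1s0 type ipvlan mode l2
ip link set ipv-demo netns demo2
ip netns exec demo2 ip link set ipv-demo up
ip netns exec demo2 ip addr add 172.16.10.60/24 dev ipv-demo
ip netns exec demo2 ip route add default via 172.16.10.1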
Switch configuration
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type access
port access vlan 10
exit

Server configuration: ipvlan supports three modes (l2, l3, l3s); l3 mode is used here, and the interfaces are made persistent
cat >/etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add ipvlan0 link enp1s0 type ipvlan mode l3
ip link add ipvlan1 link enp1s0 type ipvlan mode l3
EOF
chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh
Configure netplan
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.10.1
    ipvlan0:
      addresses:
      - 172.16.10.11/24
    ipvlan1:
      addresses:
      - 172.16.10.12/24
  version: 2
Apply the NIC configuration
netplan apply



Check the NIC information: two ipvlan interfaces have been created, their IP addresses are on the same subnet as the parent NIC, and each interface shares the parent NIC's MAC address.
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
9: ipvlan0@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global ipvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:159:a71/64 scope link
       valid_lft forever preferred_lft forever
10: ipvlan1@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global ipvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:259:a71/64 scope link
       valid_lft forever preferred_lft forever
Test connectivity to the gateway
root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~#
NIC macvtap mode

An alternative to using a bridge for giving KVM virtual machines external connectivity is the Linux macvtap driver. macvtap is useful when you do not want to create a regular bridge but still want hosts on the local network to reach the VMs.
A key difference from a bridge is that macvtap attaches directly to a network interface on the KVM host. This direct attachment bypasses most of the code and components in the KVM host related to hooking up and using a software bridge, effectively shortening the code path. The shorter code path usually improves throughput and reduces latency to external systems.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/25a47a62c5cb6419a9ad72aecc809eea.png
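libvirt creates the macvtap device automatically when a VM uses a type=direct interface (as in the virt-install commands below), but one can also be created by hand to see what appears on the host: each macvtap interface is backed by a /dev/tapN character device, where N is the interface index. A non-persistent sketch:
ip link add link enp1s0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up
ls -l /dev/tap$(cat /sys/class/net/macvtap0/ifindex)   # the character device QEMU opens for the VM
ip link del macvtap0                                   # clean up the manual test device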
Switch configuration
<H3C>system-view
interface GigabitEthernet 1/0/1
port link-type access
port access vlan 10
exit
Host NIC configuration
root@server1:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 192.168.137.2
  version: 2
Install the KVM virtualization environment and create two virtual machines, specifying that macvtap sub-interfaces be allocated from the parent NIC enp1s0.
virt-install \
--name vm1 \
--vcpus 1 \
--memory 2048 \
--disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
--os-variant ubuntu22.04 \
--noautoconsole \
--import \
--autostart \
--network type=direct,source=enp1s0,source_mode=bridge,model=virtio

virt-install \
--name vm2 \
--vcpus 1 \
--memory 2048 \
--disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
--os-variant ubuntu22.04 \
--noautoconsole \
--import \
--autostart \
--network type=direct,source=enp1s0,source_mode=bridge,model=virtio
Check the NIC information: two macvtap interfaces have been created
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bb:15:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: macvtap0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:41:8f:a3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe41:8fa3/64 scope link
       valid_lft forever preferred_lft forever
7: macvtap1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:93:2c:4a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe93:2c4a/64 scope link
       valid_lft forever preferred_lft forever
Configure the IP address on virtual machine 1
root@vm1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.10.11/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
Configure the IP address on virtual machine 2
root@vm2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
      - 172.16.10.12/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
Test connectivity to the gateway
root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.38 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.75 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=4.34 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.382/2.491/4.344/1.318 ms
Combined bond, VLAN, and bridge configuration

Combine the server's two NICs into a bond, create two VLAN sub-interfaces on top of the bond, and add each one to its own Linux bridge. Virtual machines created under different bridges then belong to different VLANs.
Diagram:
https://i-blog.csdnimg.cn/blog_migrate/0f9426cff8215a15d4768b728c923b34.png
Switch configuration: configure dynamic link aggregation and add ports g1/0/1 and g1/0/3 to the aggregation group. Configure the aggregate port as a trunk permitting VLANs 8, 10, and 20, and set VLAN 8 as the native VLAN of the trunk for management use.
<H3C>system-view
interface Vlan-interface 8
ip address 172.16.8.1 24
exit


interface Bridge-Aggregation 1
link-aggregation mode dynamic
quit

interface GigabitEthernet 1/0/1
port link-aggregation group 1
exit

interface GigabitEthernet 1/0/3
port link-aggregation group 1
exit

interface Bridge-Aggregation 1
port link-type trunk
port trunk permit vlan 8 10 20
port trunk pvid vlan 8
undo port trunk permit vlan 1
exit

Server NIC configuration. Note that bond0 carries the management IP address, matching the switch's native VLAN 8.
root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
    enp2s0:
      dhcp4: false
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces:
      - enp1s0
      - enp2s0
      addresses:
      - 172.16.8.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
      - to: default
        via: 172.16.8.1
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  bridges:
    br10:
      interfaces: [ vlan10 ]
    br20:
      interfaces: [ vlan20 ]
  vlans:
    vlan10:
      id: 10
      link: bond0
    vlan20:
      id: 20
      link: bond0
Check the NIC information: a bond0 interface has been created, with two VLAN sub-interfaces vlan10 and vlan20 on top of it.
root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link
       valid_lft forever preferred_lft forever
16: br10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:df:66:ab:c2:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecdf:66ff:feab:c24b/64 scope link
       valid_lft forever preferred_lft forever
17: br20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:4d:f4:0a:6d:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9c4d:f4ff:fe0a:6d13/64 scope link
       valid_lft forever preferred_lft forever
18: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
19: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br20 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
Check the created bridges
root@server1:~# brctl show
bridge name   bridge id               STP enabled   interfaces
br10            8000.eedf66abc24b       no            vlan10
br20            8000.9e4df40a6d13       no            vlan20
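The newer iproute2 tooling reports the same relationships and is handy as an extra sanity check (optional):
bridge link show                       # vlan10 should be a port of br10, vlan20 of br20
ip -d link show vlan10 | grep "id 10"  # confirms the 802.1Q ID on the sub-interface
grep "Bonding Mode" /proc/net/bonding/bond0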
Test connectivity from the bond0 IP to the external gateway
root@server1:~# ping 172.16.8.1 -c 3
PING 172.16.8.1 (172.16.8.1) 56(84) bytes of data.
64 bytes from 172.16.8.1: icmp_seq=1 ttl=255 time=1.55 ms
64 bytes from 172.16.8.1: icmp_seq=2 ttl=255 time=1.61 ms
64 bytes from 172.16.8.1: icmp_seq=3 ttl=255 time=1.62 ms

--- 172.16.8.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.550/1.593/1.620/0.030 ms
root@server1:~#
Install the KVM virtualization environment on server1, then create two new KVM networks, each bound to a different bridge
cat >br10-network.xml<<EOF
<network>
<name>br10-net</name>
<forward mode="bridge"/>
<bridge name="br10"/>
</network>
EOF
cat >br20-network.xml<<EOF
<network>
<name>br20-net</name>
<forward mode="bridge"/>
<bridge name="br20"/>
</network>
EOF

virsh net-define br10-network.xml
virsh net-define br20-network.xml
virsh net-start br10-net
virsh net-start br20-net
virsh net-autostart br10-net
virsh net-autostart br20-net
Check the newly created networks
root@server1:~# virsh net-list
Name       State    Autostart   Persistent
---------------------------------------------
br10-net   active   yes         yes
br20-net   active   yes         yes
default    active   yes         yes
Create two virtual machines, each assigned to a different network
virt-install \
--name vm1 \
--vcpus 1 \
--memory 2048 \
--disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
--os-variant ubuntu22.04 \
--import \
--autostart \
--noautoconsole \
--network network=br10-net

virt-install \
--name vm2 \
--vcpus 1 \
--memory 2048 \
--disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
--os-variant ubuntu22.04 \
--import \
--autostart \
--noautoconsole \
--network network=br20-net
Check the created virtual machines
root@server1:~# virsh list
Id   Name   State
----------------------
13   vm1    running
14   vm2    running
Configure a VLAN 10 IP address on vm1
virsh console vm1
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
EOF
netplan apply




Configure a VLAN 20 IP address on vm2
virsh console vm2
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.20.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.20.1
  version: 2
EOF
netplan apply




Log in to vm1 and test connectivity to the external gateway
root@vm1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a4:aa:9d brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea4:aa9d/64 scope link
       valid_lft forever preferred_lft forever
root@vm1:~#
root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.51 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=7.10 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.10 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.505/3.568/7.101/2.509 ms
root@vm1:~#
Log in to vm2 and test connectivity to the external gateway
root@vm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:89:61:da brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe89:61da/64 scope link
       valid_lft forever preferred_lft forever
root@vm2:~#
root@vm2:~# ping 172.16.20.1 -c 3
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.73 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=2.00 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=2.00 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.732/1.911/2.003/0.126 ms
root@vm2:~#
