Redis, LVS, Nginx
How Redis Sentinel is set up and how it works
A Sentinel deployment needs at least 3 machines, and the number must be odd.
Master-replica replication must already be in place before setting up Sentinel; the masterauth/requirepass password must be identical on the master and on every replica.
Master-replica replication — configuration on all replica nodes:
- [root@slave ~]# yum install -y redis
- [root@slave ~]# vim /etc/redis.conf
- replicaof 10.0.0.8 6379
- masterauth "123456"
- [root@server2 ~]# systemctl enable redis --now
Configuration shared by all master and replica nodes:
- [root@centos8 ~]# vim /etc/redis.conf
- bind 0.0.0.0
- masterauth "123456"
- requirepass "123456"
Verify replication on the master:
- [root@centos8 ~]# redis-cli
- 127.0.0.1:6379> auth 123456
- OK
- 127.0.0.1:6379> info replication
- # Replication
- role:master
- connected_slaves:2
- slave0:ip=10.0.0.12,port=6379,state=online,offset=24332,lag=1
- slave1:ip=10.0.0.9,port=6379,state=online,offset=24332,lag=1
- master_replid:a2bbc2342854b7f8fb2f7b15623cd0645a946c0c
- master_replid2:0000000000000000000000000000000000000000
- master_repl_offset:24332
- second_repl_offset:-1
- repl_backlog_active:1
- repl_backlog_size:1048576
- repl_backlog_first_byte_offset:1
- repl_backlog_histlen:24332
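A quick cross-check (a sketch using the password and hosts from above): running the same command on a replica should report role:slave and master_link_status:up.
- [root@slave ~]# redis-cli -a 123456 info replication | grep -E 'role|link_status'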
Sentinel configuration
All Redis nodes use the same Sentinel configuration file:
- [root@server1 ~]# grep -vE "^$|^#" /etc/redis-sentinel.conf
- sentinel monitor mymaster 10.0.0.8 6379 2
- sentinel down-after-milliseconds mymaster 3000
- sentinel auth-pass mymaster 123456
- [root@centos8 ~]# scp /etc/redis-sentinel.conf 10.0.0.9:/etc/
- [root@centos8 ~]# scp /etc/redis-sentinel.conf 10.0.0.12:/etc/
- [root@centos8 ~]# systemctl enable --now redis-sentinel.service
- [root@slave1 ~]# systemctl enable --now redis-sentinel.service
- [root@slave2 ~]# systemctl enable --now redis-sentinel.service
Check the Sentinel status:
- [root@server1 ~]# redis-cli -p 26379
- 127.0.0.1:26379> INFO sentinel
- # Sentinel
- sentinel_masters:1
- sentinel_tilt:0
- sentinel_running_scripts:0
- sentinel_scripts_queue_length:0
- sentinel_simulate_failure_flags:0
- master0:name=mymaster,status=ok,address=10.0.0.8:6379,slaves=2,sentinels=3
Stop the master node to trigger a failover:
- [root@server1 ~]# killall redis-server
- [root@server1 ~]# redis-cli -a 123456 -p 26379
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- 127.0.0.1:26379> INFO sentinel
- # Sentinel
- sentinel_masters:1
- sentinel_tilt:0
- sentinel_running_scripts:0
- sentinel_scripts_queue_length:0
- sentinel_simulate_failure_flags:0
- master0:name=mymaster,status=ok,address=10.0.0.12:6379,slaves=2,sentinels=3
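The current master can also be queried directly (a small sketch using the monitor name configured above); after the failover it should report 10.0.0.12 and 6379:
- 127.0.0.1:26379> SENTINEL get-master-addr-by-name mymaster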
Follow the Sentinel log:
- [root@centos8 ~]# tail -f /var/log/redis/sentinel.log
- 87189:X 09 Jan 2024 20:41:45.783 # Configuration loaded
- 87189:X 09 Jan 2024 20:41:45.783 * supervised by systemd, will signal readiness
- 87189:X 09 Jan 2024 20:41:45.785 * Running mode=sentinel, port=26379.
- 87189:X 09 Jan 2024 20:41:45.785 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
- 87189:X 09 Jan 2024 20:41:45.789 # Sentinel ID is 33700c2c21a86f0e15975e23e0ab04409e0f744c
- 87189:X 09 Jan 2024 20:41:45.789 # +monitor master mymaster 10.0.0.8 6379 quorum 2
- 87189:X 09 Jan 2024 20:41:45.791 * +slave slave 10.0.0.12:6379 10.0.0.12 6379 @ mymaster 10.0.0.8 6379
- 87189:X 09 Jan 2024 20:41:45.792 * +slave slave 10.0.0.9:6379 10.0.0.9 6379 @ mymaster 10.0.0.8 6379
- 87189:X 09 Jan 2024 20:42:06.649 * +sentinel sentinel 777f3b239c0c58b3a997366fa3d8f67c30b1d2ca 10.0.0.9 26379 @ mymaster 10.0.0.8 6379
- 87189:X 09 Jan 2024 20:42:10.529 * +sentinel sentinel aa648c3792f52f620ab0c9e34365177ead8adec6 10.0.0.12 26379 @ mymaster 10.0.0.8 6379
- 87189:X 09 Jan 2024 21:39:19.980 # +new-epoch 1
- 87189:X 09 Jan 2024 21:39:19.982 # +vote-for-leader aa648c3792f52f620ab0c9e34365177ead8adec6 1
- 87189:X 09 Jan 2024 21:39:19.991 # +sdown master mymaster 10.0.0.8 6379
- 87189:X 09 Jan 2024 21:39:20.063 # +odown master mymaster 10.0.0.8 6379 #quorum 3/2
- 87189:X 09 Jan 2024 21:39:20.063 # Next failover delay: I will not start a failover before Tue Jan 9 21:45:20 2024
- 87189:X 09 Jan 2024 21:39:21.068 # +config-update-from sentinel aa648c3792f52f620ab0c9e34365177ead8adec6 10.0.0.12 26379 @ mymaster 10.0.0.8 6379
- 87189:X 09 Jan 2024 21:39:21.068 # +switch-master mymaster 10.0.0.8 6379 10.0.0.12 6379
- 87189:X 09 Jan 2024 21:39:21.069 * +slave slave 10.0.0.9:6379 10.0.0.9 6379 @ mymaster 10.0.0.12 6379
- 87189:X 09 Jan 2024 21:39:21.069 * +slave slave 10.0.0.8:6379 10.0.0.8 6379 @ mymaster 10.0.0.12 6379
- 87189:X 09 Jan 2024 21:39:24.133 # +sdown slave 10.0.0.8:6379 10.0.0.8 6379 @ mymaster 10.0.0.12 6379
The Redis configuration file is rewritten automatically after the failover:
- [root@slave1 ~]# grep ^replicaof /etc/redis.conf
- replicaof 10.0.0.12 6379
The Sentinel configuration file is rewritten as well:
- [root@slave1 ~]# grep "^[a-z]" /etc/redis-sentinel.conf
- port 26379
- daemonize no
- pidfile "/var/run/redis-sentinel.pid"
- logfile "/var/log/redis/sentinel.log"
- dir "/tmp"
- sentinel myid 777f3b239c0c58b3a997366fa3d8f67c30b1d2ca
- sentinel deny-scripts-reconfig yes
- sentinel monitor mymaster 10.0.0.12 6379 2
- sentinel down-after-milliseconds mymaster 3000
- sentinel auth-pass mymaster 123456
- sentinel config-epoch mymaster 1
- protected-mode no
- supervised systemd
- sentinel leader-epoch mymaster 1
- sentinel known-replica mymaster 10.0.0.8 6379
- sentinel known-replica mymaster 10.0.0.9 6379
- sentinel known-sentinel mymaster 10.0.0.12 26379 aa648c3792f52f620ab0c9e34365177ead8adec6
- sentinel known-sentinel mymaster 10.0.0.8 26379 33700c2c21a86f0e15975e23e0ab04409e0f744c
- sentinel current-epoch 1
- [root@slave2 ~]# grep "^[a-z]" /etc/redis-sentinel.conf
- port 26379
- daemonize no
- pidfile "/var/run/redis-sentinel.pid"
- logfile "/var/log/redis/sentinel.log"
- dir "/tmp"
- sentinel myid aa648c3792f52f620ab0c9e34365177ead8adec6
- sentinel deny-scripts-reconfig yes
- sentinel monitor mymaster 10.0.0.12 6379 2
- sentinel down-after-milliseconds mymaster 3000
- sentinel auth-pass mymaster 123456
- sentinel config-epoch mymaster 1
- protected-mode no
- supervised systemd
- sentinel leader-epoch mymaster 1
- sentinel known-replica mymaster 10.0.0.8 6379
- sentinel known-replica mymaster 10.0.0.9 6379
- sentinel known-sentinel mymaster 10.0.0.9 26379 777f3b239c0c58b3a997366fa3d8f67c30b1d2ca
- sentinel known-sentinel mymaster 10.0.0.8 26379 33700c2c21a86f0e15975e23e0ab04409e0f744c
- sentinel current-epoch 1
State of the new master:
- [root@slave2 ~]# redis-cli -a 123456
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- 127.0.0.1:6379> INFO replication
- # Replication
- role:master
- connected_slaves:1
- slave0:ip=10.0.0.9,port=6379,state=online,offset=921399,lag=0
- master_replid:a529afbe029c99f25d442dd97046ae03ea369c6c
- master_replid2:a2bbc2342854b7f8fb2f7b15623cd0645a946c0c
- master_repl_offset:921399
- second_repl_offset:694285
- repl_backlog_active:1
- repl_backlog_size:1048576
- repl_backlog_first_byte_offset:309
- repl_backlog_histlen:921091
Redis Cluster setup
Six cluster nodes:
- 10.0.0.8
- 10.0.0.9
- 10.0.0.10
- 10.0.0.11
- 10.0.0.12
- 10.0.0.13
Create the cluster:
- [root@centos8 ~]# redis-cli -a 123456 --cluster create 10.0.0.8:6379 10.0.0.9:6379 10.0.0.10:6379 10.0.0.11:6379 10.0.0.12:6379 10.0.0.13:6379 --cluster-replicas 1
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- >>> Performing hash slots allocation on 6 nodes...
- Master[0] -> Slots 0 - 5460
- Master[1] -> Slots 5461 - 10922
- Master[2] -> Slots 10923 - 16383
- Adding replica 10.0.0.11:6379 to 10.0.0.8:6379
- Adding replica 10.0.0.12:6379 to 10.0.0.9:6379
- Adding replica 10.0.0.13:6379 to 10.0.0.10:6379
- M: b2672437f224c869af7f2634e2edcf39c419b9d8 10.0.0.8:6379
- slots:[0-5460] (5461 slots) master
- M: 67de6b7274ce80052a7ef44511568ccc4f031859 10.0.0.9:6379
- slots:[5461-10922] (5462 slots) master
- M: a126ea9c3b933c448c32a30a5bac9ab2635b446f 10.0.0.10:6379
- slots:[10923-16383] (5461 slots) master
- S: 9e28728a72a83ed93112162803beb9ea0773cb23 10.0.0.11:6379
- replicates b2672437f224c869af7f2634e2edcf39c419b9d8
- S: 12f7df4f822a27b23458103a7a63534d0e9f8af4 10.0.0.12:6379
- replicates 67de6b7274ce80052a7ef44511568ccc4f031859
- S: 17361c4bf708f52830bce1969618105027a34042 10.0.0.13:6379
- replicates a126ea9c3b933c448c32a30a5bac9ab2635b446f
- Can I set the above configuration? (type 'yes' to accept): yes
- >>> Nodes configuration updated
- >>> Assign a different config epoch to each node
- >>> Sending CLUSTER MEET messages to join the cluster
- Failed to send CLUSTER MEET command.
The error occurred because the Sentinel process was still running; kill it:
- [root@centos8 ~]# ps -aux | grep redis
- redis 1086 0.2 0.5 263696 4700 ? Ssl 15:57 0:01 /usr/bin/redis-sentinel *:26379 [sentinel]
- redis 2778 0.1 1.1 266256 8768 ? Ssl 15:59 0:00 /usr/bin/redis-server 0.0.0.0:6379 [cluster]
- root 2843 0.0 0.1 221940 1096 pts/0 S+ 16:05 0:00 grep --color=auto redis
- [root@centos8 ~]# kill -9 26379
- -bash: kill: (26379) - No such process
- [root@centos8 ~]# kill -9 1086
Try again:
- [root@centos8 ~]# redis-cli -a 123456 --cluster create 10.0.0.8:6379 10.0.0.9:6379 10.0.0.10:6379 10.0.0.11:6379 10.0.0.12:6379 10.0.0.13:6379 --cluster-replicas 1
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- >>> Performing hash slots allocation on 6 nodes...
- Master[0] -> Slots 0 - 5460
- Master[1] -> Slots 5461 - 10922
- Master[2] -> Slots 10923 - 16383
- Adding replica 10.0.0.11:6379 to 10.0.0.8:6379
- Adding replica 10.0.0.12:6379 to 10.0.0.9:6379
- Adding replica 10.0.0.13:6379 to 10.0.0.10:6379
- M: b2672437f224c869af7f2634e2edcf39c419b9d8 10.0.0.8:6379
- slots:[0-5460] (5461 slots) master
- M: 67de6b7274ce80052a7ef44511568ccc4f031859 10.0.0.9:6379
- slots:[5461-10922] (5462 slots) master
- M: a126ea9c3b933c448c32a30a5bac9ab2635b446f 10.0.0.10:6379
- slots:[10923-16383] (5461 slots) master
- S: 9e28728a72a83ed93112162803beb9ea0773cb23 10.0.0.11:6379
- replicates b2672437f224c869af7f2634e2edcf39c419b9d8
- S: 12f7df4f822a27b23458103a7a63534d0e9f8af4 10.0.0.12:6379
- replicates 67de6b7274ce80052a7ef44511568ccc4f031859
- S: 17361c4bf708f52830bce1969618105027a34042 10.0.0.13:6379
- replicates a126ea9c3b933c448c32a30a5bac9ab2635b446f
- Can I set the above configuration? (type 'yes' to accept): yes
- >>> Nodes configuration updated
- >>> Assign a different config epoch to each node
- >>> Sending CLUSTER MEET messages to join the cluster
- Waiting for the cluster to join
- .....
- >>> Performing Cluster Check (using node 10.0.0.8:6379)
- M: b2672437f224c869af7f2634e2edcf39c419b9d8 10.0.0.8:6379
- slots:[0-5460] (5461 slots) master
- 1 additional replica(s)
- M: a126ea9c3b933c448c32a30a5bac9ab2635b446f 10.0.0.10:6379
- slots:[10923-16383] (5461 slots) master
- 1 additional replica(s)
- M: 67de6b7274ce80052a7ef44511568ccc4f031859 10.0.0.9:6379
- slots:[5461-10922] (5462 slots) master
- 1 additional replica(s)
- S: 17361c4bf708f52830bce1969618105027a34042 10.0.0.13:6379
- slots: (0 slots) slave
- replicates a126ea9c3b933c448c32a30a5bac9ab2635b446f
- S: 9e28728a72a83ed93112162803beb9ea0773cb23 10.0.0.11:6379
- slots: (0 slots) slave
- replicates b2672437f224c869af7f2634e2edcf39c419b9d8
- S: 12f7df4f822a27b23458103a7a63534d0e9f8af4 10.0.0.12:6379
- slots: (0 slots) slave
- replicates 67de6b7274ce80052a7ef44511568ccc4f031859
- [OK] All nodes agree about slots configuration.
- >>> Check for open slots...
- >>> Check slots coverage...
- [OK] All 16384 slots covered.
View the cluster nodes and each master's replicas:
- [root@centos8 ~]# redis-cli -a 123456 cluster nodes
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- a126ea9c3b933c448c32a30a5bac9ab2635b446f 10.0.0.10:6379@16379 master - 0 1704984060648 3 connected 10923-16383
- 67de6b7274ce80052a7ef44511568ccc4f031859 10.0.0.9:6379@16379 master - 0 1704984062664 2 connected 5461-10922
- 17361c4bf708f52830bce1969618105027a34042 10.0.0.13:6379@16379 slave a126ea9c3b933c448c32a30a5bac9ab2635b446f 0 1704984061654 6 connected
- 9e28728a72a83ed93112162803beb9ea0773cb23 10.0.0.11:6379@16379 slave b2672437f224c869af7f2634e2edcf39c419b9d8 0 1704984060000 4 connected
- 12f7df4f822a27b23458103a7a63534d0e9f8af4 10.0.0.12:6379@16379 slave 67de6b7274ce80052a7ef44511568ccc4f031859 0 1704984059639 5 connected
- b2672437f224c869af7f2634e2edcf39c419b9d8 10.0.0.8:6379@16379 myself,master - 0 1704984061000 1 connected 0-5460
Verify the cluster state:
- [root@centos8 ~]# redis-cli -a 123456 cluster info
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- cluster_state:ok
- cluster_slots_assigned:16384
- cluster_slots_ok:16384
- cluster_slots_pfail:0
- cluster_slots_fail:0
- cluster_known_nodes:6
- cluster_size:3
- cluster_current_epoch:6
- cluster_my_epoch:1
- cluster_stats_messages_ping_sent:6247
- cluster_stats_messages_pong_sent:6168
- cluster_stats_messages_sent:12415
- cluster_stats_messages_ping_received:6163
- cluster_stats_messages_pong_received:6247
- cluster_stats_messages_meet_received:5
- cluster_stats_messages_received:12415
Try writing data:
- [root@centos8 ~]# redis-cli -a 123456 -h 10.0.0.9 set car bwm
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- OK
- [root@centos8 ~]# redis-cli -a 123456 -h 10.0.0.10 set bb cc
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- (error) MOVED 8620 10.0.0.9:6379 # the key hashes to slot 8620, which belongs to another node; the write must go to that node
- [root@centos8 ~]# redis-cli -a 123456 -h 10.0.0.9 set bb cc
- Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
- OK
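Passing -c makes redis-cli run in cluster mode, so it follows MOVED redirects automatically instead of returning them to the caller (a sketch reusing the password and key from above):
- [root@centos8 ~]# redis-cli -c -a 123456 -h 10.0.0.10 set bb cc
With -c the client transparently reconnects to 10.0.0.9, where slot 8620 lives, and the SET succeeds.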
LVS cluster working modes
NAT: rewrites the destination IP of the request packet; DNAT with multiple target IPs
In essence it is multi-target DNAT: the destination address and port of the request packet are rewritten to the RIP and port of a selected RS.
- 1. RIP and DIP should be on the same IP network and should use private addresses; the RS's gateway must point to the DIP
- 2. Both request and response packets pass through the Director, so the Director easily becomes the system bottleneck
- 3. Port mapping is supported: the destination port of the request can be rewritten
- 4. The VS must run Linux; the RS can run any OS
DR: forwards by encapsulating a new MAC header
Direct routing, the default and most widely used LVS mode. The request packet is re-encapsulated with a new MAC header: the source MAC is that of the interface holding the DIP, the destination MAC is that of the interface holding the RIP of the selected RS; source/destination IP and port are left unchanged.
TUN: adds a new IP header outside the original request IP packet
Forwarding: the original IP header (source CIP, destination VIP) is not modified; instead another IP header (source DIP, destination RIP) is wrapped around it and the packet is sent to the selected RS. The RS responds to the client directly (source IP VIP, destination IP CIP).
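A minimal sketch of what TUN rules could look like; the VIP and RIPs are hypothetical, -i selects ipip tunneling on the director, and each RS needs the VIP on a tunnel interface with rp_filter relaxed:
- ipvsadm -A -t 192.168.0.100:80 -s rr
- ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.9:80 -i
- ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -i
- # on every RS
- modprobe ipip
- ip addr add 192.168.0.100/32 dev tunl0
- ip link set tunl0 up
- sysctl -w net.ipv4.conf.tunl0.rp_filter=0
- sysctl -w net.ipv4.conf.all.rp_filter=0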
FULLNAT: rewrites both the source and destination IP of the request; not supported by the stock kernel
Forwards by modifying the source IP address and the destination address of the request packet at the same time:
CIP --> DIP
VIP --> RIP
LVS scheduling algorithms
ipvs scheduler: classified by whether the current load of each RS is taken into account when scheduling.
There are two kinds: static methods and dynamic methods (see the ipvsadm sketch after the lists).
Static methods
- RR: round robin, commonly used; requests are spread evenly regardless of load
- WRR: weighted RR, commonly used
- SH: Source Hashing, implements session stickiness by hashing the source IP; requests from the same client IP always go to the RS chosen the first time, giving session binding
- DH: Destination Hashing; the first request is scheduled round-robin, and subsequent requests for the same destination address always go to that same RS; the typical use case is load balancing for forward-proxy caches, e.g. web caches
Dynamic methods
- LC: least connections, suitable for long-lived connections
- WLC: Weighted LC, the default scheduler, commonly used
- SED: Shortest Expected Delay; favors high-weight servers for new connections, counting only active connections and ignoring inactive ones
- NQ: Never Queue; the first round is distributed evenly, afterwards SED
- LBLC: Locality-Based LC, a dynamic DH algorithm; use case: load-aware forward-proxy balancing, e.g. web caches
- LBLCR: LBLC with Replication; fixes LBLC's load imbalance by replicating content from heavily loaded to lightly loaded RS, e.g. web caches
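A minimal ipvsadm sketch of picking an algorithm (addresses and weights are illustrative): -s chooses the scheduler, -w sets the per-RS weight consumed by WRR/WLC/SED, and -E switches an existing service to another scheduler.
- ipvsadm -A -t 192.168.0.100:80 -s wlc
- ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.9:80 -m -w 3
- ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m -w 1
- ipvsadm -E -t 192.168.0.100:80 -s sh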
LVS-NAT
Note: configure only one IP address on each RS; configuring extra addresses can cause access failures.
Prepare four hosts:
- client: 192.168.0.130/24
- lvs:
- eth0:192.168.0.132/24
- eth1:10.0.0.13/24
- rs1:10.0.0.9/24 GW:10.0.0.13
- rs2:10.0.0.11/24 GW:10.0.0.13
Configure the IP addresses
- [root@internet network-scripts]# cat ifcfg-eth0
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- IPADDR=192.168.0.130
- NETMASK=255.255.255.0
- DEFROUTE=yes
- NAME=eth0
- DEVICE=eth0
- ONBOOT=yes
- [root@lvs-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- DEFROUTE=yes
- NAME=eth0
- DEVICE=eth0
- IPADDR=192.168.0.132
- NETMASK=255.255.255.0
- DNS1=8.8.8.8
- DNS2=114.114.114.114
- ONBOOT=yes
- [root@lvs-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- DEFROUTE=yes
- IPADDR=10.0.0.13
- NETMASK=255.255.255.0
- NAME=eth1
- DEVICE=eth1
- ONBOOT=yes
- [root@rs1-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
- TYPE=Ethernet
- BOOTPROTO=static
- IPADDR=10.0.0.9
- GATEWAY=10.0.0.13
- NETMASK=255.255.255.0
- NAME=eth1
- DEVICE=eth1
- ONBOOT=yes
- [root@rs2-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
- TYPE=Ethernet
- BOOTPROTO=static
- DEVICE=eth1
- ONBOOT=yes
- NAME=eth1
- IPADDR=10.0.0.11
- GATEWAY=10.0.0.13
- NETMASK=255.255.255.0
LVS configuration
- [root@lvs-server /]# hostname -I
- 192.168.0.132 10.0.0.13
- [root@lvs-server ~]# cat /etc/sysctl.conf
- net.ipv4.ip_forward = 1
- [root@lvs-server ~]# sysctl -p
- net.ipv4.ip_forward = 1
- [root@lvs-server /]# ipvsadm -A -t 192.168.0.132:80 -s wrr
- [root@lvs-server /]# ipvsadm -a -t 192.168.0.132:80 -r 10.0.0.9:80 -m
- [root@lvs-server /]# ipvsadm -a -t 192.168.0.132:80 -r 10.0.0.11:80 -m
- [root@lvs-server /]# ipvsadm -Ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 192.168.0.132:80 wrr
- -> 10.0.0.9:80 Masq 1 0 0
- -> 10.0.0.11:80 Masq 1 0 0
RS server configuration:
- [root@rs1-server ~]# yum install httpd -y
- [root@rs1-server ~]# echo "`hostname` rs1 " >> /var/www/html/index.html
- [root@rs1-server ~]# systemctl enable httpd --now
- [root@rs2-server ~]# yum install httpd -y
- [root@rs2-server ~]# echo "`hostname` rs2 " >> /var/www/html/index.html
- [root@rs2-server ~]# systemctl enable httpd --now
Test:
- [root@internet ~]# while :; do curl 192.168.0.132; sleep 0.5;done
- rs2-server rs2
- rs1-server rs1
- rs2-server rs2
- rs1-server rs1
- rs2-server rs2
- rs1-server rs1
- rs2-server rs2
- rs1-server rs1
- rs2-server rs2
- rs1-server rs1
- rs2-server rs2
Save the rules permanently:
- [root@lvs-server ~]# ipvsadm -Sn > /etc/sysconfig/ipvsadm
- [root@lvs-server ~]# systemctl enable ipvsadm --now
LVS-DR mode, single network segment
Prepare five hosts:
- Client: host-only, 10.0.0.10, GW: 10.0.0.8
- Router: eth0: NAT 192.168.0.128/24, eth1: host-only 10.0.0.8/24, ip_forward enabled
- LVS:eth0: NAT 192.168.0.132/24 GW: 192.168.0.128
- RS1:eth0: NAT 192.168.0.129/24 GW: 192.168.0.128
- RS2:eth0: NAT 192.168.0.131/24 GW: 192.168.0.128
Configure the network
- [root@internet ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- IPADDR=10.0.0.10
- GATEWAY=10.0.0.8
- NETMASK=255.255.255.0
- DEFROUTE=yes
- NAME=eth1
- DEVICE=eth1
- ONBOOT=yes
- [root@router-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
- TYPE=Ethernet
- BOOTPROTO=static
- IPADDR=192.168.0.128
- NETMASK=255.255.255.0
- NAME=eth0
- DEVICE=eth0
- ONBOOT=yes
- [root@router-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- IPADDR=10.0.0.8
- NETMASK=255.255.255.0
- NAME=eth1
- DEVICE=eth1
- ONBOOT=yes
- [root@router-server ~]# cat /etc/sysctl.conf # enable IP forwarding
- net.ipv4.ip_forward = 1
- [root@router-server ~]# sysctl -p
- net.ipv4.ip_forward = 1
- [root@lvs-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- DEFROUTE=yes
- NAME=eth0
- DEVICE=eth0
- IPADDR=192.168.0.132
- NETMASK=255.255.255.0
- GATEWAY=192.168.0.128
- ONBOOT=yes
- [root@rs1-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- IPADDR=192.168.0.129
- NETMASK=255.255.255.0
- GATEWAY=192.168.0.128
- NAME=eth0
- DEVICE=eth0
- ONBOOT=yes
- [root@rs2-server ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
- TYPE=Ethernet
- PROXY_METHOD=none
- BROWSER_ONLY=no
- BOOTPROTO=static
- NAME=eth0
- DEVICE=eth0
- IPADDR=192.168.0.131
- NETMASK=255.255.255.0
- GATEWAY=192.168.0.128
- ONBOOT=yes
Configure the two RS servers:
- [root@rs1-server ~]# yum install httpd -y
- [root@rs1-server ~]# echo "`hostname` rs1 " >> /var/www/html/index.html
- [root@rs1-server ~]# systemctl enable httpd --now
- [root@rs1-server ~]# ping 10.0.0.10 -c1
- PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
- 64 bytes from 10.0.0.10: icmp_seq=1 ttl=63 time=0.637 ms
- [root@rs2-server ~]# yum install httpd -y
- [root@rs2-server ~]# echo "`hostname` rs2 " >> /var/www/html/index.html
- [root@rs2-server ~]# systemctl enable httpd --now
- [root@rs2-server ~]# ping 10.0.0.10 -c1
- PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
- 64 bytes from 10.0.0.10: icmp_seq=1 ttl=63 time=0.910 ms
Kernel ARP and VIP configuration on the RS nodes
RS1 configuration:
- [root@rs1-server ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- [root@rs1-server ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
- [root@rs1-server ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
- [root@rs1-server ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
- [root@rs1-server ~]# hostname -I
- 192.168.0.129
- [root@rs1-server ~]# ifconfig lo:1 192.168.0.100/32
- [root@rs1-server ~]# ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet 192.168.0.100/0 scope global lo:1
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
- link/ether 00:0c:29:cf:95:47 brd ff:ff:ff:ff:ff:ff
- altname enp3s0
- altname ens160
- inet 192.168.0.129/24 brd 192.168.0.255 scope global noprefixroute eth0
- valid_lft forever preferred_lft forever
- inet6 fe80::20c:29ff:fecf:9547/64 scope link
- valid_lft forever preferred_lft forever
RS2 configuration:
- [root@rs2-server ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- [root@rs2-server ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
- [root@rs2-server ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
- [root@rs2-server ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
- [root@rs2-server ~]# ifconfig lo:1 192.168.0.100/32
- [root@rs2-server ~]# ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet 192.168.0.100/0 scope global lo:1
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
- link/ether 00:0c:29:43:14:7d brd ff:ff:ff:ff:ff:ff
- altname enp3s0
- altname ens160
- inet 192.168.0.131/24 brd 192.168.0.255 scope global noprefixroute eth0
- valid_lft forever preferred_lft forever
- inet6 fe80::20c:29ff:fe43:147d/64 scope link
- valid_lft forever preferred_lft forever
LVS host configuration
- [root@lvs-server ~]# ifconfig lo:1 192.168.0.100/32
- [root@lvs-server ~]# ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet 192.168.0.100/0 scope global lo:1
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
- link/ether 00:0c:29:b1:38:0c brd ff:ff:ff:ff:ff:ff
- altname enp3s0
- altname ens160
- inet 192.168.0.132/24 brd 192.168.0.255 scope global noprefixroute eth0
- valid_lft forever preferred_lft forever
- inet6 fe80::20c:29ff:feb1:380c/64 scope link
- valid_lft forever preferred_lft forever
- [root@lvs-server ~]# ipvsadm -A -t 192.168.0.100:80 -s rr
- [root@lvs-server ~]# ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.129:80 -g
- [root@lvs-server ~]# ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.131:80 -g
- [root@lvs-server ~]# ipvsadm -Ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 192.168.0.100:80 rr
- -> 192.168.0.129:80 Route 1 0 0
- -> 192.168.0.131:80 Route 1 0 0
Test access:
- [root@internet ~]# curl 192.168.0.129
- rs1-server rs1
- [root@internet ~]# curl 192.168.0.131
- rs2-server rs2
- [root@rs1-server ~]# tail -f /var/log/httpd/access_log -n0
- 10.0.0.10 - - [31/Jan/2024:14:51:57 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.61.1"
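To exercise the director itself rather than an RS, the client should request the VIP; with the rules above the replies should alternate between rs1 and rs2 (a sketch based on the addresses configured earlier):
- [root@internet ~]# curl 192.168.0.100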
The HTTP communication process between client and server
Client / Server interaction:
- Server: create a new socket (socket), bind it to port 80 (bind), allow it to accept connections (listen), and wait for a connection (accept)
- Client: obtain the server's IP address and port, create a new socket (socket), and connect to the server's IP:port (connect)
- Server: the application is notified of the incoming connection and starts reading the request (read)
- Client: the connection succeeds; send the HTTP request and wait for the HTTP response
- Server: process the HTTP request message, write back the HTTP response (write), and close the connection (close)
- Client: process the HTTP response and close the connection (close)
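These steps can be watched from the client side with curl's verbose trace (a sketch; the URL reuses an RS from the LVS example, and the exact lines vary by curl version): the "Trying ... Connected" lines correspond to socket()/connect(), lines prefixed with ">" are the request being sent, lines prefixed with "<" are the server's response, and the final connection message reflects close or keep-alive handling.
- [root@internet ~]# curl -v http://192.168.0.129/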
Network I/O models
Blocking, non-blocking, multiplexed, signal-driven, and asynchronous
Blocking I/O model
The blocking I/O model is the simplest one: the user thread is blocked while the kernel performs the I/O operation.
The user thread issues a read system call, switching from user space to kernel space. The kernel waits until a packet arrives, then copies the received data into user space, completing the read.
The user must wait for read to fill the buffer before it can process the received data. For the whole duration of the I/O request the user thread is blocked, so it can do nothing else while waiting, which wastes CPU.
Non-blocking I/O model
The user thread's I/O request returns immediately, but no data has been read yet, so the thread has to keep re-issuing the request until the data has arrived, at which point it actually reads the data and continues. This "polling" approach has two problems. First, if many file descriptors have to be waited on, each must be read one by one, causing many context switches (read is a system call, and every call switches between user mode and kernel mode). Second, the polling interval is hard to choose: guess too long and the program's response latency grows; guess too short and the retries become so frequent that they just burn CPU. Because it wastes CPU, this model is rarely used on its own; instead its non-blocking property is used inside other I/O models.
I/O multiplexing model
Multiplexed I/O means one thread can monitor and handle the I/O of multiple file descriptors at the same time (in reality they are handled alternately, i.e. concurrently), reusing the same thread.
A single thread can handle multiple I/O streams because it calls the kernel's select, poll, or epoll system calls, which is what implements the multiplexing.
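One way to watch multiplexing happen is to trace an event-driven server such as the Nginx worker described later (a sketch; assumes strace is installed and a worker is running):
- strace -f -e trace=epoll_wait,epoll_ctl,accept4 -p $(pgrep -f 'nginx: worker' | head -n1)
Every epoll_wait return that reports ready descriptors is the single worker being told which of its many connections need attention.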
Signal-driven I/O model
With signal-driven I/O the process neither blocks nor polls; instead it asks the kernel to send it a signal when data is ready.
The procedure is to call sigaction and register a signal-handling callback. The call returns immediately and the main program continues running. When an I/O operation becomes ready, the kernel raises a SIGIO signal for the process and the registered callback runs; inside it, recvfrom is called to copy the data the process needs from kernel space to user space.
The advantage of this model is that the process is not blocked while waiting for the datagram to arrive; the main program keeps running and only waits for the notification from the signal handler.
Asynchronous I/O model
The biggest difference between asynchronous I/O and signal-driven I/O is that with signal-driven I/O the kernel tells the process when it may start an I/O operation, while with asynchronous I/O the kernel tells the process when the I/O operation has finished. The difference is fundamental: it is like ordering takeout instead of going to the restaurant, so even the time spent waiting for the food to be served is saved.
Unlike synchronous I/O, asynchronous I/O is not sequential. After the process issues an aio_read system call it returns immediately, whether or not the kernel data is ready, and the user-space process can go do something else. When the socket data is ready, the kernel copies it directly into the process and then notifies the process. The process is non-blocking during both stages of the I/O.
Nginx
Architecture
Process structure
Web request handling mechanisms
Multi-process model: for every client request the server's master process forks a child process to serve that client until the connection is closed. The advantage is fast handling and mutual independence between children, but under heavy traffic the server can exhaust its resources and stop serving requests.
Multi-threaded model: similar to the multi-process model, except that for each client request the service process spawns a thread to interact with that client. A thread costs far less than a process, so multi-threading greatly reduces the web server's demand on system resources. The drawback is that threads in the same process share the same address space and therefore affect one another, and if the main process dies, all of its threads stop working. IIS uses the multi-threaded model and needs to be restarted periodically to stay stable.
Nginx uses a multi-process model: one master process plus a set of worker processes.
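The process tree can be checked directly (a sketch; assumes nginx is installed and running as configured below):
- [root@centos nginx]# ps -ef --forest | grep -v grep | grep nginx
The output should show one master process owned by root and several worker processes owned by the nginx user (one per core when worker_processes is auto).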
Core configuration
- [root@centos nginx]# cat nginx.conf
- # Global section, effective globally: mainly configures the user/group nginx runs as, the number of worker processes, the working mode, the PID file path and the log path
- user nginx;
- worker_processes auto; # number of worker processes to start
- error_log /var/log/nginx/error.log;
- pid /run/nginx.pid;
- # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
- include /usr/share/nginx/modules/*.conf;
- events {
- # The events block mainly affects the network connections between nginx and its users: whether several connections may be accepted at once, which event-driven model handles requests, the maximum number of simultaneous connections per worker process, whether connection acceptance is serialized across worker processes, and so on.
- worker_connections 1024; # maximum concurrency a single nginx worker process can accept; as a web server the overall maximum is worker_connections * worker_processes, as a reverse proxy it is (worker_connections * worker_processes) / 2
- }
- http {
- # The http block is the key part of the nginx configuration: caching, proxying, log format definitions and most other features and third-party modules are configured here. An http block can contain multiple server blocks, and each server block can contain multiple location blocks; a server block can configure file includes, MIME-Type definitions, log definitions, whether sendfile is enabled, connection timeouts, the per-connection request limit, and so on.
- log_format main '$remote_addr - $remote_user [$time_local] "$request" '
- '$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
- access_log /var/log/nginx/access.log main;
- sendfile on; # as a web server, enable sendfile to speed up static file delivery; it makes nginx use the sendfile system call, which passes data directly between two file descriptors entirely inside the kernel, avoiding copies between kernel and user buffers ("zero copy"): disk >> kernel buffer (fast copy to the kernel socket buffer) >> protocol stack.
- tcp_nopush on;
- tcp_nodelay on;
- keepalive_timeout 65; # keep-alive timeout, in seconds
- types_hash_max_size 2048;
- include /etc/nginx/mime.types;
- default_type application/octet-stream;
- # Load modular configuration files from the /etc/nginx/conf.d directory.
- # See http://nginx.org/en/docs/ngx_core_module.html#include
- # for more information.
- include /etc/nginx/conf.d/*.conf;
- server {
- # Defines a virtual host. It can contain its own global settings as well as multiple location blocks, e.g. the port this virtual host listens on and its name and IP; several server blocks can share one port, for example all providing web service on port 80.
- listen 80 default_server;
- listen [::]:80 default_server;
- server_name _; # name of this server; when this name is accessed, nginx matches the request against the configuration inside this server block.
- root /usr/share/nginx/html; # directory holding the default pages; by default relative to the installation directory, an absolute path can also be used.
- # Load configuration files for the default server block.
- include /etc/nginx/default.d/*.conf;
- location / {
- # location is actually a directive of server and provides nginx's most numerous and flexible matching rules; based on the request string nginx receives, the requested URI is matched and handled by the corresponding directives.
- }
- error_page 404 /404.html;
- location = /40x.html {
- # location that maps the corresponding error codes to the /40x.html page, resolved against the root directory defined in this server block.
- }
- error_page 500 502 503 504 /50x.html;
- location = /50x.html {
- # location that maps the error codes 500 502 503 504 to the /50x.html page, resolved against the root directory defined in this server block.
- }
- }
- # Settings for a TLS enabled server.
- #
- # server {
- # listen 443 ssl http2 default_server;
- # listen [::]:443 ssl http2 default_server;
- # server_name _;
- # root /usr/share/nginx/html;
- #
- # ssl_certificate "/etc/pki/nginx/server.crt";
- # ssl_certificate_key "/etc/pki/nginx/private/server.key";
- # ssl_session_cache shared:SSL:1m;
- # ssl_session_timeout 10m;
- # ssl_ciphers PROFILE=SYSTEM;
- # ssl_prefer_server_ciphers on;
- #
- # # Load configuration files for the default server block.
- # include /etc/nginx/default.d/*.conf;
- #
- # location / {
- # }
- #
- # error_page 404 /404.html;
- # location = /40x.html {
- # }
- #
- # error_page 500 502 503 504 /50x.html;
- # location = /50x.html {
- # }
- # }
- }
Tuning:
- user nginx;
- worker_processes [number | auto]; # number of nginx worker processes to start, usually set equal to the number of CPU cores
- worker_cpu_affinity 00000001 00000010 00000100 00001000 | auto; # bind each nginx worker process to a specific CPU core. By default nginx does not pin processes; pinning does not mean the worker owns the core exclusively, but it guarantees the process will not run on other cores, which greatly reduces worker migration between cores and the associated scheduling and memory-management overhead, effectively improving nginx performance.
- CPU MASK: 00000001: CPU 0
- 00000010: CPU 1
- 10000000: CPU 7
- # example
- worker_cpu_affinity 0001 0010 0100 1000; # CPU 0 through CPU 3
- error_log /apps/nginx/logs/error.log error; # error log configuration
- pid /apps/nginx/logs/nginx.pid; # path of the pid file
- worker_priority 0; # worker process priority, -20 to 19
- worker_rlimit_nofile 65536; # upper limit on the number of files all worker processes may open; this covers all of nginx's connections, not just those with clients. The actual number of concurrent connections also cannot exceed the system-level limit, so it is best kept in line with ulimit -n or limits.conf
- daemon off; # run nginx in the foreground, for testing, docker and similar environments
- master_process on | off; # whether to enable the master-worker working model; off is only for development and debugging, the default is on
- events {
- worker_connections 1024; # maximum concurrent connections per worker process
- }
Configuring nginx for high concurrency; first a load test that hits the default limits:
- [root@centos nginx]# while true;do ab -c 5000 -n 10000 http://10.0.0.8/;sleep 0.5;done
- This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
- Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
- Licensed to The Apache Software Foundation, http://www.apache.org/
- Benchmarking 10.0.0.8 (be patient)
- Completed 1000 requests
- Completed 2000 requests
- Completed 3000 requests
- Completed 4000 requests
- Completed 5000 requests
- Completed 6000 requests
- Completed 7000 requests
- Completed 8000 requests
- Completed 9000 requests
- [root@centos nginx]# tail -f /var/log/nginx/error.log
- 2024/02/22 11:26:24 [crit] 40383#0: accept4() failed (24: Too many open files)
- 2024/02/22 11:26:24 [crit] 40382#0: accept4() failed (24: Too many open files)
- 2024/02/22 11:26:24 [crit] 40383#0: accept4() failed (24: Too many open files)
- 2024/02/22 11:26:24 [crit] 40382#0: accept4() failed (24: Too many open files)
- 2024/02/22 11:26:25 [crit] 40383#0: accept4() failed (24: Too many open files)
Raise the concurrent-connection and open-file limits:
- [root@centos nginx]# ulimit -n 102400
- [root@centos nginx]# cat /usr/lib/systemd/system/nginx.service
- LimitNOFILE=100000
- [root@centos nginx]# systemctl daemon-reload
- [root@centos nginx]# cat nginx.conf
- worker_rlimit_nofile 102400;
- events {
- worker_connections 102400;
- }
- [root@centos nginx]# systemctl restart nginx
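After the restart, the limit actually applied to the running workers can be checked from /proc (a sketch; the PID lookup assumes the worker process title shown by ps):
- [root@centos nginx]# grep 'open files' /proc/$(pgrep -f 'nginx: worker' | head -n1)/limits
It should now report 100000, the LimitNOFILE value from the unit file, and re-running the ab loop should no longer produce "Too many open files" errors.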
One-click script to compile and install Nginx
[root@centos data]# cat install.sh
#!/bin/bash
#
#*******************************************************************
#Author:        yoobak
#QQ:            877441893
#Date:          2024-02-23
#FileName:      instal-nginx.sh
#URL:           https://www.cnblogs.com/yoobak
#Description:   The test script
#Copyright (C): 2024 All rights reserved
#*******************************************************************
SRC_DIR=/usr/local/src/
NGINX_INSTALL_DIR=/apps/nginx
version=nginx-1.24.0
download=https://nginx.org/download/

install() {
    yum install -y gcc pcre-devel openssl-devel zlib-devel
    cd /data && wget ${download}${version}.tar.gz && tar xf ${version}.tar.gz
    cd $version && ./configure --prefix=${NGINX_INSTALL_DIR} --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module
    make -j `lscpu | awk '/^CPU\(s\)/{print $2}'` && make install
}

useradd() {
    # "command" avoids recursing into this function, which shadows the system useradd
    command useradd -s /sbin/nologin nginx
    chown -R nginx. ${NGINX_INSTALL_DIR}
    ln -s /apps/nginx/sbin/nginx /usr/sbin/
}

cat > /lib/systemd/system/nginx.service |