k8s in practice: deploying standalone Redis and a Redis Cluster


1. Deploying standalone Redis on k8s

1.1 A brief introduction to redis

redis is an open-source, BSD-licensed, non-relational (NoSQL) database written in C, created by the Italian developer Salvatore Sanfilippo and first released in 2009. It is an in-memory store and currently one of the most popular key-value databases; in effect it provides a service that shares memory remotely over the network. memcache offers similar functionality, but compared with memcache, redis adds easy scalability, high performance, and data persistence. Its main use cases are: session sharing, commonly used in web clusters to share sessions across multiple tomcat or PHP web servers; message queues, e.g. the log cache in an ELK stack or publish/subscribe in some applications; counters, commonly used for ranking boards, product view counts, and other count-related statistics; and caching, commonly used for query results, e-commerce product information, news content, and the like. Unlike memcache, redis supports data persistence: it can save its in-memory data to disk, and after the redis service or the server restarts, it can restore data from the backup file into memory and continue serving.
1.2 PV/PVC and standalone Redis


Because the redis data (mainly the redis snapshot) lives on the storage system, the data survives even if the redis pod dies: when standalone redis is deployed on k8s and the pod goes down, k8s rebuilds the pod, mounts the corresponding PVC into it, and redis loads the snapshot, so losing a pod does not mean losing redis data.
1.3 Build the redis image
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# ll
total 1784
drwxr-xr-x  2 root root    4096 Jun  5 15:22 ./
drwxr-xr-x 11 root root    4096 Aug  9  2022 ../
-rw-r--r--  1 root root     717 Jun  5 15:20 Dockerfile
-rwxr-xr-x  1 root root     235 Jun  5 15:21 build-command.sh*
-rw-r--r--  1 root root 1740967 Jun 22  2021 redis-4.0.14.tar.gz
-rw-r--r--  1 root root   58783 Jun 22  2021 redis.conf
-rwxr-xr-x  1 root root      84 Jun  5 15:21 run_redis.sh*
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat Dockerfile
#Redis Image
# import the custom CentOS base image
FROM harbor.ik8s.cc/baseimages/magedu-centos-base:7.9.2009
# add the redis source tarball to /usr/local/src
ADD redis-4.0.14.tar.gz /usr/local/src
# build and install redis
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server  /usr/sbin/ && mkdir -pv /data/redis-data
# add the redis configuration file
ADD redis.conf /usr/local/redis/redis.conf
# expose the redis service port
EXPOSE 6379
#ADD run_redis.sh /usr/local/redis/run_redis.sh
#CMD ["/usr/local/redis/run_redis.sh"]
# add the startup script
ADD run_redis.sh /usr/local/redis/entrypoint.sh
# start redis
ENTRYPOINT ["/usr/local/redis/entrypoint.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat build-command.sh
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/redis:${TAG} .
#sleep 3
#docker push  harbor.ik8s.cc/magedu/redis:${TAG}
nerdctl build -t  harbor.ik8s.cc/magedu/redis:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/redis:${TAG}
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat run_redis.sh
#!/bin/bash
# start the redis server (redis.conf sets daemonize yes, so it backgrounds itself)
/usr/sbin/redis-server /usr/local/redis/redis.conf
# tail -f keeps a foreground process running inside the pod
tail -f  /etc/hosts
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# grep -v '^#\|^$' redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis#
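With these files in place, the image is built and pushed in one step; the positional argument becomes the image tag, and v4.0.14 matches the tag referenced by the Deployment later on:

bash build-command.sh v4.0.14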

1.3.1 Verify the redis image was pushed to Harbor
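The Harbor UI screenshot is omitted here; as a quick sanity check you can also pull the image back from any node (a sketch, assuming v4.0.14 was the tag used above):

nerdctl pull harbor.ik8s.cc/magedu/redis:v4.0.14   # add --insecure-registry if Harbor uses self-signed certs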


1.4 Test the redis image

1.4.1 Run the redis image as a container and check that it runs normally
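A minimal smoke test on any container host; the container name redis-test is arbitrary:

nerdctl run -d --name redis-test -p 6379:6379 harbor.ik8s.cc/magedu/redis:v4.0.14
nerdctl ps | grep redis-test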


1.4.2 Connect to redis remotely and check that the connection works
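A sketch of the remote check; replace <host-ip> with the address of the host running the container, and note the password is the requirepass value from redis.conf:

redis-cli -h <host-ip> -p 6379 -a 123456 ping
redis-cli -h <host-ip> -p 6379 -a 123456 set k1 v1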


Being able to run the redis image as a container and to read and write data from a remote host shows that the image we built is good.
1.5 Create the PV and PVC

1.5.1 Prepare the redis data directory on the NFS server
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis-datadir-1
mkdir: created directory '/data/k8sdata/magedu/redis-datadir-1'
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)
/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis-datadir-1 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
1.5.2 Create the PV
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.0.42
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
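Apply the PV manifest and confirm the PV registers as Available:

kubectl apply -f redis-persistentvolume.yaml
kubectl get pv redis-datadir-pv-1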

1.5.3 Create the PVC
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
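Apply the PVC manifest and check that it binds to the PV created above:

kubectl apply -f redis-persistentvolumeclaim.yaml
kubectl get pvc redis-datadir-pvc-1 -n magedu   # should show Bound to redis-datadir-pv-1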

1.6 Deploy the redis service
root@k8s-master01:~/k8s-data/yaml/magedu/redis# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.ik8s.cc/magedu/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/data/redis-data/"
            name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
root@k8s-master01:~/k8s-data/yaml/magedu/redis#
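Applying this manifest fails on a default cluster, because nodePort 36379 is outside the default 30000-32767 node-port range (a sketch; the exact error wording varies by version):

kubectl apply -f redis.yaml
# typically rejected with something like:
#   spec.ports[0].nodePort: Invalid value: 36379: provided port is not in the valid range.
#   The range of valid ports is 30000-32767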

The error above says our service port is out of range; this is because of the service node-port range that was specified when the k8s cluster was initialized.
1.6.1 Modify the NodePort port range


Edit /etc/systemd/system/kube-apiserver.service and change the value of its --service-node-port-range option; the other two master nodes need the same change.
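A sketch of what the flag looks like in the unit file; the surrounding ExecStart flags are elided, and the range shown is just an example that covers 36379:

# in /etc/systemd/system/kube-apiserver.service, inside ExecStart:
#   --service-node-port-range=30000-65535
grep -- '--service-node-port-range' /etc/systemd/system/kube-apiserver.service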
1.6.2 Reload the unit file and restart kube-apiserver
root@k8s-master01:~# systemctl daemon-reload
root@k8s-master01:~# systemctl restart kube-apiserver.service
root@k8s-master01:~#
Deploy redis again.

1.7 Verify redis data reads and writes

1.7.1 Connect to port 36379 on any k8s node and test reading and writing data
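A sketch of the test through the NodePort, assuming 192.168.0.34 is one of the node IPs (as in the Python example later) and using the requirepass value from redis.conf:

redis-cli -h 192.168.0.34 -p 36379 -a 123456 set testkey hello
redis-cli -h 192.168.0.34 -p 36379 -a 123456 get testkey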


1.8 Verify whether data survives a redis pod rebuild

1.8.1 Check whether the redis snapshot file landed on the storage
root@harbor:~# ll /data/k8sdata/magedu/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Jun  5 16:29 ./
drwxr-xr-x 8 root root 4096 Jun  5 15:53 ../
-rw-r--r-- 1 root root  116 Jun  5 16:29 dump.rdb
root@harbor:~#
After we wrote data into redis, redis noticed the key changes within the configured save window and took a snapshot; because the redis data directory is an NFS share mounted through the PV/PVC, the snapshot file is visible in the corresponding directory on the NFS server.
1.8.2 Delete the redis pod and wait for k8s to rebuild it
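A sketch of the rebuild test; the Deployment controller recreates the pod and mounts the same PVC:

kubectl delete pod -n magedu -l app=devops-redis
kubectl get pods -n magedu -w   # watch the replacement pod come up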


1.8.3 Verify the data in the rebuilt redis pod


The rebuilt redis pod still holds the previous pod's data, which shows that k8s mounted the previous pod's PVC during the rebuild.
2. Deploying a Redis Cluster on k8s

2.1 PV/PVC and the Redis Cluster StatefulSet


A redis cluster is a little more involved than standalone redis. As before, we use PV/PVC to keep the cluster's data on the storage system. Unlike standalone redis, a redis cluster runs CRC16 over each incoming key and takes the result modulo 16384; the resulting number is the slot the key is stored in. The 16384 slots are divided evenly among all master nodes in the cluster, so each master holds a portion of the cluster's data. This creates a problem: if a master goes down, the data in its slots becomes unavailable. To avoid a single point of failure we make each master highly available by dedicating a slave node to back it up; if the master goes down, its slave takes over and keeps serving the cluster. As in the diagram above, we use a 3-master, 3-slave cluster: redis-0, 1, and 2 are masters, and redis-3, 4, and 5 are their respective slaves, each backing up its own master's data. All six pods store their data on the storage system through the cluster's PV/PVC.
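You can see the slot mapping directly, since redis exposes the CRC16-mod-16384 calculation through the CLUSTER KEYSLOT command (a sketch; run it against any cluster node once the cluster is up):

redis-cli -c -h <any-cluster-node> cluster keyslot somekey   # prints a slot number in 0..16383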
2.2 Create the PVs

2.2.1 Prepare the redis cluster data directories on NFS
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis{0,1,2,3,4,5}
mkdir: created directory '/data/k8sdata/magedu/redis0'
mkdir: created directory '/data/k8sdata/magedu/redis1'
mkdir: created directory '/data/k8sdata/magedu/redis2'
mkdir: created directory '/data/k8sdata/magedu/redis3'
mkdir: created directory '/data/k8sdata/magedu/redis4'
mkdir: created directory '/data/k8sdata/magedu/redis5'
root@harbor:~# tail -6 /etc/exports
/data/k8sdata/magedu/redis0 *(rw,no_root_squash)
/data/k8sdata/magedu/redis1 *(rw,no_root_squash)
/data/k8sdata/magedu/redis2 *(rw,no_root_squash)
/data/k8sdata/magedu/redis3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis4 *(rw,no_root_squash)
/data/k8sdata/magedu/redis5 *(rw,no_root_squash)
root@harbor:~# exportfs  -av
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/volumes".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/pod-vol".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/myserver".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [5]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/mysite".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [7]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/images".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [8]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/static".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [11]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [12]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [13]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/zookeeper-datadir-3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [16]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis-datadir-1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [18]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis0".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [19]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis1".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [20]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis2".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [21]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis3".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [22]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis4".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [23]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/magedu/redis5".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
2.2.2 Create the PVs
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat pv/redis-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis5
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
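Apply the manifest and confirm all six PVs register:

kubectl apply -f pv/redis-cluster-pv.yaml
kubectl get pv | grep redis-cluster   # redis-cluster-pv0 .. redis-cluster-pv5, Available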

2.3 Deploy the redis cluster

2.3.1 The redis.conf file the configmap is built from
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.2 Create the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl create cm redis-conf --from-file=./redis.conf -n magedu
configmap/redis-conf created
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get cm -n magedu
NAME               DATA   AGE
kube-root-ca.crt   1      35h
redis-conf         1      6s
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.3 Verify the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl describe cm redis-conf -n magedu
Name:         redis-conf
Namespace:    magedu
Labels:       <none>
Annotations:  <none>
Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
BinaryData
====
Events:  <none>
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.4 Deploy the redis cluster
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: magedu
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
  clusterIP: None

---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: magedu
  labels:
    app: redis
spec:
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis-access
    protocol: TCP
    port: 6379
    targetPort: 6379
    nodePort: 36379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: magedu
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:4.0.14
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        - containerPort: 16379
          name: cluster
          protocol: TCP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
        - name: data
          mountPath: /var/lib/redis
      volumes:
      - name: conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: magedu
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
The manifest above uses a StatefulSet controller to create six pod replicas. Each replica mounts the configuration file from the configmap as its redis config, and a PVC template lets each pod automatically bind a PV and create its own PVC in the magedu namespace: as long as there are free PVs in the cluster, each pod gets a PVC created from the template. We could also use a storage class to create PVCs automatically, or create the PVCs in advance; with a StatefulSet controller the usual approach is to let the PVC template create them (provided k8s has enough PVs available).
Apply the manifest to deploy the redis cluster.
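A sketch of applying the manifest and watching the six pods come up:

kubectl apply -f redis.yaml
kubectl get pods -n magedu -l app=redis -w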

With a StatefulSet controller, pods are named <statefulset-name>-<ordinal>, and PVCs created from the template are named <template-name>-<pod-name>, i.e. <template-name>-<sts-name>-<ordinal>.
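Given the template name data and the StatefulSet name redis, the PVCs should therefore be named data-redis-0 through data-redis-5; a quick check:

kubectl get pvc -n magedu | grep data-redis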
2.4 Initialize the redis cluster

2.4.1 Create a temporary container on k8s and install the redis cluster initialization tool
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n magedu bash
If you don't see a command prompt, try pressing enter.
root@ubuntu1804:/#
root@ubuntu1804:/# apt update
# install the required tools
root@ubuntu1804:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools
# upgrade pip
root@ubuntu1804:/# pip install --upgrade pip
# install redis-trib, the redis cluster initialization tool, with pip
root@ubuntu1804:/# pip install redis-trib==0.5.1
root@ubuntu1804:/#
2.4.2 Initialize the redis cluster
root@ubuntu1804:/# redis-trib.py create \
`dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
`dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
`dig +short redis-2.redis.magedu.svc.cluster.local`:6379

Because we created the pods with a StatefulSet, the pod names are fixed, so we can initialize the redis cluster using the pod DNS names, which resolve to each pod's current IP. On physical machines or VMs we could initialize the cluster with IP addresses directly, since those addresses are fixed; on k8s, pod IPs are not stable, which is why we rely on the stable DNS names instead.
2.4.3 Assign slaves to the masters


• Make redis-3 the slave of redis-0
root@ubuntu1804:/# redis-trib.py replicate \
--master-addr `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
--slave-addr `dig +short redis-3.redis.magedu.svc.cluster.local`:6379


• Make redis-4 the slave of redis-1
root@ubuntu1804:/# redis-trib.py replicate \
--master-addr `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
--slave-addr `dig +short redis-4.redis.magedu.svc.cluster.local`:6379


• Make redis-5 the slave of redis-2
root@ubuntu1804:/# redis-trib.py replicate \
--master-addr `dig +short redis-2.redis.magedu.svc.cluster.local`:6379 \
--slave-addr `dig +short redis-5.redis.magedu.svc.cluster.local`:6379

2.5 Verify the redis cluster state

2.5.1 Exec into any redis cluster pod and check the cluster info
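A sketch of the check (redis-cli ships in the redis:4.0.14 image):

kubectl exec -it redis-0 -n magedu -- redis-cli cluster info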


2.5.2 View the cluster nodes
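A sketch of listing the nodes from any member:

kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes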


The node list records each master's id and each slave's id; a slave entry ends with the id of its master, indicating which master's data that slave replicates.
2.5.3 View the current node's info
127.0.0.1:6379> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:165c932261a105d7
redis_mode:cluster
os:Linux 5.15.0-73-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:aa8ef00d843b4f622374dbb643cf27cdbd4d5ba3
tcp_port:6379
uptime_in_seconds:4303
uptime_in_days:0
hz:10
lru_clock:8272053
executable:/data/redis-server
config_file:/etc/redis/redis.conf
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:2642336
used_memory_human:2.52M
used_memory_rss:5353472
used_memory_rss_human:5.11M
used_memory_peak:2682248
used_memory_peak_human:2.56M
used_memory_peak_perc:98.51%
used_memory_overhead:2559936
used_memory_startup:1444856
used_memory_dataset:82400
used_memory_dataset_perc:6.88%
total_system_memory:16740012032
total_system_memory_human:15.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.03
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992849
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:245760
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats
total_connections_received:7
total_commands_processed:17223
instantaneous_ops_per_sec:1
total_net_input_bytes:1530962
total_net_output_bytes:108793
instantaneous_input_kbps:0.04
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:853
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
# Replication
role:master
connected_slaves:1
slave0:ip=10.200.155.175,port=6379,state=online,offset=1120,lag=1
master_replid:60381a28fee40b44c409e53eeef49215a9d3b0ff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1120
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1120
# CPU
used_cpu_sys:12.50
used_cpu_user:7.51
used_cpu_sys_children:0.01
used_cpu_user_children:0.00
# Cluster
cluster_enabled:1
# Keyspace
127.0.0.1:6379>
2.5.4 Verify reads and writes against the redis cluster

2.5.4.1 Read and write data by connecting to the cluster manually


When connecting to a cluster master manually, there is one catch: the slot computed from CRC16(key) mod 16384 may not live on the node you are connected to, in which case redis replies with a redirection telling you where the key belongs. The screenshots above show that the redis cluster is reading and writing data normally.
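redis-cli's -c flag enables cluster mode, which follows those MOVED redirections automatically instead of surfacing them (a sketch):

kubectl exec -it redis-0 -n magedu -- redis-cli -c set k1 v1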
2.5.4.2 Read and write data with a Python script
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis-client-test.py
#!/usr/bin/env python
#coding:utf-8
#Author:Zhang ShiJie
#python 2.7/3.8
#pip install redis-py-cluster
import sys,time
from rediscluster import RedisCluster
def init_redis():
    startup_nodes = [
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
        {'host': '192.168.0.34', 'port': 36379},
        {'host': '192.168.0.35', 'port': 36379},
        {'host': '192.168.0.36', 'port': 36379},
    ]
    try:
        conn = RedisCluster(startup_nodes=startup_nodes,
                            # include the password here if one is set
                            decode_responses=True, password='')
        print('connected successfully!', conn)
        #conn.set("key-cluster","value-cluster")
        for i in range(100):
            conn.set("key%s" % i, "value%s" % i)
            time.sleep(0.1)
            data = conn.get("key%s" % i)
            print(data)
        #return conn
    except Exception as e:
        print("connect error ", str(e))
        sys.exit(1)
init_redis()
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
Run the script to write data into the redis cluster:
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# python redis-client-test.py
Traceback (most recent call last):
  File "/root/k8s-data/yaml/magedu/redis-cluster/redis-client-test.py", line 8, in <module>
    from rediscluster import RedisCluster
ModuleNotFoundError: No module named 'rediscluster'
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
The traceback says the rediscluster module cannot be found; the fix is simply to install the redis-py-cluster module with pip.
Install the redis-py-cluster module:
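The install screenshot is omitted; the command is simply:

pip install redis-py-cluster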

Run the script again to read and write data against the redis cluster.

Connect to the redis pods and verify the data was written correctly.



The screenshots show that each of the three redis cluster master pods holds a share of the keys, not all of them, which confirms the Python script wrote the data into the redis cluster correctly.
Verify whether data can be read normally on a slave node.

The screenshot above shows that data cannot be read on a slave node.
Read the data on the slave's corresponding master instead.

The verification above shows that in a redis cluster only the masters read and write data; a slave merely keeps a backup of its master's data, and data cannot be read or written on a slave.
2.6 Verify redis cluster high availability

2.6.1 Push the redis:4.0.14 image to the local Harbor from a k8s node


• Re-tag the image
root@k8s-node01:~# nerdctl tag redis:4.0.14 harbor.ik8s.cc/redis-cluster/redis:4.0.14

• Push the redis image to the local Harbor
root@k8s-node01:~# nerdctl push harbor.ik8s.cc/redis-cluster/redis:4.0.14
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625)
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc"
index-sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:5bd4fe08813b057df2ae55003a75c39d80a4aea9f1a0fbc0fbd7024edf555786: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:191c4017dcdd3370f871a4c6e7e1d55c7d9abed2bebf3005fb3e7d12161262b8:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s                                                                    total:  8.5 Ki (6.1 KiB/s)
root@k8s-node01:~#
2.6.2 Update the image and imagePullPolicy in the redis cluster manifest


Switching the manifest to the local Harbor image and adjusting the pull policy makes it convenient for us to test the redis cluster's high availability; a sketch of the change is below.
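In redis.yaml, under the redis container, the two fields to change look roughly like this; the image name matches what we just pushed, while the pull policy value here is an assumption:

        image: harbor.ik8s.cc/redis-cluster/redis:4.0.14
        imagePullPolicy: Always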
2.6.3 Re-apply the redis cluster manifest
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
service/redis unchanged
service/redis-access unchanged
statefulset.apps/redis configured
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
This amounts to an update of the redis cluster; the cluster relationships survive, because the cluster relationship configuration (the nodes.conf each pod writes under /var/lib/redis) is kept on the remote storage.


• Verify all pods are Running

• Verify the cluster state and the master/slave relationships

Unlike before, redis-0 is now a slave and redis-3 is now a master. The screenshots also show that after redis cluster pods are rebuilt on k8s (and their IP addresses change), the cluster relationships are preserved: each master/slave pair only ever swaps roles between its own two pods, which is exactly the high availability we want.
2.6.4 Stop the local Harbor, delete a redis master pod, and see whether its slave is promoted to master


• Stop the Harbor service
root@harbor:~# systemctl stop harbor

• Delete redis-3 and see whether redis-0 is promoted to master
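A sketch of the failover test; with Harbor stopped, the rebuilt pod cannot pull its image, so the old master stays down while the cluster promotes its slave:

kubectl delete pod redis-3 -n magedu
kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes   # watch redis-0 become master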

After deleting redis-3 (the equivalent of a master outage), its slave was promoted to master.
2.6.5 Restore the Harbor service and see whether the recovered redis-3 comes back as redis-0's slave


• Restore the Harbor service

• Verify the redis-3 pod recovers

After redis-3 is deleted once more, the pod is rebuilt normally and reaches the Running state.


• Verify redis-3's master/slave relationship

After redis-3 recovers, it automatically rejoins the cluster as redis-0's slave.
Source: https://www.cnblogs.com/qiuhom-1874/ — copyright belongs to the author and cnblogs. Reposting is welcome, but this notice must be kept and a prominent link to the original given on the page; otherwise the author reserves the right to pursue legal liability.