Kubernetes in Practice: Running a Java Monolithic Service (Jenkins)


1. Jenkins architecture


Java applications are launched with the java command, running either a WAR or a JAR package. This example deploys the jenkins.war package and requires Jenkins data to be stored on external storage (NFS via PV/PVC); for other Java applications, whether data needs to go to external storage depends on actual requirements.
As the architecture diagram above shows, Jenkins attaches external storage through a PV/PVC on Kubernetes and exposes itself through a Service (svc). Inside the cluster, Jenkins is reachable directly through the svc; members outside the cluster access Jenkins through an external load balancer.
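The access paths described above can be sketched with a few probe commands (the Service name, namespace, and NodePort are taken from the manifests later in this post; the cluster DNS suffix is assumed to be the default cluster.local, and <node-ip> stands for any node address):

```shell
# From inside the cluster: hit the Service directly via cluster DNS
curl -I http://magedu-jenkins-service.magedu.svc.cluster.local/

# From outside: the Service also publishes NodePort 38080 on every node
curl -I http://<node-ip>:38080/
```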
2. Image preparation

2.1 Jenkins image build directory
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# tree
  .
  ├── Dockerfile
  ├── build-command.sh
  ├── jenkins-2.319.2.war
  └── run_jenkins.sh

  0 directories, 4 files
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins#
2.2 Building the Jenkins image

2.2.1 Dockerfile for the Jenkins image
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# cat Dockerfile
  #Jenkins Version 2.319.2
  FROM harbor.ik8s.cc/pub-images/jdk-base:v8.212
  ADD jenkins-2.319.2.war /apps/jenkins/jenkins.war
  ADD run_jenkins.sh /usr/bin/
  EXPOSE 8080
  CMD ["/usr/bin/run_jenkins.sh"]
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins#
The Dockerfile above starts from a jdk-base image that already provides the Java runtime, adds the Jenkins WAR package and the startup script on top of it, exposes port 8080, and finally sets the CMD that launches Jenkins.
2.2.2 Jenkins startup script
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# cat run_jenkins.sh
  #!/bin/bash
  cd /apps/jenkins && java -server -Xms1024m -Xmx1024m -Xss512k -jar jenkins.war --webroot=/apps/jenkins/jenkins-data --httpPort=8080
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins#
2.2.3 Jenkins image build script
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins# cat build-command.sh
  #!/bin/bash
  #docker build -t harbor.ik8s.cc/magedu/jenkins:v2.319.2 .
  #echo "Image build finished, uploading to the Harbor server"
  #sleep 1
  #docker push harbor.ik8s.cc/magedu/jenkins:v2.319.2
  #echo "Image upload finished"
  echo "Starting the image build, please wait!" && echo 3 && sleep 1 && echo 2 && sleep 1 && echo 1
  nerdctl build -t harbor.ik8s.cc/magedu/jenkins:v2.319.2 .
  if [ $? -eq 0 ];then
    echo "Starting the image upload, please wait!" && echo 3 && sleep 1 && echo 2 && sleep 1 && echo 1
    nerdctl push harbor.ik8s.cc/magedu/jenkins:v2.319.2
    if [ $? -eq 0 ];then
      echo "Image upload succeeded!"
    else
      echo "Image upload failed"
    fi
  else
    echo "Image build failed, please check the build output!"
  fi
  root@k8s-master01:~/k8s-data/dockerfile/web/magedu/jenkins#
Run the script to build and push the image.

2.3 Verifying the Jenkins image

2.3.1 Check on Harbor that the Jenkins image was uploaded successfully


2.3.2 Test that the Jenkins image runs correctly


2.3.3 Check that Jenkins is reachable over the web


If the page above loads, the Jenkins image was built correctly.
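Before pushing into the cluster, the image can also be smoke-tested on any node with nerdctl (a sketch; the container name jenkins-test is arbitrary, and Jenkins needs some time to initialize before it answers on port 8080):

```shell
# Start a throwaway container from the freshly built image
nerdctl run -d --name jenkins-test -p 8080:8080 harbor.ik8s.cc/magedu/jenkins:v2.319.2

# Once Jenkins has started, the login page should answer
# (a 403 before the setup wizard is unlocked is also fine)
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/login

# Clean up the test container
nerdctl rm -f jenkins-test
```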
3. Preparing the PV and PVC

3.1 Prepare the Jenkins data directories on the NFS server
  root@harbor:~# mkdir -p /data/k8sdata/magedu/{jenkins-data,jenkins-root-data}
  root@harbor:~# ll /data/k8sdata/magedu/{jenkins-data,jenkins-root-data}
  /data/k8sdata/magedu/jenkins-data:
  total 8
  drwxr-xr-x  2 root root 4096 Aug  6 03:35 ./
  drwxr-xr-x 21 root root 4096 Aug  6 03:35 ../

  /data/k8sdata/magedu/jenkins-root-data:
  total 8
  drwxr-xr-x  2 root root 4096 Aug  6 03:35 ./
  drwxr-xr-x 21 root root 4096 Aug  6 03:35 ../
  root@harbor:~# tail /etc/exports
  /data/k8sdata/magedu/mysql-datadir-1 *(rw,no_root_squash)
  /data/k8sdata/magedu/mysql-datadir-2 *(rw,no_root_squash)
  /data/k8sdata/magedu/mysql-datadir-3 *(rw,no_root_squash)
  /data/k8sdata/magedu/mysql-datadir-4 *(rw,no_root_squash)
  /data/k8sdata/magedu/mysql-datadir-5 *(rw,no_root_squash)
  /data/k8sdata/magedu/jenkins-data *(rw,no_root_squash)
  /data/k8sdata/magedu/jenkins-root-data *(rw,no_root_squash)
  root@harbor:~# exportfs -av
  exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data/k8sdata/kuboard".
    Assuming default behaviour ('no_subtree_check').
    NOTE: this default has changed since nfs-utils version 1.0.x
  [...the same no_subtree_check warning repeats for every other export, including the two new jenkins directories; trimmed here...]
  exporting *:/data/k8sdata/magedu/jenkins-root-data
  exporting *:/data/k8sdata/magedu/jenkins-data
  exporting *:/data/k8sdata/magedu/mysql-datadir-5
  exporting *:/data/k8sdata/magedu/mysql-datadir-4
  exporting *:/data/k8sdata/magedu/mysql-datadir-3
  exporting *:/data/k8sdata/magedu/mysql-datadir-2
  exporting *:/data/k8sdata/magedu/mysql-datadir-1
  exporting *:/data/k8sdata/magedu/redis5
  exporting *:/data/k8sdata/magedu/redis4
  exporting *:/data/k8sdata/magedu/redis3
  exporting *:/data/k8sdata/magedu/redis2
  exporting *:/data/k8sdata/magedu/redis1
  exporting *:/data/k8sdata/magedu/redis0
  exporting *:/data/k8sdata/magedu/redis-datadir-1
  exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
  exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
  exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
  exporting *:/data/k8sdata/magedu/static
  exporting *:/data/k8sdata/magedu/images
  exporting *:/data/k8sdata/mysite
  exporting *:/data/k8sdata/myserver
  exporting *:/pod-vol
  exporting *:/data/volumes
  exporting *:/data/k8sdata/kuboard
  root@harbor:~#
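Before wiring the exports into PVs, it can be worth confirming from a Kubernetes node that they are mountable and writable (a sketch; the NFS server address 192.168.0.42 matches the PV manifests below):

```shell
# List the exports this client is allowed to see
showmount -e 192.168.0.42

# Mount one export, prove rw access (no_root_squash lets root write), unmount
mount -t nfs 192.168.0.42:/data/k8sdata/magedu/jenkins-data /mnt
touch /mnt/.write-test && rm /mnt/.write-test
umount /mnt
```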
3.2 Create the PVs on Kubernetes
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: jenkins-datadir-pv
    namespace: magedu
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteOnce
    nfs:
      server: 192.168.0.42
      path: /data/k8sdata/magedu/jenkins-data
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: jenkins-root-datadir-pv
    namespace: magedu
  spec:
    capacity:
      storage: 100Gi
    accessModes:
      - ReadWriteOnce
    nfs:
      server: 192.168.0.42
      path: /data/k8sdata/magedu/jenkins-root-data
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv# kubectl apply -f jenkins-persistentvolume.yaml
  persistentvolume/jenkins-datadir-pv created
  persistentvolume/jenkins-root-datadir-pv created
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv#
3.3 Verify the PVs


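Note that PersistentVolumes are cluster-scoped, so the namespace field in the PV manifests above is ignored. From the command line the check looks roughly like this:

```shell
kubectl get pv jenkins-datadir-pv jenkins-root-datadir-pv
# STATUS should be Available at this point, and Bound once the PVCs exist
kubectl describe pv jenkins-datadir-pv | grep -E 'Status|Claim|Server|Path'
```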
3.4 Create the PVCs on Kubernetes
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: jenkins-datadir-pvc
    namespace: magedu
  spec:
    volumeName: jenkins-datadir-pv
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: jenkins-root-data-pvc
    namespace: magedu
  spec:
    volumeName: jenkins-root-datadir-pv
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv# kubectl apply -f jenkins-persistentvolumeclaim.yaml
  persistentvolumeclaim/jenkins-datadir-pvc created
  persistentvolumeclaim/jenkins-root-data-pvc created
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins/pv#
3.5 Verify the PVCs


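The same check from the command line (both claims should report STATUS Bound, with the matching PV in the VOLUME column):

```shell
kubectl get pvc -n magedu jenkins-datadir-pvc jenkins-root-data-pvc
```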
4. Prepare the YAML for running Jenkins on Kubernetes
  kind: Deployment
  #apiVersion: extensions/v1beta1
  apiVersion: apps/v1
  metadata:
    labels:
      app: magedu-jenkins
    name: magedu-jenkins-deployment
    namespace: magedu
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: magedu-jenkins
    template:
      metadata:
        labels:
          app: magedu-jenkins
      spec:
        containers:
        - name: magedu-jenkins-container
          image: harbor.ik8s.cc/magedu/jenkins:v2.319.2
          #imagePullPolicy: IfNotPresent
          imagePullPolicy: Always
          ports:
          - containerPort: 8080
            protocol: TCP
            name: http
          volumeMounts:
          - mountPath: "/apps/jenkins/jenkins-data/"
            name: jenkins-datadir-magedu
          - mountPath: "/root/.jenkins"
            name: jenkins-root-datadir
        volumes:
          - name: jenkins-datadir-magedu
            persistentVolumeClaim:
              claimName: jenkins-datadir-pvc
          - name: jenkins-root-datadir
            persistentVolumeClaim:
              claimName: jenkins-root-data-pvc
  ---
  kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: magedu-jenkins
    name: magedu-jenkins-service
    namespace: magedu
  spec:
    type: NodePort
    ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 38080
    selector:
      app: magedu-jenkins
5. Apply the manifest to run Jenkins
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# kubectl apply -f jenkins.yaml
  deployment.apps/magedu-jenkins-deployment created
  service/magedu-jenkins-service created
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins#
6. Verification

6.1 Verify that the Jenkins pod is running


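Besides the screenshots, the pod can be checked from the command line; a sketch using the label and Deployment name from the manifest above:

```shell
# Pod status and readiness, filtered by the app label
kubectl get pods -n magedu -l app=magedu-jenkins

# Tail the startup log; look for "Jenkins is fully up and running"
kubectl logs -n magedu deploy/magedu-jenkins-deployment --tail=20
```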
6.2 Verify that Jenkins is reachable over the web


Retrieve the initial Jenkins admin password
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# kubectl get pods -n magedu
  NAME                                             READY   STATUS      RESTARTS        AGE
  magedu-jenkins-deployment-5f6899db-zn4xg         1/1     Running     0               11m
  magedu-nginx-deployment-5589bbf4bc-6gd2w         1/1     Running     12 (107m ago)   62d
  magedu-tomcat-app1-deployment-7754c8549c-c7rtb   1/1     Running     6 (108m ago)    62d
  magedu-tomcat-app1-deployment-7754c8549c-prglk   1/1     Running     6 (108m ago)    62d
  mysql-0                                          2/2     Running     4 (108m ago)    51d
  mysql-1                                          2/2     Running     4 (108m ago)    51d
  mysql-2                                          2/2     Running     4 (108m ago)    51d
  redis-0                                          1/1     Running     4 (108m ago)    60d
  redis-1                                          1/1     Running     4 (108m ago)    60d
  redis-2                                          1/1     Running     4 (108m ago)    60d
  redis-3                                          1/1     Running     4 (108m ago)    60d
  redis-4                                          1/1     Running     4 (108m ago)    60d
  redis-5                                          1/1     Running     4 (108m ago)    60d
  ubuntu1804                                       0/1     Completed   0               60d
  zookeeper1-675c5477cb-vmwwq                      1/1     Running     6 (108m ago)    62d
  zookeeper2-759fb6c6f-7jktr                       1/1     Running     6 (108m ago)    62d
  zookeeper3-5c78bb5974-vxpbh                      1/1     Running     6 (108m ago)    62d
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins# kubectl exec -it magedu-jenkins-deployment-5f6899db-zn4xg -n magedu  cat /root/.jenkins/secrets/initialAdminPassword
  kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
  8c4a17a8ecfe4fb88ed8701cb18340df
  root@k8s-master01:~/k8s-data/yaml/magedu/jenkins#


OK, the Jenkins pod is reachable through the web page, so the Jenkins service is now running normally on Kubernetes. Next, Jenkins can be published through an external load balancer so members outside the cluster can access it.
7. Publish Jenkins on the external load balancer

ha01
  root@k8s-ha01:~# cat /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived

  global_defs {
     notification_email {
       acassen
     }
     notification_email_from Alexandre.Cassen@firewall.loc
     smtp_server 192.168.200.1
     smtp_connect_timeout 30
     router_id LVS_DEVEL
  }

  vrrp_instance VI_1 {
      state MASTER
      interface ens160
      garp_master_delay 10
      smtp_alert
      virtual_router_id 51
      priority 100
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          192.168.0.111 dev ens160 label ens160:0
          192.168.0.112 dev ens160 label ens160:1
      }
  }
  root@k8s-ha01:~# cat /etc/haproxy/haproxy.cfg
  global
          log /dev/log    local0
          log /dev/log    local1 notice
          chroot /var/lib/haproxy
          stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
          stats timeout 30s
          user haproxy
          group haproxy
          daemon
          # Default SSL material locations
          ca-base /etc/ssl/certs
          crt-base /etc/ssl/private
          # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
          ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
          ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
          ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
  defaults
          log     global
          mode    http
          option  httplog
          option  dontlognull
          timeout connect 5000
          timeout client  50000
          timeout server  50000
          errorfile 400 /etc/haproxy/errors/400.http
          errorfile 403 /etc/haproxy/errors/403.http
          errorfile 408 /etc/haproxy/errors/408.http
          errorfile 500 /etc/haproxy/errors/500.http
          errorfile 502 /etc/haproxy/errors/502.http
          errorfile 503 /etc/haproxy/errors/503.http
          errorfile 504 /etc/haproxy/errors/504.http
  listen k8s_apiserver_6443
  bind 192.168.0.111:6443
  mode tcp
  #balance leastconn
  server k8s-master01 192.168.0.31:6443 check inter 2000 fall 3 rise 5
  server k8s-master02 192.168.0.32:6443 check inter 2000 fall 3 rise 5
  server k8s-master03 192.168.0.33:6443 check inter 2000 fall 3 rise 5
  listen jenkins_80
  bind 192.168.0.112:80
  mode tcp
  server k8s-node01 192.168.0.34:38080 check inter 2000 fall 3 rise 5
  server k8s-node02 192.168.0.35:38080 check inter 2000 fall 3 rise 5
  server k8s-node03 192.168.0.36:38080 check inter 2000 fall 3 rise 5
  root@k8s-ha01:~# systemctl restart keepalived haproxy
  root@k8s-ha01:~# ss -tnl
  State            Recv-Q            Send-Q                       Local Address:Port                       Peer Address:Port           Process
  LISTEN           0                 4096                         192.168.0.111:6443                            0.0.0.0:*
  LISTEN           0                 4096                         192.168.0.112:80                              0.0.0.0:*
  LISTEN           0                 4096                         127.0.0.53%lo:53                              0.0.0.0:*
  LISTEN           0                 128                                0.0.0.0:22                              0.0.0.0:*
  root@k8s-ha01:~#
ha02
  root@k8s-ha02:~# cat /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived

  global_defs {
     notification_email {
       acassen
     }
     notification_email_from Alexandre.Cassen@firewall.loc
     smtp_server 192.168.200.1
     smtp_connect_timeout 30
     router_id LVS_DEVEL
  }

  vrrp_instance VI_1 {
      state BACKUP
      interface ens160
      garp_master_delay 10
      smtp_alert
      virtual_router_id 51
      priority 70
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          192.168.0.111 dev ens160 label ens160:0
          192.168.0.112 dev ens160 label ens160:1
      }
  }
  root@k8s-ha02:~# cat /etc/haproxy/haproxy.cfg
  global
          log /dev/log    local0
          log /dev/log    local1 notice
          chroot /var/lib/haproxy
          stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
          stats timeout 30s
          user haproxy
          group haproxy
          daemon
          # Default SSL material locations
          ca-base /etc/ssl/certs
          crt-base /etc/ssl/private
          # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
          ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
          ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
          ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
  defaults
          log     global
          mode    http
          option  httplog
          option  dontlognull
          timeout connect 5000
          timeout client  50000
          timeout server  50000
          errorfile 400 /etc/haproxy/errors/400.http
          errorfile 403 /etc/haproxy/errors/403.http
          errorfile 408 /etc/haproxy/errors/408.http
          errorfile 500 /etc/haproxy/errors/500.http
          errorfile 502 /etc/haproxy/errors/502.http
          errorfile 503 /etc/haproxy/errors/503.http
          errorfile 504 /etc/haproxy/errors/504.http
  listen k8s_apiserver_6443
  bind 192.168.0.111:6443
  mode tcp
  #balance leastconn
  server k8s-master01 192.168.0.31:6443 check inter 2000 fall 3 rise 5
  server k8s-master02 192.168.0.32:6443 check inter 2000 fall 3 rise 5
  server k8s-master03 192.168.0.33:6443 check inter 2000 fall 3 rise 5
  listen jenkins_80
  bind 192.168.0.112:80
  mode tcp
  server k8s-node01 192.168.0.34:38080 check inter 2000 fall 3 rise 5
  server k8s-node02 192.168.0.35:38080 check inter 2000 fall 3 rise 5
  server k8s-node03 192.168.0.36:38080 check inter 2000 fall 3 rise 5
  root@k8s-ha02:~# systemctl restart keepalived haproxy
  root@k8s-ha02:~#
  root@k8s-ha02:~# ss -tnl
  State            Recv-Q            Send-Q                       Local Address:Port                       Peer Address:Port           Process
  LISTEN           0                 4096                         192.168.0.111:6443                            0.0.0.0:*
  LISTEN           0                 4096                         192.168.0.112:80                              0.0.0.0:*
  LISTEN           0                 4096                         127.0.0.53%lo:53                              0.0.0.0:*
  LISTEN           0                 128                                0.0.0.0:22                              0.0.0.0:*
  root@k8s-ha02:~#
7.1 Access the load balancer VIP and check that Jenkins is reachable


Jenkins can be reached normally through the load balancer VIP, which confirms that the load balancer is successfully reverse-proxying the Jenkins service.
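From a machine outside the cluster, the full path (VIP to HAProxy to NodePort to pod) can also be probed with curl (a sketch using the VIP from the keepalived configuration above):

```shell
curl -I http://192.168.0.112/
# An HTTP response here (200, or 403 before the setup wizard is unlocked)
# confirms the load balancer is forwarding to the Jenkins NodePort
```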
Source: https://www.cnblogs.com/qiuhom-1874/. The copyright of this article is shared by the author and cnblogs. Reposting is welcome, but without the author's consent this notice must be kept and a prominent link to the original must be given on the article page; otherwise the author reserves the right to pursue legal liability.