Offline Deployment of KubeSphere on Kunpeng (ARM64) + Kylin V10


Author: 社区用户-天行1st
This article explains in detail how to build an offline installation package for KubeSphere and Kubernetes with KubeKey on Kunpeng CPUs (ARM64) running Kylin V10 SP2/SP3, and then walks through deploying a KubeSphere 3.3.1 and Kubernetes 1.22.12 cluster with it.
Server configuration used in this walkthrough:

| Hostname | IP | CPU | OS | Role |
| --- | --- | --- | --- | --- |
| master-1 | 192.168.10.2 | Kunpeng-920 | Kylin V10 SP2 | Offline environment, KubeSphere/k8s master |
| master-2 | 192.168.10.3 | Kunpeng-920 | Kylin V10 SP2 | Offline environment, KubeSphere/k8s master |
| master-3 | 192.168.10.4 | Kunpeng-920 | Kylin V10 SP2 | Offline environment, KubeSphere/k8s master |
| deploy | 192.168.200.7 | Kunpeng-920 | Kylin V10 SP3 | Internet-connected host used to build the offline package |

Software versions involved:

  • Server CPU: Kunpeng-920
  • OS: Kylin V10 SP2 aarch64
  • Docker: 24.0.7
  • Harbor: v2.7.1
  • KubeSphere: v3.3.1
  • Kubernetes: v1.22.12
  • KubeKey: v2.3.1
1. Introduction

This article describes how to build artifacts for, and deploy offline, a KubeSphere and Kubernetes cluster on aarch64 servers running Kylin V10. We use KubeKey, the deployment tool developed by the KubeSphere team, to automate a minimal, highly available deployment of Kubernetes and KubeSphere across three servers.
The main difference between deploying KubeSphere and Kubernetes on ARM versus x86 servers is the architecture of the container images involved. The open-source edition of KubeSphere supports ARM out of the box only for KubeSphere-Core, i.e. a minimal KubeSphere on top of a fully functional Kubernetes cluster. When pluggable KubeSphere components are enabled, some of them fail to deploy on ARM, and you must manually substitute official or third-party ARM images, or build ARM images yourself from the official sources. For an out-of-the-box experience and broader support, there is the enterprise edition of KubeSphere.
1.1 Verify the OS configuration

Before running through the tasks below, verify the relevant OS settings.

  • OS type

  [root@localhost ~]# cat /etc/os-release
  NAME="Kylin Linux Advanced Server"
  VERSION="V10 (Halberd)"
  ID="kylin"
  VERSION_ID="V10"
  PRETTY_NAME="Kylin Linux Advanced Server V10 (Halberd)"
  ANSI_COLOR="0;31"

  • Kernel version

  [root@node1 ~]# uname -r
  4.19.90-52.22.v2207.ky10.aarch64

  • CPU information

  [root@node1 ~]# lscpu
  Architecture:                    aarch64
  CPU op-mode(s):                  64-bit
  Byte Order:                      Little Endian
  CPU(s):                          32
  On-line CPU(s) list:             0-31
  Thread(s) per core:              1
  Core(s) per socket:              1
  Socket(s):                       32
  NUMA node(s):                    2
  Vendor ID:                       HiSilicon
  Model:                           0
  Model name:                      Kunpeng-920
  Stepping:                        0x1
  BogoMIPS:                        200.00
  NUMA node0 CPU(s):               0-15
  NUMA node1 CPU(s):               16-31
  Vulnerability Itlb multihit:     Not affected
  Vulnerability L1tf:              Not affected
  Vulnerability Mds:               Not affected
  Vulnerability Meltdown:          Not affected
  Vulnerability Spec store bypass: Not affected
  Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
  Vulnerability Spectre v2:        Not affected
  Vulnerability Srbds:             Not affected
  Vulnerability Tsx async abort:   Not affected
  Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
2. Install the K8s dependency services

This walkthrough adds an internet-connected deploy node used to build the offline deployment bundle. Because Harbor does not officially support ARM, we first install KubeSphere online and then treat the files generated by KubeKey as a pseudo-artifact. KubeSphere is therefore deployed in single-node mode on the 192.168.200.7 server.
2.1 Install Docker and docker-compose

For details, see "Kunpeng + openEuler deploy KubeSphere 3.4".
Installation package (Baidu Netdisk): https://pan.baidu.com/s/1lKtCRqxGMUxyumd4XIz4Bg?pwd=4ct2
Extract it and run the bundled install.sh.
2.2 Deploy the Harbor registry

Installation package (Baidu Netdisk): https://pan.baidu.com/s/1fL69nDOG5j92bEk84UQk7g?pwd=uian
Extract it and run the bundled install.sh.
2.3 Download the K8s dependency packages for Kylin
  mkdir -p /root/kubesphere/k8s-init
  # The following command downloads the RPMs without installing them
  yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init
  # Write the install script
  vim install.sh
  #!/bin/bash
  #
  rpm -ivh *.rpm --force --nodeps
  # Pack everything into a tarball for the offline deployment
  tar -czvf k8s-init-Kylin_V10-arm.tar.gz ./k8s-init/*
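Before shipping the archive to the offline hosts, it is worth checking that it actually contains the RPMs. Below is a self-contained sketch of the pack-and-verify pattern; it uses a scratch directory with stub files in place of the real /root/kubesphere/k8s-init.

```shell
#!/usr/bin/env bash
# Pack a directory the same way as above, then list the archive's
# contents without extracting it. Stub .rpm files stand in for the
# RPMs that yum downloaded.
set -e
work=$(mktemp -d)
mkdir -p "$work/k8s-init"
touch "$work/k8s-init/openssl.rpm" "$work/k8s-init/socat.rpm"
tar -czf "$work/k8s-init-demo.tar.gz" -C "$work" k8s-init
# List the archive contents without extracting
listing=$(tar -tzf "$work/k8s-init-demo.tar.gz")
rm -r "$work"
echo "$listing"
```

On the real deploy node, `tar -tzf k8s-init-Kylin_V10-arm.tar.gz` should list every RPM that yum downloaded.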
2.4 Pull the images

Pull the ARM images required by KubeSphere 3.3.1.
  #!/bin/bash
  #
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
  docker pull registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:latest
  docker pull kubesphere/fluent-bit:v2.0.6
The images above come from KubeSphere's Alibaba Cloud registry, and some of them will fail to download. For images that fail, pull them directly from hub.docker.com on a local machine, for example:
  docker pull kubesphere/fluent-bit:v2.0.6 --platform arm64
  # The official ks-console:v3.3.1 (ARM build) does not run on Kylin; according to 运维有术, it needs a node14
  # base image. Building it ourselves on the Kunpeng server failed: the Taobao npm source's HTTPS certificate
  # had expired, and https://registry.npmmirror.com still errored, so we gave up building and reuse this
  # third-party 3.3.0 image, renamed to 3.3.1.
  docker pull zl862520682/ks-console:v3.3.0
  docker tag zl862520682/ks-console:v3.3.0 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
  ## mc and minio also need to be re-pulled and re-tagged
  docker pull minio/minio:RELEASE.2020-11-25T22-36-25Z-arm64
  docker tag  minio/minio:RELEASE.2020-11-25T22-36-25Z-arm64 dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
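With roughly fifty images, pulls fail intermittently, so a small retry wrapper saves manual babysitting. The sketch below stubs out the actual `docker pull` so it runs anywhere; in real use, replace the stub's body with the pull command shown in the comment.

```shell
#!/usr/bin/env bash
# Retry wrapper: attempt a pull up to 3 times before giving up.
# "pull" is stubbed so the sketch runs without Docker; replace its
# body with: docker pull --platform linux/arm64 "$1"
attempts=0
pull() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 2 ]   # stub: fail once, then succeed
}
pull_with_retry() {
  local img=$1 n
  for n in 1 2 3; do
    if pull "$img"; then
      result="pulled $img after $n attempt(s)"
      return 0
    fi
    sleep 0   # back off between attempts in real use
  done
  result="failed $img"
  return 1
}
pull_with_retry kubesphere/fluent-bit:v2.0.6
echo "$result"
```

Any image that still fails after all retries is a candidate for the manual Docker Hub workaround above.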
2.5 Rename the images

Re-tag the images so they point at the private registry:
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.3  dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.3  dockerhub.kubekey.local/kubesphereio/cni:v3.27.3
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.27.3  dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.3  dockerhub.kubekey.local/kubesphereio/node:v3.27.3
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1  dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14  dockerhub.kubekey.local/kubesphereio/alpine:3.14
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20  dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1  dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1  dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1  dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1  dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12  dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12  dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12  dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12  dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0  dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0  dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0  dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11  dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1  dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1  dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2  dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0  dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0  dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1  dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0  dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0  dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0  dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0  dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0  dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0  dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03  dockerhub.kubekey.local/kubesphereio/docker:19.03
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2  dockerhub.kubekey.local/kubesphereio/metrics-server:v0.4.2
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5  dockerhub.kubekey.local/kubesphereio/pause:3.5
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0  dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0  dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z  dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z  dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0  dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0  dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1  dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4  dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
  docker tag  registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12  dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2    dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2   dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2   dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2  dockerhub.kubekey.local/kubesphereio/node:v3.23.2
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0 dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
  docker tag registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:latest dockerhub.kubekey.local/kubesphereio/busybox:latest
  docker tag kubesphere/fluent-bit:v2.0.6 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6 # could instead be tagged v1.8.11 to skip the later fluent-bit YAML edit; here we keep v2.0.6 and edit afterwards
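Since nearly all of the `docker tag` lines above follow one pattern (same repository name and tag, different registry prefix), they can be generated from a single image list instead of being maintained by hand. A sketch; it is a dry run that only prints the commands (pipe the output to `sh` to execute them), and the three images shown are a sample of the full list:

```shell
#!/usr/bin/env bash
# Generate "docker tag" commands that re-point images at the private
# registry. Dry run: prints the commands instead of executing them.
src="registry.cn-beijing.aliyuncs.com/kubesphereio"
dst="dockerhub.kubekey.local/kubesphereio"
images=(
  ks-console:v3.3.1
  kube-apiserver:v1.22.12
  coredns:1.8.0
)
cmds=""
for img in "${images[@]}"; do
  cmds+="docker tag ${src}/${img} ${dst}/${img}"$'\n'
done
printf '%s' "$cmds"
```

The handful of images that change name or tag between source and destination (ks-console, minio, k8s-dns-node-cache) still need their explicit one-off lines.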
2.6 Push the images to the Harbor registry
  #!/bin/bash
  #
  docker load < ks3.3.1-images.tar.gz
  docker login -u admin -p Harbor12345 dockerhub.kubekey.local
  docker push dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1
  docker push dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1
  docker push dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1
  docker push dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1
  docker push dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1
  docker push dockerhub.kubekey.local/kubesphereio/alpine:3.14
  docker push dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12
  docker push dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12
  docker push dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12
  docker push dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12
  docker push dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0
  docker push dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0
  docker push dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2
  docker push dockerhub.kubekey.local/kubesphereio/cni:v3.23.2
  docker push dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2
  docker push dockerhub.kubekey.local/kubesphereio/node:v3.23.2
  docker push dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0
  docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v1.8.11
  docker push dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
  docker push dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
  docker push dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2
  docker push dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0
  docker push dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0
  docker push dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
  docker push dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
  docker push dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0
  docker push dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
  docker push dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0
  docker push dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
  docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
  docker push dockerhub.kubekey.local/kubesphereio/docker:19.03
  docker push dockerhub.kubekey.local/kubesphereio/pause:3.5
  docker push dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0
  docker push dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
  docker push dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0
  docker push dockerhub.kubekey.local/kubesphereio/coredns:1.8.0
  docker push dockerhub.kubekey.local/kubesphereio/log-sidecar-injector:1.1
  docker push dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12
  docker push dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  docker push dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  docker push dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
  docker push dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine
  docker push dockerhub.kubekey.local/kubesphereio/haproxy:2.3
  docker push dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0
  docker push dockerhub.kubekey.local/kubesphereio/busybox:latest
  docker push dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
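The push assumes the kubesphereio project already exists in Harbor. The create_project_harbor.sh script referenced later in the offline bundle can be a thin wrapper around Harbor's v2 REST API; here is a hedged sketch that only builds and prints the curl command so it runs anywhere (drop the `echo` and run the command itself against your registry; verify the API path and payload against your Harbor version):

```shell
#!/usr/bin/env bash
# Build the Harbor "create project" API call for the kubesphereio
# namespace. Dry run: prints the curl command instead of executing it.
url="https://dockerhub.kubekey.local"
user="admin"
passwd="Harbor12345"
project="kubesphereio"
cmd="curl -k -u ${user}:${passwd} -X POST -H 'Content-Type: application/json' ${url}/api/v2.0/projects -d '{\"project_name\": \"${project}\", \"public\": true}'"
echo "$cmd"
```

Marking the project public lets the cluster nodes pull without per-node `docker login`, at the cost of open read access inside the offline network.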
3. Use KubeKey to deploy KubeSphere

3.1 Remove the podman bundled with Kylin

podman is the container engine that ships with Kylin. Uninstall it to avoid conflicts with Docker; otherwise CoreDNS/NodeLocalDNS will later fail to start and you will hit assorted Docker permission problems.

  yum remove podman
3.2 Download KubeKey

Download kubekey-v2.3.1-linux-arm64.tar.gz. Available versions are listed on the KubeKey releases page.

  • Method 1

  cd ~
  mkdir kubesphere
  cd kubesphere/
  # Use the CN download zone (when GitHub access is restricted)
  export KKZONE=cn
  # Download and extract kk (depending on the network, this may take several attempts)
  curl -sfL https://get-kk.kubesphere.io/v2.3.1/kubekey-v2.3.1-linux-arm64.tar.gz | tar xzf -

  • Method 2

Download the release from Releases · kubesphere/kubekey on a local machine, upload it to the server's /root/kubesphere directory, and extract it:

  tar zxf kubekey-v2.3.1-linux-arm64.tar.gz
3.3 Generate the cluster configuration file

Create the cluster configuration file; this example targets KubeSphere 3.3.1 and Kubernetes 1.22.12.

  ./kk create config -f kubesphere-v331-v12212.yaml --with-kubernetes v1.22.12 --with-kubesphere v3.3.1

On success, a configuration file named kubesphere-v331-v12212.yaml is generated in the current directory.
Note: the generated default configuration is long, so it is not reproduced in full here; see the official configuration examples for the complete set of parameters.
The final offline cluster uses 3 nodes serving simultaneously as control-plane, etcd, and worker nodes; the example below, however, is the single-node configuration used on the internet-connected deploy host.
Edit kubesphere-v331-v12212.yaml; the changes mainly concern the kind: Cluster and kind: ClusterConfiguration sections.
In the kind: Cluster section, adjust hosts, roleGroups, and related settings:

  • hosts: each node's IP, SSH user, SSH password, and SSH port. Important: always set arch: arm64 explicitly, otherwise x86 packages will be installed during deployment.
  • roleGroups: which nodes act as etcd, control-plane, and worker nodes; the same machines can be reused for all roles.
  • controlPlaneEndpoint: for a multi-master setup, uncomment internalLoadbalancer to enable the built-in HAProxy load balancer.
  • registry: points at the private Harbor registry dockerhub.kubekey.local, with namespaceOverride set to kubesphereio.
The modified example:
  apiVersion: kubekey.kubesphere.io/v1alpha2
  kind: Cluster
  metadata:
    name: sample
  spec:
    hosts:
    - {name: node1, address: 192.168.200.7, internalAddress: 192.168.200.7, user: root, password: "123456", arch: arm64}
    roleGroups:
      etcd:
      - node1
      control-plane:
      - node1
      worker:
      - node1
      registry:
      - node1
    controlPlaneEndpoint:
      ## Internal loadbalancer for apiservers
      # internalLoadbalancer: haproxy
      domain: lb.kubesphere.local
      address: ""
      port: 6443
    kubernetes:
      version: v1.22.12
      clusterName: cluster.local
      autoRenewCerts: true
      containerManager: docker
    etcd:
      type: kubekey
    network:
      plugin: calico
      kubePodsCIDR: 10.233.64.0/18
      kubeServiceCIDR: 10.233.0.0/18
      ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
      multusCNI:
        enabled: false
    registry:
      type: harbor
      auths:
        "dockerhub.kubekey.local":
          username: admin
          password: Harbor12345
      privateRegistry: "dockerhub.kubekey.local"
      namespaceOverride: "kubesphereio"
      registryMirrors: []
      insecureRegistries: []
    addons: []
  ---
  apiVersion: installer.kubesphere.io/v1alpha1
  kind: ClusterConfiguration
  metadata:
    name: ks-installer
    namespace: kubesphere-system
    labels:
      version: v3.3.1
  spec:
    persistence:
      storageClass: ""
    authentication:
      jwtSecret: ""
    zone: ""
    local_registry: ""
    namespace_override: ""
    # dev_tag: ""
    etcd:
      monitoring: true
      endpointIps: localhost
      port: 2379
      tlsEnable: true
    common:
      core:
        console:
          enableMultiLogin: true
          port: 30880
          type: NodePort
      # apiserver:
      #  resources: {}
      # controllerManager:
      #  resources: {}
      redis:
        enabled: false
        volumeSize: 2Gi
      openldap:
        enabled: false
        volumeSize: 2Gi
      minio:
        volumeSize: 20Gi
      monitoring:
        # type: external
        endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
        GPUMonitoring:
          enabled: false
      gpu:
        kinds:
        - resourceName: "nvidia.com/gpu"
          resourceType: "GPU"
          default: true
      es:
        # master:
        #   volumeSize: 4Gi
        #   replicas: 1
        #   resources: {}
        # data:
        #   volumeSize: 20Gi
        #   replicas: 1
        #   resources: {}
        logMaxAge: 7
        elkPrefix: logstash
        basicAuth:
          enabled: false
          username: ""
          password: ""
        externalElasticsearchHost: ""
        externalElasticsearchPort: ""
    alerting:
      enabled: true
      # thanosruler:
      #   replicas: 1
      #   resources: {}
    auditing:
      enabled: false
      # operator:
      #   resources: {}
      # webhook:
      #   resources: {}
    devops:
      enabled: false
      # resources: {}
      jenkinsMemoryLim: 8Gi
      jenkinsMemoryReq: 4Gi
      jenkinsVolumeSize: 8Gi
    events:
      enabled: false
      # operator:
      #   resources: {}
      # exporter:
      #   resources: {}
      # ruler:
      #   enabled: true
      #   replicas: 2
      #   resources: {}
    logging:
      enabled: true
      logsidecar:
        enabled: true
        replicas: 2
        # resources: {}
    metrics_server:
      enabled: false
    monitoring:
      storageClass: ""
      node_exporter:
        port: 9100
        # resources: {}
      # kube_rbac_proxy:
      #   resources: {}
      # kube_state_metrics:
      #   resources: {}
      # prometheus:
      #   replicas: 1
      #   volumeSize: 20Gi
      #   resources: {}
      #   operator:
      #     resources: {}
      # alertmanager:
      #   replicas: 1
      #   resources: {}
      # notification_manager:
      #   resources: {}
      #   operator:
      #     resources: {}
      #   proxy:
      #     resources: {}
      gpu:
        nvidia_dcgm_exporter:
          enabled: false
          # resources: {}
    multicluster:
      clusterRole: none
    network:
      networkpolicy:
        enabled: false
      ippool:
        type: none
      topology:
        type: none
    openpitrix:
      store:
        enabled: true
    servicemesh:
      enabled: false
      istio:
        components:
          ingressGateways:
          - name: istio-ingressgateway
            enabled: false
          cni:
            enabled: false
    edgeruntime:
      enabled: false
      kubeedge:
        enabled: false
        cloudCore:
          cloudHub:
            advertiseAddress:
              - ""
          service:
            cloudhubNodePort: "30000"
            cloudhubQuicNodePort: "30001"
            cloudhubHttpsNodePort: "30002"
            cloudstreamNodePort: "30003"
            tunnelNodePort: "30004"
          # resources: {}
          # hostNetWork: false
        iptables-manager:
          enabled: true
          mode: "external"
          # resources: {}
        # edgeService:
        #   resources: {}
    terminal:
      timeout: 600
3.4 Run the installation

  ./kk create cluster -f kubesphere-v331-v12212.yaml
We install KubeSphere on this node because, during installation, KubeKey creates a kubekey directory and downloads everything Kubernetes needs into it. The offline installation later relies mainly on this kubekey directory plus a few scripts, which together stand in for a purpose-built offline artifact.
4. Build the offline installation resources

4.1 Export the basic K8s dependency packages

  yum -y install openssl socat conntrack ipset ebtables chrony ipvsadm --downloadonly --downloaddir /root/kubesphere/k8s-init
  # Pack into a tarball
  tar -czvf k8s-init-Kylin_V10-arm.tar.gz ./k8s-init/*
4.2 Export the images needed by KubeSphere

Export the KubeSphere-related images to ks3.3.1-images.tar.

  docker save -o ks3.3.1-images.tar dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.3 dockerhub.kubekey.local/kubesphereio/cni:v3.27.3 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.27.3 dockerhub.kubekey.local/kubesphereio/node:v3.27.3 dockerhub.kubekey.local/kubesphereio/ks-console:v3.3.1 dockerhub.kubekey.local/kubesphereio/alpine:3.14 dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20 dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-installer:v3.3.1 dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.3.1 dockerhub.kubekey.local/kubesphereio/openpitrix-jobs:v3.3.1 dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.22.12 dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.22.12 dockerhub.kubekey.local/kubesphereio/provisioner-localpv:3.3.0 dockerhub.kubekey.local/kubesphereio/linux-utils:3.3.0 dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.5.0 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6 dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1 dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1 dockerhub.kubekey.local/kubesphereio/thanos:v0.25.2 dockerhub.kubekey.local/kubesphereio/prometheus:v2.34.0 dockerhub.kubekey.local/kubesphereio/fluentbit-operator:v0.13.0 dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1 dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0 dockerhub.kubekey.local/kubesphereio/notification-manager:v1.4.0 dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0 dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v1.4.0 dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0 dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0 dockerhub.kubekey.local/kubesphereio/docker:19.03 dockerhub.kubekey.local/kubesphereio/metrics-server:v0.4.2 dockerhub.kubekey.local/kubesphereio/pause:3.5 dockerhub.kubekey.local/kubesphereio/configmap-reload:v0.5.0 dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0 dockerhub.kubekey.local/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z dockerhub.kubekey.local/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.8.0 dockerhub.kubekey.local/kubesphereio/coredns:1.8.0 dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4 dockerhub.kubekey.local/kubesphereio/redis:5.0.14-alpine dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.15.12 dockerhub.kubekey.local/kubesphereio/node:v3.23.2 dockerhub.kubekey.local/kubesphereio/pod2daemon-flexvol:v3.23.2 dockerhub.kubekey.local/kubesphereio/cni:v3.23.2 dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.23.2 dockerhub.kubekey.local/kubesphereio/haproxy:2.3 dockerhub.kubekey.local/kubesphereio/busybox:latest dockerhub.kubekey.local/kubesphereio/opensearch:2.6.0 dockerhub.kubekey.local/kubesphereio/fluent-bit:v2.0.6
Compress it:

  gzip ks3.3.1-images.tar
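Before shipping the multi-gigabyte bundle into the offline network, verify that the compressed archive is intact; `gzip -t` checks the whole stream without decompressing it to disk. A self-contained sketch of the check, using a stub file in place of the real ks3.3.1-images.tar.gz:

```shell
#!/usr/bin/env bash
# Verify a gzip archive's integrity; substitute ks3.3.1-images.tar.gz
# in practice. A stub file stands in for the docker save output.
set -e
work=$(mktemp -d)
echo "stub image data" > "$work/demo.tar"
gzip "$work/demo.tar"
if gzip -t "$work/demo.tar.gz"; then
  check="ok"
else
  check="corrupt"
fi
rm -r "$work"
echo "$check"
```

Running the same check again after copying the file to the offline host catches transfer corruption as well.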
4.3 Export the kubesphere directory

  [root@node1 ~]# cd /root/kubesphere
  [root@node1 kubesphere]# ls
  create_project_harbor.sh  docker-24.0.7-arm.tar.gz  fluent-bit-daemonset.yaml  harbor-arm.tar.gz  harbor.tar.gz  install.sh  k8s-init-Kylin_V10-arm.tar.gz  ks3.3.1-images.tar.gz  ks3.3.1-offline  push-images.sh
  [root@node1 kubesphere]# cd /root && tar -czvf kubeshpere.tar.gz ./kubesphere/*
Contents of install.sh:
  #!/usr/bin/env bash
  read -p "First update the IP addresses in the cluster config ks3.3.1-offline/kubesphere-v331-v12212.yaml. Done? (yes/no) " B
  do_k8s_init(){
          echo "-------- initializing dependency packages ------"
          tar zxf k8s-init-Kylin_V10-arm.tar.gz
          cd k8s-init && ./install.sh
          cd -
          rm -rf k8s-init
  }
  install_docker(){
          echo "-------- installing docker --------"
          tar zxf docker-24.0.7-arm.tar.gz
          cd docker && ./install.sh
          cd -
  }
  install_harbor(){
          echo "------- installing harbor ----------"
          tar zxf  harbor-arm.tar.gz
          cd harbor && ./install.sh
          cd -
          echo "-------- pushing images ----------"
          source create_project_harbor.sh
          source push-images.sh
          echo "-------- image push complete --------"
  }
  install_ks(){
          echo "-------- installing kubesphere --------"
  #        tar zxf ks3.3.1-offline.tar.gz
          cd ks3.3.1-offline && ./install.sh
  }
  if [ "$B" = "yes" ] || [ "$B" = "y" ]; then
      do_k8s_init
      install_docker
      install_harbor
      install_ks
  else
      echo "Please edit the cluster configuration file first"
      exit 1
  fi
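Editing the host IPs in the config by hand, as the script's prompt asks, is easy to get wrong; a sed substitution can rewrite every occurrence at once. A runnable sketch on an inline sample of the hosts section (the IPs are this walkthrough's deploy and master addresses; adjust to your environment):

```shell
#!/usr/bin/env bash
# Replace the deploy host's IP with the offline master's IP throughout
# the config. Works on a temp copy here; point sed at the real
# kubesphere-v331-v12212.yaml in practice.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  - {name: node1, address: 192.168.200.7, internalAddress: 192.168.200.7, user: root, password: "123456", arch: arm64}
EOF
sed -i 's/192\.168\.200\.7/192.168.10.2/g' "$cfg"
updated=$(cat "$cfg")
rm -f "$cfg"
echo "$updated"
```

For the 3-node offline cluster, the hosts and roleGroups sections also need the two extra master entries added, which sed alone cannot do; review the file after any scripted edit.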
5. Install KubeSphere in the offline environment

5.1 Remove podman

  yum remove podman -y
5.2 Install the K8s dependency packages

Required on every node: upload k8s-init-Kylin_V10-arm.tar.gz, extract it, and run install.sh.
5.3 Install the KubeSphere cluster

Upload kubeshpere.tar.gz and extract it. Edit ./kubesphere/ks3.3.1-offline/kubesphere-v331-v12212.yaml, updating the IP addresses, passwords, and other cluster settings, then run install.sh. A successful installation ends with output like the following:
  **************************************************
  Waiting for all tasks to be completed ...
  task alerting status is successful  (1/6)
  task network status is successful  (2/6)
  task multicluster status is successful  (3/6)
  task openpitrix status is successful  (4/6)
  task logging status is successful  (5/6)
  task monitoring status is successful  (6/6)
  **************************************************
  Collecting installation results ...
  #####################################################
  ###              Welcome to KubeSphere!           ###
  #####################################################
  Console: http://192.168.10.2:30880
  Account: admin
  Password: P@88w0rd
  NOTES:
    1. After you log into the console, please check the
       monitoring status of service components in
       "Cluster Management". If any service is not
       ready, please wait patiently until all components
       are up and running.
    2. Please change the default password after login.
  #####################################################
  https://kubesphere.io             2024-07-03 11:10:11
  #####################################################
5.4 Patch fluent-bit

Edit the DaemonSet and change the fluent-bit image tag from v1.8.11 to v2.0.6:

  kubectl edit daemonsets fluent-bit -n kubesphere-logging-system
5.5 Disable/remove the Elasticsearch services and workloads

If you do not need logging, you can instead disable the log plugin in the KubeSphere cluster configuration file; steps 5.4 and 5.5, and the corresponding images, can then be dropped entirely.
6. Verify the cluster

  [root@node1 ~]# kubectl get nodes
  NAME    STATUS   ROLES                         AGE   VERSION
  node1   Ready    control-plane,master,worker   26h   v1.22.12
  node2   Ready    control-plane,master,worker   26h   v1.22.12
  node3   Ready    control-plane,master,worker   26h   v1.22.12

7. Summary

This article demonstrated deploying K8s and KubeSphere on ARM Kylin V10 servers in an online environment, then bundling the base dependencies, the required Docker images and Harbor, and everything KubeKey downloads during a KubeSphere deployment into a single package. Wrapped in a few simple shell scripts, that bundle installs K8s and KubeSphere in an offline environment.
Key steps of the offline installation:

  • Remove podman
  • Install the K8s dependency packages
  • Install Docker
  • Install Harbor
  • Push the images needed by K8s and KubeSphere to Harbor
  • Deploy the cluster with KubeKey
Published via OpenWrite, a multi-channel blog publishing platform.
