1. Basic environment preparation
Docker Hub is currently unreachable from mainland China, so prepare an accessible image registry in advance.
1. IP address plan
| Name | IP address | OS |
| --- | --- | --- |
| Master | 192.168.110.133 | CentOS Stream 8 |
| Slave01 | 192.168.110.134 | CentOS Stream 8 |
| Slave02 | 192.168.110.135 | CentOS Stream 8 |

Note: the commands below use apt and ufw (Debian/Ubuntu style); on CentOS Stream substitute the dnf/firewalld equivalents.

2. Operating system requirements
- # 1. Disable the firewall
- ufw status
- ufw disable
- # 2. Disable SELinux
- sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
- setenforce 0
- # 3. Disable swap: Kubernetes expects to manage physical CPU and memory, and the cgroup driver cannot effectively account for swap
- sed -ri 's/.*swap.*/#&/' /etc/fstab
- swapoff -a # turn swap off immediately; verify with free -m that swap shows 0
- # 4. Map the host names in /etc/hosts
- vim /etc/hosts
- 192.168.110.133 Master
- 192.168.110.134 Slave01
- 192.168.110.135 Slave02
- # 5. Synchronize time
- # Check the time zone and current time
- date
- # If the time zone is wrong, set it to Asia/Shanghai
- timedatectl set-timezone Asia/Shanghai
- # Install chrony and sync time over the network
- apt install chrony -y && systemctl enable --now chronyd
- # 6. Make bridged IPv4 traffic visible to iptables chains
- cat > /etc/sysctl.d/k8s.conf <<EOF
- net.bridge.bridge-nf-call-ip6tables=1
- net.bridge.bridge-nf-call-iptables=1
- net.ipv4.ip_forward=1
- vm.swappiness=0
- EOF
- sysctl --system
- # 7. Set up passwordless SSH between the servers
- ssh-keygen -t rsa
- ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.110.134
- ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.110.135
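Before moving on, a quick sanity check on each host helps catch a missed step (a minimal sketch; the expected values reflect the settings above):
- getenforce                      # expect Disabled (or Permissive until reboot)
- free -h | grep -i swap          # expect 0B total and used
- timedatectl | grep "Time zone"  # expect Asia/Shanghai
- sysctl net.ipv4.ip_forward      # expect 1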
2. Install Kubernetes with kubeadm (all hosts)
1. Enable kernel forwarding and bridge filtering
- # Declare the kernel modules to load (loaded automatically at boot)
- cat << EOF | tee /etc/modules-load.d/k8s.conf
- overlay
- br_netfilter
- EOF
- # Load the modules now
- modprobe overlay
- modprobe br_netfilter
- # Verify that the modules are loaded
- lsmod | grep -E "overlay|br_netfilter"
- # Write the bridge-filter and forwarding sysctl settings (same file as step 6 above; overwriting is harmless)
- cat > /etc/sysctl.d/k8s.conf <<EOF
- net.bridge.bridge-nf-call-ip6tables=1
- net.bridge.bridge-nf-call-iptables=1
- net.ipv4.ip_forward=1
- vm.swappiness=0
- EOF
- # Apply the sysctl settings
- sysctl --system
2. Install ipset and ipvsadm
- apt install ipset ipvsadm -y
- cat << EOF | tee /etc/modules-load.d/ipvs.conf
- ip_vs
- ip_vs_rr
- ip_vs_wrr
- ip_vs_sh
- nf_conntrack
- EOF
- # Create a module-loading script
- cat << EOF | tee ipvs.sh
- #!/bin/sh
- modprobe -- ip_vs
- modprobe -- ip_vs_rr
- modprobe -- ip_vs_wrr
- modprobe -- ip_vs_sh
- modprobe -- nf_conntrack
- EOF
- # Run the script to load the modules
- sh ipvs.sh
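A quick check that the IPVS modules actually loaded:
- lsmod | grep -E "ip_vs|nf_conntrack"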
3. Container runtime: containerd (all hosts)
1. Install containerd (binary install)
- # 1. Install containerd (versions aligned at 1.7.22)
- wget https://github.com/containerd/containerd/releases/download/v1.7.22/containerd-1.7.22-linux-amd64.tar.gz
- tar xvf containerd-1.7.22-linux-amd64.tar.gz
- # The archive unpacks into a bin/ directory containing the containerd binaries
- mv bin/* /usr/local/bin/
- # The cri-containerd bundle additionally ships runc, crictl and the systemd unit, extracted relative to /
- wget https://github.com/containerd/containerd/releases/download/v1.7.22/cri-containerd-1.7.22-linux-amd64.tar.gz
- tar xf cri-containerd-1.7.22-linux-amd64.tar.gz -C /
2. Adjust the containerd configuration and start containerd
- mkdir /etc/containerd
- containerd config default > /etc/containerd/config.toml
- # Set the pause (sandbox) image to match your Kubernetes version: 3.9 for 1.29, 3.10 for 1.31
- vim /etc/containerd/config.toml
- sandbox_image = "registry.k8s.io/pause:3.10"
- # Or point it at a mirror registry:
- sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"
- # Switch runc to the systemd cgroup driver (around line 139 of the generated config)
- [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
- SystemdCgroup = true
- # Start containerd
- systemctl enable --now containerd
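To confirm the runtime is healthy, you can point crictl at the containerd socket; the socket path below is containerd's default, and crictl is normally included in the cri-containerd bundle (install it separately if it is missing):
- cat > /etc/crictl.yaml <<EOF
- runtime-endpoint: unix:///run/containerd/containerd.sock
- image-endpoint: unix:///run/containerd/containerd.sock
- EOF
- containerd --version
- crictl info   # should print the runtime status without connection errors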
4. Deploy the Kubernetes cluster (all hosts)
1. Download and install the packages
- sudo apt-get update
- # apt-transport-https may be a dummy package; if so, you can skip installing it
- sudo apt-get install -y apt-transport-https ca-certificates curl gpg
- sudo mkdir -p -m 755 /etc/apt/keyrings
- curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
- # Add the Kubernetes apt repository
- echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
- # Alternative: Aliyun mirror of the Kubernetes repo (use either this or the official repo above, not both)
- apt-get update && apt-get install -y apt-transport-https
- curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
- echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
- apt-get update
- apt-get install -y kubelet kubeadm kubectl
- # List available versions
- apt-cache madison kubeadm
- # Install a specific version (pkgs.k8s.io version strings look like 1.31.0-1.1; use what madison prints)
- apt install -y kubelet=1.31.0-1.1 kubeadm=1.31.0-1.1 kubectl=1.31.0-1.1
- # Or install the latest version and pin it
- sudo apt-get update
- sudo apt-get install -y kubelet kubeadm kubectl
- # Pin the versions
- sudo apt-mark hold kubelet kubeadm kubectl
- # Unpin when you want to upgrade
- sudo apt-mark unhold kubelet kubeadm kubectl
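A quick version check confirms the three tools installed correctly and match:
- kubeadm version -o short
- kubectl version --client
- kubelet --version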
2. Configure kubelet
- # Keep the kubelet cgroup driver consistent with the container runtime's (systemd)
- vim /etc/default/kubelet
- KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
- # Enable kubelet at boot; it cannot start yet because no config exists, and it will come up after cluster init
- systemctl enable kubelet
3. Prepare the init configuration on the Master
1. Online initialization with a kubeadm-config file
- # Generate a config-file template
- kubeadm config print init-defaults > kubeadm-config.yaml
- # Edit the YAML; the fields to change are listed here (see the sketch after this list)
- vim kubeadm-config.yaml
- advertiseAddress: 192.168.110.133
- name: Master
- serviceSubnet: 10.96.0.0/12
- podSubnet: 10.244.0.0/16
- # Use the Aliyun mirror if you cannot pull images from registry.k8s.io directly
- # imageRepository: registry.aliyuncs.com/google_containers
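For orientation, a minimal sketch of where those fields live in the generated file (kubeadm 1.31 prints the v1beta4 schema; the addresses and subnets are this guide's values, everything else stays at its default):
- apiVersion: kubeadm.k8s.io/v1beta4
- kind: InitConfiguration
- localAPIEndpoint:
-   advertiseAddress: 192.168.110.133
-   bindPort: 6443
- nodeRegistration:
-   name: Master
- ---
- apiVersion: kubeadm.k8s.io/v1beta4
- kind: ClusterConfiguration
- kubernetesVersion: 1.31.0
- # imageRepository: registry.aliyuncs.com/google_containers
- networking:
-   serviceSubnet: 10.96.0.0/12
-   podSubnet: 10.244.0.0/16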
- # List the images for a given version
- kubeadm config images list --kubernetes-version=v1.31.1
- registry.k8s.io/kube-apiserver:v1.31.1
- registry.k8s.io/kube-controller-manager:v1.31.1
- registry.k8s.io/kube-scheduler:v1.31.1
- registry.k8s.io/kube-proxy:v1.31.1
- registry.k8s.io/coredns/coredns:v1.11.3
- registry.k8s.io/pause:3.10
- registry.k8s.io/etcd:3.5.15-0
- # Using the Aliyun mirror
- kubeadm config images list --kubernetes-version=v1.31.1 --image-repository=registry.aliyuncs.com/google_containers
- registry.aliyuncs.com/google_containers/kube-apiserver:v1.31.1
- registry.aliyuncs.com/google_containers/kube-controller-manager:v1.31.1
- registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.1
- registry.aliyuncs.com/google_containers/kube-proxy:v1.31.1
- registry.aliyuncs.com/google_containers/coredns:v1.11.3
- registry.aliyuncs.com/google_containers/pause:3.10
- registry.aliyuncs.com/google_containers/etcd:3.5.15-0
- # Pull the images
- kubeadm config images pull --kubernetes-version=v1.31.1
- # Pull the images (from the specified mirror)
- kubeadm config images pull --kubernetes-version=v1.31.1 --image-repository=registry.aliyuncs.com/google_containers
- # List the downloaded images with crictl images or ctr
- ctr -n=k8s.io images list
- # Pull from a domestic mirror and retag (example for an older release, v1.23.8)
- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8
- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8
- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8
- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8
- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0
- docker pull coredns/coredns:1.8.6
- # Retag the pulled images to the names kubeadm expects (legacy k8s.gcr.io names for this release)
- docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.8 k8s.gcr.io/kube-apiserver:v1.23.8
- docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.8 k8s.gcr.io/kube-controller-manager:v1.23.8
- docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.8 k8s.gcr.io/kube-scheduler:v1.23.8
- docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.8 k8s.gcr.io/kube-proxy:v1.23.8
- docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
- docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0
- docker tag coredns/coredns:1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
- # Initialize the cluster with the config file
- kubeadm init --config kubeadm-config.yaml --upload-certs --v=9
- # Or initialize directly from the command line
- kubeadm init --apiserver-advertise-address=192.168.110.133 --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --v=5 # short form
- ----------------------------------------------------------------------
- kubeadm init --apiserver-advertise-address=192.168.110.133 --control-plane-endpoint=control-plane-endpoint.k8s.local --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --service-dns-domain=k8s.local --upload-certs --v=5 # full form
- Then follow the on-screen instructions to finish bootstrapping the cluster (summarized below).
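After a successful init, kubeadm prints follow-up steps roughly like the following (the worker join token and CA hash are cluster-specific, so reprint them rather than copying from elsewhere):
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- # On each worker, run the join command printed at init time; regenerate it on the master with:
- kubeadm token create --print-join-command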
2. Offline initialization with kubeadm init
- # Use this approach when the local hosts cannot reach the internet
- 1. On an internet-connected remote host, fetch the images for the target Kubernetes version
- # List the component images for the version
- kubeadm config images list --kubernetes-version=v1.31.1
- registry.k8s.io/kube-apiserver:v1.31.1
- registry.k8s.io/kube-controller-manager:v1.31.1
- registry.k8s.io/kube-scheduler:v1.31.1
- registry.k8s.io/kube-proxy:v1.31.1
- registry.k8s.io/coredns/coredns:v1.11.3
- registry.k8s.io/pause:3.10
- registry.k8s.io/etcd:3.5.15-0
- 2. Pull the images
- docker pull registry.k8s.io/kube-apiserver:v1.31.1
- docker pull registry.k8s.io/kube-controller-manager:v1.31.1
- docker pull registry.k8s.io/kube-scheduler:v1.31.1
- docker pull registry.k8s.io/kube-proxy:v1.31.1
- docker pull registry.k8s.io/pause:3.10
- docker pull registry.k8s.io/etcd:3.5.15-0
- docker pull registry.k8s.io/coredns/coredns:v1.11.3
- # Or from a domestic mirror
- docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.31.1
- docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.31.1
- docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.31.1
- docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.31.1
- docker pull registry.aliyuncs.com/google_containers/pause:3.10
- docker pull registry.aliyuncs.com/google_containers/etcd:3.5.15-0
- docker pull registry.aliyuncs.com/google_containers/coredns:v1.11.3
- 3. On the remote host, save the images to tar files
- docker save -o kube-apiserver-v1.31.1.tar registry.k8s.io/kube-apiserver:v1.31.1
- docker save -o kube-controller-manager-v1.31.1.tar registry.k8s.io/kube-controller-manager:v1.31.1
- docker save -o kube-scheduler-v1.31.1.tar registry.k8s.io/kube-scheduler:v1.31.1
- docker save -o kube-proxy-v1.31.1.tar registry.k8s.io/kube-proxy:v1.31.1
- docker save -o pause-3.10.tar registry.k8s.io/pause:3.10
- docker save -o etcd-3.5.15-0.tar registry.k8s.io/etcd:3.5.15-0
- docker save -o coredns-v1.11.3.tar registry.k8s.io/coredns/coredns:v1.11.3
- 4. Copy the tar files to the offline hosts with scp or similar (repeat for every cluster node)
- scp kube-apiserver-v1.31.1.tar root@192.168.110.133:/root/
- scp kube-controller-manager-v1.31.1.tar root@192.168.110.133:/root/
- scp kube-scheduler-v1.31.1.tar root@192.168.110.133:/root/
- scp kube-proxy-v1.31.1.tar root@192.168.110.133:/root/
- scp pause-3.10.tar root@192.168.110.133:/root/
- scp etcd-3.5.15-0.tar root@192.168.110.133:/root/
- scp coredns-v1.11.3.tar root@192.168.110.133:/root/
- 5. Import the images on the offline hosts
- ctr -n=k8s.io images import /path/to/save/kube-apiserver-v1.31.1.tar
- ctr -n=k8s.io images import /path/to/save/kube-controller-manager-v1.31.1.tar
- ctr -n=k8s.io images import /path/to/save/kube-scheduler-v1.31.1.tar
- ctr -n=k8s.io images import /path/to/save/kube-proxy-v1.31.1.tar
- ctr -n=k8s.io images import /path/to/save/pause-3.10.tar
- ctr -n=k8s.io images import /path/to/save/etcd-3.5.15-0.tar
- ctr -n=k8s.io images import /path/to/save/coredns-v1.11.3.tar
- 6. Verify that the images were imported
- ctr -n=k8s.io images list
- 7. Initialize with the config file
- kubeadm config print init-defaults > kubeadm-config.yaml
- # Edit the YAML
- vim kubeadm-config.yaml
- advertiseAddress: 192.168.110.133
- name: Master
- serviceSubnet: 10.96.0.0/12
- podSubnet: 10.244.0.0/16
- controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" # optional; replace LOAD_BALANCER_DNS with the load balancer's DNS name or IP and LOAD_BALANCER_PORT with its listening port (usually 6443)
- imageRepository: localhost:5000 # point at a local registry if you run one
- kubeadm init --config kubeadm-config.yaml --upload-certs --v=9
- # If the images came from registry.k8s.io, initialize with:
- kubeadm init --apiserver-advertise-address=192.168.110.133 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --v=5
- # If they came from the registry.aliyuncs.com/google_containers mirror:
- kubeadm init --apiserver-advertise-address=192.168.110.133 --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --v=5
5. Deploy the network plugin
1. Standard online Calico deployment
- # 1. Deploy from the manifest (v3.28.2, matching the offline section below)
- wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/calico.yaml
- kubectl apply -f /root/calico.yaml
- # 2. Check the status
- [root@Master01-Centos8 ~]# kubectl get pods -n kube-system
- NAME READY STATUS RESTARTS AGE
- calico-kube-controllers-b8d8894fb-nmznx 1/1 Running 0 106m
- calico-node-fn5q8 1/1 Running 0 106m
- calico-node-fn7rl 1/1 Running 0 106m
- calico-node-tkngk 1/1 Running 0 106m
- coredns-855c4dd65d-66vmm 1/1 Running 0 42h
- coredns-855c4dd65d-9779h 1/1 Running 0 42h
- etcd-master01-centos8 1/1 Running 0 42h
- kube-apiserver-master01-centos8 1/1 Running 0 42h
- kube-controller-manager-master01-centos8 1/1 Running 0 42h
- kube-proxy-5bprr 1/1 Running 0 42h
- kube-proxy-6dnm2 1/1 Running 0 42h
- kube-proxy-9d8gc 1/1 Running 0 42h
- kube-scheduler-master01-centos8 1/1 Running 0 42h
2. Offline Calico deployment
- # Use this when the local hosts cannot reach the internet directly
- 1. On a remote host, pull the Calico images (assuming it has Docker); download calico.yaml there as well and copy it over with the tars
- docker pull calico/cni:v3.28.2
- docker pull calico/pod2daemon-flexvol:v3.28.2
- docker pull calico/node:v3.28.2
- docker pull calico/kube-controllers:v3.28.2
- docker pull calico/typha:v3.28.2
- 2. Save the images to tar files
- docker save -o calico-cni-v3.28.2.tar calico/cni:v3.28.2
- docker save -o calico-pod2daemon-flexvol-v3.28.2.tar calico/pod2daemon-flexvol:v3.28.2
- docker save -o calico-node-v3.28.2.tar calico/node:v3.28.2
- docker save -o calico-kube-controllers-v3.28.2.tar calico/kube-controllers:v3.28.2
- docker save -o calico-typha-v3.28.2.tar calico/typha:v3.28.2
- 3. Copy the tar files to the local hosts (repeat for every node in the cluster)
- scp calico-cni-v3.28.2.tar root@192.168.110.133:/root/
- scp calico-pod2daemon-flexvol-v3.28.2.tar root@192.168.110.133:/root/
- scp calico-node-v3.28.2.tar root@192.168.110.133:/root/
- scp calico-kube-controllers-v3.28.2.tar root@192.168.110.133:/root/
- scp calico-typha-v3.28.2.tar root@192.168.110.133:/root/
- 4. Import the images (run on every host in the cluster)
- ctr -n=k8s.io image import /root/calico-cni-v3.28.2.tar
- ctr -n=k8s.io image import /root/calico-pod2daemon-flexvol-v3.28.2.tar
- ctr -n=k8s.io image import /root/calico-node-v3.28.2.tar
- ctr -n=k8s.io image import /root/calico-kube-controllers-v3.28.2.tar
- ctr -n=k8s.io image import /root/calico-typha-v3.28.2.tar
- # Verify the import (run on all hosts)
- ctr -n=k8s.io images list | grep calico
- 5. Install Calico (run on the master)
- kubectl apply -f /root/calico.yaml
- 6. Check the status
- [root@Master01-Centos8 ~]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- master01-centos8 Ready control-plane 42h v1.31.1
- slave01-centos8 Ready <none> 42h v1.31.1
- slave02-centos8 Ready <none> 42h v1.31.1
- [root@Master01-Centos8 ~]# kubectl get pods -n kube-system
- NAME READY STATUS RESTARTS AGE
- calico-kube-controllers-b8d8894fb-nmznx 1/1 Running 0 114m
- calico-node-fn5q8 1/1 Running 0 114m
- calico-node-fn7rl 1/1 Running 0 114m
- calico-node-tkngk 1/1 Running 0 114m
- coredns-855c4dd65d-66vmm 1/1 Running 0 42h
- coredns-855c4dd65d-9779h 1/1 Running 0 42h
- etcd-master01-centos8 1/1 Running 0 42h
- kube-apiserver-master01-centos8 1/1 Running 0 42h
- kube-controller-manager-master01-centos8 1/1 Running 0 42h
- kube-proxy-5bprr 1/1 Running 0 42h
- kube-proxy-6dnm2 1/1 Running 0 42h
- kube-proxy-9d8gc 1/1 Running 0 42h
- kube-scheduler-master01-centos8 1/1 Running 0 42h
Run kubectl commands from any node
- 1. Copy /etc/kubernetes/admin.conf from the master into /etc/kubernetes/ on the target host
- 2. Configure the environment variable on that host
- echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
6. Install the dashboard
1. Install Helm
1. Binary install
- wget https://get.helm.sh/helm-v3.16.0-linux-amd64.tar.gz
- tar -zxvf helm-v3.16.0-linux-amd64.tar.gz
- mv linux-amd64/helm /usr/local/bin/helm
- helm repo add bitnami https://charts.bitnami.com/bitnami
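A quick sanity check that the binary works and the repo was registered:
- helm version
- helm repo list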
2. Install Helm from a deb package
- # Make sure the keyring directory exists
- sudo mkdir -p /usr/share/keyrings
- # Download, dearmor, and store the GPG signing key
- curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
- # Verify the key: gpg --list-keys --keyring /usr/share/keyrings/helm.gpg
- sudo apt-get install apt-transport-https --yes
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
- sudo apt-get update
- sudo apt-get install helm
3. Install Helm with the installer script
- curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
- chmod 700 get_helm.sh
- ./get_helm.sh
2. Deploy the dashboard with Helm
- # Add the kubernetes-dashboard repository (required: the chart deployed below comes from it)
- helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
- # Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
- helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
- # Uninstall
- helm delete kubernetes-dashboard --namespace kubernetes-dashboard
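To reach the UI, port-forwarding the dashboard service is the simplest route. The service name below (kubernetes-dashboard-kong-proxy) is an assumption based on recent chart versions, so check the actual name first:
- kubectl -n kubernetes-dashboard get svc
- kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
- # Browse to https://localhost:8443 and sign in with the token created in the next step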
3. Use the dashboard
- # Make sure the kubernetes-dashboard namespace exists and the cluster-admin ClusterRole is present
- # kubectl create namespace kubernetes-dashboard
- 1. Create a ServiceAccount and a ClusterRoleBinding
- vim dashboard-adminuser.yaml
- apiVersion: v1
- kind: ServiceAccount
- metadata:
-   name: admin-user
-   namespace: kubernetes-dashboard
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
-   name: admin-user
- roleRef:
-   apiGroup: rbac.authorization.k8s.io
-   kind: ClusterRole
-   name: cluster-admin
- subjects:
- - kind: ServiceAccount
-   name: admin-user
-   namespace: kubernetes-dashboard
- # Apply it
- kubectl apply -f dashboard-adminuser.yaml
- # Verify the objects were created
- kubectl get serviceaccount admin-user -n kubernetes-dashboard
- kubectl get clusterrolebinding admin-user
- 2. Get a token for the ServiceAccount
- # Create a short-lived token
- kubectl -n kubernetes-dashboard create token admin-user
- # For a long-lived token, add a Secret to dashboard-adminuser.yaml (retrieval shown after the YAML)
- apiVersion: v1
- kind: Secret
- metadata:
-   name: admin-user
-   namespace: kubernetes-dashboard
-   annotations:
-     kubernetes.io/service-account.name: "admin-user"
- type: kubernetes.io/service-account-token
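Re-apply the file, then read the long-lived token back out of the Secret (the jsonpath retrieval follows the upstream dashboard docs):
- kubectl apply -f dashboard-adminuser.yaml
- kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath="{.data.token}" | base64 -d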
7. Install KubeSphere
- helm upgrade --install -n kubesphere-system --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.1.tgz --debug --wait
- # If you can pull the images directly:
- helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.1.tgz --debug --wait
- # Wait a while for the installation to complete, then open the console on any node
- http://192.168.110.133:30880
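To watch the rollout and log in (a minimal sketch; the default account below is the one described in KubeSphere's docs and should be changed on first login):
- kubectl get pods -n kubesphere-system   # wait until all pods are Running
- # Default console credentials per the KubeSphere docs: admin / P@88w0rd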