一、Uninstalling k8s
This applies to machines that already have k8s installed; if it was never installed, skip this section.
- # First delete all nodes (and the pods running on them) from the cluster
- kubectl delete node --all
- # Stop all k8s services with a loop (kubectl is a CLI tool, not a service, so it is not included)
- for service in kube-apiserver kube-controller-manager kubelet etcd kube-proxy kube-scheduler;
- do
-     systemctl stop $service
- done
- # Reset the node with kubeadm
- kubeadm reset -f
- # Uninstall the k8s-related packages
- yum -y remove kube*
- # Unload the ipip kernel module and confirm it is gone
- modprobe -r ipip
- lsmod
- # Then manually delete the configuration files, the flannel network configuration and the flannel interfaces:
- rm -rf /etc/cni
- rm -rf /root/.kube
- # Delete the cni network interfaces
- ifconfig cni0 down
- ip link delete cni0
- ifconfig flannel.1 down
- ip link delete flannel.1
- # Delete leftover configuration files and state directories
- rm -rf ~/.kube/
- rm -rf /etc/kubernetes/
- rm -rf /etc/systemd/system/kubelet.service.d
- rm -rf /etc/systemd/system/kubelet.service
- rm -rf /etc/systemd/system/multi-user.target.wants/kubelet.service
- rm -rf /var/lib/kubelet
- rm -rf /usr/libexec/kubernetes/kubelet-plugins
- rm -rf /usr/bin/kube*
- rm -rf /opt/cni
- rm -rf /var/lib/etcd
- rm -rf /var/etcd
- # Rebuild the yum cache
- yum clean all
- yum makecache
二、Installing the k8s cluster (4 nodes)
Reset command (use it if anything goes wrong during initialization): kubeadm reset
1.1 Preparation (run on every node)
Edit the /etc/hosts file on all 4 servers and add the entries below (do this once on each node):
- 192.168.2.1 node1
- 192.168.2.2 node2
- 192.168.2.3 node3
- 192.168.2.4 node4
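To confirm the entries resolve, a quick connectivity check can be run on each node (a minimal sketch; adjust the host list to your environment):
- for h in node1 node2 node3 node4; do ping -c 1 $h > /dev/null && echo "$h ok" || echo "$h unreachable"; done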
Set the hostname (node1 as an example):
- hostnamectl set-hostname node1   # node1 is the name you chose
Alternatively, edit the /etc/hostname file directly and write node1 into it (same for the other nodes). After the change, /etc/hostname contains only:
- node1
Synchronize the time on all nodes:
- # Start the chronyd service
- systemctl start chronyd
- systemctl enable chronyd
- date
Disable SELinux and firewalld on all nodes:
- systemctl stop firewalld
- systemctl disable firewalld
- setenforce 0   # disable SELinux for the current boot without rebooting
- sed -i 's/enforcing/disabled/' /etc/selinux/config   # takes effect after a reboot
Disable the swap partition on all nodes:
- # Disable swap temporarily
- swapoff -a
- # Disable swap permanently
- vi /etc/fstab
- # Comment out the swap entry, e.g.:
- # /dev/mapper/centos-swap swap
- # A reboot is then required for the change to take effect
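If you prefer not to edit the file by hand, the swap entry can also be commented out non-interactively (a sketch; verify the result before rebooting):
- sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
- grep swap /etc/fstab   # the swap line should now start with #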
Enable bridge filtering and IP forwarding on all nodes:
- cat > /etc/sysctl.d/kubernetes.conf << EOF
- net.bridge.bridge-nf-call-ip6tables = 1
- net.bridge.bridge-nf-call-iptables = 1
- net.ipv4.ip_forward = 1
- EOF
- # Then apply the settings
- sysctl --system
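Note that the two bridge-nf-call settings only exist once the br_netfilter kernel module is loaded; if sysctl --system complains about unknown keys, load the module and make it persistent:
- modprobe br_netfilter
- cat > /etc/modules-load.d/k8s.conf << EOF
- br_netfilter
- EOF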
Then install docker-ce on all nodes (omitted here).
Note that docker's cgroup driver must be set to systemd by adding this line to /etc/docker/daemon.json (JSON allows no comments, so the file must stay pure JSON):
- {
-     "exec-opts": ["native.cgroupdriver=systemd"]
- }
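A complete /etc/docker/daemon.json might look like the following sketch (the registry mirror URL is a placeholder assumption; substitute your own). Restart docker afterwards and confirm the driver:
- cat > /etc/docker/daemon.json << EOF
- {
-     "exec-opts": ["native.cgroupdriver=systemd"],
-     "registry-mirrors": ["https://your-mirror.example.com"]
- }
- EOF
- systemctl daemon-reload
- systemctl restart docker
- docker info | grep -i cgroup   # should report: Cgroup Driver: systemd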
Switch the kubernetes yum repository to a domestic (Aliyun) mirror on all nodes:
- cat > /etc/yum.repos.d/kubernetes.repo << EOF
- [kubernetes]
- name=Kubernetes
- baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
- enabled=1
- gpgcheck=0
- repo_gpgcheck=0
- gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
- EOF
Install the pinned versions of kubeadm, kubelet and kubectl on all nodes (version 1.23.0 here):
- yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
- # Enable kubelet at boot (up to you)
- systemctl enable kubelet
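A quick sanity check that the intended versions were installed:
- kubeadm version -o short          # expect v1.23.0
- kubelet --version                 # expect Kubernetes v1.23.0
- kubectl version --client --short  # expect Client Version: v1.23.0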
1.2 Changing kubelet's container data path (optional; skip if not needed)
- vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
After the edit, the configuration file looks like this:
- [Service]
- Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --root-dir=/mnt/sdb_new/kubelet/ --kubeconfig=/etc/kubernetes/kubelet.conf"
Apply the change:
- systemctl daemon-reload
- systemctl restart docker
- systemctl restart kubelet
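To verify that kubelet picked up the new root directory (paths follow the example above):
- ps aux | grep [k]ubelet | grep -o 'root-dir=[^ ]*'   # should print root-dir=/mnt/sdb_new/kubelet/
- ls /mnt/sdb_new/kubelet/                             # populated once kubelet is running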
1.3 Deploying the Kubernetes cluster
1.3.1 Overriding the kubernetes image registry (only the master node needs the init command)
1. First override kubeadm's image registry: the default registry is unreachable from inside China and must be replaced with a domestic mirror. List the images the cluster needs during setup:
- [root@node1 home]# kubeadm config images list
- I0418 18:26:04.047449 19242 version.go:255] remote version is much newer: v1.27.1; falling back to: stable-1.23
- k8s.gcr.io/kube-apiserver:v1.23.17
- k8s.gcr.io/kube-controller-manager:v1.23.17
- k8s.gcr.io/kube-scheduler:v1.23.17
- k8s.gcr.io/kube-proxy:v1.23.17
- k8s.gcr.io/pause:3.6
- k8s.gcr.io/etcd:3.5.1-0
- k8s.gcr.io/coredns/coredns:v1.8.6
2. Switch to the Aliyun registry:
- [root@node1 home]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
- I0418 18:28:18.740057 20021 version.go:255] remote version is much newer: v1.27.1; falling back to: stable-1.23
- registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.17
- registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.17
- registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.17
- registry.aliyuncs.com/google_containers/kube-proxy:v1.23.17
- registry.aliyuncs.com/google_containers/pause:3.6
- registry.aliyuncs.com/google_containers/etcd:3.5.1-0
- registry.aliyuncs.com/google_containers/coredns:v1.8.6
3. Then pull the images ahead of time so initialization runs faster (alternatively, pull them directly with docker; once docker is configured with a domestic mirror, the pull is quick):
- [root@node1 home]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
- I0418 18:28:31.795554 20088 version.go:255] remote version is much newer: v1.27.1; falling back to: stable-1.23
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.17
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.17
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.17
- [config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.17
- [config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
- [config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
- [config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
1.3.2 Initializing kubernetes (run the init command on the master node only)
Initialize Kubernetes, specifying the network CIDRs and the image registry (worker nodes can be added later with the join command):
- [root@node1 home]# kubeadm init \
- --apiserver-advertise-address=192.168.2.1 \
- --image-repository registry.aliyuncs.com/google_containers \
- --kubernetes-version v1.23.0 \
- --service-cidr=10.96.0.0/12 \
- --pod-network-cidr=10.244.0.0/16 \
- --ignore-preflight-errors=all
- # --apiserver-advertise-address  # advertised cluster address (the master machine's IP; a 10 GbE interface here)
- # --image-repository             # the default registry k8s.gcr.io is unreachable from China, so the Aliyun registry is specified
- # --kubernetes-version           # the K8s version, matching what was installed above
- # --service-cidr                 # the cluster-internal virtual network and unified Pod access entry; the value above can be used as-is
- # --pod-network-cidr             # the Pod network; must match the CNI component's yaml deployed below; the value above can be used as-is
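The same settings can also be kept in a kubeadm configuration file and passed with kubeadm init --config, which is easier to version-control. A minimal sketch mirroring the flags above (the file name is arbitrary; the advertise address would go in a separate InitConfiguration block):
- cat > kubeadm-config.yaml << EOF
- apiVersion: kubeadm.k8s.io/v1beta3
- kind: ClusterConfiguration
- kubernetesVersion: v1.23.0
- imageRepository: registry.aliyuncs.com/google_containers
- networking:
-   serviceSubnet: 10.96.0.0/12
-   podSubnet: 10.244.0.0/16
- EOF
- # kubeadm init --config kubeadm-config.yaml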
After it finishes, a few printed commands must be run by hand (and the join command for adding nodes must be copied and saved). The output ends with:
- [addons] Applied essential addon: kube-proxy
- Your Kubernetes control-plane has initialized successfully!
- To start using your cluster, you need to run the following as a regular user:
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Alternatively, if you are the root user, you can run:
- export KUBECONFIG=/etc/kubernetes/admin.conf
- You should now deploy a pod network to the cluster.
- Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
- https://kubernetes.io/docs/concepts/cluster-administration/addons/
- Then you can join any number of worker nodes by running the following on each as root:
- kubeadm join 192.168.2.1:6443 --token ochspx.15in9qkiu5z8tx2y \
- --discovery-token-ca-cert-hash sha256:1f31202107af96a07df9fd78c3aa9bb44fd40076ac123e8ff28d6ab691a02a31
Run the printed commands:
- [root@node1 home]# mkdir -p $HOME/.kube
- [root@node1 home]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- [root@node1 home]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
- [root@node1 home]#
- [root@node1 home]# vim /root/.bash_profile
Add the following lines:
- # kubeconfig for the root user
- export KUBECONFIG=/etc/kubernetes/admin.conf
- # set an alias for kubectl
- alias k=kubectl
- # enable kubectl command completion (requires the bash-completion package)
- source <(kubectl completion bash)
Reload .bash_profile:
- [root@node1 home]# source /root/.bash_profile
Copy and save this part (the join command printed by a successful init; Flannel must be set up before worker nodes can actually join). Worker nodes will later run it to join the master:
- kubeadm join 192.168.2.1:6443 --token ochspx.15in9qkiu5z8tx2y \
- --discovery-token-ca-cert-hash sha256:1f31202107af96a07df9fd78c3aa9bb44fd40076ac123e8ff28d6ab691a02a31
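If the join command is lost, the CA certificate hash can be recomputed on the master at any time with the standard openssl pipeline from the kubeadm documentation:
- openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
-   | openssl rsa -pubin -outform der 2>/dev/null \
-   | openssl dgst -sha256 -hex | sed 's/^.* //'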
1.3.3 Setting up the cluster network (deployed on the master node)
Deploy the container network with a CNI plugin (run on the master; well-known options include flannel, calico, canal and kube-router; a simple, easy-to-use implementation is CoreOS's flannel project). Flannel is used here.
Download kube-flannel.yml:
- [root@node1 home]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Then edit the file: find the section below and make sure Network matches the CIDR passed to kubeadm init:
- net-conf.json: |
-   {
-     "Network": "10.244.0.0/16",
-     "Backend": {
-       "Type": "vxlan"
-     }
-   }
After the change, install the component (if the apply gets stuck pulling images, try pulling them manually with docker first, as sketched after the next command):
- [root@node1 home]# kubectl apply -f kube-flannel.yml
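One way to pre-pull whatever images the manifest references (a sketch; the exact image names depend on the revision of kube-flannel.yml you downloaded):
- grep 'image:' kube-flannel.yml | awk '{print $2}' | sort -u | while read img; do
-     docker pull "$img"
- done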
Check the flannel pod status (it must be Running; if kube-flannel will not start, run kubectl describe pod kube-flannel-ds-f5jn6 -n kube-flannel to find the reason, then search for the error message online):
- [root@node1 home]# # all containers must be Running
- [root@node1 home]# kubectl get pod --all-namespaces
- NAMESPACE NAME READY STATUS RESTARTS AGE
- kube-flannel kube-flannel-ds-f5jn6 1/1 Running 0 8m21s
- kube-system coredns-6d8c4cb4d-ctqw5 1/1 Running 0 42m
- kube-system coredns-6d8c4cb4d-n52fq 1/1 Running 0 42m
- kube-system etcd-k8s-master 1/1 Running 0 42m
- kube-system kube-apiserver-k8s-master 1/1 Running 0 42m
- kube-system kube-controller-manager-k8s-master 1/1 Running 0 42m
- kube-system kube-proxy-swpkz 1/1 Running 0 42m
- kube-system kube-scheduler-k8s-master 1/1 Running 0 42m
Check cluster communication:
- [root@node1 home]# kubectl get pod -n kube-system
- NAME READY STATUS RESTARTS AGE
- coredns-6d8c4cb4d-ctqw5 1/1 Running 0 52m
- coredns-6d8c4cb4d-n52fq 1/1 Running 0 52m
- etcd-k8s-master 1/1 Running 0 53m
- kube-apiserver-k8s-master 1/1 Running 0 53m
- kube-controller-manager-k8s-master 1/1 Running 0 53m
- kube-proxy-swpkz 1/1 Running 0 52m
- kube-scheduler-k8s-master 1/1 Running 0 53m
- [root@node1 home]#
- [root@node1 home]# # get the control-plane component status
- [root@node1 home]# kubectl get cs
- Warning: v1 ComponentStatus is deprecated in v1.19+
- NAME STATUS MESSAGE ERROR
- controller-manager Healthy ok
- scheduler Healthy ok
- etcd-0 Healthy {"health":"true","reason":""}
- [root@node1 home]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- node1 Ready control-plane,master 52m v1.23.0
Check the node status (only the master exists at this point; no workers have been added yet):
- [root@node1 home]# kubectl get node
- NAME STATUS ROLES AGE VERSION
- node1 Ready control-plane,master 53m v1.23.0
At this point the K8s master has been fully deployed!
1.3.4 Joining worker nodes to the cluster (run on each worker node)
Initialization generates a join command that simply has to be run on each worker node. The token below is an example; use the one from your own cluster:
- [root@node2 home]# kubeadm join 192.168.2.1:6443 --token ochspx.15in9qkiu5z8tx2y --discovery-token-ca-cert-hash sha256:1f31202107af96a07df9fd78c3aa9bb44fd40076ac123e8ff28d6ab691a02a31
- [preflight] Running pre-flight checks
- [preflight] Reading configuration from the cluster...
- [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
- [kubelet-start] Starting the kubelet
- [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
- This node has joined the cluster:
- * Certificate signing request was sent to apiserver and a response was received.
- * The Kubelet was informed of the new secure connection details.
- Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The default join token is valid for 24 hours; once expired it can no longer be used and a new one must be created. Create the new join token on the master node:
- [root@node1 home]# kubeadm token create --print-join-command
After joining, check the node status on the master again (every node must be Ready):
- [root@node1 home]# kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- node1 Ready control-plane,master 63m v1.23.0
- node2 Ready <none> 3m57s v1.23.0
- node3 Ready <none> 29s v1.23.0
If every node's STATUS is Ready, all worker nodes have joined successfully!
1.3.5 Removing a worker node (run on the master node)
- # kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
- # <node name> is the node name reported by `kubectl get nodes` for the cluster
- # (on newer kubectl versions this flag has been renamed to --delete-emptydir-data)
- # Removing the node3 worker as an example
- [root@node1 home]# kubectl drain node3 --delete-local-data --force --ignore-daemonsets
- [root@node1 home]# kubectl delete node node3
Then reset k8s on the removed worker (the reset deletes a number of configuration files); here it is run on node3:
- [root@node3 home]# # reset k8s on the worker node
- [root@node3 home]# kubeadm reset
- [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
- [reset] Are you sure you want to proceed? [y/N]: y
- [preflight] Running pre-flight checks
- W0425 01:59:40.412616 15604 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
- [reset] No etcd config found. Assuming external etcd
- [reset] Please, manually reset etcd to prevent further issues
- [reset] Stopping the kubelet service
- [reset] Unmounting mounted directories in "/var/lib/kubelet"
- [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
- [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
- [reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
- The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
- The reset process does not reset or clean up iptables rules or IPVS tables.
- If you wish to reset iptables, you must do so manually by using the "iptables" command.
- If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
- to reset your system's IPVS tables.
- The reset process does not clean your kubeconfig files and you must remove them manually.
- Please, check the contents of the $HOME/.kube/config file.
Then, still on the removed worker, manually delete the k8s configuration files, the flannel network configuration and the flannel interfaces:
- [root@node3 home]# rm -rf /etc/cni/net.d/
- [root@node3 home]# rm -rf /root/.kube/config
- [root@node3 home]# # delete the cni interface
- [root@node3 home]# ifconfig cni0 down
- [root@node3 home]# ip link delete cni0
- [root@node3 home]# ifconfig flannel.1 down
- [root@node3 home]# ip link delete flannel.1
三、Deploying a k8s dashboard (KubePi is used here)
KubePi is a simple and efficient graphical management tool for k8s clusters; it makes day-to-day administration easier and speeds up log queries when tracking down problems.
Deploy KubePi (any node will do; the master node is used here):
- [root@node1 home]# docker pull kubeoperator/kubepi-server
- [root@node1 home]# # run the container
- [root@node1 home]# docker run --privileged -itd --restart=unless-stopped --name kube_dashboard -v /home/docker-mount/kubepi/:/var/lib/kubepi/ -p 8000:80 kubeoperator/kubepi-server
Log in:
- # Address: http://192.168.2.1:8000
- # Default username: admin
- # Default password: kubepi
Fill in the cluster name, keep the default authentication mode, and enter the apiserver address and token:

[Figure: KubePi cluster import page]
Obtain the apiserver address and the login token needed for the import:
- [root@node1 home]# # create a service account on the k8s master and fetch its token
- [root@node1 home]# kubectl create sa kubepi-user --namespace kube-system
- serviceaccount/kubepi-user created
- [root@node1 home]# kubectl create clusterrolebinding kubepi-user --clusterrole=cluster-admin --serviceaccount=kube-system:kubepi-user
- clusterrolebinding.rbac.authorization.k8s.io/kubepi-user created
- [root@node1 home]#
- [root@node1 home]# # fetch the token of the newly created kubepi-user on the master
- [root@node1 home]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubepi-user | awk '{print $1}') | grep token: | awk '{print $2}'
- eyJhbGciOiJSUzI1NiIsImtpZCI6IkhVeUtyc1BpU1JvRnVacXVqVk1PTFRkaUlIZm1KQTV6Wk9WSExSRllmd0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcGktdXNlci10b2tlbi10cjVsMiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcGktdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJiYzlhZDRjLWVjZTItNDE2Mi04MDc1LTA2NTI0NDg0MzExZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcGktdXNlciJ9.QxkR1jBboqTYiVUUVO4yGhfWmlLDA5wHLo_ZnjAuSLZQDyVevCgBluL6l7y7UryRdId6FmBZ-L0QitvOuTsurcjGL2QHxPE_yZsNW7s9K7eikxJ8q-Q_yOvnADtAueH_tcMGRGW9Zyec2TlmcGTZCNaNUme84TfMlWqX7oP3GGJGMbMGN7H4fPXh-Qqrdp-0MJ3tP-dk3koZUEu3amrq8ExSmjIAjso_otrgFWbdSOMkCXKsqb9yuZzaw7u5Cy18bH_HW6RbNCRT5jGs5aOwzuMAd0HQ5iNm-5OISI4Da6jGdjipLXejcC1H-xWgLlJBx0RQWu41yoPNF57cG1NubQ
- [root@node1 home]#
- [root@node1 home]# # get the apiserver address on the master
- [root@node1 home]# cat ~/.kube/config | grep server: | awk '{print $2}'
- https://192.168.2.1:6443
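Note: the Secret-based long-lived token above relies on automatic token creation, which was removed in k8s 1.24; on newer clusters the token would instead be requested explicitly (not needed for the 1.23 setup in this guide):
- # kubectl -n kube-system create token kubepi-user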
Enter the apiserver address and token obtained above into the page; the name can be anything you like.
[Figure: KubePi cluster overview page]
At this point, the KubePi installation is complete!
3.1 Installing the metrics-server cluster monitoring plugin
The metrics plugin provides the top command for reporting resource usage across the k8s cluster; it has two subcommands, node and pod, which show resource usage for Node objects and Pod objects respectively. An installation sketch follows below.
Reference: K8S 笔记 - k8s 部署 metrics-server - 知乎
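In outline, the installation is a single manifest apply (a sketch based on the upstream metrics-server release; on lab clusters whose kubelets use self-signed certificates, the commonly documented --kubelet-insecure-tls tweak is needed):
- wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
- # if kubelets use self-signed certs, add --kubelet-insecure-tls to the
- # metrics-server container args in components.yaml before applying it
- kubectl apply -f components.yaml
- kubectl top node   # works once the metrics-server pod is Running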
四、Reference links
K8s超详细安装部署流程_k8s安装及部署流程 - CSDN博客
k8s的安装部署_run "kubectl apply -f [podnetwork].yaml" with one - CSDN博客
五、Common k8s commands
A collection of frequently used k8s commands:
- # List all nodes in the current cluster
- kubectl get node
- # Show detailed information about a node (rarely needed)
- kubectl describe node node1
- # List all pods
- kubectl get pod --all-namespaces
- # List pods with extra detail
- kubectl get pods -o wide --all-namespaces
- # List all created services
- kubectl get service
- # List all deployments
- kubectl get deploy
- # Restart a pod (deletes the original pod and creates a new one, which amounts to a restart)
- # Restart when a yaml file is available
- kubectl replace --force -f xxx.yaml
- # Restart without a yaml file
- kubectl get pod <POD_NAME> -n <NAMESPACE> -o yaml | kubectl replace --force -f -
- # Show detailed information about a pod
- kubectl describe pod nfs-client-provisioner-65c77c7bf9-54rdp -n default
- # Create Pod resources from a yaml file
- kubectl apply -f pod.yaml
- # Delete the Pod defined in pod.yaml
- kubectl delete -f pod.yaml
- # View a container's logs
- kubectl logs <pod-name>
- # Follow logs in real time
- kubectl logs -f <pod-name>
- # If the pod has only one container, -c can be omitted
- kubectl logs <pod-name> -c <container_name>
- # Return the merged logs of all pods labelled app=frontend
- kubectl logs -l app=frontend
- # Get a TTY into a container of a pod via bash, i.e. log in to the container
- # kubectl exec -it <pod-name> -c <container-name> -- bash
- # e.g.:
- kubectl exec -it redis-master-cln81 -- bash
- # List endpoints
- kubectl get endpoints
- # List existing tokens
- kubeadm token list
六、Problems encountered in practice and suggested fixes
6.1 k8s fails to come up after an abnormal shutdown
The underlying cause is a corrupted etcd database, which prevents k8s from starting; a short triage sketch follows.
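A minimal triage sketch (assuming the docker-based control plane set up in this guide; the apiserver is down, so kubectl itself will not respond):
- # kubelet keeps restarting the static pods; see why they fail
- journalctl -u kubelet --no-pager | tail -n 50
- # inspect the etcd container's exit logs
- docker ps -a | grep etcd
- docker logs --tail 50 <etcd-container-id>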
References:
记一次测试虚拟机异常关机导致的问题: blog.csdn.net/Name_kongkong/article/details/126218219
运维手册——生产环境K8S集群断电后启动失败: www.rugod.cn/posts/f7d56ada.html