Deploying a single-node Kubernetes cluster with sealos on CentOS 7.9, plus the Kubernetes Dashboard


Environment

Cloud provider: QingCloud
OS: CentOS Linux release 7.9.2009 (Core)
Kernel: 3.10.0-1160.el7.x86_64
Installation type: default
CPU: 4 cores
Memory: 8 GB
Disk: 50 GB

Installing Kubernetes

For detailed sealos usage, see: https://www.sealyun.com/

  • Install wget with yum
    yum install -y wget
  • Download the sealos binary
    wget -c https://sealyun-home.oss-cn-beijing.aliyuncs.com/sealos/latest/sealos
  • Make it executable
    chmod +x sealos
  • Move it into a directory on the system PATH so it can be run directly
    mv sealos /usr/bin/
  • Download the offline resource package
    wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/05a3db657821277f5f3b92d834bbaf98-v1.22.0/kube1.22.0.tar.gz
  • Install a single-master cluster
    sealos init --passwd 'xxxxxxx' --master 192.168.0.40 --pkg-url /root/kube1.22.0.tar.gz --version v1.22.0
  • Check the cluster status
    kubectl get nodes
    NAME         STATUS   ROLES                  AGE    VERSION
    i-o72s0m3y   Ready    control-plane,master   107m   v1.22.0
If the master stays NotReady, restart the container runtime (I hit this when trying a three-node cluster earlier; it did not occur on this single-node install):
systemctl restart containerd
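Before restarting anything, a few commands usually narrow down why the node is NotReady. This is a sketch; the `run_if_present` guard is mine, added so the script degrades gracefully on machines without kubectl or journalctl:

```shell
# Sketch: diagnose a NotReady master before restarting services.
# run_if_present skips tools that are not installed, so this is safe anywhere.
run_if_present() {
  if command -v "$1" >/dev/null 2>&1; then "$@" || true; else echo "skipped: $1"; fi
}

run_if_present kubectl get nodes -o wide               # overall readiness
run_if_present kubectl describe node "$(hostname)"     # the Conditions block names the cause
run_if_present journalctl -u kubelet -n 20 --no-pager  # recent kubelet errors
# If the container runtime turns out to be the culprit: systemctl restart containerd
```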

  • Check that all system pods are healthy
    kubectl get pods -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-78d6f96c7b-cmw4f   1/1     Running   0          110m
    calico-node-gspmv                          1/1     Running   0          110m
    coredns-78fcd69978-jpfpg                   1/1     Running   0          110m
    coredns-78fcd69978-s4pcb                   1/1     Running   0          110m
    etcd-i-o72s0m3y                            1/1     Running   0          110m
    kube-apiserver-i-o72s0m3y                  1/1     Running   0          110m
    kube-controller-manager-i-o72s0m3y         1/1     Running   0          110m
    kube-proxy-ws9g2                           1/1     Running   0          110m
    kube-scheduler-i-o72s0m3y                  1/1     Running   0          110m
Deploying the dashboard

For the deployment steps, see https://github.com/kubernetes/dashboard

  • Deploy the dashboard
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
  • Check the deployment status
    kubectl get pod -n kubernetes-dashboard
    NAME                                         READY   STATUS              RESTARTS   AGE
    dashboard-metrics-scraper-7c857855d9-d887m   1/1     Running             0          2m45s
    kubernetes-dashboard-bcf9d8968-w7hlt         0/1     ContainerCreating   0          2m45s
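Instead of re-running `kubectl get pod` by hand, `kubectl wait` can block until the pod reports Ready. A sketch; the `k8s-app=kubernetes-dashboard` label is the one the upstream recommended.yaml manifest applies, and the guard is mine:

```shell
# Sketch: block until the dashboard pod is Ready (up to 5 minutes).
wait_for_dashboard() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found"
    return 1
  fi
  kubectl wait pod \
    --namespace kubernetes-dashboard \
    --selector k8s-app=kubernetes-dashboard \
    --for condition=Ready \
    --timeout=300s
}
wait_for_dashboard || true
```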
Tail the deployment logs. (The log reports "server could not find the requested resource"; some people say extra steps are needed, but I left it alone and the pod went Running on its own after a long wait.)
kubectl logs -n kubernetes-dashboard $(kubectl get pod -n kubernetes-dashboard -o jsonpath='{.items[0].metadata.name}') -f
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-06-13T06:24:33Z"}
192.168.0.40 - - [13/Jun/2022:06:24:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:24:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:24:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:25:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:25:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:25:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-06-13T06:25:33Z"}
192.168.0.40 - - [13/Jun/2022:06:25:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:25:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:25:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:26:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:26:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:26:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-06-13T06:26:33Z"}
192.168.0.40 - - [13/Jun/2022:06:26:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:26:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:26:50 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.0"
192.168.0.40 - - [13/Jun/2022:06:26:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
192.168.0.40 - - [13/Jun/2022:06:27:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.22"
Check pod status and placement:
kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7c857855d9-d887m   1/1     Running   0          8m11s   100.81.85.4   i-o72s0m3y   <none>           <none>
kubernetes-dashboard-bcf9d8968-w7hlt         1/1     Running   0          8m11s   100.81.85.5   i-o72s0m3y   <none>           <none>

  • Switch the service to NodePort. The default access path goes through the API server, which is cumbersome; with NodePort the dashboard can be reached directly at the VM's IP address.
    kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
    The current TYPE is ClusterIP:
    NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    kubernetes-dashboard   ClusterIP   10.101.250.179   <none>        443/TCP   12m
Edit the service and change ClusterIP to NodePort; the change takes effect shortly afterwards.
kubectl --namespace=kubernetes-dashboard edit service kubernetes-dashboard
Before:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2022-06-13T06:19:05Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "9772"
  uid: 6b35c946-142a-44c1-a7c5-4f0bd1c9f3f4
spec:
  clusterIP: 10.101.250.179
  clusterIPs:
  - 10.101.250.179
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
After (the rest of the manifest is unchanged; only the type field differs):
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
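If you prefer not to use the interactive editor, the same change can be made non-interactively with `kubectl patch` (a sketch, equivalent to the manual edit above; the guard and function name are mine):

```shell
# Sketch: switch the service type to NodePort with a strategic-merge patch.
patch_to_nodeport() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found"
    return 1
  fi
  kubectl --namespace kubernetes-dashboard patch service kubernetes-dashboard \
    --patch '{"spec":{"type":"NodePort"}}'
}
patch_to_nodeport || true
```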
Check the service again; the TYPE is now NodePort:
[root@i-o72s0m3y ~]# kubectl --namespace=kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.101.250.179   <none>        443:30228/TCP   18m
The service is now exposed on port 30228, so the dashboard can be opened in a browser at https://192.168.0.40:30228/ (substitute your VM's actual IP address).
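A quick reachability check can be run from any machine that can reach the node. This is a sketch; `-k` is needed because the dashboard serves a self-signed certificate, the helper name is mine, and the IP/port are the ones from this walkthrough:

```shell
# Sketch: print the HTTP status code the dashboard returns ("000" if unreachable).
check_dashboard() {
  curl -k -s -o /dev/null -w '%{http_code}\n' "https://${1}:${2}/"
}
check_dashboard 192.168.0.40 30228 || true
```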
Logging in with a token

Token login requires creating a user and a role binding.
User manifest (vi admin-user.yaml). The file names here and below can be anything, and so can the user name, as long as the references across the manifests stay consistent.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
Create the service account:
kubectl create -f admin-user.yaml
Role binding manifest (vi role-binding.yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Create the role binding:
kubectl create -f role-binding.yaml
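The two manifests can also live in one file, separated by `---`, and be applied in a single command. A sketch; the file name dashboard-admin.yaml is arbitrary, and the apply step is guarded:

```shell
# Sketch: ServiceAccount + ClusterRoleBinding in one file.
cat > dashboard-admin.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f dashboard-admin.yaml || echo "apply failed (is the cluster reachable?)"
fi
```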
Retrieve the token:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-6b8xs
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d81fc97d-9ad0-44f7-b3d3-55d1d1a934ce
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1070 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik8yVDNwQzVMWTc4UmFHUzRXclhLd3ZobzZkdGkwXzRhTnJRRUlwN3ZVWW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZiOHhzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkODFmYzk3ZC05YWQwLTQ0ZjctYjNkMy01NWQxZDFhOTM0Y2UiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.XiJhuz6Wlj2zKzsEcvnwC0Tszzo9eRz-VPkVt_4Xwkr5s2U7C3fUrdLKfNt7rsgl_A0m88Xo48pBvFwlTjKbNRrUE1lsMSwZBjGsNCpA7fyCC4Xqur_f2qSyRCnSkbSNB9W
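Before pasting the token, you can decode its payload locally to confirm it belongs to the expected service account. A service-account token is a JWT whose middle segment is unpadded base64url; a sketch in plain shell (the helper name jwt_payload is mine):

```shell
# Sketch: print the JSON claims of a JWT without verifying its signature.
jwt_payload() {
  # Take the middle (payload) segment and convert base64url -> base64.
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # Restore the '=' padding that JWTs strip off.
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}
# Usage: jwt_payload "$TOKEN"
# For the token above, the claims include
# "sub":"system:serviceaccount:kubernetes-dashboard:admin-user"
```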
Logging in to the dashboard
Copy the token and paste it into the token field on the login page (check that the copy did not add whitespace or drop characters).

Click Sign in to log in to the dashboard.

At first nothing is displayed; select "All namespaces" to see the resources.

Other pitfalls

When deploying a three-master cluster, pods stayed Pending and available nodes showed 0/3.
Allow pods to be scheduled on the masters (the trailing '-' removes the taint):
kubectl taint node --all node-role.kubernetes.io/master-
To forbid scheduling on a master again (not verified):
kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule
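To see which taints are currently set before adding or removing any, a guarded sketch (the helper name is mine):

```shell
# Sketch: list each node's taints, one node per line.
show_taints() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not found"
    return 1
  fi
  kubectl get nodes -o \
    jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
}
show_taints || true
# A trailing '-' on a taint key deletes that taint, which is why
# "kubectl taint node --all node-role.kubernetes.io/master-" frees the masters.
```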

Source: https://www.cnblogs.com/yscheng/p/16370896.html
