Multi-node KubeSphere 3.3 private cloud deployment on Linux


Deployed on four hosts (virtual machines also work).

Software versions used:
CentOS 7.9, kernel 3.10.0-1160.el7.x86_64
KubeSphere 3.3
 
KubeSphere official site: "Container hybrid cloud for cloud-native applications, a PaaS container platform supporting Kubernetes multi-cluster management | KubeSphere"
 
Install CentOS 7.9 on the four fresh hosts.
Aliyun mirror download: http://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64
Download CentOS-7-x86_64-DVD-2009.iso
 
To install CentOS from a USB drive, see "Rufus - Create bootable USB drives the easy way"
 
Once the OS is installed, configure the system first.
1) System settings

Static IP configuration
CentOS 7 does not bring the network up at boot by default; start it manually: systemctl start network
Use ip addr, route -n, or ifconfig to inspect the current network configuration, then edit the config file.
 
vi /etc/sysconfig/network-scripts/ifcfg-exx (the NIC name differs slightly from machine to machine)

After editing, restart the network: systemctl restart network
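For reference, a minimal static-IP ifcfg file looks like the sketch below; the NIC name eth0, gateway, and DNS are placeholders for this example, so substitute your own values (IPADDR follows this guide's 192.168.10.x plan):

TYPE=Ethernet
BOOTPROTO=static        # static address instead of DHCP
NAME=eth0               # placeholder: use your NIC's name
DEVICE=eth0
ONBOOT=yes              # bring the NIC up at boot
IPADDR=192.168.10.222   # this node's address
NETMASK=255.255.255.0
GATEWAY=192.168.10.1    # placeholder: your router's address
DNS1=223.5.5.5          # Aliyun public DNS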
 
Set the hostname:
hostnamectl set-hostname xxx
 
Edit the hosts file:
vim /etc/hosts
Do not delete the existing content; append:
192.168.10.222 node222
192.168.10.223 node223
192.168.10.224 node224
192.168.10.225 node225
 
Disable swap:
vim /etc/fstab
Comment out the swap line.
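The fstab edit only takes effect after a reboot; to turn swap off immediately as well, something like the following works (the sed pattern is a rough sketch, so double-check your /etc/fstab afterwards):

swapoff -a                                                          # turn swap off right now
sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' /etc/fstab   # comment out the swap entry
free -m                                                             # the Swap row should now show 0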
 
Disable the firewall. Check the current state first:
systemctl status firewalld
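Checking the status alone does not turn anything off; to actually stop firewalld and keep it from coming back after a reboot:

systemctl stop firewalld       # stop it now
systemctl disable firewalld    # do not start it at boot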
 
 
Disable SELinux
Check the status: sestatus
Edit /etc/selinux/config and set:
SELINUX=disabled
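The config file change only applies after a reboot; to drop into permissive mode immediately as well:

setenforce 0    # permissive until the next reboot
sestatus        # verify the change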
 
 
Update the CA certificates (skipping this leads to certificate-expired errors later):
yum upgrade ca-certificates
 
 
Time synchronization:
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
vim /etc/chrony.conf
Replace the default server entries with Aliyun's NTP servers:
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
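After editing the config, restart chronyd and confirm the Aliyun servers are actually being used:

systemctl restart chronyd
chronyc sources -v    # the aliyun servers should be listed, ideally one marked ^*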
 
2) Passwordless SSH between the servers

 
Generate a key pair:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

To log in to a host without a password, copy your public key to that host (passwordless login to the machine itself must be configured as well!):
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.10.223
You will be prompted for that host's root password.
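Since every node needs the key (including the one you are on), a small loop saves some typing; the host names assume the /etc/hosts entries added earlier:

for host in node222 node223 node224 node225; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host    # prompts once for each root password
done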
3) Installation

For the installation itself, see the official documentation: Multi-node installation (kubesphere.com.cn)
Install the dependencies on all servers:
yum install socat
yum install sudo
yum install curl
yum install openssl
yum install tar
yum install ipset
yum install ebtables
yum install conntrack
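Equivalently, all of the dependencies can be installed in one command, with -y to skip the confirmation prompts:

yum install -y socat sudo curl openssl tar ipset ebtables conntrack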
Run on all servers:
export KKZONE=cn
 
Install kubekey on the master node:
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
chmod +x kk
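A quick sanity check that the binary downloaded and runs (the exact output format may differ):

./kk version    # should report v2.2.1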
 
Create the configuration file (this writes config-sample.yaml to the current directory):
./kk create config --with-kubesphere v3.3.0
Edit the configuration file.

For reference, here is mine, with all pluggable components enabled:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node222, address: 192.168.10.222, internalAddress: 192.168.10.222, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node223, address: 192.168.10.223, internalAddress: 192.168.10.223, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node224, address: 192.168.10.224, internalAddress: 192.168.10.224, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node225, address: 192.168.10.225, internalAddress: 192.168.10.225, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - node222
    control-plane:
    - node222
    worker:
    - node222
    - node223
    - node224
    - node225
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.7          # original value was garbled; assumption: the default kk v2.2.1 generates
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker  # value lost in the original paste; assumption: docker, the kk default
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: ["https://whf4b9x8.mirror.aliyuncs.com"]
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: true
    ippool:
      type: none
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
Run the installation:
./kk create cluster -f config-sample.yaml
Now comes a long wait.
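Progress can be followed from the ks-installer logs, using the command the KubeSphere docs recommend:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f

When the installation finishes, the console is reachable on any node at port 30880 (the NodePort set in the configuration above).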

 
4) Installation issues

Error:
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR SystemVerification]: unsupported graph driver: vfs
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix: set the Docker storage driver.
Edit or create /etc/docker/daemon.json and add "storage-driver": "overlay2":
[root@localhost ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": [],
  "storage-driver": "overlay2"
}
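The new storage driver only takes effect after Docker restarts; note that images created under the old driver will no longer be visible:

systemctl restart docker
docker info | grep -i 'storage driver'    # should report: Storage Driver: overlay2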
