怀念夏天 posted on 2022-9-16 17:20:09

KubeSphere 3.3 private cloud deployment: multi-node installation on Linux

Deployed across four hosts (virtual machines work as well)
 
https://img2022.cnblogs.com/blog/2391895/202208/2391895-20220825133609595-160568369.png
Software versions used
CentOS 7.9, kernel 3.10.0-1160.el7.x86_64
KubeSphere 3.3
 
KubeSphere official site: a PaaS container platform for cloud-native applications with Kubernetes multi-cluster management | KubeSphere
 
Install CentOS 7.9 on four fresh hosts.
Aliyun mirror download address: http://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64
Download CentOS-7-x86_64-DVD-2009.iso
 
To install CentOS from a USB drive, see Rufus - create bootable USB drives the easy way
 
Once the OS is installed, configure the system first.
1) System configuration

Static IP configuration
CentOS 7 does not bring the network up at boot by default; start it manually: systemctl start network
Check the current network configuration with ip addr, route -n, or ifconfig, then edit the interface file:

vi /etc/sysconfig/network-scripts/ifcfg-exx   (the NIC name differs slightly from machine to machine)
https://img2022.cnblogs.com/blog/2391895/202208/2391895-20220825140050341-2093160447.png
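In text form the file ends up looking roughly like this (a sketch only: the NIC name ens33 and the gateway/DNS values are assumptions; substitute each node's own address):
# example for node222; GATEWAY and DNS1 are assumed values
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.10.222
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=223.5.5.5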
 
 
Restart the network: systemctl restart network
 
Set the hostname
hostnamectl set-hostname xxx
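For example, to match the hosts entries below, run on each machine in turn:
hostnamectl set-hostname node222   # on 192.168.10.222
hostnamectl set-hostname node223   # on 192.168.10.223
...and likewise node224 and node225 on their respective hosts.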
 
Edit the hosts file
vim /etc/hosts
Do not delete the existing content; append:
192.168.10.222 node222
192.168.10.223 node223
192.168.10.224 node224
192.168.10.225 node225
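After saving, a quick ping from any node confirms the names resolve:
ping -c 1 node223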
 
Disable swap
vim /etc/fstab
Comment out the swap line
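Commenting out the fstab entry only disables swap from the next reboot; to turn it off for the current session too:
swapoff -a
free -m   # the Swap row should now read 0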
 
Disable the firewall
systemctl status firewalld shows its current state; stop and disable it:
systemctl stop firewalld
systemctl disable firewalld
 
 
Disable SELinux
Check the status: sestatus
Edit /etc/selinux/config and set
SELINUX=disabled
The config change takes effect after a reboot; setenforce 0 switches SELinux to permissive mode for the current session.
 
 
Upgrade the CA certificates (skipping this can cause certificate-expired errors)
yum upgrade ca-certificates
 
 
Time synchronization: yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
vim /etc/chrony.conf
Replace the default servers with Aliyun's:
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
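Then restart chronyd and confirm the Aliyun servers are being used:
systemctl restart chronyd
chronyc sources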
 
2) Passwordless SSH between the servers

 
Generate a key pair
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
 
 
 https://img2022.cnblogs.com/blog/2391895/202208/2391895-20220825141716988-1619171607.png
 
 
To log in to a host without a password, copy your public key to that host (configure passwordless login to yourself as well!):
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.10.222
You will be prompted for that host's root password.
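With four nodes a small loop saves typing (run it on each node; you enter the root password once per target):
for ip in 192.168.10.222 192.168.10.223 192.168.10.224 192.168.10.225; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
done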
3) Installation

For the installation steps, refer to the official documentation: Multi-node installation (kubesphere.com.cn)
Install the dependency packages on all servers:
yum install -y socat sudo curl openssl tar ipset ebtables conntrack
Run on all servers:
export KKZONE=cn
 
Install KubeKey on the control-plane node
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
chmod +x kk
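To confirm the binary downloaded correctly, you can print its version (assuming this release ships the version subcommand, as recent kk releases do):
./kk version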
 
Create the configuration file (this writes config-sample.yaml by default):
./kk create config --with-kubesphere v3.3.0
Edit the configuration file.

For reference, here is mine, with all pluggable components enabled:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node222, address: 192.168.10.222, internalAddress: 192.168.10.222, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node223, address: 192.168.10.223, internalAddress: 192.168.10.223, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node224, address: 192.168.10.224, internalAddress: 192.168.10.224, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node225, address: 192.168.10.225, internalAddress: 192.168.10.225, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - node222
    control-plane:
    - node222
    worker:
    - node222
    - node223
    - node224
    - node225
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.8   # assumption: this value was garbled in the original post; keep whatever version kk generated
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker   # assumption: left empty in the original; docker is the kk default
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: ["https://whf4b9x8.mirror.aliyuncs.com"]
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: true
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: true
    ippool:
      type: none
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
Run the installation command:
./kk create cluster -f config-sample.yaml
Then comes a long wait.
https://img2022.cnblogs.com/blog/2391895/202208/2391895-20220825142924873-176030469.png
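You can follow the installer's progress with the log command KubeSphere's documentation gives (run on the control-plane node once kubectl is available):
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
When it finishes, the log prints the console address (NodePort 30880, per the config above) and the default admin account.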
 
4) Installation problems

Error:
error execution phase preflight: Some fatal errors occurred:
        : unsupported graph driver: vfs
If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix: set the Docker storage driver to overlay2. Edit or create /etc/docker/daemon.json and add "storage-driver": "overlay2":
# vim /etc/docker/daemon.json
{
  "registry-mirrors": [],
  "storage-driver": "overlay2"
}
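The new storage driver only takes effect after Docker restarts (note that images pulled under the old driver become invisible until you switch back):
systemctl restart docker
docker info | grep 'Storage Driver'   # should now report overlay2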
