Building a Kubernetes cluster with Docker as the container runtime, in a one-master / two-worker layout
Hostname        IP address
k8s-master01    192.168.1.11
k8s-node01      192.168.1.12
k8s-node02      192.168.1.13

Part 1: Preparation
Create three virtual machines in VMware Workstation Pro running Rocky Linux 9 (a minimal install is recommended).
If your version of VMware Workstation Pro (for example, anything older than 16.x) has no Rocky Linux 9 preset in the new-VM wizard, choose Red Hat Enterprise Linux 9 or a similar version as the template instead.
Host hardware configuration

Role           IP address     OS                      Spec                            Key components
k8s-master01   192.168.1.11   Rocky Linux release 9   2 CPUs, 4 GB RAM, 100 GB disk   kube-apiserver, etcd
k8s-node01     192.168.1.12   Rocky Linux release 9   2 CPUs, 4 GB RAM, 100 GB disk   kubelet, kube-proxy
k8s-node02     192.168.1.13   Rocky Linux release 9   2 CPUs, 4 GB RAM, 100 GB disk   kubelet, kube-proxy
Setting up the yum repositories
1. Start from the minimal OS install.
2. Replace the default repositories with a domestic mirror.
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
-i.bak \
/etc/yum.repos.d/rocky*.repo
dnf makecache
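To see exactly what that substitution does, here is a self-contained demo that runs the same sed against a throwaway copy; the two sample lines are illustrative, not the full repo file:

```shell
# Run the repo-rewrite sed against two representative sample lines.
repo=$(mktemp)
printf '%s\n' \
  'mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever' \
  '#baseurl=http://dl.rockylinux.org/$contentdir/$releasever/BaseOS/$basearch/os/' > "$repo"
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
    -i "$repo"
cat "$repo"   # mirrorlist is now commented out; baseurl points at the Aliyun mirror
```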
3. Install the EPEL repository and switch it to a domestic mirror.
1>. Enable CRB and install the EPEL repo on Rocky Linux 9.
# Rocky Linux 9
dnf config-manager --set-enabled crb
dnf install epel-release
2>. Back up the repo files (if other EPEL repos are configured) before switching to the domestic mirror.
Note the last repo below: Aliyun has no mirror for it, so do not modify it; if you change it by mistake, simply restore the original file.
cp /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
cp /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
cp /etc/yum.repos.d/epel-cisco-openh264.repo /etc/yum.repos.d/epel-cisco-openh264.repo.backup
3>. Point the repo configuration at the Aliyun mirror.
The command below rewrites the URLs in epel.repo and epel-testing.repo; it does not touch epel-cisco-openh264.repo, which keeps working as-is.
sed -e 's!^metalink=!#metalink=!g' \
-e 's!^#baseurl=!baseurl=!g' \
-e 's!https\?://download\.fedoraproject\.org/pub/epel!https://mirrors.aliyun.com/epel!g' \
-e 's!https\?://download\.example/pub/epel!https://mirrors.aliyun.com/epel!g' \
-i /etc/yum.repos.d/epel{,-testing}.repo
Now that the EPEL repository is in place, refresh the repo cache:
dnf clean all
dnf makecache
Configure hostnames and IP addresses (run each hostnamectl command on its respective machine)
[root@localhost ~]#hostnamectl set-hostname k8s-master01
[root@localhost ~]#hostnamectl set-hostname k8s-node01
[root@localhost ~]#hostnamectl set-hostname k8s-node02
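The three commands above differ only in the hostname; a small dry-run loop makes the mapping explicit (it only prints what to run where, nothing is executed):

```shell
# Print which hostnamectl command belongs on which machine.
cmds=""
for h in k8s-master01 k8s-node01 k8s-node02; do
  cmds="${cmds}on ${h}: hostnamectl set-hostname ${h}
"
done
printf '%s' "$cmds"
```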
#Set the static IP address on master01
[root@k8s-master01 ~]#vi /etc/NetworkManager/system-connections/ens160.nmconnection
[connection]
id=ens160
uuid=ff8b8a02-ec88-301d-8e64-4f88b4551949
type=ethernet
autoconnect-priority=-999
interface-name=ens160
timestamp=1744709836
[ethernet]
[ipv4]
method=manual
address1=192.168.1.11/24,192.168.1.2
dns=114.114.114.114
[ipv6]
addr-gen-mode=eui64
method=auto
[proxy]
[root@k8s-master01 network-scripts]# nmcli connection reload
[root@k8s-master01 network-scripts]# nmcli connection up ens160
#Likewise set the IP address on node01 and node02 (same keyfile, each with its own address)
[root@k8s-node01 ~]# vi /etc/NetworkManager/system-connections/ens160.nmconnection    # set address1=192.168.1.12/24,192.168.1.2
[root@k8s-node01 ~]# nmcli connection reload
[root@k8s-node01 ~]# nmcli connection up ens160
[root@k8s-node02 ~]# vi /etc/NetworkManager/system-connections/ens160.nmconnection    # set address1=192.168.1.13/24,192.168.1.2
[root@k8s-node02 ~]# nmcli connection reload
[root@k8s-node02 ~]# nmcli connection up ens160
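As an alternative to hand-editing the keyfiles, nmcli can apply the same static addressing. The sketch below is a dry run: it builds one nmcli command per machine (connection name ens160, gateway and DNS taken from the keyfile above) and prints it instead of executing anything:

```shell
# Build the per-host nmcli commands from a host:ip list (dry run).
plan=""
for pair in k8s-master01:192.168.1.11 k8s-node01:192.168.1.12 k8s-node02:192.168.1.13; do
  host=${pair%%:*}; ip=${pair##*:}
  plan="${plan}on ${host}: nmcli connection modify ens160 ipv4.method manual ipv4.addresses ${ip}/24 ipv4.gateway 192.168.1.2 ipv4.dns 114.114.114.114
"
done
printf '%s' "$plan"
```

On a real host you would then bring the connection up with nmcli connection up ens160, as above.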
Configure /etc/hosts name resolution
[root@k8s-master01 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 k8s-master01
192.168.1.12 k8s-node01
192.168.1.13 k8s-node02
Add the same three entries to /etc/hosts on k8s-node01 and k8s-node02.
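Since the three entries are identical on every machine, they can be generated once and appended on each node; this sketch only prints the block (the actual append to /etc/hosts is left as a comment):

```shell
# The cluster name-resolution block, built once.
hosts_block=$(printf '%s\n' \
  '192.168.1.11 k8s-master01' \
  '192.168.1.12 k8s-node01' \
  '192.168.1.13 k8s-node02')
echo "$hosts_block"
# on each machine: echo "$hosts_block" >> /etc/hosts
```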
# Set up passwordless SSH login; operate only on k8s-master01
[root@k8s-master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N '' -q
# Copy the key to the other 2 nodes
[root@k8s-master01 ~]# ssh-copy-id k8s-node01
[root@k8s-master01 ~]# ssh-copy-id k8s-node02
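Once key-based login works, every later "repeat on the other nodes" step can be fanned out from the master over ssh. A minimal sketch; the echo makes it a dry run that prints each remote command, and dropping the echo would execute for real:

```shell
# Fan a command out to the worker nodes (dry run via echo).
NODES="k8s-node01 k8s-node02"
run_on_nodes() {
  for n in $NODES; do
    echo ssh "root@${n}" "$*"
  done
}
run_on_nodes systemctl disable --now firewalld
```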
On all three machines, disable the firewall and SELinux, and set up time synchronization
[root@k8s-master01 ~]#systemctl disable --now firewalld
[root@k8s-master01 ~]#sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config
[root@k8s-master01 ~]#setenforce 0
[root@k8s-node01 ~]#systemctl disable --now firewalld
[root@k8s-node01 ~]#sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config
[root@k8s-node01 ~]#setenforce 0
[root@k8s-node02 ~]#systemctl disable --now firewalld
[root@k8s-node02 ~]#sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config
[root@k8s-node02 ~]#setenforce 0
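One prerequisite the steps above skip: kubeadm refuses to initialize while swap is enabled, and a Rocky Linux minimal install normally creates a swap partition. On each real node you would run 'swapoff -a' and comment the swap line out of /etc/fstab; the sketch below demonstrates that edit on a throwaway copy with a made-up fstab, so it is safe to run anywhere:

```shell
# Comment out the swap entry (demo on a temporary fstab copy).
fstab=$(mktemp)
printf '%s\n' \
  'UUID=1111-2222 /         xfs  defaults 0 0' \
  '/dev/mapper/rl-swap none swap defaults 0 0' > "$fstab"
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"
grep swap "$fstab"   # the swap line now starts with '#'
```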
[root@k8s-master01 ~]#dnf install -y chrony
# Point chrony at a domestic NTP server
[root@k8s-master01 ~]#sed -i '/^pool/ c pool ntp1.aliyun.com iburst' /etc/chrony.conf
[root@k8s-master01 ~]#systemctl restart chronyd
[root@k8s-master01 ~]#systemctl enable chronyd
[root@k8s-master01 ~]# chronyc sources
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 47.96.149.233 2 6 17 2 -2147us[-2300us] +/- 46ms
#The other hosts need the same chrony installation and identical configuration for time sync
Enable IPVS
[root@k8s-master01 ~]#cat >> /etc/modules-load.d/ipvs.conf << EOF
br_netfilter
ip_conntrack
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
[root@k8s-master01 ~]#dnf install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-master01 ~]#systemctl restart systemd-modules-load.service
Create the identical /etc/modules-load.d/ipvs.conf on k8s-node01 and k8s-node02 with the same here-doc, then install the packages and restart the module loader on each:
[root@k8s-node01 ~]#dnf install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-node01 ~]#systemctl restart systemd-modules-load.service
[root@k8s-node02 ~]#dnf install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-node02 ~]#systemctl restart systemd-modules-load.service
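The module file is identical on all three machines, so it can be generated from a single list instead of retyping the here-doc. A sketch that writes the list to a temp file (on a real node the target would be /etc/modules-load.d/ipvs.conf):

```shell
# One module name per line; word-splitting of the unquoted variable is intentional.
IPVS_MODULES="br_netfilter ip_conntrack ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr
ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp
nf_conntrack ip_tables ip_set xt_set ipt_set ipt_rpfilter ipt_REJECT ipip"
modfile=$(mktemp)   # real target: /etc/modules-load.d/ipvs.conf
printf '%s\n' $IPVS_MODULES > "$modfile"
wc -l < "$modfile"
```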
Raise the file-handle and process limits
[root@k8s-master01 ~]#ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
Verify the change
[root@k8s-master01 ~]#ulimit -a
Apply kernel tuning
[root@k8s-master01 ~]#cat > /etc/sysctl.d/k8s_better.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@k8s-master01 ~]#modprobe br_netfilter
[root@k8s-master01 ~]#lsmod |grep conntrack
[root@k8s-master01 ~]#modprobe nf_conntrack
[root@k8s-master01 ~]#sysctl -p /etc/sysctl.d/k8s_better.conf
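To confirm the tuning took effect, the values can be read back from /proc/sys (equivalent to 'sysctl -n'). A small sketch checking two of the keys; on a freshly tuned node they should show the configured values (1 and 0):

```shell
# Read kernel parameters straight from /proc/sys (dots become slashes).
report=""
for key in net.ipv4.ip_forward vm.swappiness; do
  report="${report}${key} = $(cat "/proc/sys/${key//.//}")
"
done
printf '%s' "$report"
```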
Part 2: Install and run the container runtime
#Install dependencies
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
#Add the Docker repository
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/rhel/docker-ce.repo
#Install Docker CE
[root@k8s-master01 ~]# yum makecache fast
[root@k8s-master01 ~]# yum -y install docker-ce
[root@k8s-master01 ~]# docker -v
Docker version 28.0.4, build b8034c0
# Configure domestic registry mirrors
[root@k8s-master01 ~]# mkdir -p /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors":["https://p3kgr6db.mirror.aliyuncs.com",
"https://docker.m.daocloud.io",
"https://your_id.mirror.aliyuncs.com",
"https://docker.nju.edu.cn/",
"https://docker.anyhub.us.kg",
"https://dockerhub.jobcher.com",
"https://dockerhub.icu",
"https://docker.ckyl.me",
"https://cr.console.aliyun.com"
],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
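dockerd will refuse to start if daemon.json is not valid JSON, a common slip after hand-editing, so it is worth validating before restarting. A sketch using python3 (assumed available) on a temporary sample; on a real host you would point it at /etc/docker/daemon.json:

```shell
# Syntax-check a daemon.json-style file before restarting dockerd.
f=$(mktemp)
cat > "$f" << 'EOF'
{
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
if python3 -m json.tool "$f" > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "INVALID JSON: fix before restarting docker"
fi
```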
Enable Docker at boot and start it
[root@k8s-master01 ~]# systemctl enable --now docker
#Run exactly the same commands on k8s-node01 and k8s-node02
Check the Docker version
# docker version
Install cri-dockerd on all three machines
wget -c https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16-3.fc35.x86_64.rpm
wget -c https://rpmfind.net/linux/almalinux/8.10/BaseOS/x86_64/os/Packages/libcgroup-0.41-19.el8.x86_64.rpm
yum install libcgroup-0.41-19.el8.x86_64.rpm
yum install cri-dockerd-0.3.16-3.fc35.x86_64.rpm
Enable the cri-dockerd service
systemctl enable cri-docker
Configure a domestic mirror for the cri-dockerd pause image by editing the service unit on each machine:
[root@k8s-master01 ~]#vim /usr/lib/systemd/system/cri-docker.service
[root@k8s-node01 ~]# vim /usr/lib/systemd/system/cri-docker.service
[root@k8s-node02 ~]# vim /usr/lib/systemd/system/cri-docker.service
# In each file, extend the ExecStart line, for example (the pause image tag is an assumption; check it with 'kubeadm config images list'):
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10
# Restart the Docker components
[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker
#Check the component status
[root@k8s-master01 ~]#systemctl status docker cri-docker.socket cri-docker
[root@k8s-node01 ~]#systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker
[root@k8s-node01 ~]#systemctl status docker cri-docker.socket cri-docker
[root@k8s-node02 ~]#systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker
[root@k8s-node02 ~]#systemctl status docker cri-docker.socket cri-docker
Part 3: Install the Kubernetes packages on all three machines
#Add the Aliyun YUM repository for Kubernetes
[root@k8s-master01 ~]#
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF
#Install kubelet, kubeadm, kubectl and kubernetes-cni
[root@k8s-master01 ~]#yum install -y kubelet kubeadm kubectl kubernetes-cni
#Align the cgroup drivers: kubelet must use the same cgroup driver as Docker (systemd), so edit the following file
[root@k8s-master01 ~]#vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
[root@k8s-master01 ~]# systemctl enable kubelet
#node01 and node02: repeat exactly the same steps as on the master: create the identical kubernetes.repo, run 'yum install -y kubelet kubeadm kubectl kubernetes-cni', set KUBELET_EXTRA_ARGS="--cgroup-driver=systemd" in /etc/sysconfig/kubelet, and run 'systemctl enable kubelet' on each node.
Part 4: Initialize the Kubernetes cluster
#Generate and edit the init configuration on the master
[root@k8s-master01 ~]#kubeadm config print init-defaults > kubeadm-init.yaml
[root@k8s-master01 ~]#vi kubeadm-init.yaml
Change advertiseAddress to: 192.168.1.11
Change criSocket to: unix:///var/run/cri-dockerd.sock
Change name to: k8s-master01
Change imageRepository to: registry.aliyuncs.com/google_containers
Change kubernetesVersion to: 1.32.2
Append the following at the end of the file to switch kube-proxy to IPVS mode:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
#Run kubeadm init with this configuration file
[root@k8s-master01 ~]#kubeadm init --config=kubeadm-init.yaml --upload-certs --v=6
#On the master host, set up kubectl access
[root@k8s-master01 ~]#mkdir -p $HOME/.kube
[root@k8s-master01 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
#Join node01 and node02 to the cluster (the token below comes from the kubeadm init output; it expires after 24 hours, and a fresh join command can be printed on the master with 'kubeadm token create --print-join-command')
[root@k8s-node01 ~]# kubeadm join 192.168.1.11:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1c8cadee79fc7739c3084fa08f1d4347bb0f0ae67d7cb38b329c7f2481ee0048 --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-node02 ~]# kubeadm join 192.168.1.11:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1c8cadee79fc7739c3084fa08f1d4347bb0f0ae67d7cb38b329c7f2481ee0048 --cri-socket unix:///var/run/cri-dockerd.sock
#Check the cluster from the master
[root@k8s-master01 ~]# kubectl get node
# Operate only on master01
[root@k8s-master01 ~]# curl -O https://docs.projectcalico.org/archive/v3.28/manifests/calico.yaml
[root@k8s-master01 ~]# vim calico.yaml
The two lines below are commented out by default in calico.yaml; uncomment them and make sure the CIDR on the second line matches the pod network specified for kubeadm init.
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
calico.tar.gz below is an offline bundle of the Calico v3.28.0 images (calico/node, calico/cni, calico/kube-controllers) saved in advance; loading it on every machine avoids pulling from a registry.
[root@k8s-master ~]# ls
anaconda-ks.cfg calico.tar.gz calico.yaml kubeadm-init.yaml
[root@k8s-master ~]# docker load -i calico.tar.gz
29ebc113185d: Loading layer 3.582MB/3.582MB
de34b16b5b80: Loading layer 75.58MB/75.58MB
Loaded image: calico/kube-controllers:v3.28.0
3ba0ed02b4de: Loading layer 205.4MB/205.4MB
5f70bf18a086: Loading layer 1.024kB/1.024kB
Loaded image: calico/cni:v3.28.0
30d979f3b1cb: Loading layer 354.5MB/354.5MB
Loaded image: calico/node:v3.28.0
[root@k8s-master ~]# scp calico.tar.gz k8s-node01:~
calico.tar.gz 100% 610MB 92.1MB/s 00:06
[root@k8s-master ~]# scp calico.tar.gz k8s-node02:~
calico.tar.gz 100% 610MB 89.3MB/s 00:06
[root@k8s-node01 ~]# docker load -i calico.tar.gz
[root@k8s-node02 ~]# docker load -i calico.tar.gz
29ebc113185d: Loading layer 3.582MB/3.582MB
de34b16b5b80: Loading layer 75.58MB/75.58MB
Loaded image: calico/kube-controllers:v3.28.0
3ba0ed02b4de: Loading layer 205.4MB/205.4MB
5f70bf18a086: Loading layer 1.024kB/1.024kB
Loaded image: calico/cni:v3.28.0
30d979f3b1cb: Loading layer 354.5MB/354.5MB
Loaded image: calico/node:v3.28.0
Deploy the Calico network
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
Check:
[root@k8s-master01 ~]# kubectl get pod -n kube-system
[root@k8s-master01 ~]# kubectl get nodes
[root@k8s-master01 ~]# kubectl get pod -n kube-system
Extra: kubectl command auto-completion
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc