[kubernetes] Deploying a single-node k8s v1.30.5 from binaries
Preface: the kind cluster I had been using for single-node k8s testing recently broke. The VM would crash a few minutes after booting and I never found the root cause. On top of that, kind makes some custom settings awkward, so I rebuilt a single-node k8s from binaries instead.
Since this cluster is for development and testing, the control plane is not highly available: etcd, apiserver, controller-manager and scheduler each run as a single instance.
Environment:
[*]Host: Debian 12.7, 4 CPU cores, 4 GB RAM, 30 GB disk (2C2G is enough if you only run k8s itself)
[*]Container runtime: containerd v1.7.22
[*]etcd: v3.4.34
[*]kubernetes: v1.30.5
[*]cni: calico v3.25.0
Most of the config files in this post are available at gitee - k8s-note under "安装k8s/二进制单机部署k8s-v1.30.5"; clone the repo if you want them.
Preparation
Most commands in this section require root. If a command fails with a permission error, switch to root or use sudo.
Adjust host parameters
[*]Set the hostname. kubernetes requires each node's hostname to be unique
hostnamectl set-hostname k8s-node1
[*]Add the host to /etc/hosts. Skip this if your intranet runs its own DNS
192.168.0.31 k8s-node1
[*]Install a time sync service. With multiple hosts, the clocks must stay in sync; if the intranet has a time server, point chrony's config at it
sudo apt install -y chrony
sudo systemctl start chrony
[*]Disable swap. By default, k8s refuses to run on a host with swap enabled. The command below only disables it until reboot; to persist the change, delete or comment out the swap entry in /etc/fstab, as in the sketch after it.
sudo swapoff -a
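A one-liner for the /etc/fstab edit (my addition, not from the original; double-check the file afterwards, since the pattern assumes a standard fstab layout):
# comment out any active line whose mount type is swap
sudo sed -ri 's/^([^#].*\s+swap\s+.*)$/#\1/' /etc/fstab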
[*]Load kernel modules. If you skip this, applying the system parameters in the next step will fail.
# 1. Add the config
cat <<EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
# 2. Load the modules now
modprobe overlay
modprobe br_netfilter
# 3. Verify: no output means the module did not load
lsmod | grep br_netfilter
[*]Set kernel parameters
# 1. Write the config file
cat << EOF > /etc/sysctl.d/k8s-sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
vm.swappiness = 0
EOF
# 2. Apply it
sysctl -p /etc/sysctl.d/k8s-sysctl.conf
[*]Install ipvs and load its kernel modules (kube-proxy will be set to ipvs mode later)
# 1. Install dependencies
apt install -y ipset ipvsadm
# 2. Load the modules now
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# 3. Persist them in a config file
cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# 4. Check that they are loaded
lsmod | grep ip_vs
Install containerd
[*]Extract the cri-containerd-cni release tarball to / (it ships containerd, runc, crictl and the CNI plugins)
tar xf cri-containerd-cni-1.7.22-linux-amd64.tar.gz -C /
[*]Generate containerd's default config file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
[*]Edit the config file /etc/containerd/config.toml and change the following
# For distributions that use systemd as the init system, the official recommendation is systemd as the container cgroup driver
# change false to true
SystemdCgroup = true
# point the pause image at my mirror on Aliyun; in an intranet, point it at your internal registry instead
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9"
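If you prefer not to edit the file by hand, the two changes can also be scripted (a sketch of mine, assuming the default config generated above):
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/rainux/pause:3.9"#' /etc/containerd/config.toml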
[*]Start containerd
systemctl start containerd
systemctl enable containerd
[*]Run a command to check that containerd works; no error usually means it is healthy
crictl images
Generate the CA certificate
Both the k8s components and the etcd cluster below use this CA certificate. If your organization provides a central CA, simply use the certificates it issues; otherwise, a self-signed CA is enough for the security setup. Here we generate one ourselves:
# Generate the private key ca.key
openssl genrsa -out ca.key 2048
# Generate the root certificate ca.crt from the private key
# /CN is the master's hostname or IP address
# days is the certificate validity period
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-node1" -days 36500 -out ca.crt
# Copy the CA files to /etc/kubernetes/pki
mkdir -p /etc/kubernetes/pki
cp ca.crt ca.key /etc/kubernetes/pki/
Install etcd
The etcd release can be downloaded from the official site; unpack it and put the etcd and etcdctl binaries into a directory on PATH.
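For reference, a download sketch (my addition; the URL follows the etcd release naming scheme, and the service file below runs etcd from /home/rainux/apps/etcd, so adjust the destination to your layout):
wget https://github.com/etcd-io/etcd/releases/download/v3.4.34/etcd-v3.4.34-linux-amd64.tar.gz
tar xf etcd-v3.4.34-linux-amd64.tar.gz
cp etcd-v3.4.34-linux-amd64/etcd{,ctl} /usr/local/bin/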
[*]Edit the file etcd_ssl.cnf. The IP entries are the etcd node addresses.
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.0.31
[*]Create the etcd server certificate
openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt
[*]Create the etcd client certificate
openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
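Optionally, confirm that both certificates chain back to the CA (a quick sanity check of mine, not in the original):
openssl verify -CAfile ca.crt etcd_server.crt etcd_client.crt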
[*]Edit the etcd configuration file. Adjust directories, file paths, IPs and ports to match your environment.
ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/rainux/apps/etcd/data
ETCD_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.0.31:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.0.31:2379
ETCD_PEER_CERT_FILE=/home/rainux/apps/etcd/certs/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/rainux/apps/etcd/certs/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/home/rainux/apps/certs/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.0.31:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.31:2380"
ETCD_INITIAL_CLUSTER_STATE=new
[*]Edit /etc/systemd/system/etcd.service; adjust the config file path and the etcd binary path to your layout
[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
User=rainux
EnvironmentFile=/home/rainux/apps/etcd/conf/etcd.conf
ExecStart=/home/rainux/apps/etcd/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
[*]Start etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
# check the service status
systemctl status etcd
[*]Check etcd's health with the etcd client
etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=$HOME/apps/certs/etcd_client.crt --key=$HOME/apps/certs/etcd_client.key --endpoints=https://192.168.0.31:2379 endpoint health
# a healthy endpoint prints something like:
https://192.168.0.31:2379 is healthy: successfully committed proposal: took = 13.705325ms
Install the control plane
The k8s binary packages can be downloaded from github: https://github.com/kubernetes/kubernetes/releases
Find the binary download links in the changelog and grab the server binaries; that package contains both the master and node binaries.
After unpacking, move the binaries to /usr/local/bin.
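One way to fetch them (my sketch; the URL follows the official release naming scheme):
wget https://dl.k8s.io/v1.30.5/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /usr/local/bin/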
Install kube-apiserver
The apiserver's core job is to expose HTTP REST interfaces for creating, deleting, updating, querying and watching every kind of k8s resource object, making it the hub for data exchange between the cluster's functional modules and, in effect, the system's data bus and data center. Beyond that, it is the API entry point for cluster management, the enforcement point for resource quotas, and the provider of the cluster's security machinery.
[*]Edit master_ssl.cnf. DNS.5 is the node's hostname (set /etc/hosts accordingly); IP.1 is the Cluster IP of the kubernetes Service, IP.2 is the apiserver host's IP
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-node1
IP.1 = 169.169.0.1
IP.2 = 192.168.0.31
[*]Generate the ssl certificate files
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=k8s-node1" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
[*]Use cfssl to create sa.pub and sa-key.pem. cfssl and cfssljson can be downloaded from GitHub - cfssl
cat <<EOF > sa-csr.json
{
    "CN": "sa",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -initca sa-csr.json | cfssljson -bare sa -
openssl x509 -in sa.pem -pubkey -noout > sa.pub
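If you would rather skip cfssl: the service-account flags only need an RSA key pair, so plain openssl works too (my alternative, not the author's method; adjust the --service-account-* paths below to the filenames you pick):
openssl genrsa -out sa-key.pem 2048
openssl rsa -in sa-key.pem -pubout -out sa.pub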
[*]Edit the kube-apiserver config file; adjust the file paths and the etcd addresses to your environment
KUBE_API_ARGS="--secure-port=6443 \
--tls-cert-file=/home/rainux/apps/certs/apiserver.crt \
--tls-private-key-file=/home/rainux/apps/certs/apiserver.key \
--client-ca-file=/home/rainux/apps/certs/ca.crt \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/home/rainux/apps/certs/sa.pub \
--service-account-signing-key-file=/home/rainux/apps/certs/sa-key.pem \
--apiserver-count=1 \
--endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.0.31:2379 \
--etcd-cafile=/home/rainux/apps/certs/ca.crt \
--etcd-certfile=/home/rainux/apps/certs/etcd_client.crt \
--etcd-keyfile=/home/rainux/apps/certs/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--audit-log-maxsize=100 \
--audit-log-maxage=15 \
--audit-log-path=/home/rainux/apps/kubernetes/logs/apiserver.log --v=2"
[*]Edit the service file /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[*]Start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
# check the service status
systemctl status kube-apiserver
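A quick liveness probe (my addition; /healthz is readable by unauthenticated clients through the default system:public-info-viewer binding, so -k is enough here):
curl -k https://192.168.0.31:6443/healthz
# expected output: ok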
[*]Generate the client certificate
openssl genrsa -out client.key 2048
# /CN identifies the client user name used when connecting to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
[*]Create the kubeconfig file that clients use to connect to the apiserver. The server field is the apiserver address (in an HA setup, it would be the nginx/load-balancer address); adjust to your environment. kubectl can use this same kubeconfig, so on a dev box you can write it straight to $HOME/.kube/config
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.168.0.31:6443
    certificate-authority: /home/rainux/apps/certs/ca.crt
users:
- name: admin
  user:
    client-certificate: /home/rainux/apps/certs/client.crt
    client-key: /home/rainux/apps/certs/client.key
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
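With the kubeconfig in place (e.g. copied to $HOME/.kube/config as suggested above), the apiserver can already serve queries even before the remaining components are up (my addition):
kubectl get ns
# the built-in namespaces (default, kube-system, kube-public, ...) should be listed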
Install kube-controller-manager
Through the apiserver's interfaces, controller-manager watches the state of specific cluster resources in real time; whenever a resource object drifts from its desired state, controller-manager tries to reconcile it back to the desired state.
[*]Edit the config file /home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/home/rainux/apps/certs/apiserver.key \
--root-ca-file=/home/rainux/apps/certs/ca.crt \
--v=0"
[*]Edit the service file /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[*]Start kube-controller-manager
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
Install kube-scheduler
[*]Edit the config file /home/rainux/apps/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER_ARGS="--kubeconfig=/home/rainux/.kube/config \
--leader-elect=true \
--v=0"
[*]Edit the service file /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[*]Start kube-scheduler
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
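The whole control plane is now up; a quick aggregate health check (my addition):
kubectl get --raw='/readyz?verbose'
# every listed check should report ok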
Install the worker node
Install kubelet
[*]Edit the file /home/rainux/apps/kubernetes/conf/kubelet.conf. Adjust hostname-override and kubeconfig to your environment.
KUBELET_ARGS="--kubeconfig=/home/rainux/.kube/config \
--config=/home/rainux/apps/kubernetes/conf/kubelet.config \
--hostname-override=k8s-node1 \
--v=0 \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
[*]Edit the file /home/rainux/apps/kubernetes/conf/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0              # listen address
port: 10250                   # listen port
cgroupDriver: systemd         # cgroup driver; defaults to cgroupfs, systemd is recommended (matches the containerd setting above)
clusterDNS: ["169.169.0.100"] # cluster DNS address
clusterDomain: cluster.local  # DNS domain suffix for services
authentication:               # whether to allow anonymous access or use webhook authentication
  anonymous:
    enabled: true
[*]Edit the service file /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[*]Start kubelet
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
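The node should now register itself with the apiserver (my addition; it will show NotReady until the CNI plugin is installed further down):
kubectl get nodes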
Install kube-proxy
[*]Edit the config file /home/rainux/apps/kubernetes/conf/kube-proxy.conf. proxy-mode defaults to iptables; with ipvs installed, ipvs is the better choice
KUBE_PROXY_ARGS="--kubeconfig=/home/rainux/.kube/config \
--hostname-override=k8s-node1 \
--proxy-mode=ipvs \
--v=0"
[*]Edit the service file /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=kubelet.service

[Service]
EnvironmentFile=/home/rainux/apps/kubernetes/conf/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[*]Start kube-proxy
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
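With kube-proxy running in ipvs mode, you can inspect the virtual servers it programs (my addition):
ipvsadm -Ln
# the kubernetes Service (169.169.0.1:443 forwarding to 192.168.0.31:6443) should appear in the list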
Install calico
[*]Download the calico manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
[*]If you can pull from docker hub directly, apply the manifest as-is to create the calico resources; otherwise change the image addresses in it. If you also use calico 3.25.0, you can use the images I mirrored on Aliyun:
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:cni-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:node-v3.25.0
image: registry.cn-hangzhou.aliyuncs.com/rainux/calico:kube-controllers-v3.25.0
[*]Run the install
kubectl create -f calico.yaml
[*]Check whether the calico pods are running. All of them should reach Running; if not, describe the pods to see what is wrong
kubectl get pods -A
Install CoreDNS
[*]Edit the deployment file coredns.yaml. Note that the Service pins a clusterIP, and the image address points at my Aliyun mirror.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    cluster.local {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local 169.169.0.0/16 {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    . {
        cache 30
        loadbalance
        forward . /etc/resolv.conf
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/rainux/coredns:1.11.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
[*]Create the coredns service
kubectl create -f coredns.yaml
[*]The repo contains a test-dns.yaml for checking that DNS works. After creating its objects, install nslookup inside the debian pod and check that svc-nginx resolves.
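For reference, the check looks roughly like this (a sketch; the pod and service names come from my test-dns.yaml, adjust them to whatever yours defines):
kubectl exec -it debian -- bash -c 'apt update && apt install -y dnsutils && nslookup svc-nginx'
# a working setup resolves svc-nginx to its cluster IP via 169.169.0.100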
Install metrics-server
In recent k8s versions, both resource metrics collection and the HPA feature rely on metrics-server.
[*]Edit the config file; mind the image address
[*]Create the resource objects
[*]Run a command to check the install; e.g. kubectl top nodes should print the node's CPU and memory usage
Summary
With the steps above done, you have a single-node k8s suitable for development and testing. Adding nodes later is straightforward, and the binary deployment style makes cluster parameters easy to tweak.