I: Introduction to Pods
The various configuration options and principles of the Pod resource.
Most YAML file authoring follows directly from these configuration fields.
1: Pod structure and definition
Every Pod can contain one or more containers, which fall into two categories:
1: User containers, of which there can be any number (the application containers)
2: The pause container, a root container present in every Pod, which serves two purposes:
1. It acts as a basis for assessing the health of the Pod as a whole
2. An IP address is set on the root container, and the other containers use this IP, enabling network communication inside the Pod
Communication between Pods uses layer-2 networking technology;
within a Pod, the other containers share the root container's IP address, and the outside world reaches them via this root container's IP plus a port.
2: Pod definition
The Pod resource manifest:
Attributes can be looked up level by level:- [root@master /]# kubectl explain pod
- #view second-level attributes
- [root@master /]# kubectl explain pod.metadata
Overview:- apiVersion  the API version
- #view all available versions
- [root@master /]# kubectl api-versions
- admissionregistration.k8s.io/v1
- apiextensions.k8s.io/v1
- apiregistration.k8s.io/v1
- apps/v1
- authentication.k8s.io/v1
- authorization.k8s.io/v1
- autoscaling/v1
- autoscaling/v2
- batch/v1
- certificates.k8s.io/v1
- coordination.k8s.io/v1
- discovery.k8s.io/v1
- events.k8s.io/v1
- flowcontrol.apiserver.k8s.io/v1beta2
- flowcontrol.apiserver.k8s.io/v1beta3
- networking.k8s.io/v1
- node.k8s.io/v1
- policy/v1
- rbac.authorization.k8s.io/v1
- scheduling.k8s.io/v1
- storage.k8s.io/v1
- v1
- kind: the resource type
- #view the resource types
- [root@master /]# kubectl api-resources
- metadata: metadata, such as the resource's name and labels
- [root@master /]# kubectl explain pod.metadata
- status: status information, generated automatically; you do not define it yourself
- [root@master /]# kubectl get pods -o yaml
- spec: defines the resource's details,
- with the following sub-attributes:
- containers: object  the container list, defining the containers in detail
- nodeName: string  schedules the pod to the node with this name, i.e. which node the pod lands on
- nodeSelector: a label selector that schedules the pod onto nodes carrying these labels
- hostNetwork: defaults to false, meaning Kubernetes assigns a pod IP; set to true to use the host's network
- volumes: storage volumes, defining the storage mounted on the pod
- restartPolicy: the restart policy, i.e. how the pod reacts to failures
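A minimal sketch tying these spec fields together in one manifest (the label value, volume, and mount path are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod-spec-demo        # hypothetical name
  namespace: dev
spec:
  nodeSelector:
    nodeenv: pro             # assumed node label
  hostNetwork: false         # use a pod IP rather than the host's network
  restartPolicy: Always
  volumes:
  - name: cache              # an emptyDir volume, purely for illustration
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts:
    - name: cache
      mountPath: /cache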
3: Pod configuration
This section is mainly about the pod.spec.containers attribute.
Some fields are arrays (several values may be given) while others take a single value; check each case as you go:- [root@master /]# kubectl explain pod.spec.containers
- KIND: Pod
- VERSION: v1
- name: the container's name
- image: the image address the container needs
- imagePullPolicy: the image pull policy, local vs. remote
- command: the container's startup command list; if unspecified, the command used when the image was built is used  string
- args: the argument list for the startup command above  string
- env: the container's environment variable configuration  object
- ports: the list of ports the container exposes  object
- resources: resource limits and requests  object
1. Basic configuration
- [root@master ~]# cat pod-base.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-base
- namespace: dev
- labels:
- user: qqqq
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- - name: busybox
- image: busybox:1.30
- a simple Pod configuration containing two containers:
- nginx: a lightweight web server
- busybox: a small collection of Linux command-line tools
- [root@master ~]# kubectl create -f pod-base.yaml
- pod/pod-base created
- #check the Pod status
- READY: the Pod holds two containers, but only one is ready; the other has not started
- RESTARTS: the restart count; one container keeps failing, so the Pod keeps restarting it trying to recover
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-base 1/2 CrashLoopBackOff 4 (29s ago) 2m36s
- #view the pod details
- [root@master ~]# kubectl describe pods pod-base -n dev
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 4m51s default-scheduler Successfully assigned dev/pod-base to node2
- Normal Pulling 4m51s kubelet Pulling image "nginx:1.17.1"
- Normal Pulled 4m17s kubelet Successfully pulled image "nginx:1.17.1" in 33.75s (33.75s including waiting)
- Normal Created 4m17s kubelet Created container nginx
- Normal Started 4m17s kubelet Started container nginx
- Normal Pulling 4m17s kubelet Pulling image "busybox:1.30"
- Normal Pulled 4m9s kubelet Successfully pulled image "busybox:1.30" in 8.356s (8.356s including waiting)
- Normal Created 3m27s (x4 over 4m9s) kubelet Created container busybox
- Normal Started 3m27s (x4 over 4m9s) kubelet Started container busybox
- Warning BackOff 2m59s (x7 over 4m7s) kubelet Back-off restarting failed container busybox in pod pod-base_dev(2e9aeb3f-2bec-4af5-853e-2d8473e115a7)
- Normal Pulled 2m44s (x4 over 4m8s) kubelet Container image "busybox:1.30" already present on machine
We will come back and fix this shortly.
2. Image pulling
imagePullPolicy
A pod may have one container whose image exists locally and another whose image does not; this field controls whether the local or the remote image is used.
Values of imagePullPolicy:
Always: always pull the image from the remote registry
IfNotPresent: use the local image if present; pull from the remote registry otherwise
Never: only ever use the local image, never pull remotely
If the image tag is a specific version number, the default is IfNotPresent;
if the tag is latest, the default policy is Always:- [root@master ~]# cat pod-policy.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-imagepullpolicy
- namespace: dev
- labels:
- user: qqqq
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.2
- imagePullPolicy: Never
- - name: busybox
- image: busybox:1.30
- [root@master ~]# kubectl create -f pod-policy.yaml
- pod/pod-imagepullpolicy created
- #check pod status
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-base 1/2 CrashLoopBackOff 9 (3m59s ago) 25m
- pod-imagepullpolicy 0/2 CrashLoopBackOff 1 (9s ago) 19s
- #view detailed information
- [root@master ~]# kubectl describe pods pod-imagepullpolicy -n dev
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 64s default-scheduler Successfully assigned dev/pod-imagepullpolicy to node1
- Normal Pulling 64s kubelet Pulling image "busybox:1.30"
- Normal Pulled 56s kubelet Successfully pulled image "busybox:1.30" in 8.097s (8.097s including waiting)
- Normal Created 39s (x3 over 56s) kubelet Created container busybox
- Normal Started 39s (x3 over 56s) kubelet Started container busybox
- Normal Pulled 39s (x2 over 55s) kubelet Container image "busybox:1.30" already present on machine
- Warning ErrImageNeverPull 38s (x6 over 64s) kubelet Container image "nginx:1.17.2" is not present with pull policy of Never
- Warning Failed 38s (x6 over 64s) kubelet Error: ErrImageNeverPull
- Warning BackOff 38s (x3 over 54s) kubelet Back-off restarting failed container busybox in pod pod-imagepullpolicy_dev(38d5d2ff-6155-4ff3-ad7c-8b7f4a370107)
- #an error is reported: the image could not be pulled
- #fix: change the policy to IfNotPresent
- [root@master ~]# kubectl delete -f pod-policy.yaml
- [root@master ~]# kubectl apply -f pod-policy.yaml
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-base 1/2 CrashLoopBackOff 11 (2m34s ago) 34m
- pod-imagepullpolicy 1/2 CrashLoopBackOff 4 (63s ago) 2m55s
- now the image pull succeeds
3. Startup commands
command: the container's startup command list; if unspecified, the command baked in at image build time is used
args: the list of arguments for the startup command
Why is busybox not running? busybox is not a long-running program but a collection of small utilities, so it exits immediately. The fix is to keep a process running, which is what command is for:- [root@master ~]# cat command.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-command
- namespace: dev
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- - name: busybox
- image: busybox:1.30
- command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt;sleep 3;done;"]
- #/bin/sh starts a shell
- #-c runs the following string as a command
- #the loop appends the current time to the file and sleeps 3 seconds, so one process keeps running forever
- [root@master ~]# kubectl create -f command.yaml
- pod/pod-command created
- #now both containers start
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-command 2/2 Running 0 6s
- #enter the container
- [root@master ~]# kubectl exec pod-command -n dev -it -c busybox /bin/sh
- kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
- / #
- we are now inside the container
- / # cat /tmp/hello.txt    #because this process keeps running, the container is not shut down
Analysis: since command already covers both launching the container and passing arguments, why does Kubernetes also provide args? This relates to Docker: together the two fields override the ENTRYPOINT and CMD defined in the Dockerfile.
Images are built from a Dockerfile, and Kubernetes' command and args replace its defaults as follows.
Cases:
1. Neither command nor args is set: the Dockerfile's configuration is used
2. command is set, args is not: the Dockerfile defaults are ignored and the given command is executed
3. command is not set, args is: the Dockerfile's ENTRYPOINT is executed with the given args as its arguments
4. Both are set: the Dockerfile configuration is ignored; command is executed with args appended
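A minimal sketch of case 4, with both fields set (the image and echo arguments are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod-command-args     # hypothetical name
  namespace: dev
spec:
  restartPolicy: Never       # one-shot demo; avoids a restart loop once echo exits
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/echo"]            # replaces the image's ENTRYPOINT
    args: ["hello", "from", "args"]   # appended to command, replacing the image's CMD

The container prints "hello from args" and exits, which is exactly case 4: the Dockerfile defaults are ignored and command runs with args appended.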
4. Environment variables (awareness only)
env passes environment variables into the container; it is an array of objects,
each a simple key plus value:- [root@master ~]# cat pod-env.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-command
- namespace: dev
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- - name: busybox
- image: busybox:1.30
- command: ["/bin/sh","-c","touch /tmp/hello.txt;while true;do /bin/echo $(date +%T) >> /tmp/hello.txt;sleep 3;done;"]
- env:
- - name: "username"
- value: "admin"
- - name: "password"
- value: "123456"
- #create the Pod
- [root@master ~]# kubectl create -f pod-env.yaml
- pod/pod-command created
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-command 2/2 Running 0 47s
- #enter the container
- #the -c option can be omitted when the pod has only one container
- [root@master ~]# kubectl exec -ti pod-command -n dev -c busybox /bin/sh
- kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
- / # ls
- bin dev etc home proc root sys tmp usr var
- / # echo $username
- admin
- / # echo $password
- 123456
5. Port settings (ports)
View the port options:- [root@master ~]# kubectl explain pod.spec.containers.ports
- ports
- name: the port's name, which must be unique within the Pod
- containerPort: the port the container listens on
- hostPort: the port to expose on the host; if set, only one replica of the container can run per host, since multiple Pods would conflict over the same port
- hostIP: the host IP to bind the external port to (usually omitted)
- protocol: the port protocol, TCP (the default), UDP, or SCTP
Example:- [root@master ~]# cat pod-port.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-ports
- namespace: dev
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- protocol: TCP
- kubectl create -f pod-port.yaml
- [root@master ~]# kubectl get pod -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-command 2/2 Running 0 27m 10.244.1.2 node2 <none> <none>
- pod-ports 1/1 Running 0 2m58s 10.244.2.2 node1 <none> <none>
- #to reach the program in the container, access the Pod IP plus the container port
- [root@master ~]# curl 10.244.2.2:80
- <!DOCTYPE html>
- <html>
- <head>
- <title>Welcome to nginx!</title>
- </head>
- <body>
- <h1>Welcome to nginx!</h1>
- <p>If you see this page, the nginx web server is successfully installed and
- working. Further configuration is required.</p>
- <p>For online documentation and support please refer to
- <a target="_blank" href="http://nginx.org/">nginx.org</a>.<br/>
- Commercial support is available at
- <a target="_blank" href="http://nginx.com/">nginx.com</a>.</p>
- <p><em>Thank you for using nginx.</em></p>
- </body>
- </html>
6. Resource limits (resources)
Running containers consumes resources, so we can constrain what certain containers may use: if one container suddenly eats a large amount of memory, the others can no longer work properly.
For instance, specify that container A needs only 600M of memory; if it exceeds that, there is a problem and the container is restarted.
There are two sub-options:
limits: the maximum resources the container may use at runtime; a container exceeding its limits is terminated and restarted (the upper bound)
requests: the minimum resources the container needs; if the environment cannot provide them, the container cannot start (the lower bound)
Notes:
1. These apply only to CPU and memory
Example:- [root@master ~]# cat pod-r.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-resources
- namespace: dev
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- resources:
- limits:
- cpu: "2"
- memory: "10Gi"
- requests:
- cpu: "1"
- memory: "10Mi"
- kubectl create -f pod-r.yaml
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-command 2/2 Running 0 41m
- pod-ports 1/1 Running 0 16m
- pod-resources 1/1 Running 0 113s
- #next, require at least 10G of memory to start the container; it will not start
- [root@master ~]# cat pod-r.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-resources
- namespace: dev
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- resources:
- limits:
- cpu: "2"
- memory: "10Gi"
- requests:
- cpu: "1"
- memory: "10G"
- [root@master ~]# kubectl create -f pod-r.yaml
- pod/pod-resources created
- #check the status
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-command 2/2 Running 0 44m
- pod-ports 1/1 Running 0 19m
- pod-resources 0/1 Pending 0 89s
- #view the detailed information
- [root@master ~]# kubectl describe pods pod-resources -n dev
- #units for cpu and memory:
- #cpu: an integer (number of cores)
- #memory: forms such as Gi, Mi, G, M
II: Pod lifecycle
1: Concepts
The Pod lifecycle is the span from the creation of a Pod object through to its end, and mainly includes:
1. Pod creation
2. Running the init containers: a kind of container, zero or more of them, that always run before the main containers
3. Running the main containers
   post-start and pre-stop hooks: commands run at two special points, right after start and right before termination
   liveness probing and readiness probing of the containers
4. Pod termination
Across this lifecycle a Pod passes through five phases:
Pending: the apiserver has created the Pod object, but it has not been scheduled yet or its images are still being downloaded
Running: the Pod has been scheduled to a node and all of its containers have been created by the kubelet
Succeeded: all containers in the Pod terminated successfully and will not be restarted (e.g. a container that runs for 30 seconds, prints, then exits)
Failed: all containers have terminated, but at least one terminated in failure, i.e. returned a non-zero exit status
Unknown: the apiserver cannot obtain the Pod's state, usually because of a network communication failure
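A quick way to read just the phase of a Pod, reusing the pod-base Pod from earlier (the output shown is illustrative):

[root@master ~]# kubectl get pod pod-base -n dev -o jsonpath='{.status.phase}'
Running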
2: Pod creation and termination
Pod creation:
All components watch the apiserver for changes.
When creation starts, the apiserver returns the accepted object and stores it in etcd.
The scheduler assigns a host for the Pod and reports the result to the apiserver.
The target node sees a Pod scheduled to it, invokes the container runtime (e.g. Docker) to start the containers, and reports the result to the apiserver.
The apiserver stores the received Pod status information in etcd.
Pod termination:
A Service acts as a proxy for Pods; Pods are accessed through the Service.
A delete request is sent to the apiserver, which updates the Pod's state and marks it Terminating; the kubelet sees the Terminating state and starts the Pod shutdown process.
3: Init containers
Init containers do preparatory work for the main containers (environment setup). Two characteristics:
1. Init containers must run to completion; if one fails, Kubernetes restarts it until it finishes successfully
2. Init containers execute in the order defined; each one runs only when the previous one has succeeded, otherwise it does not run
Use cases for init containers:
Providing utilities or custom code that the main container image does not include
Because init containers start serially before the application containers and must succeed first, they can hold back the application container's startup until its dependencies are satisfied
Example: nginx, mysql, redis. First try to connect to mysql, retrying until the connection succeeds; then connect to redis; only when both conditions are met does the nginx main container start.
Test:
Assume mysql at 192.168.109.201 and redis at 192.168.109.202
- [root@master ~]# cat pod-init.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-init
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- initContainers:
- - name: test-mysql
- image: busybox:1.30
- command: ['sh','-c','until ping 192.168.109.201 -c 1;do echo waiting for mysql;sleep 2;done;']
- - name: test-redis
- image: busybox:1.30
- command: ['sh','-c','until ping 192.168.109.202 -c 1;do echo waiting for redis;sleep 2;done;']
- #since these addresses do not exist yet, initialization fails
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-init 0/1 Init:CrashLoopBackOff 3 (27s ago) 83s
- #add the first address; the first init container can now succeed
- [root@master ~]# ifconfig ens33:1 192.168.109.201 netmask 255.255.255.0 up
- #add the second address; the second init container succeeds too
- [root@master ~]# ifconfig ens33:2 192.168.109.202 netmask 255.255.255.0 up
- [root@master ~]# kubectl get pods -n dev -w
- NAME READY STATUS RESTARTS AGE
- pod-init 0/1 Init:0/2 0 6s
- pod-init 0/1 Init:1/2 0 13s
- pod-init 0/1 Init:1/2 0 14s
- pod-init 0/1 PodInitializing 0 27s
- pod-init 1/1 Running 0 28s
- the main container now runs successfully
4: Main container hooks
Hooks are points in the main container's lifecycle where users are allowed to run their own code.
Two points:
postStart: runs immediately after the container starts; if it succeeds the container starts, otherwise the container is restarted
preStop: runs before the container is terminated (the Terminating state); it blocks the container's deletion until it finishes successfully
1. Hook handlers (three ways to define the action)
exec: run a command inside the container
exec is the most commonly used form:- lifecycle:
- postStart:
- exec:
- command:
- - cat
- - /tmp/healthy
tcpSocket: try to reach a given socket from inside the current container, e.g. port 8080:- lifecycle:
- postStart:
- tcpSocket:
- port: 8080 #attempts to connect to port 8080
httpGet: issue an HTTP request to a URL from inside the current container:- lifecycle:
- postStart:
- httpGet:
- path: #URL path
- port: 80
- host: #host address
- scheme: HTTP #the protocol to use
Example:- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-exec
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80 #the container's internal port; typically a Service exposes the pod port and maps it outward
- lifecycle:
- postStart:
- exec: ###on startup, run a command that rewrites the default page content
- command: ["/bin/sh","-c","echo poststart > /usr/share/nginx/html/index.html"]
- preStop:
- exec: ###on container stop, pass -s quit to shut nginx down gracefully
- command: ["/usr/sbin/nginx","-s","quit"]
- [root@master ~]# kubectl create -f pod-exec.yaml
- pod/pod-exec created
- [root@master ~]# kubectl get pods -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-exec 1/1 Running 0 53s 10.244.1.7 node1 <none> <none>
- pod-init 1/1 Running 0 27m 10.244.1.6 node1 <none> <none>
- access the service in the pod's container
- format: pod IP + container port
- [root@master ~]# curl 10.244.1.7:80
- poststart
5: Container probes
Container probing checks whether the application instance in the main container is working properly; it is a traditional mechanism for guaranteeing business availability. If a probe shows an instance in an unexpected state, Kubernetes removes the problem instance from service so that it carries no business traffic. Kubernetes provides two probes for this,
namely:
liveness probes: check whether the application instance is running normally; if not, Kubernetes restarts the container (they decide whether to restart)
readiness probes: check whether the application instance can accept requests; if not, Kubernetes does not forward traffic to it. For example, while nginx is still reading many web files, a Service that already considered it ready would forward requests it cannot yet serve; the readiness probe keeps requests away until it is ready. (A readiness sketch follows this list.)
With one Service proxying many pods, a faulty pod with no probes would still receive requests and cause errors.
Purposes:
1. Find the pods that have gone wrong
2. Tell whether the service is ready
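All of the examples that follow use livenessProbe; for contrast, a minimal readiness sketch (assumed values) that gates traffic instead of restarting the container:

apiVersion: v1
kind: Pod
metadata:
  name: pod-readiness        # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    readinessProbe:          # on failure the pod is removed from the service endpoints, not restarted
      httpGet:
        scheme: HTTP
        port: 80
        path: /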
Three probe mechanisms:
exec: an exit code of 0 means healthy:- livenessProbe:
- exec:
- command:
- - cat
- - /tmp/healthy
tcpSocket:- livenessProbe:
- tcpSocket:
- port: 8080
httpGet:
a returned status code between 200 and 399 means the program is healthy, anything else is unhealthy:- livenessProbe:
- httpGet:
- path: / #URL path
- port: 80 #host port
- host: #host address
- scheme: http
Examples:
exec example:- [root@master ~]# cat pod-live-exec.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-liveness-exec
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- exec:
- command: ["/bin/cat","/tmp/hello.txt"] #this file does not exist, so the probe keeps failing and the container keeps restarting
- #with the problem present, the pod stays in a restart loop
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-exec 1/1 Running 0 38m
- pod-init 1/1 Running 0 65m
- pod-liveness-exec 1/1 Running 2 (27s ago) 97s
- #view the pod's detailed information
- [root@master ~]# kubectl describe pod -n dev pod-liveness-exec
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 2m13s default-scheduler Successfully assigned dev/pod-liveness-exec to node2
- Normal Pulling 2m12s kubelet Pulling image "nginx:1.17.1"
- Normal Pulled 2m kubelet Successfully pulled image "nginx:1.17.1" in 12.606s (12.606s including waiting)
- Normal Created 33s (x4 over 2m) kubelet Created container main-container
- Normal Started 33s (x4 over 2m) kubelet Started container main-container
- Warning Unhealthy 33s (x9 over 113s) kubelet Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
- Normal Killing 33s (x3 over 93s) kubelet Container main-container failed liveness probe, will be restarted
- Normal Pulled 33s (x3 over 93s) kubelet Container image "nginx:1.17.1" already present on machine
- #it keeps restarting
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-exec 1/1 Running 0 39m
- pod-init 1/1 Running 0 66m
- pod-liveness-exec 0/1 CrashLoopBackOff 4 (17s ago) 2m57s
- #a healthy example
- [root@master ~]# cat pod-live-exec.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-liveness-exec
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- exec:
- command: ["/bin/ls","/tmp/"]
- [root@master ~]# kubectl create -f pod-live-exec.yaml
- pod/pod-liveness-exec created
- #it no longer restarts
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-exec 1/1 Running 0 42m
- pod-init 1/1 Running 0 69m
- pod-liveness-exec 1/1 Running 0 56s
- #viewing the details shows no errors
tcpSocket example:- [root@master ~]# cat tcp.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-liveness-tcp
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- tcpSocket:
- port: 8080 #probe the container's port 8080
- kubectl create -f tcp.yaml
- #it keeps restarting: port 8080 cannot be reached
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-liveness-tcp 1/1 Running 5 (72s ago) 3m43s
- #view the detailed information
- [root@master ~]# kubectl describe pod -n dev pod-liveness-tcp
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 3m22s default-scheduler Successfully assigned dev/pod-liveness-tcp to node2
- Normal Pulled 112s (x4 over 3m22s) kubelet Container image "nginx:1.17.1" already present on machine
- Normal Created 112s (x4 over 3m22s) kubelet Created container main-container
- Normal Started 112s (x4 over 3m22s) kubelet Started container main-container
- Normal Killing 112s (x3 over 2m52s) kubelet Container main-container failed liveness probe, will be restarted
- Warning Unhealthy 102s (x10 over 3m12s) kubelet Liveness probe failed: dial tcp 1
A healthy example:- [root@master ~]# cat tcp.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-liveness-tcp
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- tcpSocket:
- port: 80
- #check the result: no problems at all
- [root@master ~]# kubectl describe pods -n dev pod-liveness-tcp
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 27s default-scheduler Successfully assigned dev/pod-liveness-tcp to node2
- Normal Pulled 28s kubelet Container image "nginx:1.17.1" already present on machine
- Normal Created 28s kubelet Created container main-container
- Normal Started 28s kubelet Started container main-container
httpGet example:- [root@master ~]# cat tcp.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-liveness-http
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- httpGet:
- scheme: HTTP
- port: 80
- path: /hello # http://127.0.0.1:80/hello
- #it keeps restarting (the /hello path returns 404)
- [root@master ~]# kubectl describe pod -n dev pod-liveness-http
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-liveness-http 1/1 Running 1 (17s ago) 48s
- pod-liveness-tcp 1/1 Running 0 4m21s
- #the healthy case
- [root@master ~]# cat tcp.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-liveness-http
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- httpGet:
- scheme: HTTP
- port: 80
- path: /
- [root@master ~]# kubectl describe pods -n dev pod-liveness-http
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 21s default-scheduler Successfully assigned dev/pod-liveness-http to node1
- Normal Pulled 22s kubelet Container image "nginx:1.17.1" already present on machine
- Normal Created 22s kubelet Created container main-container
- Normal Started 22s kubelet Started container main-container
Probe tuning fields:- [root@master ~]# kubectl explain pod.spec.containers.livenessProbe
- initialDelaySeconds <integer> seconds to wait after the container starts before the first probe
- timeoutSeconds <integer> probe timeout; default 1 second, minimum 1 second
- periodSeconds <integer> how often to probe; default every 10 seconds, minimum 1 second
- failureThreshold <integer> consecutive failures before the probe counts as failed; default 3, minimum 1
- successThreshold <integer> consecutive successes before the probe counts as successful; default 1
Example:
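A minimal sketch applying these tuning fields to a liveness probe (the timing values are assumptions for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod-probe-tuning     # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    livenessProbe:
      httpGet:
        scheme: HTTP
        port: 80
        path: /
      initialDelaySeconds: 30   # wait 30s after start before the first probe
      timeoutSeconds: 5         # each probe times out after 5s
      periodSeconds: 10         # probe every 10s
      failureThreshold: 3       # restart after 3 consecutive failures
      successThreshold: 1       # one success marks the container healthy again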
6: Restart policy
When a container probe fails, Kubernetes restarts the container's Pod according to the Pod's restart policy, of which there are three:
Always: automatically restart the container when it fails (the default)
OnFailure: restart only when the container terminates with a non-zero exit code (abnormal termination)
Never: never restart the container, whatever its state
The restart policy applies to all containers in the Pod. The first restart happens immediately when needed; subsequent restarts are delayed by the kubelet for increasing intervals of 10s, 20s, and so on, with 300s as the maximum delay.
Example:- apiVersion: v1
- kind: Pod
- metadata:
- name: restart-pod
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- - name: nginx-port
- containerPort: 80
- livenessProbe:
- httpGet:
- scheme: HTTP
- port: 80
- path: /hello # http://127.0.0.1:80/hello
- restartPolicy: Always
- #with Always it keeps restarting
- #change the policy to Never
- #now when the probe fails the container is not restarted; it simply stops
- #the status shows Completed
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-liveness-http 1/1 Running 1 (16h ago) 16h
- pod-liveness-tcp 1/1 Running 1 (22m ago) 16h
- restart-pod 0/1 Completed 0 41s
- [root@master ~]# kubectl describe pod -n dev restart-pod
- Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 84s default-scheduler Successfully assigned dev/restart-pod to node1
- Normal Pulled 84s kubelet Container image "nginx:1.17.1" already present on machine
- Normal Created 84s kubelet Created container main-container
- Normal Started 84s kubelet Started container main-container
- Warning Unhealthy 55s (x3 over 75s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 404
- Normal Killing 55s kubelet Stopping container main-container
III: Pod scheduling
By default, which node a Pod runs on is computed by the scheduler component using its algorithms, outside of manual control. In practice this is not always enough: we often need to control which node a Pod lands on, which is what scheduling rules are for. There are four broad categories:
Automatic scheduling: the scheduler's algorithms decide
Directed scheduling: via the nodeName attribute (the node's name) or nodeSelector (labels)
Affinity scheduling: nodeAffinity (affinity to nodes), podAffinity (affinity to pods), podAntiAffinity (repelled by certain pods, so scheduled away from them)
Taints and tolerations: taints sit on the node side and keep pods away; tolerations sit on the pod side and let a pod onto a node despite its taint
1: Directed scheduling
The Pod declares nodeName or nodeSelector and is scheduled to the specified node accordingly. This is mandatory: even if the node does not exist, the Pod is still bound there; it just fails to run.
1. nodeName
A mandatory placement that skips the scheduler's logic entirely and binds the Pod directly to the named node:- [root@master ~]# cat pod-nodename.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-nodename
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- ports:
- nodeName: node1
- [root@master ~]# kubectl create -f pod-nodename.yaml
- pod/pod-nodename created
- #it runs on node1
- [root@master ~]# kubectl get pods -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-liveness-http 1/1 Running 1 (16h ago) 17h 10.244.2.8 node1 <none> <none>
- pod-liveness-tcp 1/1 Running 1 (42m ago) 17h 10.244.1.7 node2 <none> <none>
- pod-nodename 1/1 Running 0 41s 10.244.2.10 node1 <none> <none>
- #point it at a nonexistent node; the pod simply fails (stays Pending)
- [root@master ~]# kubectl get pods -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-liveness-http 1/1 Running 1 (16h ago) 17h 10.244.2.8 node1 <none> <none>
- pod-liveness-tcp 1/1 Running 1 (43m ago) 17h 10.244.1.7 node2 <none> <none>
- pod-nodename 0/1 Pending 0 9s <none> node3 <none> <none>
2. nodeSelector
Selects on node labels; also a hard requirement:- [root@master ~]# kubectl label nodes node1 nodeenv=pro
- node/node1 labeled
- [root@master ~]# kubectl label nodes node2 nodeenv=test
- node/node2 labeled
- [root@master ~]# cat pod-selector.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-select
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- nodeSelector:
- nodeenv: pro
- [root@master ~]# kubectl get pods -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-liveness-http 1/1 Running 1 (17h ago) 17h 10.244.2.8 node1 <none> <none>
- pod-liveness-tcp 1/1 Running 1 (51m ago) 17h 10.244.1.7 node2 <none> <none>
- pod-select 1/1 Running 0 2m16s 10.244.2.11 node1 <none> <none>
- #a nonexistent label
- #change it to pr1 and scheduling fails
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-liveness-http 1/1 Running 1 (17h ago) 17h
- pod-liveness-tcp 1/1 Running 1 (51m ago) 17h
- pod-select 0/1 Pending 0 5s
2: Affinity scheduling
The problem above: directed scheduling is mandatory, so if no node matches, the Pod fails to schedule.
Affinity instead declares a preferred target node: schedule there if one is found, otherwise look elsewhere.
nodeAffinity: affinity toward nodes; mainly targets node labels
podAffinity: affinity toward pods; targets running pods, e.g. a web pod that must sit with a mysql pod finds it via its label
podAntiAffinity: anti-affinity toward pods; avoid the matching pods and go elsewhere
Scenario analysis:
If two applications interact frequently, affinity is worth using to bring them as close together as possible, cutting the performance cost of network communication; co-located on one node, that cost shrinks.
Where anti-affinity applies:
When an application is deployed with multiple replicas, anti-affinity spreads the instances across nodes, raising the service's availability.
Since the replicas provide identical functionality, spreading them over different nodes means that if one node dies, the others keep serving.
Parameters:- [root@master ~]# kubectl explain pod.spec.affinity.nodeAffinity
- requiredDuringSchedulingIgnoredDuringExecution #the node must satisfy all the listed rules; a hard constraint
- nodeSelectorTerms: node selection list
- matchFields: selector requirements listed by node field
- matchExpressions: selector requirements listed by node label
- key:
- values:
- operator: relational operator; supports In, NotIn, Exists, DoesNotExist, Gt, Lt
- if some node matches, the pod is scheduled there; if none matches, scheduling fails
- preferredDuringSchedulingIgnoredDuringExecution <NodeSelector> #soft constraint; prefer nodes that satisfy the rules
- preference: a node selector term, associated with a weight
- matchFields: selector requirements listed by node field
- matchExpressions: selector requirements listed by node label
- key:
- values:
- operator:
- weight: preference weight, 1~100 ##preference-based scheduling
- if no node matches, the pod is scheduled onto some other node
- operators:
- - key: nodedev #matches nodes that carry a label with key nodedev
- operator: Exists
- - key: nodedev #matches nodes whose nodedev label value is xxx or yyy
- operator: In
- values: ['xxx','yyy']
1. nodeAffinity
Node affinity has two main forms, hard and soft constraints, selecting on node labels:- [root@master ~]# cat pod-aff-re.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-aff
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- affinity:
- nodeAffinity: ##affinity settings
- requiredDuringSchedulingIgnoredDuringExecution: #node affinity, hard constraint
- nodeSelectorTerms:
- matchExpressions: #match labels whose nodeenv value is in [xxx,yyy]
- - key: nodeenv
- operator: In
- values: ["xxx","yyy"]
- [root@master ~]# kubectl create -f pod-aff-re.yaml
- pod/pod-aff created
- [root@master ~]# kubectl get pod -n dev
- NAME READY STATUS RESTARTS AGE
- pod-aff 0/1 Pending 0 23s
- pod-liveness-http 1/1 Running 1 (17h ago) 18h
- pod-liveness-tcp 1/1 Running 1 (94m ago) 18h
- pod-select 0/1 Pending 0 43m
- #scheduling fails
- #change the value to pro and the pod can be scheduled onto node1
- [root@master ~]# kubectl create -f pod-aff-re.yaml
- pod/pod-aff created
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-aff 1/1 Running 0 5s
- pod-liveness-http 1/1 Running 1 (17h ago) 18h
- pod-liveness-tcp 1/1 Running 1 (96m ago) 18h
- pod-select 0/1 Pending 0 45m
Soft constraint:- #soft constraint
- [root@master ~]# cat pod-aff-re.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-aff
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- affinity:
- nodeAffinity:
- preferredDuringSchedulingIgnoredDuringExecution: #soft constraint
- - weight: 1
- preference:
- matchExpressions:
- - key: nodeenv
- operator: In
- values: ["xxx","yyy"]
- #it is scheduled straight onto node2
- [root@master ~]# kubectl get pods -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-aff 1/1 Running 0 41s 10.244.1.9 node2 <none> <none>
- pod-liveness-http 1/1 Running 1 (17h ago) 18h 10.244.2.8 node1 <none> <none>
- pod-liveness-tcp 1/1 Running 1 (102m ago) 18h 10.244.1.7 node2 <none> <none>
- pod-select 0/1 Pending 0 50m <none> <none> <none> <none>
Notes:- if nodeSelector and nodeAffinity are both defined, both conditions must be met for the pod to run on the given node
- if nodeAffinity specifies several nodeSelectorTerms, matching any one of them is enough
- if one nodeSelectorTerms entry holds several matchExpressions, a node must satisfy all of them to match
- if a pod's node changes its labels while the pod is running and no longer satisfies the pod's node affinity, the system ignores the change
Affinity only takes effect at scheduling time: once scheduling has succeeded, a later label change does nothing to the pod.
2. podAffinity
Uses running pods as the reference, with hard and soft constraints:- kubectl explain pod.spec.affinity.podAffinity
- requiredDuringSchedulingIgnoredDuringExecution #hard constraint
- namespaces: namespace of the reference pod; if unspecified, defaults to the same namespace as this pod
- topologyKey: the scheduling scope: as close as a node, or a subnet, or an operating system
- ###kubernetes.io/hostname scopes by node, e.g. landing on node1 together with the reference pod
- ###an OS key scopes by operating system, landing on nodes with the same OS as the reference pod
- labelSelector: label selector
- matchExpressions: selector requirements listed by label
- key:
- values:
- operator:
- matchLabels: a shorthand mapping equivalent to several matchExpressions
- preferredDuringSchedulingIgnoredDuringExecution #soft constraint
- namespaces: namespace of the reference pod; defaults to this pod's namespace
- topologyKey: the scheduling scope
- labelSelector: label selector
- matchExpressions: selector requirements listed by label
- key:
- values:
- operator:
- matchLabels: a shorthand mapping equivalent to several matchExpressions
- weight: preference weight, 1~100
Examples:
Soft affinity:- apiVersion: v1
- kind: Pod
- metadata: #metadata
- name: pods-1 #pod name
- namespace: dev #namespace
- spec:
- containers: #containers
- - name: my-tomcat #container name
- image: tomcat #image to pull
- imagePullPolicy: IfNotPresent #use the local image if present, otherwise pull
- affinity:
- podAffinity: #pod affinity
- preferredDuringSchedulingIgnoredDuringExecution: #soft constraint
- - weight: 1 #weight 1
- podAffinityTerm: #the concrete pod affinity condition
- labelSelector: #label selector
- matchExpressions: #one or more label match expressions
- - key: user #label key
- operator: In
- values: #label values
- - "qqqq"
- topologyKey: kubernetes.io/hostname #scope by host
- #this pod is scheduled onto a node that already runs a pod labeled user=qqqq
Hard affinity:- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-5
- namespace: dev
- spec:
- containers:
- - name: my-tomcat
- image: tomcat
- imagePullPolicy: IfNotPresent
- affinity:
- podAffinity:
- requiredDuringSchedulingIgnoredDuringExecution: #hard constraint
- - labelSelector: #标签选择器
- matchExpressions: #匹配列表
- - key: user
- operator: In
- values: ["qqqq"]
- topologyKey: kubernetes.io/hostname #scope by host
3. Anti-affinity
Schedule away from the matching pod, onto a different node instead.
Example:- [root@master mnt]# cat podaff.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: podaff
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- affinity:
- podAntiAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: podenv
- operator: In
- values: ["pro"]
- topologyKey: kubernetes.io/hostname
- #it gets created on node2
- [root@master mnt]# kubectl get pods -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pod-podaff 1/1 Running 0 61m 10.244.2.14 node1 <none> <none>
- podaff 1/1 Running 0 2m57s 10.244.1.12 node2 <none> <none>
3: Taints
So far everything was configured from the pod's point of view. We can also stand on the node's side and decide whether pods may be scheduled onto it; this node-side information is called a taint.
A taint is a rejection policy.
What taints do:
They can refuse pods being scheduled onto the node;
they can even drive out pods that already exist there.
Taint format:
key=value:effect
key and value form the taint's label; effect describes what the taint does.
The three effect options:
PreferNoSchedule: Kubernetes tries to avoid scheduling pods onto the tainted node, unless no other node is available
NoSchedule: Kubernetes will not schedule new pods onto the tainted node, but pods already on it are unaffected
NoExecute: Kubernetes will not schedule new pods onto the node and also evicts the pods already there, leaving none
Managing taints:- #set a taint
- [root@master mnt]# kubectl taint nodes node1 key=value:effect
- #remove a taint
- [root@master mnt]# kubectl taint nodes node1 key:effect-
- #remove all taints for a key
- [root@master mnt]# kubectl taint nodes node1 key-
Example:- prepare node1 and temporarily stop node2
- give node1 the taint tag=heima:PreferNoSchedule, then create pod1
- change node1's taint to tag=heima:NoSchedule, then create pod2: no new pods are accepted, but existing ones stay
- change node1's taint to tag=heima:NoExecute, then create pod3: pod3 is not created either, and no pods remain
- #just shut down node2
- #设置node1污点
- [root@master mnt]# kubectl taint nodes node1 tag=heima:PreferNoSchedule
- node/node1 tainted
- #view the taints
- [root@master mnt]# kubectl describe nodes -n dev node1| grep heima
- Taints: tag=heima:PreferNoSchedule
- #the first pod can run
- [root@master mnt]# kubectl run taint1 --image=nginx:1.17.1 -n dev
- pod/taint1 created
- [root@master mnt]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-podaff 1/1 Running 0 90m
- podaff 1/1 Terminating 0 31m
- taint1 1/1 Running 0 6s
- #change node1's taint
- [root@master mnt]# kubectl taint nodes node1 tag=heima:PreferNoSchedule-
- node/node1 untainted
- [root@master mnt]# kubectl taint nodes node1 tag=heima:NoSchedule
- node/node1 tainted
- #the first keeps running normally; the second cannot run
- [root@master mnt]# kubectl run taint2 --image=nginx:1.17.1 -n dev
- pod/taint2 created
- [root@master mnt]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-podaff 1/1 Running 0 94m
- podaff 1/1 Terminating 0 35m
- taint1 1/1 Running 0 3m35s
- taint2 0/1 Pending 0 3s
- #the third taint level
- [root@master mnt]# kubectl taint nodes node1 tag=heima:NoSchedule-
- node/node1 untainted
- #set it
- [root@master mnt]# kubectl taint nodes node1 tag=heima:NoExecute
- node/node1 tainted
- #new pods cannot be created either
- [root@master mnt]# kubectl run taint3 --image=nginx:1.17.1 -n dev
- pod/taint3 created
- [root@master mnt]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- podaff 1/1 Terminating 0 39m
- taint3 0/1 Pending 0 4s
This is why newly created pods are never scheduled onto the master node: the master carries a taint.
4. Tolerations
A toleration means ignoring a taint: the node is tainted, but a pod with a matching toleration ignores it and can still be scheduled there.
Example:- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-aff
- namespace: dev
- spec:
- containers:
- - name: main-container
- image: nginx:1.17.1
- tolerations: #add a toleration
- - key: "tag" #the taint key to tolerate
- operator: "Equal" #operator
- value: "heima" #the tolerated taint value
- effect: "NoExecute" #must match the effect of the taint set on the node
- #first create a pod without the toleration and see whether it can run
- #it cannot be scheduled
- [root@master mnt]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-aff 0/1 Pending 0 6s
- podaff 1/1 Terminating 0 55m
- #now create the pod with the toleration
- [root@master mnt]# kubectl create -f to.yaml
- pod/pod-aff created
- [root@master mnt]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pod-aff 1/1 Running 0 3s
- podaff 1/1 Terminating 0 57m
Toleration fields:- key: the taint key to tolerate; empty matches all keys
- value: the taint value to tolerate
- operator: the key-value operator, Equal (the default) or Exists (Exists matches on the key alone, regardless of value)
- effect: the taint effect to match; empty matches all effects
- tolerationSeconds: how long the pod may stay on the node; takes effect when effect is NoExecute
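A sketch combining these fields, assuming the tag=heima:NoExecute taint from the example above (the tolerationSeconds value is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration-demo  # hypothetical name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  tolerations:
  - key: "tag"
    operator: "Equal"          # compare both key and value
    value: "heima"
    effect: "NoExecute"
    tolerationSeconds: 3600    # may stay at most one hour on the tainted node before eviction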
IV: Pod controllers
1. Introduction to pod controllers
1: Pod classification:
Standalone pods: pods created directly; once deleted they are gone and are not rebuilt
Controller-created pods: pods created through a controller; when deleted they are automatically rebuilt
Purpose:
A pod controller is a management layer over pods: with a controller we only declare how many pods we want, and it creates pods matching the spec and keeps them in the user's desired state. If a pod fails while running, the controller restarts or recreates it according to its policy.
2: Controller types
ReplicaSet: keeps the specified number of pods running; supports changing that number
Deployment: controls pods by controlling ReplicaSets; supports rolling upgrades and version rollback
Horizontal Pod Autoscaler: adjusts the number of pods automatically according to cluster load
2: Controllers in detail
ReplicaSet (rs)
Keeps the created number of pods running normally, continuously watching pod state.
Supports scaling the pod count up and down.
Example: replica count
- apiVersion: apps/v1
- kind: ReplicaSet
- metadata:
- name: pc-replicaset #name of the pod controller
- namespace: dev
- spec:
- replicas: 3 #number of pods to create
- selector: #pod label selector: pods labeled app=nginx-pod are managed by this controller (the template below applies the same label)
- matchLabels: #label selector rules
- app: nginx-pod
- template: #the template used to create pod replicas
- metadata: #pod metadata
- labels: #labels on the pod
- app: nginx-pod
- spec:
- containers: #containers
- - name: nginx
- image: nginx:1.17.1
- #view the controller
- [root@master ~]# kubectl get rs -n dev
- NAME DESIRED CURRENT READY AGE
- pc-replicaset 3 3 3 70s
- DESIRED: the desired pod count
- CURRENT: how many currently exist
- READY: how many are ready to serve
- #view the pods
- [root@master ~]# kubectl get rs,pods -n dev
- NAME DESIRED CURRENT READY AGE
- replicaset.apps/pc-replicaset 3 3 3 2m31s
- NAME READY STATUS RESTARTS AGE
- pod/pc-replicaset-448tq 1/1 Running 0 2m31s
- pod/pc-replicaset-9tdhd 1/1 Running 0 2m31s
- pod/pc-replicaset-9z64w 1/1 Running 0 2m31s
- pod/pod-pod-affinity 1/1 Running 1 (47m ago) 12h
Example 2: scaling pods up and down- #edit the resource with kubectl edit
- [root@master ~]# kubectl edit rs -n dev pc-replicaset
- replicaset.apps/pc-replicaset edited
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pc-replicaset-448tq 1/1 Running 0 10m
- pc-replicaset-9tdhd 1/1 Running 0 10m
- pc-replicaset-9z64w 1/1 Running 0 10m
- pc-replicaset-q6ps9 1/1 Running 0 94s
- pc-replicaset-w5krn 1/1 Running 0 94s
- pc-replicaset-zx8gw 1/1 Running 0 94s
- pod-pod-affinity 1/1 Running 1 (55m ago) 12h
- [root@master ~]# kubectl get rs -n dev
- NAME DESIRED CURRENT READY AGE
- pc-replicaset 6 6 6 10m
- #a second way: kubectl scale
- [root@master ~]# kubectl scale rs -n dev pc-replicaset --replicas=2 -n dev
- replicaset.apps/pc-replicaset scaled
- [root@master ~]# kubectl get rs,pod -n dev
- NAME DESIRED CURRENT READY AGE
- replicaset.apps/pc-replicaset 2 2 2 12m
- NAME READY STATUS RESTARTS AGE
- pod/pc-replicaset-448tq 1/1 Running 0 12m
- pod/pc-replicaset-9tdhd 1/1 Running 0 12m
- pod/pod-pod-affinity 1/1 Running 1 (57m ago) 12h
Example 3: upgrading the image version- #edit the image version
- [root@master ~]# kubectl edit rs -n dev pc-replicaset
- replicaset.apps/pc-replicaset edited
- [root@master ~]# kubectl get rs -n dev pc-replicaset -o wide
- NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
- pc-replicaset 2 2 2 15m nginx nginx:1.17.2 app=nginx-pod
- #this can also be changed by command, but kubectl edit is generally enough
- [root@master ~]# kubectl get rs -n dev -o wide
- NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
- pc-replicaset 2 2 2 17m nginx nginx:1.17.1 app=nginx-pod
Example 4: deleting a ReplicaSet
This deletes the pods first and then the controller:- #delete via the file
- root@master ~]# kubectl delete -f replicas.yaml
- replicaset.apps "pc-replicaset" deleted
- [root@master ~]# kubectl get rs -n dev
- No resources found in dev namespace.
- #delete via command
- [root@master ~]# kubectl delete rs -n dev pc-replicaset
- replicaset.apps "pc-replicaset" deleted
- [root@master ~]# kubectl get rs -n dev
- No resources found in dev namespace.
Deployment (deploy)
Supports all ReplicaSet features
Keeps historical revisions, so versions can be rolled back
Supports rolling update strategies
Update strategies are shown under image updates below.
Example: creating a deployment- [root@master ~]# cat deploy.yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: pc-deployment
- namespace: dev
- spec:
- replicas: 3
- selector:
- matchLabels:
- app: nginx-pod
- template:
- metadata:
- labels:
- app: nginx-pod
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- [root@master ~]# kubectl get deploy -n dev
- NAME READY UP-TO-DATE AVAILABLE AGE
- pc-deployment 3/3 3 3 53s
- UP-TO-DATE: number of pods on the latest version
- AVAILABLE: number of pods currently available
- #a ReplicaSet is created as well
- [root@master ~]# kubectl get rs -n dev
- NAME DESIRED CURRENT READY AGE
- pc-deployment-6cb555c765 3 3 3 2m9s
Scaling:
Essentially the same operations as before:- #edit by command
- [root@master ~]# kubectl scale deployment -n dev pc-deployment --replicas=5
- deployment.apps/pc-deployment scaled
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pc-deployment-6cb555c765-8qc9g 1/1 Running 0 4m52s
- pc-deployment-6cb555c765-8xss6 1/1 Running 0 4m52s
- pc-deployment-6cb555c765-m7wdf 1/1 Running 0 4s
- pc-deployment-6cb555c765-plkbf 1/1 Running 0 4m52s
- pc-deployment-6cb555c765-qh6gk 1/1 Running 0 4s
- pod-pod-affinity 1/1 Running 1 (81m ago) 13h
- #edit the file
- [root@master ~]# kubectl edit deployments.apps -n dev pc-deployment
- deployment.apps/pc-deployment edited
- [root@master ~]# kubectl get pods -n dev
- NAME READY STATUS RESTARTS AGE
- pc-deployment-6cb555c765-8qc9g 1/1 Running 0 5m41s
- pc-deployment-6cb555c765-8xss6 1/1 Running 0 5m41s
- pc-deployment-6cb555c765-plkbf 1/1 Running 0 5m41s
- pod-pod-affinity 1/1 Running 1 (82m ago) 13h
Image updates
There are two kinds: recreate updates and rolling updates.
Recreate update:
delete all old-version pods at once, then create the new-version pods
Rolling update (the default):
replace part at a time; old-version pods dwindle while new-version pods grow
- #Recreate strategy
- #create the pods first and watch in real time
- [root@master ~]# cat deploy.yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: pc-deployment
- namespace: dev
- spec:
- strategy:
- type: Recreate
- replicas: 3
- selector:
- matchLabels:
- app: nginx-pod
- template:
- metadata:
- labels:
- app: nginx-pod
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- [root@master ~]# kubectl get pods -n dev -w
- #then update the image version
- [root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev
- #watch
- pc-deployment-6cb555c765-m92t8 0/1 Terminating 0 60s
- pc-deployment-6cb555c765-m92t8 0/1 Terminating 0 60s
- pc-deployment-6cb555c765-m92t8 0/1 Terminating 0 60s
- pc-deployment-5967bb44bb-bbkzz 0/1 Pending 0 0s
- pc-deployment-5967bb44bb-bbkzz 0/1 Pending 0 0s
- pc-deployment-5967bb44bb-kxrn5 0/1 Pending 0 0s
- pc-deployment-5967bb44bb-zxfwl 0/1 Pending 0 0s
- pc-deployment-5967bb44bb-kxrn5 0/1 Pending 0 0s
- pc-deployment-5967bb44bb-zxfwl 0/1 Pending 0 0s
- pc-deployment-5967bb44bb-bbkzz 0/1 ContainerCreating 0 0s
- pc-deployment-5967bb44bb-kxrn5 0/1 ContainerCreating 0 0s
- pc-deployment-5967bb44bb-zxfwl 0/1 ContainerCreating 0 0s
- pc-deployment-5967bb44bb-kxrn5 1/1 Running 0 1s
Rolling update:- [root@master ~]# cat deploy.yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: pc-deployment
- namespace: dev
- spec:
- strategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 25%
- maxSurge: 25%
- replicas: 3
- selector:
- matchLabels:
- app: nginx-pod
- template:
- metadata:
- labels:
- app: nginx-pod
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- #update the image
- [root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.3 -n dev
- deployment.apps/pc-deployment image updated
- #the rolling update proceeds
Summary:
On an image version update, a new RS is created while the old RS remains; pods come up in the new RS as the old RS removes one at a time, until the old RS holds no pods and the new RS holds them all.
The old RS is kept for one purpose: version rollback.
Version rollback:
kubectl rollout undo rolls back to the previous version:- #record the whole deployment update history
- [root@master ~]# kubectl create -f deploy.yaml --record
- Flag --record has been deprecated, --record will be removed in the future
- deployment.apps/pc-deployment created
- #with --record, version updates leave history entries
- [root@master ~]# kubectl edit deployments.apps -n dev pc-deployment
- deployment.apps/pc-deployment edited
- [root@master ~]# kubectl rollout history deployment -n dev pc-deployment
- deployment.apps/pc-deployment
- REVISION CHANGE-CAUSE
- 1 kubectl create --filename=deploy.yaml --record=true
- 2 kubectl create --filename=deploy.yaml --record=true
- 3 kubectl create --filename=deploy.yaml --record=true
- #roll back to the specified revision; if none is given, the previous one
- [root@master ~]# kubectl rollout undo deployment -n dev pc-deployment --to-revision=1
- deployment.apps/pc-deployment rolled back
- #the RS changes too: the pods return to the old RS
- [root@master ~]# kubectl get rs -n dev
- NAME DESIRED CURRENT READY AGE
- pc-deployment-5967bb44bb 0 0 0 4m11s
- pc-deployment-6478867647 0 0 0 3m38s
- pc-deployment-6cb555c765 3 3 3 5m28s
- [root@master ~]# kubectl rollout history deployment -n dev
- deployment.apps/pc-deployment
- REVISION CHANGE-CAUSE
- 2 kubectl create --filename=deploy.yaml --record=true
- 3 kubectl create --filename=deploy.yaml --record=true
- 4 kubectl create --filename=deploy.yaml --record=true #this revision is equivalent to the old revision 1
Canary release:
Deployment supports controlling the update process: pausing and resuming.
During an update, only a small part of the application runs the new version while most stays on the old one; some requests are routed to the new instances. If they cannot handle requests, roll back quickly; if they can, resume the update. This is called a canary release:- #update, then pause immediately
- [root@master ~]# kubectl set image deploy pc-deployment nginx=nginx:1.17.2 -n dev && kubectl rollout pause deployment -n dev pc-deployment
- deployment.apps/pc-deployment image updated
- deployment.apps/pc-deployment paused
- #the RS state
- [root@master ~]# kubectl get rs -n dev
- NAME DESIRED CURRENT READY AGE
- pc-deployment-5967bb44bb 1 1 1 21m
- pc-deployment-6478867647 0 0 0 20m
- pc-deployment-6cb555c765 3 3 3 22m
- #one replica has already been updated
- [root@master ~]# kubectl rollout status deployment -n dev
- Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
- #send a request to verify the new version
- #resume the update
- [root@master ~]# kubectl rollout resume deployment -n dev pc-deployment
- deployment.apps/pc-deployment resumed
- #check the status
- [root@master ~]# kubectl rollout status deployment -n dev
- Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
- Waiting for deployment spec update to be observed...
- Waiting for deployment spec update to be observed...
- Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
- Waiting for deployment "pc-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
- Waiting for deployment "pc-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
- Waiting for deployment "pc-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
- Waiting for deployment "pc-deployment" rollout to finish: 1 old replicas are pending termination...
- Waiting for deployment "pc-deployment" rollout to finish: 1 old replicas are pending termination...
- deployment "pc-deployment" successfully rolled out
- #view the RS
- [root@master ~]# kubectl get rs -n dev
- NAME DESIRED CURRENT READY AGE
- pc-deployment-5967bb44bb 3 3 3 24m
- pc-deployment-6478867647 0 0 0 24m
- pc-deployment-6cb555c765 0 0 0 26m
HPA controller
In short, it reads each pod's utilization and compares it against the metric target defined in the HPA: if utilization exceeds the target it automatically adds pods, and when traffic falls back it removes the added pods.
It scales the pod count by monitoring pod load.
Install a component to obtain pod load:
metrics-server collects resource usage across the cluster; both pods and nodes can be monitored:- # download the latest release manifest
- wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml
- #on every server, pull the corresponding image from the Aliyun registry
- ctr image pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3
- #edit the manifest
- containers:
- - args:
- - --cert-dir=/tmp
- - --secure-port=4443
- - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- - --kubelet-use-node-status-port
- - --metric-resolution=15s
- - --kubelet-insecure-tls #added: skip certificate verification
- image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3 #change image to the Aliyun one pulled above
- #apply the manifest
- kubectl apply -f components.yaml
- #check the result
- [root@master ~]# kubectl get pod -n kube-system
- NAME READY STATUS RESTARTS AGE
- coredns-66f779496c-88c5b 1/1 Running 33 (55m ago) 10d
- coredns-66f779496c-hcpp5 1/1 Running 33 (55m ago) 10d
- etcd-master 1/1 Running 14 (55m ago) 10d
- kube-apiserver-master 1/1 Running 14 (55m ago) 10d
- kube-controller-manager-master 1/1 Running 14 (55m ago) 10d
- kube-proxy-95x52 1/1 Running 14 (55m ago) 10d
- kube-proxy-h2qrf 1/1 Running 14 (55m ago) 10d
- kube-proxy-lh446 1/1 Running 15 (55m ago) 10d
- kube-scheduler-master 1/1 Running 14 (55m ago) 10d
- metrics-server-6779c94dff-dflh2 1/1 Running 0 2m6s
Viewing resource usage:- #node usage
- [root@master ~]# kubectl top nodes
- NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
- master 104m 5% 1099Mi 58%
- node1 21m 1% 335Mi 17%
- node2 22m 1% 305Mi 16%
- #pod usage
- [root@master ~]# kubectl top pods -n dev
- NAME CPU(cores) MEMORY(bytes)
- pod-aff 3m 83Mi
- pod-label 0m 1Mi
To make HPA work, the pods must declare resource requests;
then the autoscaler is created with a command or a manifest.
Test: see the sketch below.
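A minimal sketch of an HPA targeting the earlier deployment (names and thresholds are assumptions, and the target deployment must declare CPU requests):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa               # hypothetical name
  namespace: dev
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pc-deployment      # the deployment to scale
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # add pods above 60% average CPU utilization

Roughly the same thing imperatively: kubectl autoscale deployment pc-deployment -n dev --min=1 --max=10 --cpu-percent=60.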
DaemonSet (DS) controller
Creates exactly one replica on every node; it is node-level and typically used for log collection, node monitoring, and similar agents.
When a node is removed, its Pod naturally goes away with it.
Example:- [root@master ~]# cat daemonset.yaml
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- name: daemon
- namespace: dev
- spec:
- selector:
- matchLabels:
- app: nginx-pod
- template:
- metadata:
- labels:
- app: nginx-pod
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- [root@master ~]# kubectl get pod -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- daemon-g8b4v 1/1 Running 0 2m30s 10.244.1.102 node2 <none> <none>
- daemon-t5tmd 1/1 Running 0 2m30s 10.244.2.89 node1 <none> <none>
- nginx-7f89875f58-prf9c 1/1 Running 0 79m 10.244.2.84 node1 <none> <none>
- #each node runs one pod
Job controller
Handles batch processing (working through a specified number of tasks) and one-off tasks (each task runs once and finishes).
When a pod created by the job finishes successfully, the job records the count of successfully completed pods.
When that count reaches the specified number, the job is complete.
Jobs hold one-off workloads.
Restart policy: it cannot be set to Always here, because these are one-off tasks that would otherwise be restarted after they finish.
Only OnFailure and Never are allowed:
OnFailure: when the pod fails, restart the container (rather than creating a new pod); the failed count is unchanged
Never: when it fails, the failed pod neither disappears nor restarts; failed count = 1
Example:- [root@master ~]# cat jod.yaml
- apiVersion: batch/v1
- kind: Job
- metadata:
- name: pc-job
- namespace: dev
- spec:
- manualSelector: true
- completions: 6 #run 6 pods to successful completion in total
- parallelism: 3 #run 3 in parallel, so two rounds finish the job
- selector:
- matchLabels:
- app: counter-pod
- template:
- metadata:
- labels:
- app: counter-pod
- spec:
- restartPolicy: Never
- containers:
- - name: busybox
- image: busybox:1.30
- command: ["/bin/sh","-c","for i in 1 2 3 4 5 6 7 8 9;do echo $i;sleep 3;done"]
- [root@master ~]# kubectl get job -n dev -w
- NAME COMPLETIONS DURATION AGE
- pc-job 0/6 0s
- pc-job 0/6 0s 0s
- pc-job 0/6 2s 2s
- pc-job 0/6 29s 29s
- pc-job 0/6 30s 30s
- pc-job 3/6 30s 30s
- pc-job 3/6 31s 31s
- pc-job 3/6 32s 32s
- pc-job 3/6 59s 59s
- pc-job 3/6 60s 60s
- pc-job 6/6 60s 60s
- [root@master ~]# kubectl get pod -n dev -w
- NAME READY STATUS RESTARTS AGE
- daemon-g8b4v 1/1 Running 0 20m
- daemon-t5tmd 1/1 Running 0 20m
- nginx-7f89875f58-prf9c 1/1 Running 0 97m
- pc-job-z2gmb 0/1 Pending 0 0s
- pc-job-z2gmb 0/1 Pending 0 0s
- pc-job-z2gmb 0/1 ContainerCreating 0 0s
- pc-job-z2gmb 1/1 Running 0 1s
- pc-job-z2gmb 0/1 Completed 0 28s
- pc-job-z2gmb 0/1 Completed 0 29s
- pc-job-z2gmb 0/1 Completed 0 30s
- pc-job-z2gmb 0/1 Completed 0 30s
CronJob controller (cj)
Runs job tasks periodically on a specified schedule.
Example:- [root@master ~]# cat cronjob.yaml
- apiVersion: batch/v1
- kind: CronJob
- metadata:
- name: pc-cronjob
- namespace: dev
- labels:
- controller: cronjob
- spec:
- schedule: "*/1 * * * *"
- jobTemplate:
- metadata:
- name: pc-cronjob
- labels:
- controller: cronjob
- spec:
- template:
- spec:
- restartPolicy: Never
- containers:
- - name: counter
- image: busybox:1.30
- command: ["/bin/sh","-c","for i in 1 2 3 4 5 6 7 8 9;do echo $i;sleep 3;done"]
- [root@master ~]# kubectl get job -n dev -w
- NAME COMPLETIONS DURATION AGE
- pc-cronjob-28604363 0/1 21s 21s
- pc-job 6/6 60s 33m
- pc-cronjob-28604363 0/1 28s 28s
- pc-cronjob-28604363 0/1 29s 29s
- pc-cronjob-28604363 1/1 29s 29s
- pc-cronjob-28604364 0/1 0s
- pc-cronjob-28604364 0/1 0s 0s
- pc-cronjob-28604364 0/1 1s 1s
- pc-cronjob-28604364 0/1 29s 29s
- pc-cronjob-28604364 0/1 30s 30s
- pc-cronjob-28604364 1/1 30s 30s
- ^C[root@master ~]#
- [root@master ~]# kubectl get pod -n dev -w
- NAME READY STATUS RESTARTS AGE
- daemon-g8b4v 1/1 Running 0 57m
- daemon-t5tmd 1/1 Running 0 57m
- nginx-7f89875f58-prf9c 1/1 Running 0 134m
- pc-job-2p6p6 0/1 Completed 0 32m
- pc-job-62z2d 0/1 Completed 0 32m
- pc-job-6sm97 0/1 Completed 0 32m
- pc-job-97j4j 0/1 Completed 0 31m
- pc-job-lsjz5 0/1 Completed 0 31m
- pc-job-pt28s 0/1 Completed 0 31m
- [root@master ~]# kubectl get pod -n dev -w
- pc-cronjob-28604363-fcnvr 0/1 Pending 0 0s
- pc-cronjob-28604363-fcnvr 0/1 Pending 0 0s
- pc-cronjob-28604363-fcnvr 0/1 ContainerCreating 0 0s
- pc-cronjob-28604363-fcnvr 1/1 Running 0 0s
- pc-cronjob-28604363-fcnvr 0/1 Completed 0 27s
- pc-cronjob-28604363-fcnvr 0/1 Completed 0 29s
- pc-cronjob-28604363-fcnvr 0/1 Completed 0 29s
- #after one job run finishes, the next one fires on the next 1-minute tick
V: Service in detail
The traffic load components: Service and Ingress.
Service handles layer-4 load balancing; Ingress handles layer 7.
1. Service introduction
A pod has an IP address, but it is not stable; a Service therefore acts as a proxy for a group of pods, with one IP address through which the pods can be accessed.
A Service is essentially a label-selector mechanism.
The kube-proxy agent
The core work happens in kube-proxy: when a Service is created, the apiserver stores the Service information in etcd; kube-proxy watches for the change and converts the Service information into access rules.
Viewing the rules
The three modes kube-proxy supports:
userspace mode:
kube-proxy opens a listening port for each service; requests sent to the service IP are redirected by iptables rules to kube-proxy's port, and kube-proxy picks a backing pod by its algorithm, establishes a connection, and forwards the request to the pod.
Here kube-proxy behaves like a load balancer.
Drawback: low efficiency, since forwarding crosses between kernel space and user space.
iptables mode:
Requests no longer pass through kube-proxy; the ClusterIP rules handle them directly and forward round-robin (or randomly) to the pods.
Drawback: no real load-balancer feedback; if a pod is broken, the user simply receives an error page.
ipvs mode:
Enable the ipvs module:- edit the kube-proxy config map and set mode to ipvs
- [root@master /]# kubectl edit cm kube-proxy -n kube-system
- #delete the kube-proxy pods by label so they restart with the new mode
- [root@master /]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
- root@master /]# ipvsadm -Ln
- IP Virtual Server version 1.2.1 (size=4096)
- Prot LocalAddress:Port Scheduler Flags
- -> RemoteAddress:Port Forward Weight ActiveConn InActConn
- TCP 172.17.0.1:30203 rr #round-robin rule: the address is forwarded to the backends below
- -> 10.244.2.103:80 Masq 1 0 0
- TCP 192.168.109.100:30203 rr
- -> 10.244.2.103:80 Masq 1 0 0
- TCP 10.96.0.1:443 rr
- -> 192.168.109.100:6443 Masq 1 0 0
- TCP 10.96.0.10:53 rr
- -> 10.244.0.44:53 Masq 1 0 0
- -> 10.244.0.45:53 Masq 1 0 0
- TCP 10.96.0.10:9153 rr
- -> 10.244.0.44:9153 Masq 1 0 0
- -> 10.244.0.45:9153 Masq 1 0 0
- TCP 10.100.248.78:80 rr
- -> 10.244.2.103:80 Masq 1 0 0
- TCP 10.110.118.76:443 rr
- -> 10.244.1.108:10250 Masq 1 0 0
- -> 10.244.2.102:10250 Masq 1 0 0
- TCP 10.244.0.0:30203 rr
2: Service types
The label selector is only the surface; underneath are rules that resolve, via labels, the IPs of the pods behind the service.
Session affinity: without it, requests are rotated across every pod; in the special case where multiple requests must land on the same pod, session affinity is required.
type is the service type:
ClusterIP: the default; a virtual IP assigned automatically by Kubernetes, reachable only inside the cluster
NodePort: exposes the service on a specified port of the nodes, so the service can be accessed from outside the cluster
LoadBalancer: uses an external load balancer to distribute traffic to the service; this mode requires an external cloud environment
ExternalName: brings a service from outside the cluster inside, for direct use
1. Environment preparation
Three pods, created by a deploy controller:- [root@master ~]# cat service-example.yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: pc-deployment
- namespace: dev
- spec:
- replicas: 3
- selector:
- matchLabels:
- app: nginx-pod
- template:
- metadata:
- labels:
- app: nginx-pod
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- ports:
- - containerPort: 80
- [root@master ~]# kubectl get pod -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pc-deployment-5cb65f68db-959hm 1/1 Running 0 62s 10.244.2.104 node1 <none> <none>
- pc-deployment-5cb65f68db-h6v8r 1/1 Running 0 62s 10.244.1.110 node2 <none> <none>
- pc-deployment-5cb65f68db-z4k2f 1/1 Running 0 62s 10.244.2.105 node1 <none> <none>
- #access the pod IP and the container port
- [root@master ~]# curl 10.244.2.104:80
- #edit each pod's index page so we can tell which pod served a request; do this for every pod in turn
- [root@master ~]# kubectl exec -ti -n dev pc-deployment-5cb65f68db-h6v8r /bin/bash
- root@pc-deployment-5cb65f68db-z4k2f:/# echo 10.244.2.10 > /usr/share/nginx/html/index.html
2. ClusterIP services
The service port can be any value:- [root@master ~]# cat ClusterIP.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: service-clusterip
- namespace: dev
- spec:
- selector: #service label selector
- app: nginx-pod
- clusterIP: 10.96.0.100 #if omitted, an IP is generated automatically
- type: ClusterIP
- ports:
- - port: 80 #service port
- targetPort: 80 #pod port
- [root@master ~]# kubectl create -f ClusterIP.yaml
- service/service-clusterip created
- [root@master ~]# kubectl get svc -n dev
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- service-clusterip ClusterIP 10.96.0.100 <none> 80/TCP 2m7s
- #view the service details
- [root@master ~]# kubectl describe svc service-clusterip -n dev
- Name: service-clusterip
- Namespace: dev
- Labels: <none>
- Annotations: <none>
- Selector: app=nginx-pod
- Type: ClusterIP
- IP Family Policy: SingleStack
- IP Families: IPv4
- IP: 10.96.0.100
- IPs: 10.96.0.100
- Port: <unset> 80/TCP
- TargetPort: 80/TCP
- Endpoints: 10.244.1.110:80,10.244.2.104:80,10.244.2.105:80 #the pod-service linkage built mainly from the label selector; these are the pods' access addresses, the actual set of serving endpoints
- Session Affinity: None
- Events: <none>
- [root@master ~]# kubectl get pod -n dev -o wide
- NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
- pc-deployment-5cb65f68db-959hm 1/1 Running 0 25m 10.244.2.104 node1 <none> <none>
- pc-deployment-5cb65f68db-h6v8r 1/1 Running 0 25m 10.244.1.110 node2 <none> <none>
- pc-deployment-5cb65f68db-z4k2f 1/1 Running 0 25m 10.244.2.105 node1 <none> <none>
- [root@master ~]# kubectl get endpoints -n dev
- NAME ENDPOINTS AGE
- service-clusterip 10.244.1.110:80,10.244.2.104:80,10.244.2.105:80 4m48s
- #kube-proxy does the real work: creating the service creates the corresponding rules
- [root@master ~]# ipvsadm -Ln
- TCP 10.96.0.100:80 rr
- -> 10.244.1.110:80 Masq 1 0 0
- -> 10.244.2.104:80 Masq 1 0 0
- -> 10.244.2.105:80 Masq 1 0 0
- #send requests in a loop to see who answers: the responses rotate round-robin
- [root@master ~]# while true;do curl 10.96.0.100:80; sleep 5;done;
- 10.244.2.105
- 10.244.2.104
- 10.244.1.110
- 10.244.2.105
- 10.244.2.104
- 10.244.1.110
Access via the service IP and port.
Load distribution policy (session affinity):
By default, access is round-robin or random;
when configured, multiple requests from one client go to the same pod instead of rotating:- #set session affinity
- [root@master ~]# cat ClusterIP.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: service-clusterip
- namespace: dev
- spec:
- sessionAffinity: ClientIP #requests from the same client IP go to the same pod
- selector:
- app: nginx-pod
- clusterIP: 10.96.0.100
- type: ClusterIP
- ports:
- - port: 80
- targetPort: 80
- [root@master ~]# kubectl get svc -n dev
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- service-clusterip ClusterIP 10.96.0.100 <none> 80/TCP 78s
- [root@master ~]# ipvsadm -Ln
- TCP 10.96.0.100:80 rr persistent 10800 #persistence enabled
- -> 10.244.1.112:80 Masq 1 0 0
- -> 10.244.2.107:80 Masq 1 0 0
- -> 10.244.2.108:80 Masq 1 0 0
- #this type of service can only be reached from cluster nodes (internal access); an outside machine cannot reach this IP
- [root@master ~]# curl 10.96.0.100:80
- 10.244.2.108
- [root@master ~]# curl 10.96.0.100:80
- 10.244.2.108
- [root@master ~]# curl 10.96.0.100:80
- 10.244.2.108
3. Headless services
A ClusterIP service applies the default load-distribution policy. To control that policy yourself, use a headless service: no ClusterIP is allocated, and the service can only be reached through its DNS name:- [root@master ~]# cat headliness.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: service-headliness
- namespace: dev
- spec:
- selector:
- app: nginx-pod
- clusterIP: None #set to None to create a headless service
- type: ClusterIP
- ports:
- - port: 80
- targetPort: 80
- [root@master ~]# kubectl get svc -n dev
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- service-headliness ClusterIP None <none> 80/TCP 4s
- #find the DNS search domain
- [root@master ~]# kubectl exec -ti -n dev pc-deployment-5cb65f68db-959hm /bin/bash
- root@pc-deployment-5cb65f68db-959hm:/# cat /etc/resolv.conf
- search dev.svc.cluster.local svc.cluster.local cluster.local
- nameserver 10.96.0.10
- options ndots:5
- #query the headless service
- #format: DNS server, then <service>.<namespace>.svc.cluster.local; the ;; ANSWER SECTION follows
- [root@master ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
- service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.108
- service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.112
- service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.107
4. NodePort services
Maps the service's port onto the nodes, so the service is accessed via node IP + node port.
When a request reaches the node port, it is forwarded to the service port and then on to the pod port.
This exposes the service to the outside.
Test:- [root@master ~]# cat nodeport.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: service-clusterip
- namespace: dev
- spec:
- selector:
- app: nginx-pod
- type: NodePort #a NodePort service
- ports:
- - port: 80 #service port
- targetPort: 80 #pod port
- nodePort: 30002 #if omitted, allocated from the default 30000-32767 range
- [root@master ~]# kubectl create -f nodeport.yaml
- service/service-clusterip created
- [root@master ~]# kubectl get svc -n dev
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- service-clusterip NodePort 10.106.183.217 <none> 80:30002/TCP 4s
- #accessing node IP + node port maps through to ClusterIP + port
- [root@master ~]# curl 192.168.109.100:30002
- 10.244.2.108
- [root@master ~]# curl 192.168.109.101:30002
- 10.244.2.108
- [root@master ~]# curl 192.168.109.102:30002
- 10.244.2.108
- #the service, and the pods behind it, are now reachable
5. LoadBalancer services
Builds on NodePort by placing an external load-balancing device in front, which works out where to distribute each request.
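A minimal sketch of a LoadBalancer service (it only provisions an external address when the cluster has a cloud provider or equivalent LB integration; the name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: service-loadbalancer # hypothetical name
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer         # the external environment provisions the load balancer
  ports:
  - port: 80                 # service port
    targetPort: 80           # pod port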
6. ExternalName services
Brings an external service into the cluster, here www.baidu.com:
- [root@master ~]# cat service-external.yaml
- apiVersion: v1
- kind: Service
- metadata:
- name: service-externalname
- namespace: dev
- spec:
- type: ExternalName
- externalName: www.baidu.com
- [root@master ~]# kubectl create -f service-external.yaml
- service/service-externalname created
- [root@master ~]# kubectl get svc -n dev
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- service-clusterip NodePort 10.106.183.217 <none> 80:30002/TCP 17m
- service-externalname ExternalName <none> www.baidu.com <none> 7s
- #resolve the service
- [root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
- service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
- www.baidu.com. 30 IN CNAME www.a.shifen.com.
- www.a.shifen.com. 30 IN A 180.101.50.188
- www.a.shifen.com. 30 IN A 180.101.50.242
- #the name resolves through the CNAME chain
3: Ingress introduction
Services expose themselves externally mainly through two types, NodePort and LoadBalancer.
Drawbacks:
NodePort exposes host ports; with many services in the cluster, the number of occupied ports keeps growing
LoadBalancer needs one LB per service, which is wasteful
With Ingress, the user defines rules mapping requests to services; the ingress controller picks them up, converts them into an nginx configuration, and dynamically pushes the update into the nginx proxy.
1. Environment preparation
- #download and apply the manifest
- kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
- [root@master ingress-example]# kubectl get pod,svc -n ingress-nginx
- NAME READY STATUS RESTARTS AGE
- pod/ingress-nginx-admission-create-jv5n5 0/1 Completed 0 77s
- pod/ingress-nginx-admission-patch-tpfv6 0/1 Completed 0 77s
- pod/ingress-nginx-controller-597dc6d68-rww45 1/1 Running 0 77s
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- service/ingress-nginx-controller NodePort 10.97.10.122 <none> 80:30395/TCP,443:32541/TCP 78s
- service/ingress-nginx-controller-admission ClusterIP 10.96.17.67 <none> 443/TCP
Service and deployment manifests, creating 2 services and 6 pods:- [root@master ~]# cat deploy.yaml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: nginx-deployment
- namespace: dev
- spec:
- replicas: 3
- selector:
- matchLabels:
- app: nginx-pod
- template:
- metadata:
- labels:
- app: nginx-pod
- spec:
- containers:
- - name: nginx
- image: nginx:1.17.1
- ports:
- - containerPort: 80
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- name: tomcat-deployment
- namespace: dev
- spec:
- replicas: 3
- selector:
- matchLabels:
- app: tomcat-pod
- template:
- metadata:
- labels:
- app: tomcat-pod
- spec:
- containers:
- - name: tomcat
- image: tomcat:8.5-jre10-slim
- ports:
- - containerPort: 8080
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: nginx-service
- namespace: dev
- spec:
- selector:
- app: nginx-pod
- clusterIP: None
- type: ClusterIP
- ports:
- - port: 80
- targetPort: 80
- ---
- apiVersion: v1
- kind: Service
- metadata:
- name: tomcat-service
- namespace: dev
- spec:
- selector:
- app: tomcat-pod
- type: ClusterIP
- clusterIP: None
- ports:
- - port: 8080
- targetPort: 8080
- [root@master ~]# kubectl get deployments.apps,pod -n dev
- NAME READY UP-TO-DATE AVAILABLE AGE
- deployment.apps/nginx-deployment 3/3 3 3 86s
- deployment.apps/tomcat-deployment 3/3 3 3 86s
- NAME READY STATUS RESTARTS AGE
- pod/nginx-deployment-5cb65f68db-5lzpb 1/1 Running 0 86s
- pod/nginx-deployment-5cb65f68db-75h4m 1/1 Running 0 86s
- pod/nginx-deployment-5cb65f68db-nc8pj 1/1 Running 0 86s
- pod/tomcat-deployment-5dbff496f4-6msb2 1/1 Running 0 86s
- pod/tomcat-deployment-5dbff496f4-7wjc9 1/1 Running 0 86s
- pod/tomcat-deployment-5dbff496f4-wlgmm 1/1 Running 0 86s
2. HTTP proxying
Create a YAML file with the routing rules (a sketch follows below).
Access is by domain + path; if the path is /xxx, requests must use domain/xxx.
On access, the request is forwarded to the corresponding service and its port.
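A minimal sketch of an HTTP Ingress routing to the two services above (the host names are assumptions for illustration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http         # hypothetical name
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com       # assumed test domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.example.com      # assumed test domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080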
3. HTTPS proxying
The key and certificate must be generated in advance (a sketch follows below).
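A sketch of generating a self-signed key pair, storing it as a TLS secret, and referencing it from an Ingress (domain and file names are assumptions):

# generate a self-signed certificate for the assumed test domain
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginx.example.com"
# store it as a TLS secret in the dev namespace
kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n dev

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https        # hypothetical name
  namespace: dev
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nginx.example.com
    secretName: tls-secret   # the secret created above
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80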