Namespaces, Affinity, Pod Lifecycle, and Health Checks
I. Namespaces
1. Switching namespaces
# kubectl create ns test
namespace/test created
# kubectl get ns
NAME STATUS AGE
default Active 10h
kube-node-lease Active 10h
kube-public Active 10h
kube-system Active 10h
test Active 2s
# kubectl config set-context --current --namespace=kube-system
Context "kubernetes-admin@kubernetes" modified.
# kubectl get pod
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-d886b8fff-mbdz7 1/1 Running 0 6h42m
calico-node-48tnk 1/1 Running 0 6h46m
calico-node-jq7mr 1/1 Running 0 6h46m
calico-node-pdwcr 1/1 Running 0 6h46m
coredns-567c556887-99cqw 1/1 Running 1 (6h44m ago) 10h
coredns-567c556887-9sbfp 1/1 Running 1 (6h44m ago) 10h
etcd-master 1/1 Running 1 (6h44m ago) 10h
kube-apiserver-master 1/1 Running 1 (6h44m ago) 10h
kube-controller-manager-master 1/1 Running 1 (6h44m ago) 10h
kube-proxy-7dl5r 1/1 Running 1 (6h50m ago) 10h
kube-proxy-pvbrg 1/1 Running 1 (6h44m ago) 10h
kube-proxy-xsqt9 1/1 Running 1 (6h50m ago) 10h
kube-scheduler-master 1/1 Running 1 (6h44m ago) 10h
# kubectl config set-context --current --namespace=default
Context "kubernetes-admin@kubernetes" modified.
# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx1 1/1 Running 0 8m44s
2. Setting a resource quota on a namespace
[*]Pods in the namespace cannot, in aggregate, exceed the namespace's quota
[*]The quota constrains all pods in the namespace together
# cat test.yaml
apiVersion: v1
kind: ResourceQuota   # a resource quota object
metadata:
  name: mem-cpu-qutoa
  namespace: test
spec:
  hard:   # the hard caps
    requests.cpu: "2"      # total CPU requests in this namespace may not exceed 2 cores
    requests.memory: 2Gi
    limits.cpu: "4"        # total CPU limits may not exceed 4 cores
    limits.memory: 4Gi
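A quick way to apply and verify the quota above (standard kubectl, added here for completeness):
# kubectl apply -f test.yaml
resourcequota/mem-cpu-qutoa created
# kubectl get quota -n test   # quota is a shorthand for resourcequota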
# View the namespace in detail
# kubectl describe ns test
Name: test
Labels: kubernetes.io/metadata.name=test
Annotations: <none>
Status: Active
Resource Quotas
Name: mem-cpu-qutoa
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     4
limits.memory    0     4Gi
requests.cpu     0     2
requests.memory  0     2Gi
No LimitRange resource.
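That last line matters: with only a ResourceQuota in place, a pod that omits requests/limits is rejected (as noted below). A LimitRange can inject defaults so such pods are still admitted; a minimal sketch, with a hypothetical name and illustrative values:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits   # hypothetical name
  namespace: test
spec:
  limits:
  - type: Container
    default:             # limits applied to containers that declare none
      cpu: 500m
      memory: 512Mi
    defaultRequest:      # requests applied to containers that declare none
      cpu: 250m
      memory: 256Mi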
# With a quota defined on the namespace, a pod created without resource settings is rejected
# cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
  labels:
    app: nginx-pod
spec:
  containers:
  - name: nginx01
    image: docker.io/library/nginx:1.9.1
    imagePullPolicy: IfNotPresent
    resources:   # per-container limits; without them a misbehaving pod could eat memory without bound
      limits:
        memory: "2Gi"   # 2 GiB of memory
        cpu: "2m"       # unit is millicores; 1000m = 1 core
II. Labels
[*]Labels are very important: many resource types are identified and managed through them
[*]Services, controllers and the like all select what they manage by label (see the Service sketch at the end of this section)
# Add a label
# kubectl label pods nginx1 test=01
pod/nginx1 labeled
# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx1 1/1 Running 0 45m app=nginx-pod,test=01
# List the pods that carry this label
# kubectl get pods -l app=nginx-pod
NAME READY STATUS RESTARTS AGE
nginx1 1/1 Running 0 48m
# Show pods in all namespaces, with their labels
# kubectl get pods --all-namespaces --show-labels
# Show the value of the key app as its own column
# kubectl get pods -L app
NAME READY STATUS RESTARTS AGE APP
nginx1 1/1 Running 0 50m nginx-pod
# Delete the label
# kubectl label pod nginx1 app-
pod/nginx1 unlabeled
# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx1 1/1 Running 0 57m test=01
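To make the point about labels concrete: a Service finds its backend pods purely through a label selector. A minimal sketch (this Service is illustrative, not from the original):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc    # hypothetical name
  namespace: test
spec:
  selector:
    app: nginx-pod   # matches any pod labeled app=nginx-pod
  ports:
  - port: 80
    targetPort: 80
Note that after app was removed from nginx1 above, such a Service would drop nginx1 from its endpoints.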
III. Affinity
1. Node selectors
Pods are scheduled by node name or node label. This is a hard constraint: even if the target node does not exist, the pod is still assigned to it and simply sits in Pending.
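A quick sketch of that Pending behavior, assuming no node named node999 exists in the cluster:
apiVersion: v1
kind: Pod
metadata:
  name: pod-pending   # hypothetical name
  namespace: test
spec:
  nodeName: node999   # no such node; the scheduler is bypassed and the pod stays Pending
  containers:
  - name: app
    image: docker.io/library/nginx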
1. nodeName
# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: test
spec:
  nodeName: node1   # schedule onto the host node1
  containers:
  - name: pod1
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1 1/1 Running 0 12h 10.244.104.5 node2 <none> <none>
pod1 1/1 Running 0 34s 10.244.166.130 node1 <none> <none>
2. nodeSelector
# Label the node so pods can be steered to it
# kubectl label nodes node1 app=node1
node/node1 labeled
# kubectl get nodes node1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node1 Ready <none> 23h v1.26.0 app=node1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: test
spec:
  nodeSelector:   # schedule by node label
    app: node1    # expressed as a key: value pair
  containers:
  - name: pod2
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx1 1/1 Running 0 12h 10.244.104.5 node2 <none> <none>
pod1 1/1 Running 0 9m28s 10.244.166.130 node1 <none> <none>
pod2 1/1 Running 0 12s 10.244.166.131 node1 <none> <none>
2. Node affinity
[*]Schedules against labels on nodes
[*]Expresses a relationship between pods and nodes
1. Soft affinity (preferred)
[*]If no node satisfies the preference, the scheduler still picks some node
# cat pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod4
  namespace: test
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:   # match labels on the node
          - key: app
            operator: In
            values: ["node1"]
        weight: 1   # preferences are ranked by weight
  containers:
  - name: pod4
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod3 1/1 Running 0 6m52s 10.244.166.133 node1 <none> <none>
pod4 1/1 Running 0 40s 10.244.166.135 node1 <none> <none>
2. Hard affinity (required)
# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  namespace: test
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:   # match against node labels
        - matchExpressions:
          - key: app
            operator: In
            values: ["node1"]   # only nodes labeled app=node1 qualify
  containers:
  - name: pod3
    image: docker.io/library/nginx:1.9.1
    imagePullPolicy: IfNotPresent
3. Pod affinity
[*]Pods that depend on each other can be co-located so they talk over a shorter path; a web service and its database are the classic pair
[*]Scheduling is driven by the labels on pods that are already running
1. Soft affinity (preferred)
apiVersion: v1
kind: Pod
metadata:
  name: pod7
  namespace: test
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["pod4"]
          topologyKey: app
        weight: 1
  containers:
  - name: pod7
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod4 1/1 Running 0 24m 10.244.166.136 node1 <none> <none>
pod5 1/1 Running 0 21m 10.244.166.137 node1 <none> <none>
pod7 1/1 Running 0 51s 10.244.166.139 node1 <none> <none>
2. Hard affinity (required)
# cat pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod5
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: kubernetes.io/hostname   # the topology domain; every node has its own value (node1, node2, ...)
  containers:
  - name: pod5
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# On choosing topologyKey: it should be a label key that exists on the nodes; nodes sharing a value for that key form one topology domain
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: app2   # app2 is a label key on node2; the pod must land in the app2 domain of a node running a pod labeled app=pod4
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# cat pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: app   # lands in the app domain of a node that runs a pod carrying the app label
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# The operator: DoesNotExist case
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  namespace: test
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: DoesNotExist
        topologyKey: app   # co-locate with pods that do NOT carry the app key, within the app node-label domain; it still landed on the app-labeled node
  containers:
  - name: pod6
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
4. Pod anti-affinity
When two pods are both memory-hungry, anti-affinity keeps them on separate nodes.
apiVersion: v1
kind: Pod
metadata:
  name: pod8
  namespace: test
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["pod4"]
        topologyKey: kubernetes.io/hostname   # avoid any node running a pod labeled app=pod4; that rules out node1
  containers:
  - name: pod8
    image: docker.io/library/nginx
    imagePullPolicy: IfNotPresent
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod4 1/1 Running 0 36m 10.244.166.136 node1 <none> <none>
pod5 1/1 Running 0 33m 10.244.166.137 node1 <none> <none>
pod6 1/1 Running 0 7m42s 10.244.166.140 node1 <none> <none>
pod7 1/1 Running 0 12m 10.244.166.139 node1 <none> <none>
pod8 1/1 Running 0 8s 10.244.104.6 node2 <none> <none>
5. Taints
[*]Taints are set on nodes
[*]kubectl explain node.spec.taints
[*]Set one by hand: kubectl taint nodes node1 a=b:NoSchedule
[*]There are three taint effects (a sketch of the other effects follows this list):
[*]NoExecute: pods already on the node are evicted, and no new pods are scheduled onto it
[*]NoSchedule: existing pods stay, but new pods cannot be scheduled onto the node
[*]PreferNoSchedule: pods avoid the node unless there is nowhere else to go
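For reference, the same command covers the other effects, and a trailing "-" removes a taint (a=b is an arbitrary key/value):
# kubectl taint nodes node1 a=b:NoSchedule          # existing pods stay; new pods avoid node1
# kubectl taint nodes node1 a=b:PreferNoSchedule    # soft: avoided unless nothing else fits
# kubectl taint nodes node1 a=b:NoSchedule-         # the trailing "-" removes that taint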
# Put a taint on node1
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod4 1/1 Running 0 41m 10.244.166.136 node1 <none> <none>
pod5 1/1 Running 0 37m 10.244.166.137 node1 <none> <none>
pod6 1/1 Running 0 12m 10.244.166.140 node1 <none> <none>
pod7 1/1 Running 0 17m 10.244.166.139 node1 <none> <none>
pod8 1/1 Running 0 4m33s 10.244.104.6 node2 <none> <none>
# kubectl taint node node1 app=node1:NoExecute
node/node1 tainted
# All the pods that were on node1 have been evicted
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod8 1/1 Running 0 6m21s 10.244.104.6 node2 <none> <none>
# Remove the taint
# kubectl taint node node1 app-
node/node1 untainted
# kubectl describe node node1 | grep -i taint
Taints: <none>
6. Tolerations
[*]Tolerations are set on pods: a pod that tolerates a node's taint can still be scheduled onto that node
[*]kubectl explain pod.spec.tolerations
# The node carries a taint, but a matching toleration on the pod lets it be scheduled there anyway
# Taint node1 again
# kubectl taint node node1 app=node1:NoExecute
node/node1 tainted
# Now schedule onto node1
apiVersion: v1
kind: Pod
metadata:
  name: pod10
  namespace: test
spec:
  tolerations:
  - key: "app"
    operator: Equal   # key, value and effect must all match the node's taint exactly
                      # (with Exists, the key merely has to be present; its value acts as a wildcard)
    value: "node1"
    effect: NoExecute
  containers:
  - name: pod10
    image: docker.io/library/nginx:1.9.1
# kubectl get pod -n test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod10 1/1 Running 0 58s 10.244.166.142 node1 <none> <none>
pod8 1/1 Running 0 27m 10.244.104.6 node2 <none> <none>
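As an aside, a toleration with operator: Exists and no key at all matches every taint; this is how some system pods survive on tainted nodes. A fragment (illustrative):
tolerations:
- operator: Exists   # no key and no effect given: tolerates all taints
The next example scopes Exists to the app key only.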
apiVersion: v1
kind: Pod
metadata:
  name: pod11
  namespace: test
spec:
  tolerations:
  - key: "app"
    operator: Exists   # tolerates an app taint with effect NoExecute, whatever its value
    value: ""
    effect: NoExecute
  containers:
  - name: pod11
    image: docker.io/library/nginx:1.9.1
IV. Pod lifecycle
(Figure: pod lifecycle diagram) https://img2023.cnblogs.com/blog/3210480/202406/3210480-20240616153528732-1078478474.png
[*]Init containers: every init container must finish successfully before the main container runs
[*]The main container has lifecycle hooks: postStart and preStop
1. Init containers
# cat init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  namespace: test
spec:
  initContainers:
  - name: init-pod1
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","touch /11.txt"]
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1
# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
init-pod 0/1 Pending 0 0s
init-pod 0/1 Pending 0 0s
init-pod 0/1 Init:0/1 0 0s
init-pod 0/1 Init:0/1 0 1s
init-pod 0/1 PodInitializing 0 2s
init-pod 1/1 Running 0 3s
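Init containers run one at a time, strictly in declaration order, and each must exit 0 before the next begins. A sketch with two of them (names and commands are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: init-order   # hypothetical name
  namespace: test
spec:
  initContainers:
  - name: init-a
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","echo step 1"]
  - name: init-b     # starts only after init-a exits successfully
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","echo step 2"]
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1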
# If an init container fails, the pod keeps restarting it; how exactly depends on the pod's restart policy
# cat init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-pod
  namespace: test
spec:
  initContainers:
  - name: init-pod1
    image: docker.io/library/nginx:1.9.1
    command: ["/bin/bash","-c","qwe /11.txt"]   # qwe is not a real command, so the init container fails
  containers:
  - name: main-pod
    image: docker.io/library/nginx:1.9.1
# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
init-pod 0/1 Pending 0 0s
init-pod 0/1 Pending 0 0s
init-pod 0/1 Init:0/1 0 0s
init-pod 0/1 Init:0/1 0 0s
init-pod 0/1 Init:0/1 0 1s
init-pod 0/1 Init:Error 0 2s
init-pod 0/1 Init:Error 1 (2s ago) 3s
init-pod 0/1 Init:CrashLoopBackOff 1 (2s ago) 4s
init-pod 0/1 Init:Error 2 (14s ago) 16s
2. postStart hook
[*]postStart runs as the main container starts, and the container is not considered started until the hook returns
[*]If the hook fails, the container is killed and restarted per the restart policy, so the main process never comes up properly
[*]Three formats are supported (exec, httpGet, tcpSocket)
1. exec
# cat pre.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","touch /11.txt"]
# kubectl exec -n test -ti pre-pod -- /bin/bash
root@pre-pod:/# ls
11.txt  boot  etc   lib    media  opt   root  sbin  sys  usr
bin     dev   home  lib64  mnt    proc  run   srv   tmp  var
root@pre-pod:/# cat 11.txt
# If the postStart hook errors out, the main container never runs properly
3. preStop hook
# cat pre.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      preStop:   # runs just before the container is terminated
        exec:
          command: ["/bin/bash","-c","touch /11.txt"]
4. Pod restart policy and pod status
[*]restartPolicy applies to all containers in the pod
[*]Always: restart the container whenever it exits, whatever the reason; this is the default
[*]OnFailure: kubelet restarts the container only when it terminates with a non-zero exit code
[*]Never: kubelet never restarts the container, whatever its state
[*]Pod phases and common statuses (a small restart-policy sketch follows this list):
[*]Pending: the create request was accepted but the pod cannot run yet; no node fits, or the image is still being pulled
[*]Running: the pod is bound to a node and at least one container has been created
[*]Succeeded: all containers terminated successfully and will not be restarted
[*]Failed: all containers have terminated and at least one of them failed (non-zero exit)
[*]Unknown: the pod's state cannot be determined, typically an apiserver/kubelet communication problem
[*]Evicted: the node ran short of memory or disk
[*]CrashLoopBackOff: the container started, then exited abnormally, repeatedly
[*]Error: an error occurred while the pod was starting
[*]Completed: the pod has finished its work
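A small sketch of the OnFailure/Never difference, using a container that always exits non-zero (the pod name and busybox image are assumptions, not from the original):
apiVersion: v1
kind: Pod
metadata:
  name: exit-test   # hypothetical name
  namespace: test
spec:
  restartPolicy: OnFailure   # exit code 1 counts as failure, so kubelet keeps restarting it and
                             # RESTARTS climbs; with Never the pod would simply end up Failed/Error
  containers:
  - name: fail
    image: docker.io/library/busybox
    command: ["sh","-c","exit 1"]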
# Give the container a postStart hook that will fail, and set the restart policy to Never
apiVersion: v1
kind: Pod
metadata:
  name: pre-pod
  namespace: test
spec:
  restartPolicy: Never
  containers:
  - name: pre-pod
    image: docker.io/library/nginx:1.9.1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","qwe /11.txt"]   # qwe is not a real command, so the hook fails
# The hook failed, and with restartPolicy Never the pod is not restarted
# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
pre-pod 0/1 Pending 0 0s
pre-pod 0/1 Pending 0 0s
pre-pod 0/1 ContainerCreating 0 0s
pre-pod 0/1 ContainerCreating 0 0s
pre-pod 0/1 Completed 0 2s
pre-pod 0/1 Completed 0 3s
pre-pod 0/1 Completed 0 4s
# Describe the pod for details
# It exited without being restarted
Events:
  Type     Reason               Age  From               Message
  ----     ------               ---  ----               -------
  Normal   Scheduled            12m  default-scheduler  Successfully assigned test/pre-pod to node1
  Normal   Pulled               12m  kubelet            Container image "docker.io/library/nginx:1.9.1" already present on machine
  Normal   Created              12m  kubelet            Created container pre-pod
  Normal   Started              12m  kubelet            Started container pre-pod
  Warning  FailedPostStartHook  12m  kubelet            PostStartHook failed
  Normal   Killing              12m  kubelet            FailedPostStartHook
V. Pod health checks (they target the containers inside the pod)
1. livenessProbe (liveness probing)
[*]Detects whether the containers in the pod are still alive; when the probe fails, k8s consults the restart policy to decide whether to restart the container
[*]Useful for restarting a container that has broken, e.g. a hung web application
[*]In short: it checks whether the pod is actually running
[*]Three formats are supported: exec, tcpSocket, httpGet
[*]A probe yields one of three results: Success (check passed), Failure (check failed), Unknown (the check itself did not run properly)
[*]kubectl explain pod.spec.containers.livenessProbe
1. Parameter reference
livenessProbe:
  initialDelaySeconds:  # how long to wait after the pod starts before the first check, in seconds
  periodSeconds:        # interval between checks, default 10s
  timeoutSeconds:       # how long to wait for a response after sending the probe, default 1s
  successThreshold:     # consecutive successes required to count as success; default 1, and for liveness it must be 1
  failureThreshold:     # consecutive failures required to count as failure; default 3, minimum 1 (for a readiness probe the pod is then marked unready)
2. exec format
# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      exec:
        command: ["/bin/bash","-c","touch /11.txt"]   # always succeeds, so the container is considered alive
      failureThreshold: 3      # three failures in a row count as failed
      initialDelaySeconds: 3   # wait 3s before the first probe
      periodSeconds: 5         # probe every 5s
      successThreshold: 1      # must be 1 for liveness; one success is enough
      timeoutSeconds: 10       # wait up to 10s for each probe's response
# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
pre-pod 0/1 Completed 0 4h45m
live1 0/1 Pending 0 0s
live1 0/1 Pending 0 0s
live1 0/1 ContainerCreating 0 0s
live1 0/1 ContainerCreating 0 1s
live1 1/1 Running 0 2s
live1 1/1 Running 0 30s
3. httpGet format
# Field reference
httpGet:
  scheme:       # protocol used to connect to the host, default HTTP
  host:         # hostname to connect to, default the pod IP
  port:         # port number or name to probe on the container
  path:         # URL path on the HTTP server
  httpHeaders:  # custom request headers; repeats are allowed
# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      httpGet:
        port: 80
        scheme: HTTP
        path: /index.html   # roughly curl localhost:80/index.html from inside the container
      failureThreshold: 3   # any HTTP status in the 200-399 range counts as success
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10
# It runs fine
live1 0/1 ContainerCreating 0 0s
live1 0/1 ContainerCreating 0 1s
live1 1/1 Running 0 2s
live1 1/1 Running 0 42s
4. tcpSocket format
# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    livenessProbe:
      tcpSocket:
        port: 80   # the probe tries to open a TCP connection to port 80
      failureThreshold: 3
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10
2. readinessProbe (readiness probing)
[*]A container can be running while the program inside still has to load its configuration before it can serve
[*]A readiness probe keeps the pod out of service until the application is actually ready
[*]It guards against a pod that is up but only pretending to serve
[*]The same three formats are supported
# cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: live1
  namespace: test
spec:
  containers:
  - name: live1
    image: docker.io/library/nginx:1.9.1
    readinessProbe:
      httpGet:
        port: 80   # send a request to port 80
      failureThreshold: 3
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10
# Note the few seconds between Running and Ready while the probe completes
# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
pre-pod 0/1 Completed 0 5h11m
live1 0/1 Pending 0 0s
live1 0/1 Pending 0 0s
live1 0/1 ContainerCreating 0 0s
live1 0/1 ContainerCreating 0 0s
live1 0/1 Running 0 1s
live1 1/1 Running 0 5s
3. startupProbe (startup probing)
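A startupProbe holds the liveness and readiness probes back until a slow-starting application has come up; only once it succeeds do the other probes begin. A minimal sketch (all values illustrative):
startupProbe:
  httpGet:
    port: 80
    path: /index.html
  failureThreshold: 30   # with periodSeconds: 5, the app gets up to 150s to start
  periodSeconds: 5       # once this probe succeeds, liveness/readiness take over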