Backing up and restoring etcd data with Velero and MinIO


1. Velero Overview

Velero is an open-source cloud-native disaster recovery and migration tool, written in Go, that can safely back up, restore, and migrate Kubernetes cluster resources and data; official site: https://velero.io/. "Velero" is Spanish for sailboat, which fits the Kubernetes community's naming style. It was originally developed by Heptio, which has since been acquired by VMware. Velero works with any standard Kubernetes cluster, whether on a private or public cloud, and besides disaster recovery it can also migrate resources, moving containerized applications from one cluster to another. Velero works by backing up Kubernetes data to object storage for high availability and persistence (the default backup retention is 720 hours), from which the data can be downloaded and restored when needed.
2. How Velero differs from etcd snapshot backups


  • An etcd snapshot is a full backup of the whole cluster (similar to a full MySQL dump): even if you only need to restore a single resource object (like restoring a single MySQL database), you must roll the whole cluster back to the backed-up state (like a full MySQL restore), which affects Pods and services running in other namespaces (just as a full restore affects the data of other MySQL databases).
  • Velero can back up selectively, for example a single namespace or individual resource objects, and during restore it can bring back just that namespace or object without affecting Pods running in other namespaces.
  • Velero supports object storage such as Ceph and OSS; an etcd snapshot is a local file.
  • Velero has built-in schedules for periodic backups; etcd snapshots can achieve the same with a CronJob.
  • Velero can create and restore AWS EBS snapshots: https://www.qloudx.com/velero-for-kubernetes-backup-restore-stateful-workloads-with-aws-ebs-snapshots/
    https://github.com/vmware-tanzu/velero-plugin-for-aws
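One of the points above notes that periodic etcd snapshots can be scheduled with a Kubernetes CronJob. Below is a minimal sketch of such a manifest, written out from the shell; the image tag, etcd endpoint, certificate paths, and hostPath directories are assumptions that must be adapted to your cluster.

```shell
# Sketch: a CronJob that runs `etcdctl snapshot save` nightly.
# Image tag, endpoint, cert paths, and hostPath dirs are assumptions.
cat > etcd-snapshot-cronjob.yaml <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-snapshot
  namespace: kube-system
spec:
  schedule: "0 2 * * *"           # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true
          restartPolicy: OnFailure
          containers:
          - name: etcdctl
            image: bitnami/etcd:3.5.9       # assumed image
            command: ["/bin/sh", "-c"]
            args:
            - etcdctl --endpoints=https://127.0.0.1:2379
              --cacert=/etc/kubernetes/ssl/ca.pem
              --cert=/etc/kubernetes/ssl/etcd.pem
              --key=/etc/kubernetes/ssl/etcd-key.pem
              snapshot save /backup/etcd-$(date +%Y%m%d).db
            volumeMounts:
            - {name: ssl, mountPath: /etc/kubernetes/ssl, readOnly: true}
            - {name: backup, mountPath: /backup}
          volumes:
          - {name: ssl, hostPath: {path: /etc/kubernetes/ssl}}
          - {name: backup, hostPath: {path: /data/etcd-backup}}
EOF
grep -n 'snapshot save' etcd-snapshot-cronjob.yaml
```

Apply it with `kubectl apply -f etcd-snapshot-cronjob.yaml`; the single-quoted heredoc delimiter keeps `$(date ...)` unexpanded so it runs inside the container.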
3. Velero architecture


4. Velero backup workflow


The Velero client calls the Kubernetes API server to create a Backup task. The Backup controller, which watches the API server, picks up the new backup task. The controller then performs the backup, fetching the data to be backed up through the API server. Finally, it uploads that data to the configured object-storage server.
5. Deploying the MinIO object store

5.1 Create the data directory
  root@harbor:~# mkdir -p /data/minio
5.2 Create the MinIO container

Pull the image
  root@harbor:~# docker pull minio/minio:RELEASE.2023-08-31T15-31-16Z
  RELEASE.2023-08-31T15-31-16Z: Pulling from minio/minio
  0c10cd59e10e: Pull complete
  b55c0ddd1333: Pull complete
  4aade59ba7c6: Pull complete
  7c45df1e40d6: Pull complete
  adedf83b12e0: Pull complete
  bc9f33183b0c: Pull complete
  Digest: sha256:76868af456548aab229762d726271b0bf8604a500416b3e9bdcb576940742cda
  Status: Downloaded newer image for minio/minio:RELEASE.2023-08-31T15-31-16Z
  docker.io/minio/minio:RELEASE.2023-08-31T15-31-16Z
  root@harbor:~#
Start the container
  root@harbor:~# docker run --name minio \
  > -p 9000:9000 \
  > -p 9999:9999 \
  > -d --restart=always \
  > -e "MINIO_ROOT_USER=admin" \
  > -e "MINIO_ROOT_PASSWORD=12345678" \
  > -v /data/minio/data:/data \
  > minio/minio:RELEASE.2023-08-31T15-31-16Z server /data \
  > --console-address '0.0.0.0:9999'
  ba5e511da5f30a17614d719979e28066788ca7520d87c67077a38389e70423f1
  root@harbor:~#
If these variables are not set, the default username and password are minioadmin/minioadmin; they can be customized through environment variables (MINIO_ROOT_USER sets the username, MINIO_ROOT_PASSWORD its password).
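Since the credentials are passed as environment variables, they can also be kept out of the shell history with docker's standard `--env-file` option. A sketch (the file name `minio.env` is chosen here, not part of the original steps):

```shell
# Keep the MinIO root credentials in an env file instead of on the command line.
cat > minio.env <<'EOF'
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=12345678
EOF
# The container would then be started with (shown, not executed here):
#   docker run -d --name minio --env-file minio.env \
#     -p 9000:9000 -p 9999:9999 -v /data/minio/data:/data \
#     minio/minio:RELEASE.2023-08-31T15-31-16Z server /data --console-address '0.0.0.0:9999'
cat minio.env
```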
5.3 Log in to the MinIO web console



5.4 Create a bucket in MinIO


5.5 Verify the bucket


6. Deploy Velero on the master node

6.1 Download the Velero client tool

Download the velero client
  root@k8s-master01:/usr/local/src# wget https://github.com/vmware-tanzu/velero/releases/download/v1.11.1/velero-v1.11.1-linux-amd64.tar.gz
Extract the archive
  root@k8s-master01:/usr/local/src# ll
  total 99344
  drwxr-xr-x  3 root root     4096 Sep  2 12:38 ./
  drwxr-xr-x 10 root root     4096 Feb 17  2023 ../
  drwxr-xr-x  2 root root     4096 Oct 21  2015 bin/
  -rw-r--r--  1 root root 64845365 May 31 13:21 buildkit-v0.11.6.linux-amd64.tar.gz
  -rw-r--r--  1 root root 36864459 Sep  2 12:31 velero-v1.11.1-linux-amd64.tar.gz
  root@k8s-master01:/usr/local/src# tar xf velero-v1.11.1-linux-amd64.tar.gz
  root@k8s-master01:/usr/local/src# ll
  total 99348
  drwxr-xr-x  4 root root     4096 Sep  2 12:39 ./
  drwxr-xr-x 10 root root     4096 Feb 17  2023 ../
  drwxr-xr-x  2 root root     4096 Oct 21  2015 bin/
  -rw-r--r--  1 root root 64845365 May 31 13:21 buildkit-v0.11.6.linux-amd64.tar.gz
  drwxr-xr-x  3 root root     4096 Sep  2 12:39 velero-v1.11.1-linux-amd64/
  -rw-r--r--  1 root root 36864459 Sep  2 12:31 velero-v1.11.1-linux-amd64.tar.gz
  root@k8s-master01:/usr/local/src#
Copy the velero binary to /usr/local/bin
  root@k8s-master01:/usr/local/src# ll velero-v1.11.1-linux-amd64
  total 83780
  drwxr-xr-x 3 root root     4096 Sep  2 12:39 ./
  drwxr-xr-x 4 root root     4096 Sep  2 12:39 ../
  -rw-r--r-- 1 root root    10255 Dec 13  2022 LICENSE
  drwxr-xr-x 4 root root     4096 Sep  2 12:39 examples/
  -rwxr-xr-x 1 root root 85765416 Jul 25 08:43 velero*
  root@k8s-master01:/usr/local/src# cp velero-v1.11.1-linux-amd64/velero /usr/local/bin/
  root@k8s-master01:/usr/local/src#
Verify that the velero command is executable
  root@k8s-master01:/usr/local/src# velero --help
  Velero is a tool for managing disaster recovery, specifically for Kubernetes
  cluster resources. It provides a simple, configurable, and operationally robust
  way to back up your application state and associated data.
  If you're familiar with kubectl, Velero supports a similar model, allowing you to
  execute commands such as 'velero get backup' and 'velero create schedule'. The same
  operations can also be performed as 'velero backup get' and 'velero schedule create'.
  Usage:
    velero [command]
  Available Commands:
    backup            Work with backups
    backup-location   Work with backup storage locations
    bug               Report a Velero bug
    client            Velero client related commands
    completion        Generate completion script
    create            Create velero resources
    debug             Generate debug bundle
    delete            Delete velero resources
    describe          Describe velero resources
    get               Get velero resources
    help              Help about any command
    install           Install Velero
    plugin            Work with plugins
    repo              Work with repositories
    restore           Work with restores
    schedule          Work with schedules
    snapshot-location Work with snapshot locations
    uninstall         Uninstall Velero
    version           Print the velero version and associated image
  Flags:
        --add_dir_header                   If true, adds the file directory to the header of the log messages
        --alsologtostderr                  log to standard error as well as files (no effect when -logtostderr=true)
        --colorized optionalBool           Show colored output in TTY. Overrides 'colorized' value from $HOME/.config/velero/config.json if present. Enabled by default
        --features stringArray             Comma-separated list of features to enable for this Velero process. Combines with values from $HOME/.config/velero/config.json if present
    -h, --help                             help for velero
        --kubeconfig string                Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration
        --kubecontext string               The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context)
        --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
        --log_dir string                   If non-empty, write log files in this directory (no effect when -logtostderr=true)
        --log_file string                  If non-empty, use this log file (no effect when -logtostderr=true)
        --log_file_max_size uint           Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
        --logtostderr                      log to standard error instead of files (default true)
    -n, --namespace string                 The namespace in which Velero should operate (default "velero")
        --one_output                       If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
        --skip_headers                     If true, avoid header prefixes in the log messages
        --skip_log_headers                 If true, avoid headers when opening log files (no effect when -logtostderr=true)
        --stderrthreshold severity         logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
    -v, --v Level                          number for the log level verbosity
        --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
  Use "velero [command] --help" for more information about a command.
  root@k8s-master01:/usr/local/src#
Being able to run the velero command confirms that the client tool is ready.
6.2 Configure the Velero authentication environment

6.2.1 Create the Velero working directory
  root@k8s-master01:/usr/local/src# mkdir /data/velero -p
  root@k8s-master01:/usr/local/src# cd /data/velero/
  root@k8s-master01:/data/velero# ll
  total 8
  drwxr-xr-x 2 root root 4096 Sep  2 12:42 ./
  drwxr-xr-x 3 root root 4096 Sep  2 12:42 ../
  root@k8s-master01:/data/velero#
6.2.2 Create the MinIO credentials file
  root@k8s-master01:/data/velero# ll
  total 12
  drwxr-xr-x 2 root root 4096 Sep  2 12:43 ./
  drwxr-xr-x 3 root root 4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root   69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero# cat velero-auth.txt
  [default]
  aws_access_key_id = admin
  aws_secret_access_key = 12345678
  root@k8s-master01:/data/velero#
The velero-auth.txt file stores the username and password for accessing the MinIO object store: aws_access_key_id holds the username and aws_secret_access_key the password. These two key names are fixed and must not be changed.
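The credentials file shown above can also be generated non-interactively, which is convenient in scripts. A sketch, reusing the same demo credentials as the MinIO container above:

```shell
# Write the Velero object-storage credentials file.
# The key names aws_access_key_id / aws_secret_access_key are fixed;
# the values must match MINIO_ROOT_USER / MINIO_ROOT_PASSWORD.
cat > velero-auth.txt <<'EOF'
[default]
aws_access_key_id = admin
aws_secret_access_key = 12345678
EOF
cat velero-auth.txt
```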
6.2.3 Prepare the user CSR file
  root@k8s-master01:/data/velero# ll
  total 16
  drwxr-xr-x 2 root root 4096 Sep  2 12:48 ./
  drwxr-xr-x 3 root root 4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root  222 Sep  2 12:48 awsuser-csr.json
  -rw-r--r-- 1 root root   69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero# cat awsuser-csr.json
  {
    "CN": "awsuser",
    "hosts": [],
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "SiChuan",
        "L": "GuangYuan",
        "O": "k8s",
        "OU": "System"
      }
    ]
  }
  root@k8s-master01:/data/velero#
This file provides the identity information needed to issue the certificate.
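Because awsuser-csr.json is plain JSON, a quick syntax check before running cfssl catches typos early. A sketch, assuming python3 is available on the host:

```shell
# Recreate the CSR file from the section above, then validate that it is
# well-formed JSON before feeding it to cfssl.
cat > awsuser-csr.json <<'EOF'
{
  "CN": "awsuser",
  "hosts": [],
  "key": {"algo": "rsa", "size": 2048},
  "names": [
    {"C": "CN", "ST": "SiChuan", "L": "GuangYuan", "O": "k8s", "OU": "System"}
  ]
}
EOF
python3 -m json.tool awsuser-csr.json > /dev/null && echo "awsuser-csr.json: valid JSON"
```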
6.2.4 Prepare the certificate-signing environment

Install the signing tool via apt (alternatively, download the static binaries as shown next)
  root@k8s-master01:/data/velero# apt install golang-cfssl
Download cfssl
  root@k8s-master01:/data/velero# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
Download cfssljson
  root@k8s-master01:/data/velero# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
Download cfssl-certinfo
  root@k8s-master01:/data/velero# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
Rename the binaries
  root@k8s-master01:/data/velero# ll
  total 40248
  drwxr-xr-x 2 root root     4096 Sep  2 12:57 ./
  drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
  -rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo_1.6.1_linux_amd64
  -rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl_1.6.1_linux_amd64
  -rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson_1.6.1_linux_amd64
  -rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero# mv cfssl-certinfo_1.6.1_linux_amd64 cfssl-certinfo
  root@k8s-master01:/data/velero# mv cfssl_1.6.1_linux_amd64 cfssl
  root@k8s-master01:/data/velero# mv cfssljson_1.6.1_linux_amd64 cfssljson
  root@k8s-master01:/data/velero# ll
  total 40248
  drwxr-xr-x 2 root root     4096 Sep  2 12:58 ./
  drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
  -rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
  -rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
  -rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
  -rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero#
Copy the binaries to /usr/local/bin/
  root@k8s-master01:/data/velero# cp cfssl-certinfo cfssl cfssljson /usr/local/bin/
Add execute permission
  root@k8s-master01:/data/velero# chmod a+x /usr/local/bin/cfssl*
  root@k8s-master01:/data/velero# ll /usr/local/bin/cfssl*
  -rwxr-xr-x 1 root root 16659824 Sep  2 12:59 /usr/local/bin/cfssl*
  -rwxr-xr-x 1 root root 13502544 Sep  2 12:59 /usr/local/bin/cfssl-certinfo*
  -rwxr-xr-x 1 root root 11029744 Sep  2 12:59 /usr/local/bin/cfssljson*
  root@k8s-master01:/data/velero#
6.2.5 Sign the certificate

Copy the ca-config.json used to deploy the k8s cluster to /data/velero
  root@k8s-deploy:~# scp /etc/kubeasz/clusters/k8s-cluster01/ssl/ca-config.json 192.168.0.31:/data/velero
  ca-config.json                                                                                                       100%  459   203.8KB/s   00:00
  root@k8s-deploy:~#
Verify that ca-config.json was copied correctly
  root@k8s-master01:/data/velero# ll
  total 40252
  drwxr-xr-x 2 root root     4096 Sep  2 13:03 ./
  drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
  -rw-r--r-- 1 root root      459 Sep  2 13:03 ca-config.json
  -rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
  -rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
  -rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
  -rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero#
Issue the certificate
  root@k8s-master01:/data/velero# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=./ca-config.json -profile=kubernetes ./awsuser-csr.json | cfssljson -bare awsuser
  2023/09/02 13:05:37 [INFO] generate received request
  2023/09/02 13:05:37 [INFO] received CSR
  2023/09/02 13:05:37 [INFO] generating key: rsa-2048
  2023/09/02 13:05:38 [INFO] encoded CSR
  2023/09/02 13:05:38 [INFO] signed certificate with serial number 309924608852958492895277791638870960844710474947
  2023/09/02 13:05:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
  websites. For more information see the Baseline Requirements for the Issuance and Management
  of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
  specifically, section 10.2.3 ("Information Requirements").
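After signing, it is worth confirming the subject CN embedded in the certificate, since Kubernetes will use it as the user name. In the real flow that would be `openssl x509 -in awsuser.pem -noout -subject`; the sketch below generates a throwaway self-signed certificate with the same subject so it can run anywhere:

```shell
# Throwaway key/cert with the same subject as awsuser-csr.json, only to make
# this sketch self-contained; in the real flow inspect awsuser.pem instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-key.pem -out demo.pem \
  -subj "/C=CN/ST=SiChuan/L=GuangYuan/O=k8s/OU=System/CN=awsuser"
# Print the subject line; CN should read awsuser.
openssl x509 -in demo.pem -noout -subject
```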
6.2.6 Verify the certificate files
  root@k8s-master01:/data/velero# ll
  total 40264
  drwxr-xr-x 2 root root     4096 Sep  2 13:05 ./
  drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
  -rw------- 1 root root     1679 Sep  2 13:05 awsuser-key.pem
  -rw-r--r-- 1 root root     1001 Sep  2 13:05 awsuser.csr
  -rw-r--r-- 1 root root     1391 Sep  2 13:05 awsuser.pem
  -rw-r--r-- 1 root root      459 Sep  2 13:03 ca-config.json
  -rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
  -rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
  -rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
  -rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero#
6.2.7 Distribute the certificate to the api-server certificate path
  root@k8s-master01:/data/velero# cp awsuser-key.pem /etc/kubernetes/ssl/
  root@k8s-master01:/data/velero# cp awsuser.pem /etc/kubernetes/ssl/
  root@k8s-master01:/data/velero# ll /etc/kubernetes/ssl/
  total 48
  drwxr-xr-x 2 root root 4096 Sep  2 13:07 ./
  drwxr-xr-x 3 root root 4096 Apr 22 14:56 ../
  -rw-r--r-- 1 root root 1679 Apr 22 14:54 aggregator-proxy-key.pem
  -rw-r--r-- 1 root root 1387 Apr 22 14:54 aggregator-proxy.pem
  -rw------- 1 root root 1679 Sep  2 13:07 awsuser-key.pem
  -rw-r--r-- 1 root root 1391 Sep  2 13:07 awsuser.pem
  -rw-r--r-- 1 root root 1679 Apr 22 14:10 ca-key.pem
  -rw-r--r-- 1 root root 1310 Apr 22 14:10 ca.pem
  -rw-r--r-- 1 root root 1679 Apr 22 14:56 kubelet-key.pem
  -rw-r--r-- 1 root root 1460 Apr 22 14:56 kubelet.pem
  -rw-r--r-- 1 root root 1679 Apr 22 14:54 kubernetes-key.pem
  -rw-r--r-- 1 root root 1655 Apr 22 14:54 kubernetes.pem
  root@k8s-master01:/data/velero#
6.3 Generate the cluster authentication kubeconfig file
  root@k8s-master01:/data/velero# export KUBE_APISERVER="https://192.168.0.111:6443"
  root@k8s-master01:/data/velero# kubectl config set-cluster kubernetes \
  > --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  > --embed-certs=true \
  > --server=${KUBE_APISERVER} \
  > --kubeconfig=./awsuser.kubeconfig
  Cluster "kubernetes" set.
  root@k8s-master01:/data/velero# ll
  total 40268
  drwxr-xr-x 2 root root     4096 Sep  2 13:12 ./
  drwxr-xr-x 3 root root     4096 Sep  2 12:42 ../
  -rw-r--r-- 1 root root      222 Sep  2 12:48 awsuser-csr.json
  -rw------- 1 root root     1679 Sep  2 13:05 awsuser-key.pem
  -rw-r--r-- 1 root root     1001 Sep  2 13:05 awsuser.csr
  -rw------- 1 root root     1951 Sep  2 13:12 awsuser.kubeconfig
  -rw-r--r-- 1 root root     1391 Sep  2 13:05 awsuser.pem
  -rw-r--r-- 1 root root      459 Sep  2 13:03 ca-config.json
  -rw-r--r-- 1 root root 16659824 Aug 31 03:00 cfssl
  -rw-r--r-- 1 root root 13502544 Aug 31 03:00 cfssl-certinfo
  -rw-r--r-- 1 root root 11029744 Aug 31 03:00 cfssljson
  -rw-r--r-- 1 root root       69 Sep  2 12:43 velero-auth.txt
  root@k8s-master01:/data/velero# cat awsuser.kubeconfig
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
      server: https://192.168.0.111:6443
    name: kubernetes
  contexts: null
  current-context: ""
  kind: Config
  preferences: {}
  users: null
  root@k8s-master01:/data/velero#
6.3.1 Set the client certificate credentials
  root@k8s-master01:/data/velero# kubectl config set-credentials awsuser \
  > --client-certificate=/etc/kubernetes/ssl/awsuser.pem \
  > --client-key=/etc/kubernetes/ssl/awsuser-key.pem \
  > --embed-certs=true \
  > --kubeconfig=./awsuser.kubeconfig
  User "awsuser" set.
  root@k8s-master01:/data/velero# cat awsuser.kubeconfig
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
      server: https://192.168.0.111:6443
    name: kubernetes
  contexts: null
  current-context: ""
  kind: Config
  preferences: {}
  users:
  - name: awsuser
    user:
      client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTmttQUJ6ZjVhdCtoZC9vYmtONXBVV3JWOU1Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13T1RBeU1UTXdNVEF3V2hnUE1qQTNNekE0TWpBeE16QXhNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkVGFVTm9kV0Z1TVJJd0VBWURWUVFIRXdsSGRXRnVaMWwxWVc0eApEREFLQmdOVkJBb1RBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJBd0RnWURWUVFERXdkaGQzTjFjMlZ5Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12ekkKSHNqRmFZNzNmTm5aWVhqU0lsVEJKeDNqY1dYVGh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOAorVVFYTkFhTVYxOFhVaGdvSmJZaHRCWStpSGhjK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBECjNIZG1TUGJ0am0xRkVWaTFkVHpSeVhDSWxrTkJFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejMKaTlBS3ArOUhENUV6bE5QaVUwY1FlZkxERGEwRXp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNQpuVG1NNlNucEh5aFltNW5sd2FRdUZvekY2bWt4UzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6CkxRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGREtyYkphanpDTTNzM2ZHUzBtUwpLV0lHbm5XM01COEdBMVVkSXdRWU1CYUFGUFNlb2N3V2ZkMk5pR2JGVjJoZEh6SjRCdmlsTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQXd4b043eUNQZzFRQmJRcTNWT1JYUFVvcXRDVjhSUHFjd3V4bTJWZkVmVmdPMFZYanQKTHR5aEl2RDlubEsyazNmTFpWTVc2MWFzbVhtWkttUTh3YkZtL1RieE83ZkdJSWdpSzJKOGpWWHZYRnhNeExZNQpRVjcvd3QxUUluWjJsTjBsM0c3TGhkYjJ4UjFORmd1eWNXdWtWV3JKSWtpcU1Ma0lOLzdPSFhtSFZXazV1a1ZlCmNoYmVIdnJSSXRRNHBPYjlFZVgzTUxiZXBkRjJ4TWs5NmZrVXJGWmhKYWREVnB5NXEwbHFpUVJkMVpIWk4xSkMKWVBrZGRXdVQxbHNXaWJzN3BWTHRXMXlnV0JlS2hKQ0FVeTlUZEZ1WEt1QUZvdVJKUUJWQUs4dTFHRU1aL2JEYgp2eXRxN2N6ZndWOFNreVNpKzZHcldZRVBXUXllUEVEWjBPU1oKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
      client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12eklIc2pGYVk3M2ZOblpZWGpTSWxUQkp4M2pjV1hUCmh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOCtVUVhOQWFNVjE4WFVoZ29KYllodEJZK2lIaGMKK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBEM0hkbVNQYnRqbTFGRVZpMWRUelJ5WENJbGtOQgpFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejNpOUFLcCs5SEQ1RXpsTlBpVTBjUWVmTEREYTBFCnp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNW5UbU02U25wSHloWW01bmx3YVF1Rm96RjZta3gKUzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6TFFJREFRQUJBb0lCQVFDTWVnb2RSNndUcHliKwp5WklJcXBkbnpBamVtWktnd0ZET2tRWnFTS1NsREZsY0Y0ZWRueHE3WGJXV2RQZzRuREtxNHNqSnJGVlNzUW12CkNFWnVlNWxVUkt6QWRGVlkzeFdpQnhOMUJYek5jN3hkZFVqRUNOOUNEYUNiOS9nUWEzS2RiK2VVTE1yT3lZYVgKYW5xY2J3YXVBWEFTT0tZYmZxcjA2T2R2a1dwLzIxWnF2bnAvdmZrN3dIYzduUktLWmt1ZVg0bVExdFFqdVBoZQpDQXN5WWZOeWM0VjVyUDF1K3AzTGU4Ly9sTXZQZ0wydFBib3NaaGYvM0dCUGpPZHFabVdnL0R5blhzN21qcnhqCng2OHJOcHIxU2ZhQUNOMjNQdE9HbXcreXh4NjdENTNUSVJUaXZIY2Izd0FIMnNRdkVzbG9HN0lMU0d2THJ1S3IKS0c2RkQwb0JBb0dCQVBsWFdydWxQa3B6bzl5ZUJnd3hudmlyN2x2THZrVkV1Q3ZmRVg2MHViYm5HOVVsZm1BQgpEaVduOFcvUkVHVjE0cFBtcjQ2eE5QLzkrb1p3cDNRNUMzbFNocENVWEVxZjVHTzBUSXdSb1NVdndKcUo2UHc0Cm4yb0xEbXBNS3k5bkZEcFFCQTFoYUZSQnZJd3ZxOXdHc0NmK3Fyc3pTNHM2bHp1Qm1KVDZXYUdCQW9HQkFNNnYKSWJrSXJnVW54NlpuNngyRjVmSEQ0YVFydWRLbWFScnN3NzV4dFY0ZVlEdU1ZVVIrYWxwY2llVDZ4Y1Z3NUFvbQp6Q2o1VUNsejZJZ3pJc2MyTGRMR3JOeDVrcUFTMzA1K0UxaVdEeGh6UG44bUhETkI2NGY5WTVYdjJ6bm9maWVsCmNKd2pBaE5OZlR1ck45ODR5RXpQL0tHa1NsbGNxdHFsOVF6VVZrK3RBb0dCQU81c0RGTy85NmRqbW4yY0VYWloKZ0lTU2l2TDJDUFBkZVNwaVBDMW5qT29MWmI3VUFscTB4NTFVVVBhMTk3SzlIYktGZEx2Q1VVYXp5bm9CZ080TwptaDBodjVEQ2ZOblN1S1pxUW9QeFc2RGVYNUttYXNXN014eEloRGs2cWxUQ2dVSWRQeks0UVBYSWdnMmVpL3h4CjNNSHhyN29mbTQzL3NacnlHai9pZ0JDQkFvR0FWb3BzRTE3NEJuNldrUzIzKzUraUhXNElYOFpUUTBtY2ZzS2UKWDNLYkgzS1dscmg3emNNazR2c1dYZ05HcGhwVDBaQlhNZHphWE5FRWoycmg2QW5lZS8vbVIxYThOenhQdGowQgorcml5VDJtSnhKRi9nMUxadlJJekRZZm1Ba1EvOW5mR1JBcEFoemFOOWxzRnhQaXduY0VFcGVYMW41ODJodUN3ClQ1UGxJKzBDZ1lFQTN1WmptcXl1U0ZsUGR0Q3NsN1ZDUWc4K0N6L1hBTUNQZGt0SmF1bng5VWxtWVZXVFQzM2oKby9uVVRPVHY1TWZPTm9wejVYOXM4SCsyeXFOdWpna2NHZmFTeHFTNlBkbWNhcTJGMTYxTDdTR0JDb2w1MVQ5ZwpXQkRObnlqOFprSkQxd2pRNkNDWG4zNDZIMS9YREZjbmhnc2c2UHRjTGh3RC8yS0l3eFVmdzFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  root@k8s-master01:/data/velero#
6.3.2、设置上下文参数
  1. root@k8s-master01:/data/velero# kubectl config set-context kubernetes \
  2. > --cluster=kubernetes \
  3. > --user=awsuser \
  4. > --namespace=velero-system \
  5. > --kubeconfig=./awsuser.kubeconfig
  6. Context "kubernetes" created.
  7. root@k8s-master01:/data/velero# cat awsuser.kubeconfig
  8. apiVersion: v1
  9. clusters:
  10. - cluster:
  11.     certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  12.     server: https://192.168.0.111:6443
  13.   name: kubernetes
  14. contexts:
  15. - context:
  16.     cluster: kubernetes
  17.     namespace: velero-system
  18.     user: awsuser
  19.   name: kubernetes
  20. current-context: ""
  21. kind: Config
  22. preferences: {}
  23. users:
  24. - name: awsuser
  25.   user:
  26.     client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTmttQUJ6ZjVhdCtoZC9vYmtONXBVV3JWOU1Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13T1RBeU1UTXdNVEF3V2hnUE1qQTNNekE0TWpBeE16QXhNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkVGFVTm9kV0Z1TVJJd0VBWURWUVFIRXdsSGRXRnVaMWwxWVc0eApEREFLQmdOVkJBb1RBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJBd0RnWURWUVFERXdkaGQzTjFjMlZ5Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12ekkKSHNqRmFZNzNmTm5aWVhqU0lsVEJKeDNqY1dYVGh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOAorVVFYTkFhTVYxOFhVaGdvSmJZaHRCWStpSGhjK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBECjNIZG1TUGJ0am0xRkVWaTFkVHpSeVhDSWxrTkJFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejMKaTlBS3ArOUhENUV6bE5QaVUwY1FlZkxERGEwRXp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNQpuVG1NNlNucEh5aFltNW5sd2FRdUZvekY2bWt4UzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6CkxRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGREtyYkphanpDTTNzM2ZHUzBtUwpLV0lHbm5XM01COEdBMVVkSXdRWU1CYUFGUFNlb2N3V2ZkMk5pR2JGVjJoZEh6SjRCdmlsTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQXd4b043eUNQZzFRQmJRcTNWT1JYUFVvcXRDVjhSUHFjd3V4bTJWZkVmVmdPMFZYanQKTHR5aEl2RDlubEsyazNmTFpWTVc2MWFzbVhtWkttUTh3YkZtL1RieE83ZkdJSWdpSzJKOGpWWHZYRnhNeExZNQpRVjcvd3QxUUluWjJsTjBsM0c3TGhkYjJ4UjFORmd1eWNXdWtWV3JKSWtpcU1Ma0lOLzdPSFhtSFZXazV1a1ZlCmNoYmVIdnJSSXRRNHBPYjlFZVgzTUxiZXBkRjJ4TWs5NmZrVXJGWmhKYWREVnB5NXEwbHFpUVJkMVpIWk4xSkMKWVBrZGRXdVQxbHNXaWJzN3BWTHRXMXlnV0JlS2hKQ0FVeTlUZEZ1WEt1QUZvdVJKUUJWQUs4dTFHRU1aL2JEYgp2eXRxN2N6ZndWOFNreVNpKzZHcldZRVBXUXllUEVEWjBPU1oKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  27.     client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12eklIc2pGYVk3M2ZOblpZWGpTSWxUQkp4M2pjV1hUCmh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOCtVUVhOQWFNVjE4WFVoZ29KYllodEJZK2lIaGMKK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBEM0hkbVNQYnRqbTFGRVZpMWRUelJ5WENJbGtOQgpFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejNpOUFLcCs5SEQ1RXpsTlBpVTBjUWVmTEREYTBFCnp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNW5UbU02U25wSHloWW01bmx3YVF1Rm96RjZta3gKUzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6TFFJREFRQUJBb0lCQVFDTWVnb2RSNndUcHliKwp5WklJcXBkbnpBamVtWktnd0ZET2tRWnFTS1NsREZsY0Y0ZWRueHE3WGJXV2RQZzRuREtxNHNqSnJGVlNzUW12CkNFWnVlNWxVUkt6QWRGVlkzeFdpQnhOMUJYek5jN3hkZFVqRUNOOUNEYUNiOS9nUWEzS2RiK2VVTE1yT3lZYVgKYW5xY2J3YXVBWEFTT0tZYmZxcjA2T2R2a1dwLzIxWnF2bnAvdmZrN3dIYzduUktLWmt1ZVg0bVExdFFqdVBoZQpDQXN5WWZOeWM0VjVyUDF1K3AzTGU4Ly9sTXZQZ0wydFBib3NaaGYvM0dCUGpPZHFabVdnL0R5blhzN21qcnhqCng2OHJOcHIxU2ZhQUNOMjNQdE9HbXcreXh4NjdENTNUSVJUaXZIY2Izd0FIMnNRdkVzbG9HN0lMU0d2THJ1S3IKS0c2RkQwb0JBb0dCQVBsWFdydWxQa3B6bzl5ZUJnd3hudmlyN2x2THZrVkV1Q3ZmRVg2MHViYm5HOVVsZm1BQgpEaVduOFcvUkVHVjE0cFBtcjQ2eE5QLzkrb1p3cDNRNUMzbFNocENVWEVxZjVHTzBUSXdSb1NVdndKcUo2UHc0Cm4yb0xEbXBNS3k5bkZEcFFCQTFoYUZSQnZJd3ZxOXdHc0NmK3Fyc3pTNHM2bHp1Qm1KVDZXYUdCQW9HQkFNNnYKSWJrSXJnVW54NlpuNngyRjVmSEQ0YVFydWRLbWFScnN3NzV4dFY0ZVlEdU1ZVVIrYWxwY2llVDZ4Y1Z3NUFvbQp6Q2o1VUNsejZJZ3pJc2MyTGRMR3JOeDVrcUFTMzA1K0UxaVdEeGh6UG44bUhETkI2NGY5WTVYdjJ6bm9maWVsCmNKd2pBaE5OZlR1ck45ODR5RXpQL0tHa1NsbGNxdHFsOVF6VVZrK3RBb0dCQU81c0RGTy85NmRqbW4yY0VYWloKZ0lTU2l2TDJDUFBkZVNwaVBDMW5qT29MWmI3VUFscTB4NTFVVVBhMTk3SzlIYktGZEx2Q1VVYXp5bm9CZ080TwptaDBodjVEQ2ZOblN1S1pxUW9QeFc2RGVYNUttYXNXN014eEloRGs2cWxUQ2dVSWRQeks0UVBYSWdnMmVpL3h4CjNNSHhyN29mbTQzL3NacnlHai9pZ0JDQkFvR0FWb3BzRTE3NEJuNldrUzIzKzUraUhXNElYOFpUUTBtY2ZzS2UKWDNLYkgzS1dscmg3emNNazR2c1dYZ05HcGhwVDBaQlhNZHphWE5FRWoycmg2QW5lZS8vbVIxYThOenhQdGowQgorcml5VDJtSnhKRi9nMUxadlJJekRZZm1Ba1EvOW5mR1JBcEFoemFOOWxzRnhQaXduY0VFcGVYMW41ODJodUN3ClQ1UGxJKzBDZ1lFQTN1Wmptc
Xl1U0ZsUGR0Q3NsN1ZDUWc4K0N6L1hBTUNQZGt0SmF1bng5VWxtWVZXVFQzM2oKby9uVVRPVHY1TWZPTm9wejVYOXM4SCsyeXFOdWpna2NHZmFTeHFTNlBkbWNhcTJGMTYxTDdTR0JDb2w1MVQ5ZwpXQkRObnlqOFprSkQxd2pRNkNDWG4zNDZIMS9YREZjbmhnc2c2UHRjTGh3RC8yS0l3eFVmdzFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  28. root@k8s-master01:/data/velero#
6.3.3、Set the default context
  1. root@k8s-master01:/data/velero# kubectl config use-context kubernetes --kubeconfig=awsuser.kubeconfig
  2. Switched to context "kubernetes".
  3. root@k8s-master01:/data/velero# cat awsuser.kubeconfig            
  4. apiVersion: v1
  5. clusters:
  6. - cluster:
  7.     certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURtakNDQW9LZ0F3SUJBZ0lVTW01blNKSUtCdGNmeXY3MVlZZy91QlBsT3JZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13TkRJeU1UTXpNekF3V2hnUE1qRXlNekF6TWpreE16TXpNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJFd0R3WURWUVFJRXdoSVlXNW5XbWh2ZFRFTE1Ba0dBMVVFQnhNQ1dGTXhEREFLQmdOVgpCQW9UQTJzNGN6RVBNQTBHQTFVRUN4TUdVM2x6ZEdWdE1SWXdGQVlEVlFRREV3MXJkV0psY201bGRHVnpMV05oCk1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBcTRmdWtncjl2ditQWVVtQmZnWjUKTVJIOTZRekErMVgvZG5hUlpzN1lPZjZMaEZ5ZWJxUTFlM3k2bmN3Tk90WUkyemJ3SVJKL0c3YTNsTSt0Qk5sTQpwdE5Db1lxalF4WVY2YkpOcGNIRFJldTY0Z1BYcHhHY1FNZGE2Q1VhVTBrNENMZ0I2ZGx1OE8rUTdaL1dNeWhTClZQMWp5dEpnK1I4UGZRUWVzdnlTanBzaUM4cmdUQjc2VWU0ZXJqaEFwb2JSbzRILzN2cGhVUXRLNTBQSWVVNlgKTnpuTVNONmdLMXRqSjZPSStlVkE1dWdTTnFOc3FVSXFHWmhmZXZSeFBhNzVBbDhrbmRxc3cyTm5WSFFOZmpGUApZR3lNOFlncllUWm9sa2RGYk9Wb2g0U3pncTFnclc0dzBpMnpySVlJTzAzNTBEODh4RFRGRTBka3FPSlRVb0JyCmtRSURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlYKSFE0RUZnUVU5SjZoekJaOTNZMklac1ZYYUYwZk1uZ0crS1V3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUZLNwpjZ3l3UnI4aWt4NmpWMUYwVUNJRGxEN0FPQ3dTcE1Odithd1Zyd2k4Mk5xL3hpL2RjaGU1TjhJUkFEUkRQTHJUClRRS2M4M2FURXM1dnpKczd5Nnl6WHhEbUZocGxrY3NoenVhQkdFSkhpbGpuSHJ0Z09tL1ZQck5QK3hhWXdUNHYKZFNOdEIrczgxNGh6OWhaSitmTHRMb1RBS2tMUjVMRjkyQjF2c0JsVnlkaUhLSnF6MCtORkdJMzdiY1pvc0cxdwpwbVpROHgyWUFxWHE2VFlUQnoxLzR6UGlSM3FMQmxtRkNMZVJCa1RJb2VhUkFxU2ZkeDRiVlhGeTlpQ1lnTHU4CjVrcmQzMEdmZU5pRUpZVWJtZzNxcHNVSUlQTmUvUDdHNU0raS9GSlpDcFBOQ3Y4aS9MQ0Z2cVhPbThvYmdYYm8KeDNsZWpWVlZ6eG9yNEtOd3pUZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  8.     server: https://192.168.0.111:6443
  9.   name: kubernetes
  10. contexts:
  11. - context:
  12.     cluster: kubernetes
  13.     namespace: velero-system
  14.     user: awsuser
  15.   name: kubernetes
  16. current-context: kubernetes
  17. kind: Config
  18. preferences: {}
  19. users:
  20. - name: awsuser
  21.   user:
  22.     client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxekNDQXIrZ0F3SUJBZ0lVTmttQUJ6ZjVhdCtoZC9vYmtONXBVV3JWOU1Nd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEZqQVVCZ05WQkFNVERXdDFZbVZ5CmJtVjBaWE10WTJFd0lCY05Nak13T1RBeU1UTXdNVEF3V2hnUE1qQTNNekE0TWpBeE16QXhNREJhTUdReEN6QUoKQmdOVkJBWVRBa05PTVJBd0RnWURWUVFJRXdkVGFVTm9kV0Z1TVJJd0VBWURWUVFIRXdsSGRXRnVaMWwxWVc0eApEREFLQmdOVkJBb1RBMnM0Y3pFUE1BMEdBMVVFQ3hNR1UzbHpkR1Z0TVJBd0RnWURWUVFERXdkaGQzTjFjMlZ5Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12ekkKSHNqRmFZNzNmTm5aWVhqU0lsVEJKeDNqY1dYVGh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOAorVVFYTkFhTVYxOFhVaGdvSmJZaHRCWStpSGhjK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBECjNIZG1TUGJ0am0xRkVWaTFkVHpSeVhDSWxrTkJFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejMKaTlBS3ArOUhENUV6bE5QaVUwY1FlZkxERGEwRXp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNQpuVG1NNlNucEh5aFltNW5sd2FRdUZvekY2bWt4UzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6CkxRSURBUUFCbzM4d2ZUQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUcKQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGREtyYkphanpDTTNzM2ZHUzBtUwpLV0lHbm5XM01COEdBMVVkSXdRWU1CYUFGUFNlb2N3V2ZkMk5pR2JGVjJoZEh6SjRCdmlsTUEwR0NTcUdTSWIzCkRRRUJDd1VBQTRJQkFRQXd4b043eUNQZzFRQmJRcTNWT1JYUFVvcXRDVjhSUHFjd3V4bTJWZkVmVmdPMFZYanQKTHR5aEl2RDlubEsyazNmTFpWTVc2MWFzbVhtWkttUTh3YkZtL1RieE83ZkdJSWdpSzJKOGpWWHZYRnhNeExZNQpRVjcvd3QxUUluWjJsTjBsM0c3TGhkYjJ4UjFORmd1eWNXdWtWV3JKSWtpcU1Ma0lOLzdPSFhtSFZXazV1a1ZlCmNoYmVIdnJSSXRRNHBPYjlFZVgzTUxiZXBkRjJ4TWs5NmZrVXJGWmhKYWREVnB5NXEwbHFpUVJkMVpIWk4xSkMKWVBrZGRXdVQxbHNXaWJzN3BWTHRXMXlnV0JlS2hKQ0FVeTlUZEZ1WEt1QUZvdVJKUUJWQUs4dTFHRU1aL2JEYgp2eXRxN2N6ZndWOFNreVNpKzZHcldZRVBXUXllUEVEWjBPU1oKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  23.     client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeVU3ZWtvQ0ZFS0Jnd3Z1SU12eklIc2pGYVk3M2ZOblpZWGpTSWxUQkp4M2pjV1hUCmh1eno5a013WktPRFNybmxWcTF0SnZ1dHRVNWpCaHRielJKOCtVUVhOQWFNVjE4WFVoZ29KYllodEJZK2lIaGMKK1dBNTYwaEEybEJaaFU2RGZzQjNVam9RbjNKdU02YUQ0eHBEM0hkbVNQYnRqbTFGRVZpMWRUelJ5WENJbGtOQgpFR3hLam5MSjZ5dC9YcnVuNW9wdjBudE9jQWw0VWJSWHFGejNpOUFLcCs5SEQ1RXpsTlBpVTBjUWVmTEREYTBFCnp3N1NyaDFpNG9rdnhVSnhyd0FhcTdaK1Q5blVRWkV5a0RpNW5UbU02U25wSHloWW01bmx3YVF1Rm96RjZta3gKUzFPYXJJMStiZGhYSHU1T3ViYWJKOXY0bVA5TDhYRFY3TDd6TFFJREFRQUJBb0lCQVFDTWVnb2RSNndUcHliKwp5WklJcXBkbnpBamVtWktnd0ZET2tRWnFTS1NsREZsY0Y0ZWRueHE3WGJXV2RQZzRuREtxNHNqSnJGVlNzUW12CkNFWnVlNWxVUkt6QWRGVlkzeFdpQnhOMUJYek5jN3hkZFVqRUNOOUNEYUNiOS9nUWEzS2RiK2VVTE1yT3lZYVgKYW5xY2J3YXVBWEFTT0tZYmZxcjA2T2R2a1dwLzIxWnF2bnAvdmZrN3dIYzduUktLWmt1ZVg0bVExdFFqdVBoZQpDQXN5WWZOeWM0VjVyUDF1K3AzTGU4Ly9sTXZQZ0wydFBib3NaaGYvM0dCUGpPZHFabVdnL0R5blhzN21qcnhqCng2OHJOcHIxU2ZhQUNOMjNQdE9HbXcreXh4NjdENTNUSVJUaXZIY2Izd0FIMnNRdkVzbG9HN0lMU0d2THJ1S3IKS0c2RkQwb0JBb0dCQVBsWFdydWxQa3B6bzl5ZUJnd3hudmlyN2x2THZrVkV1Q3ZmRVg2MHViYm5HOVVsZm1BQgpEaVduOFcvUkVHVjE0cFBtcjQ2eE5QLzkrb1p3cDNRNUMzbFNocENVWEVxZjVHTzBUSXdSb1NVdndKcUo2UHc0Cm4yb0xEbXBNS3k5bkZEcFFCQTFoYUZSQnZJd3ZxOXdHc0NmK3Fyc3pTNHM2bHp1Qm1KVDZXYUdCQW9HQkFNNnYKSWJrSXJnVW54NlpuNngyRjVmSEQ0YVFydWRLbWFScnN3NzV4dFY0ZVlEdU1ZVVIrYWxwY2llVDZ4Y1Z3NUFvbQp6Q2o1VUNsejZJZ3pJc2MyTGRMR3JOeDVrcUFTMzA1K0UxaVdEeGh6UG44bUhETkI2NGY5WTVYdjJ6bm9maWVsCmNKd2pBaE5OZlR1ck45ODR5RXpQL0tHa1NsbGNxdHFsOVF6VVZrK3RBb0dCQU81c0RGTy85NmRqbW4yY0VYWloKZ0lTU2l2TDJDUFBkZVNwaVBDMW5qT29MWmI3VUFscTB4NTFVVVBhMTk3SzlIYktGZEx2Q1VVYXp5bm9CZ080TwptaDBodjVEQ2ZOblN1S1pxUW9QeFc2RGVYNUttYXNXN014eEloRGs2cWxUQ2dVSWRQeks0UVBYSWdnMmVpL3h4CjNNSHhyN29mbTQzL3NacnlHai9pZ0JDQkFvR0FWb3BzRTE3NEJuNldrUzIzKzUraUhXNElYOFpUUTBtY2ZzS2UKWDNLYkgzS1dscmg3emNNazR2c1dYZ05HcGhwVDBaQlhNZHphWE5FRWoycmg2QW5lZS8vbVIxYThOenhQdGowQgorcml5VDJtSnhKRi9nMUxadlJJekRZZm1Ba1EvOW5mR1JBcEFoemFOOWxzRnhQaXduY0VFcGVYMW41ODJodUN3ClQ1UGxJKzBDZ1lFQTN1Wmptc
Xl1U0ZsUGR0Q3NsN1ZDUWc4K0N6L1hBTUNQZGt0SmF1bng5VWxtWVZXVFQzM2oKby9uVVRPVHY1TWZPTm9wejVYOXM4SCsyeXFOdWpna2NHZmFTeHFTNlBkbWNhcTJGMTYxTDdTR0JDb2w1MVQ5ZwpXQkRObnlqOFprSkQxd2pRNkNDWG4zNDZIMS9YREZjbmhnc2c2UHRjTGh3RC8yS0l3eFVmdzFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
  24. root@k8s-master01:/data/velero#
6.3.4、Bind the awsuser account to cluster-admin in the k8s cluster
  1. root@k8s-master01:/data/velero# kubectl create clusterrolebinding awsuser --clusterrole=cluster-admin --user=awsuser
  2. clusterrolebinding.rbac.authorization.k8s.io/awsuser created
  3. root@k8s-master01:/data/velero# kubectl get clusterrolebinding -A|grep awsuser
  4. awsuser                                                ClusterRole/cluster-admin                                          47s
  5. root@k8s-master01:/data/velero#
6.3.5、Verify that the certificate works
  1. root@k8s-master01:/data/velero# kubectl --kubeconfig ./awsuser.kubeconfig get nodes
  2. NAME           STATUS                     ROLES    AGE    VERSION
  3. 192.168.0.31   Ready,SchedulingDisabled   master   132d   v1.26.4
  4. 192.168.0.32   Ready,SchedulingDisabled   master   132d   v1.26.4
  5. 192.168.0.33   Ready,SchedulingDisabled   master   132d   v1.26.4
  6. 192.168.0.34   Ready                      node     132d   v1.26.4
  7. 192.168.0.35   Ready                      node     132d   v1.26.4
  8. 192.168.0.36   Ready                      node     132d   v1.26.4
  9. root@k8s-master01:/data/velero# kubectl --kubeconfig ./awsuser.kubeconfig get pods -n kube-system
  10. NAME                                       READY   STATUS    RESTARTS       AGE
  11. calico-kube-controllers-5456dd947c-pwl2n   1/1     Running   31 (79m ago)   132d
  12. calico-node-4zmb4                          1/1     Running   26 (79m ago)   132d
  13. calico-node-7lc66                          1/1     Running   28 (79m ago)   132d
  14. calico-node-bkhkd                          1/1     Running   28 (13d ago)   132d
  15. calico-node-mw49k                          1/1     Running   28 (79m ago)   132d
  16. calico-node-v726r                          1/1     Running   26 (79m ago)   132d
  17. calico-node-x9r7h                          1/1     Running   28 (79m ago)   132d
  18. coredns-77879dc67d-k9ztn                   1/1     Running   4 (79m ago)    27d
  19. coredns-77879dc67d-qwb48                   1/1     Running   4 (79m ago)    27d
  20. snapshot-controller-0                      1/1     Running   28 (79m ago)   132d
  21. root@k8s-master01:/data/velero#
The --kubeconfig option specifies the credentials file to use; if nodes, pods, and other cluster resources can be listed normally with it, the credentials file is working.
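Besides querying the API server, you can also check locally which identity the file carries by decoding the embedded client certificate. A minimal sketch, assuming `openssl` is available; `cert_subject` is a hypothetical helper name, not part of kubectl or velero:

```shell
#!/bin/sh
# Print the subject of the client certificate embedded in a kubeconfig,
# to confirm which user the file authenticates as (should show CN = awsuser here).
# cert_subject is an illustrative helper, not a kubectl/velero command.
cert_subject() {
  awk '/client-certificate-data:/ {print $2}' "$1" \
    | base64 -d \
    | openssl x509 -noout -subject
}

# Usage against the file generated above (skipped quietly if it is absent):
[ -f ./awsuser.kubeconfig ] && cert_subject ./awsuser.kubeconfig || true
```

The subject printed here must match the user you bound to cluster-admin below, otherwise API requests will be rejected as an unknown user.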
6.3.6、Create the namespace in the k8s cluster
  1. root@k8s-master01:/data/velero# kubectl create ns velero-system
  2. namespace/velero-system created
  3. root@k8s-master01:/data/velero# kubectl get ns
  4. NAME              STATUS   AGE
  5. argocd            Active   129d
  6. default           Active   132d
  7. kube-node-lease   Active   132d
  8. kube-public       Active   132d
  9. kube-system       Active   132d
  10. magedu            Active   90d
  11. myserver          Active   98d
  12. velero-system     Active   5s
  13. root@k8s-master01:/data/velero#
6.4、Install the Velero server
  1. root@k8s-master01:/data/velero# velero --kubeconfig  ./awsuser.kubeconfig \
  2. >     install \
  3. >     --provider aws \
  4. >     --plugins velero/velero-plugin-for-aws:v1.5.5 \
  5. >     --bucket velerodata  \
  6. >     --secret-file ./velero-auth.txt \
  7. >     --use-volume-snapshots=false \
  8. >     --namespace velero-system \
  9. > --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://192.168.0.42:9000
  10. CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource
  11. CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client
  12. CustomResourceDefinition/backuprepositories.velero.io: created
  13. CustomResourceDefinition/backups.velero.io: attempting to create resource
  14. CustomResourceDefinition/backups.velero.io: attempting to create resource client
  15. CustomResourceDefinition/backups.velero.io: created
  16. CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
  17. CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client
  18. CustomResourceDefinition/backupstoragelocations.velero.io: created
  19. CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
  20. CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client
  21. CustomResourceDefinition/deletebackuprequests.velero.io: created
  22. CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
  23. CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client
  24. CustomResourceDefinition/downloadrequests.velero.io: created
  25. CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
  26. CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client
  27. CustomResourceDefinition/podvolumebackups.velero.io: created
  28. CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
  29. CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client
  30. CustomResourceDefinition/podvolumerestores.velero.io: created
  31. CustomResourceDefinition/restores.velero.io: attempting to create resource
  32. CustomResourceDefinition/restores.velero.io: attempting to create resource client
  33. CustomResourceDefinition/restores.velero.io: created
  34. CustomResourceDefinition/schedules.velero.io: attempting to create resource
  35. CustomResourceDefinition/schedules.velero.io: attempting to create resource client
  36. CustomResourceDefinition/schedules.velero.io: created
  37. CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
  38. CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client
  39. CustomResourceDefinition/serverstatusrequests.velero.io: created
  40. CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
  41. CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client
  42. CustomResourceDefinition/volumesnapshotlocations.velero.io: created
  43. Waiting for resources to be ready in cluster...
  44. Namespace/velero-system: attempting to create resource
  45. Namespace/velero-system: attempting to create resource client
  46. Namespace/velero-system: already exists, proceeding
  47. Namespace/velero-system: created
  48. ClusterRoleBinding/velero-velero-system: attempting to create resource
  49. ClusterRoleBinding/velero-velero-system: attempting to create resource client
  50. ClusterRoleBinding/velero-velero-system: created
  51. ServiceAccount/velero: attempting to create resource
  52. ServiceAccount/velero: attempting to create resource client
  53. ServiceAccount/velero: created
  54. Secret/cloud-credentials: attempting to create resource
  55. Secret/cloud-credentials: attempting to create resource client
  56. Secret/cloud-credentials: created
  57. BackupStorageLocation/default: attempting to create resource
  58. BackupStorageLocation/default: attempting to create resource client
  59. BackupStorageLocation/default: created
  60. Deployment/velero: attempting to create resource
  61. Deployment/velero: attempting to create resource client
  62. Deployment/velero: created
  63. Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero-system' to view the status.
  64. root@k8s-master01:/data/velero#
6.5、Verify the Velero server installation
  1. root@k8s-master01:/data/velero# kubectl get pod -n velero-system
  2. NAME                      READY   STATUS    RESTARTS   AGE
  3. velero-5d675548c4-2dx8d   1/1     Running   0          105s
  4. root@k8s-master01:/data/velero#
The velero pod is Running in the velero-system namespace, which means the Velero server has been deployed successfully.
7、Backing up data with Velero

7.1、Back up the default namespace
  1. root@k8s-master01:/data/velero# DATE=`date +%Y%m%d%H%M%S`
  2. root@k8s-master01:/data/velero#  velero backup create default-backup-${DATE} \
  3. > --include-cluster-resources=true \
  4. > --include-namespaces default \
  5. > --kubeconfig=./awsuser.kubeconfig \
  6. > --namespace velero-system
  7. Backup request "default-backup-20230902133242" submitted successfully.
  8. Run `velero backup describe default-backup-20230902133242` or `velero backup logs default-backup-20230902133242` for more details.
  9. root@k8s-master01:/data/velero#
Verify the backup
  1. root@k8s-master01:/data/velero# velero backup describe default-backup-20230902133242 --kubeconfig=./awsuser.kubeconfig --namespace velero-system
  2. Name:         default-backup-20230902133242
  3. Namespace:    velero-system
  4. Labels:       velero.io/storage-location=default
  5. Annotations:  velero.io/source-cluster-k8s-gitversion=v1.26.4
  6.               velero.io/source-cluster-k8s-major-version=1
  7.               velero.io/source-cluster-k8s-minor-version=26
  8. Phase:  Completed
  9. Namespaces:
  10.   Included:  default
  11.   Excluded:  <none>
  12. Resources:
  13.   Included:        *
  14.   Excluded:        <none>
  15.   Cluster-scoped:  included
  16. Label selector:  <none>
  17. Storage Location:  default
  18. Velero-Native Snapshot PVs:  auto
  19. TTL:  720h0m0s
  20. CSISnapshotTimeout:    10m0s
  21. ItemOperationTimeout:  1h0m0s
  22. Hooks:  <none>
  23. Backup Format Version:  1.1.0
  24. Started:    2023-09-02 13:33:01 +0000 UTC
  25. Completed:  2023-09-02 13:33:09 +0000 UTC
  26. Expiration:  2023-10-02 13:33:01 +0000 UTC
  27. Total items to be backed up:  288
  28. Items backed up:              288
  29. Velero-Native Snapshots: <none included>
  30. root@k8s-master01:/data/velero#
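One-off backups like the one above can also be run periodically through the Schedule CRD that the installer created. A minimal sketch; the schedule name and cron expression are illustrative assumptions, and the command is only printed (dry run) so it is safe to try without a cluster:

```shell
#!/bin/sh
# Daily 03:00 backup of the default namespace via a Velero Schedule.
# SCHEDULE_NAME and CRON are assumptions chosen for illustration.
SCHEDULE_NAME="default-daily"
CRON="0 3 * * *"          # minute hour day-of-month month day-of-week
CMD="velero schedule create ${SCHEDULE_NAME} --schedule=\"${CRON}\" --include-namespaces default --namespace velero-system"

# Dry run: print the command instead of running it; remove the echo to submit it.
echo "$CMD"
```

Backups produced by a schedule are named `<schedule-name>-<timestamp>` and expire after the TTL (720h by default, as noted in the introduction).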
Verify the backup data in MinIO

Delete a pod and verify data restore

  • Delete the pod
  1. root@k8s-master01:/data/velero# kubectl get pods
  2. NAME   READY   STATUS    RESTARTS      AGE
  3. bash   1/1     Running   5 (93m ago)   27d
  4. root@k8s-master01:/data/velero# kubectl delete pod bash -n default
  5. pod "bash" deleted
  6. root@k8s-master01:/data/velero# kubectl get pods                  
  7. No resources found in default namespace.
  8. root@k8s-master01:/data/velero#

  • Restore the pod
  1. root@k8s-master01:/data/velero# velero restore create --from-backup default-backup-20230902133242 --wait --kubeconfig=./awsuser.kubeconfig --namespace velero-system
  2. Restore request "default-backup-20230902133242-20230902134421" submitted successfully.
  3. Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
  4. ..............................
  5. Restore completed with status: Completed. You may check for more information using the commands `velero restore describe default-backup-20230902133242-20230902134421` and `velero restore logs default-backup-20230902133242-20230902134421`.
  6. root@k8s-master01:/data/velero#

  • Verify the pod
  1. root@k8s-master01:/data/velero# kubectl get pods
  2. NAME   READY   STATUS    RESTARTS   AGE
  3. bash   1/1     Running   0          77s
  4. root@k8s-master01:/data/velero# kubectl exec -it bash bash
  5. kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
  6. [root@bash ~]# ping www.baidu.com
  7. PING www.a.shifen.com (14.119.104.189) 56(84) bytes of data.
  8. 64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=1 ttl=53 time=42.5 ms
  9. 64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=2 ttl=53 time=42.2 ms
  10. ^C
  11. --- www.a.shifen.com ping statistics ---
  12. 2 packets transmitted, 2 received, 0% packet loss, time 1001ms
  13. rtt min/avg/max/mdev = 42.234/42.400/42.567/0.264 ms
  14. [root@bash ~]#
Back up the myserver namespace
  1. root@k8s-master01:/data/velero# DATE=`date +%Y%m%d%H%M%S`
  2. root@k8s-master01:/data/velero# velero backup create myserver-ns-backup-${DATE} \
  3. > --include-cluster-resources=true \
  4. > --include-namespaces myserver \
  5. > --kubeconfig=/root/.kube/config \
  6. > --namespace velero-system
  7. Backup request "myserver-ns-backup-20230902134938" submitted successfully.
  8. Run `velero backup describe myserver-ns-backup-20230902134938` or `velero backup logs myserver-ns-backup-20230902134938` for more details.
  9. root@k8s-master01:/data/velero#
Verify the backup in MinIO

Delete the deployment and verify restore
  1. root@k8s-master01:/data/velero# kubectl get pods -n myserver
  2. NAME                                                  READY   STATUS    RESTARTS        AGE
  3. myserver-myapp-deployment-name-6965765b9c-h4kj6       1/1     Running   13 (104m ago)   98d
  4. myserver-myapp-frontend-deployment-6bd57599f4-8zw5s   1/1     Running   13 (104m ago)   98d
  5. myserver-myapp-frontend-deployment-6bd57599f4-j276c   1/1     Running   13 (104m ago)   98d
  6. myserver-myapp-frontend-deployment-6bd57599f4-p76bw   1/1     Running   13 (13d ago)    98d
  7. root@k8s-master01:/data/velero# kubectl get deployments.apps -n myserver
  8. NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
  9. myserver-myapp-deployment-name       1/1     1            1           98d
  10. myserver-myapp-frontend-deployment   3/3     3            3           98d
  11. root@k8s-master01:/data/velero# kubectl delete deployments.apps myserver-myapp-deployment-name -n myserver
  12. deployment.apps "myserver-myapp-deployment-name" deleted
  13. root@k8s-master01:/data/velero# kubectl get pods -n myserver
  14. NAME                                                  READY   STATUS    RESTARTS        AGE
  15. myserver-myapp-frontend-deployment-6bd57599f4-8zw5s   1/1     Running   13 (106m ago)   98d
  16. myserver-myapp-frontend-deployment-6bd57599f4-j276c   1/1     Running   13 (106m ago)   98d
  17. myserver-myapp-frontend-deployment-6bd57599f4-p76bw   1/1     Running   13 (13d ago)    98d
  18. root@k8s-master01:/data/velero# velero restore create --from-backup myserver-ns-backup-20230902134938 --wait \
  19. > --kubeconfig=./awsuser.kubeconfig \
  20. > --namespace velero-system
  21. Restore request "myserver-ns-backup-20230902134938-20230902135401" submitted successfully.
  22. Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
  23. ..............................
  24. Restore completed with status: Completed. You may check for more information using the commands `velero restore describe myserver-ns-backup-20230902134938-20230902135401` and `velero restore logs myserver-ns-backup-20230902134938-20230902135401`.
  25. root@k8s-master01:/data/velero# kubectl get deployments.apps -n myserver
  26. NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
  27. myserver-myapp-deployment-name       0/1     1            0           37s
  28. myserver-myapp-frontend-deployment   3/3     3            3           98d
  29. root@k8s-master01:/data/velero# kubectl get pods -n myserver
  30. NAME                                                  READY   STATUS            RESTARTS        AGE
  31. myserver-myapp-deployment-name-6965765b9c-h4kj6       0/1     PodInitializing   0               69s
  32. myserver-myapp-frontend-deployment-6bd57599f4-8zw5s   1/1     Running           13 (108m ago)   98d
  33. myserver-myapp-frontend-deployment-6bd57599f4-j276c   1/1     Running           13 (108m ago)   98d
  34. myserver-myapp-frontend-deployment-6bd57599f4-p76bw   1/1     Running           13 (13d ago)    98d
  35. root@k8s-master01:/data/velero#
7.2、Back up specified resource objects

Back up pods or other specific resources in the specified namespaces
  1. root@k8s-master01:/data/velero# DATE=`date +%Y%m%d%H%M%S`
  2. root@k8s-master01:/data/velero#  velero backup create pod-backup-${DATE} --include-cluster-resources=true \
  3. >  --ordered-resources 'pods=default/bash,magedu/ubuntu1804,magedu/mysql-0;deployments.apps=myserver/myserver-myapp-frontend-deployment,magedu/wordpress-app-deployment;services=myserver/myserver-myapp-service-name,magedu/mysql,magedu/zookeeper' \
  4. >  --namespace velero-system --include-namespaces=myserver,magedu,default
  5. Backup request "pod-backup-20230902141842" submitted successfully.
  6. Run `velero backup describe pod-backup-20230902141842` or `velero backup logs pod-backup-20230902141842` for more details.
  7. root@k8s-master01:/data/velero#
Delete resources and verify restore

  • Delete the resources
  1. root@k8s-master01:/data/velero# kubectl get deployments.apps -n magedu
  2. NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
  3. magedu-consumer-deployment     3/3     3            3           22d
  4. magedu-dubboadmin-deployment   1/1     1            1           22d
  5. magedu-provider-deployment     3/3     3            3           22d
  6. wordpress-app-deployment       1/1     1            1           13d
  7. zookeeper1                     1/1     1            1           90d
  8. zookeeper2                     1/1     1            1           90d
  9. zookeeper3                     1/1     1            1           90d
  10. root@k8s-master01:/data/velero# kubectl delete deployments.apps wordpress-app-deployment -n magedu
  11. deployment.apps "wordpress-app-deployment" deleted
  12. root@k8s-master01:/data/velero# kubectl get deployments.apps -n magedu
  13. NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
  14. magedu-consumer-deployment     3/3     3            3           22d
  15. magedu-dubboadmin-deployment   1/1     1            1           22d
  16. magedu-provider-deployment     3/3     3            3           22d
  17. zookeeper1                     1/1     1            1           90d
  18. zookeeper2                     1/1     1            1           90d
  19. zookeeper3                     1/1     1            1           90d
  20. root@k8s-master01:/data/velero# kubectl get svc -n magedu
  21. NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
  22. magedu-consumer-server      NodePort    10.100.208.121   <none>        80:49630/TCP                                   22d
  23. magedu-dubboadmin-service   NodePort    10.100.244.92    <none>        80:31080/TCP                                   22d
  24. magedu-provider-spec        NodePort    10.100.187.168   <none>        80:44873/TCP                                   22d
  25. mysql                       ClusterIP   None             <none>        3306/TCP                                       79d
  26. mysql-0                     ClusterIP   None             <none>        3306/TCP                                       13d
  27. mysql-read                  ClusterIP   10.100.15.127    <none>        3306/TCP                                       79d
  28. redis                       ClusterIP   None             <none>        6379/TCP                                       88d
  29. redis-access                NodePort    10.100.117.185   <none>        6379:36379/TCP                                 88d
  30. wordpress-app-spec          NodePort    10.100.189.214   <none>        80:30031/TCP,443:30033/TCP                     13d
  31. zookeeper                   ClusterIP   10.100.237.95    <none>        2181/TCP                                       90d
  32. zookeeper1                  NodePort    10.100.63.118    <none>        2181:32181/TCP,2888:30541/TCP,3888:31200/TCP   90d
  33. zookeeper2                  NodePort    10.100.199.43    <none>        2181:32182/TCP,2888:32670/TCP,3888:32264/TCP   90d
  34. zookeeper3                  NodePort    10.100.41.9      <none>        2181:32183/TCP,2888:31329/TCP,3888:32546/TCP   90d
  35. root@k8s-master01:/data/velero# kubectl delete svc mysql -n magedu
  36. service "mysql" deleted
  37. root@k8s-master01:/data/velero# kubectl delete svc zookeeper -n magedu      
  38. service "zookeeper" deleted
  39. root@k8s-master01:/data/velero# kubectl get svc -n magedu
  40. NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
  41. magedu-consumer-server      NodePort    10.100.208.121   <none>        80:49630/TCP                                   22d
  42. magedu-dubboadmin-service   NodePort    10.100.244.92    <none>        80:31080/TCP                                   22d
  43. magedu-provider-spec        NodePort    10.100.187.168   <none>        80:44873/TCP                                   22d
  44. mysql-0                     ClusterIP   None             <none>        3306/TCP                                       13d
  45. mysql-read                  ClusterIP   10.100.15.127    <none>        3306/TCP                                       79d
  46. redis                       ClusterIP   None             <none>        6379/TCP                                       88d
  47. redis-access                NodePort    10.100.117.185   <none>        6379:36379/TCP                                 88d
  48. wordpress-app-spec          NodePort    10.100.189.214   <none>        80:30031/TCP,443:30033/TCP                     13d
  49. zookeeper1                  NodePort    10.100.63.118    <none>        2181:32181/TCP,2888:30541/TCP,3888:31200/TCP   90d
  50. zookeeper2                  NodePort    10.100.199.43    <none>        2181:32182/TCP,2888:32670/TCP,3888:32264/TCP   90d
  51. zookeeper3                  NodePort    10.100.41.9      <none>        2181:32183/TCP,2888:31329/TCP,3888:32546/TCP   90d
  52. root@k8s-master01:/data/velero#

  • Restore the resources
  1. root@k8s-master01:/data/velero# velero restore create --from-backup pod-backup-20230902141842 --wait \
  2. > --kubeconfig=./awsuser.kubeconfig \
  3. > --namespace velero-system
  4. Restore request "pod-backup-20230902141842-20230902142341" submitted successfully.
  5. Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
  6. ............................................
  7. Restore completed with status: Completed. You may check for more information using the commands `velero restore describe pod-backup-20230902141842-20230902142341` and `velero restore logs pod-backup-20230902141842-20230902142341`.
  8. root@k8s-master01:/data/velero#
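The same backup can also be restored into a different namespace with `--namespace-mappings`, which serves the cluster-to-cluster migration use case mentioned in the introduction. A minimal dry-run sketch; the target namespace `magedu-restored` is an illustrative assumption:

```shell
#!/bin/sh
# Restore the backup above while remapping the magedu namespace to
# magedu-restored (an assumed target namespace for illustration).
BACKUP="pod-backup-20230902141842"
CMD="velero restore create --from-backup ${BACKUP} --namespace-mappings magedu:magedu-restored --kubeconfig=./awsuser.kubeconfig --namespace velero-system --wait"

# Dry run: print the command instead of running it; remove the echo to execute.
echo "$CMD"
```

Resources that are cluster-scoped (and therefore not namespaced) are restored as-is; only namespaced objects follow the mapping.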

  • Verify that the resources were restored
  1. root@k8s-master01:/data/velero# kubectl get kubectl get deployments.apps -n magedu
  2. error: the server doesn't have a resource type "kubectl"
  3. root@k8s-master01:/data/velero# kubectl get deployments.apps -n magedu
  4. NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
  5. magedu-consumer-deployment     3/3     3            3           22d
  6. magedu-dubboadmin-deployment   1/1     1            1           22d
  7. magedu-provider-deployment     3/3     3            3           22d
  8. wordpress-app-deployment       1/1     1            1           2m57s
  9. zookeeper1                     1/1     1            1           90d
  10. zookeeper2                     1/1     1            1           90d
  11. zookeeper3                     1/1     1            1           90d
root@k8s-master01:/data/velero# kubectl get pods -n magedu
NAME                                            READY   STATUS      RESTARTS        AGE
magedu-consumer-deployment-798c7d785b-fp4b9     1/1     Running     3 (140m ago)    22d
magedu-consumer-deployment-798c7d785b-wmv9p     1/1     Running     3 (140m ago)    22d
magedu-consumer-deployment-798c7d785b-zqm74     1/1     Running     3 (13d ago)     22d
magedu-dubboadmin-deployment-798c4dfdd8-kvfvh   1/1     Running     3 (140m ago)    22d
magedu-provider-deployment-6fccc6d9f5-k6z7m     1/1     Running     3 (140m ago)    22d
magedu-provider-deployment-6fccc6d9f5-nl4zd     1/1     Running     3 (140m ago)    22d
magedu-provider-deployment-6fccc6d9f5-p94rb     1/1     Running     3 (140m ago)    22d
mysql-0                                         2/2     Running     12 (140m ago)   79d
mysql-1                                         2/2     Running     12 (140m ago)   79d
mysql-2                                         2/2     Running     12 (140m ago)   79d
redis-0                                         1/1     Running     8 (13d ago)     88d
redis-1                                         1/1     Running     8 (140m ago)    88d
redis-2                                         1/1     Running     8 (140m ago)    88d
redis-3                                         1/1     Running     8 (13d ago)     87d
redis-4                                         1/1     Running     8 (140m ago)    88d
redis-5                                         1/1     Running     8 (140m ago)    88d
ubuntu1804                                      0/1     Completed   0               88d
wordpress-app-deployment-64c956bf9c-6qp8q       2/2     Running     0               3m31s
zookeeper1-675c5477cb-vmwwq                     1/1     Running     10 (13d ago)    90d
zookeeper2-759fb6c6f-7jktr                      1/1     Running     10 (140m ago)   90d
zookeeper3-5c78bb5974-vxpbh                     1/1     Running     10 (140m ago)   90d
root@k8s-master01:/data/velero# kubectl get svc -n magedu
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
magedu-consumer-server      NodePort    10.100.208.121   <none>        80:49630/TCP                                   22d
magedu-dubboadmin-service   NodePort    10.100.244.92    <none>        80:31080/TCP                                   22d
magedu-provider-spec        NodePort    10.100.187.168   <none>        80:44873/TCP                                   22d
mysql                       ClusterIP   None             <none>        3306/TCP                                       4m6s
mysql-0                     ClusterIP   None             <none>        3306/TCP                                       13d
mysql-read                  ClusterIP   10.100.15.127    <none>        3306/TCP                                       79d
redis                       ClusterIP   None             <none>        6379/TCP                                       88d
redis-access                NodePort    10.100.117.185   <none>        6379:36379/TCP                                 88d
wordpress-app-spec          NodePort    10.100.189.214   <none>        80:30031/TCP,443:30033/TCP                     13d
zookeeper                   ClusterIP   10.100.177.73    <none>        2181/TCP                                       4m6s
zookeeper1                  NodePort    10.100.63.118    <none>        2181:32181/TCP,2888:30541/TCP,3888:31200/TCP   90d
zookeeper2                  NodePort    10.100.199.43    <none>        2181:32182/TCP,2888:32670/TCP,3888:32264/TCP   90d
zookeeper3                  NodePort    10.100.41.9      <none>        2181:32183/TCP,2888:31329/TCP,3888:32546/TCP   90d
root@k8s-master01:/data/velero#
7.3、Batch backup of all namespaces
root@k8s-master01:/data/velero# cat all-ns-backup.sh
#!/bin/bash
# Skip only the header line so every namespace (including default) is backed up
NS_NAME=$(kubectl get ns | awk 'NR>1{print $1}')
DATE=$(date +%Y%m%d%H%M%S)
cd /data/velero/
for i in $NS_NAME;do
  velero backup create ${i}-ns-backup-${DATE} \
    --include-cluster-resources=true \
    --include-namespaces ${i} \
    --kubeconfig=/root/.kube/config \
    --namespace velero-system
done
root@k8s-master01:/data/velero#
Run the script to perform the backups
root@k8s-master01:/data/velero# bash all-ns-backup.sh
Backup request "default-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe default-ns-backup-20230902143131` or `velero backup logs default-ns-backup-20230902143131` for more details.
Backup request "kube-node-lease-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe kube-node-lease-ns-backup-20230902143131` or `velero backup logs kube-node-lease-ns-backup-20230902143131` for more details.
Backup request "kube-public-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe kube-public-ns-backup-20230902143131` or `velero backup logs kube-public-ns-backup-20230902143131` for more details.
Backup request "kube-system-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe kube-system-ns-backup-20230902143131` or `velero backup logs kube-system-ns-backup-20230902143131` for more details.
Backup request "magedu-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe magedu-ns-backup-20230902143131` or `velero backup logs magedu-ns-backup-20230902143131` for more details.
Backup request "myserver-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe myserver-ns-backup-20230902143131` or `velero backup logs myserver-ns-backup-20230902143131` for more details.
Backup request "velero-system-ns-backup-20230902143131" submitted successfully.
Run `velero backup describe velero-system-ns-backup-20230902143131` or `velero backup logs velero-system-ns-backup-20230902143131` for more details.
root@k8s-master01:/data/velero#
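Firing one-off backups from a loop works, but Velero can also own the recurrence itself via `velero schedule create` (the schedule name, cron expression, and TTL below are illustrative assumptions, not values from this setup). A minimal sketch that builds and prints the command rather than executing it, so it can be inspected off-cluster first:

```shell
#!/bin/bash
# Sketch: assemble a `velero schedule create` invocation for a recurring
# cluster-wide backup. Name, cron spec, and TTL are example values.
build_schedule_cmd() {
  local name=$1 cron=$2 ttl=$3
  echo velero schedule create "${name}" \
    --schedule="${cron}" \
    --include-cluster-resources=true \
    --ttl "${ttl}" \
    --namespace velero-system
}

# Print the command (daily at 02:00, backups kept for 72h):
build_schedule_cmd all-ns-daily "0 2 * * *" 72h0m0s
```

Dropping the `echo` runs the creation for real; Velero then produces backups named `<schedule>-<timestamp>` on its own, without an external cronjob.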
Verify the backups in minio
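Besides browsing the minio console, the run can be cross-checked from the CLI with `velero backup get --namespace velero-system`. Since the script names each backup `${ns}-ns-backup-${DATE}`, the expected names are easy to reconstruct for grepping; a small sketch (the namespace and timestamp values come from the run above):

```shell
# Reconstruct the backup name the all-ns-backup.sh script produces for a
# given namespace and timestamp, for matching against `velero backup get`.
expected_backup_name() {
  local ns=$1 stamp=$2
  echo "${ns}-ns-backup-${stamp}"
}

# The magedu backup from the run above:
expected_backup_name magedu 20230902143131   # → magedu-ns-backup-20230902143131
```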

Source: https://www.cnblogs.com/qiuhom-1874/ — copyright is shared by the author and cnblogs. Reposting is welcome, but this notice must be retained and a link to the original given in a prominent position on the page; otherwise the right to pursue legal action is reserved.