去皮卡多 posted on 2024-6-14 21:02:22

OpenStack Cloud Computing (6): Integrating an OpenStack Environment with Ceph

I. Procedure:

(1) The client node also needs the cent user:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph  
(2) On the OpenStack nodes that will use Ceph (e.g. compute-node and storage-node), install the downloaded packages:
yum localinstall ./* -y  
Alternatively, install the client packages on every node that needs to access the Ceph cluster:
yum install python-rbd
yum install ceph-common
If you installed from the local RPMs as above, these two packages are already included there.
(3) On the deployment node, run the following to install Ceph for the OpenStack nodes:
ceph-deploy install controller

ceph-deploy admin controller  
(4) Run on the client:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

(5) Create pools; this only needs to be done on one Ceph node:
ceph osd pool create images 1024
ceph osd pool create vms 1024
ceph osd pool create volumes 1024  
 
List the pools:
ceph osd lspools  
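On Ceph Luminous (12.x) and later, RBD pools also need to be tagged with the rbd application before clients use them; a minimal sketch for the three pools above (skip this on older releases):
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable volumes rbd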
(6) Create the glance and cinder users in the Ceph cluster; this only needs to be done on one Ceph node:
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

Nova reuses the cinder user, so no separate user is created for it.
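To double-check the capabilities that were just granted, the two users can be inspected on any Ceph node, for example:
ceph auth get client.glance
ceph auth get client.cinder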
(7) Copy the keyrings; this only needs to be done on one Ceph node:
ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring
Use scp to copy them to the other nodes (the Ceph cluster nodes and the OpenStack nodes that use Ceph, e.g. compute-node and storage-node; this integration targets an all-in-one environment, so copying to the controller node is enough):
# ls
ceph.client.admin.keyring  ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf  rbdmap  tmpR3uL7W
#
# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/
(8) Change the ownership of the keyring files (run on all client nodes):
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
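To confirm the ownership change took effect, for example:
ls -l /etc/ceph/ceph.client.glance.keyring /etc/ceph/ceph.client.cinder.keyring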
(9) Add the Ceph secret to libvirt (only needed on the nova-compute nodes; do it on every compute node):
uuidgen

940f0485-e206-4b49-b878-dcd0cb9c70a4

In the /etc/ceph/ directory (the directory itself doesn't matter; /etc/ceph is just convenient for management):

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

Copy secret.xml to all compute nodes and run:

virsh secret-define --file secret.xml

ceph auth get-key client.cinder > ./client.cinder.key

virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
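Before moving on, the secret can be checked on each compute node; virsh should list the UUID and return the stored key:
virsh secret-list
virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4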
(10) Configure Glance; set the following options, in the sections shown, on all controller nodes:
vim /etc/glance/glance-api.conf

[DEFAULT]
default_store = rbd

[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone
Then restart the Glance API service on all controller nodes and check its status:
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service
Create an image to verify:
# openstack image create "cirros"   --file cirros-0.3.3-x86_64-disk.img   --disk-format qcow2 --container-format bare --public
  
# rbd ls images
9ce5055e-4217-44b4-a237-e7b577a20dac
If an image is listed in the output, the Glance integration succeeded.


(11) Configure Cinder; set the following options in the corresponding sections:
vim /etc/cinder/cinder.conf

[DEFAULT]
my_ip = 172.16.254.63
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
volume_backend_name = ceph
Restart the Cinder services:
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
Create a volume to verify; it should then appear in the volumes pool:
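The volume itself can be created through the OpenStack CLI. A minimal sketch, assuming a volume type bound to the ceph backend configured above (the type name and volume name are arbitrary examples):
openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph
openstack volume create --size 1 --type ceph test-volume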
# rbd ls volumes
volume-43b7c31d-a773-4604-8e4a-9ed78ec18996
(12) Configure Nova; set the following options in the corresponding sections:
vim /etc/nova/nova.conf

[DEFAULT]
my_ip = 172.16.254.63
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[cinder]
os_region_name = RegionOne

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[libvirt]
virt_type = qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default

[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
Restart the Nova services:
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-compute.service openstack-nova-cert.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service openstack-nova-cert.service
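To verify the Nova side, boot a test instance and check that its ephemeral disk was created in the vms pool. A minimal sketch, assuming the cirros image uploaded earlier, an existing m1.tiny flavor and a network named provider (adjust these names to the actual environment):
openstack server create --flavor m1.tiny --image cirros --network provider test-vm
rbd ls vms
The vms pool should then list an object named <instance-uuid>_disk.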
II. Common Ceph commands

1. View the Ceph cluster configuration:
ceph daemon /var/run/ceph/ceph-mon.$(hostname -s).asok config show
2. After modifying ceph.conf on the deployment node, push the new configuration to all Ceph nodes:
ceph-deploy  --overwrite-conf config push dlp node1 node2 node3
3. Check the quorum status to confirm that a mon was added successfully:
ceph quorum_status --format json-pretty
4. List the pools:
ceph osd lspools
5. Show detailed pool information:
ceph osd dump |grep pool
6. Check the replica count of the pools:
ceph osd dump|grep -i size
7. Create a pool:
ceph osd pool create pooltest 128
8. Delete a pool (the pool name must be given twice, together with --yes-i-really-really-mean-it):
ceph osd pool delete data
ceph osd pool delete data data  --yes-i-really-really-mean-it
9. Get and set the replica count of a pool:
ceph osd pool get data size
ceph osd pool set data size 3
10. Set pool quotas:
ceph osd pool set-quota data max_objects 100                              # at most 100 objects
ceph osd pool set-quota data max_bytes $((10 * 1024 * 1024 * 1024))       # capacity capped at 10 GB
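To review the quota that was just set:
ceph osd pool get-quota data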
11. Rename a pool:
ceph osd pool rename data date


12. PG, Placement Groups. CRUSH first breaks data down into a set of objects, then hashes each object's name together with the replication level and the total number of PGs in the system to produce a PG ID. A PG can be thought of as a logical container holding many objects, and this logical container is in turn mapped onto multiple OSDs. Without PGs, managing and tracking the replication and placement of millions of objects across thousands of OSDs would be extremely difficult, and the compute resources consumed in managing that many objects would be unimaginable. A configuration of 50-100 PGs per OSD is recommended.
      PGP (Placement Groups for Placement) exists to drive placement; its value should be kept equal to the total number of PGs (pg_num). For a Ceph pool, if pg_num is increased, pgp_num should be adjusted to the same value so that the cluster can start rebalancing.
      The pg_num parameter defines the number of PGs, and PGs are mapped to OSDs. When a pool's PG count is increased, the PGs still keep their mapping to the original OSDs and Ceph does not start rebalancing yet. Only when pgp_num is then increased do PGs begin migrating from the original OSDs to other OSDs, and rebalancing actually begins.
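In practice the sequence looks like this (a sketch using a hypothetical pool named data and a target of 128 PGs; rebalancing only starts after the pgp_num change):
ceph osd pool set data pg_num 128
ceph osd pool set data pgp_num 128
ceph -s        # watch the PG states; backfill starts once pgp_num has been raised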
Calculating the PG count:
https://img-blog.csdnimg.cn/direct/76bf5f95cac84f3c8364fd2033da3d9c.jpeg
Total PGs in the Ceph cluster:
Total PGs = (total OSDs * 100) / max replica count        ** round the result to the nearest power of 2
Total PGs per pool in the Ceph cluster:
PGs per pool = (total OSDs * 100 / max replica count) / number of pools
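As a worked example with hypothetical numbers (9 OSDs, 3 replicas, 3 pools):
Total PGs = (9 * 100) / 3 = 300, rounded to the nearest power of 2 gives 256
PGs per pool = 300 / 3 = 100, rounded to the nearest power of 2 gives 128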
Get the current pg_num and pgp_num values:
ceph osd pool get data pg_num
ceph osd pool get data pgp_num

13. Modify a pool's pg_num and pgp_num:
ceph osd pool set data pg_num 1
ceph osd pool set data pgp_num 1

