
OpenStack Operations in Practice, Part 17: Integrating Glance with Ceph

1. Requirements

Glance, the image service in OpenStack, supports multiple storage adapters: images can be stored on the local filesystem, on an HTTP server, on the Ceph distributed storage system, or on open-source distributed filesystems such as GlusterFS and Sheepdog. This article describes how to integrate glance with Ceph.

Glance currently uses the local filesystem store, with images kept under the default path /var/lib/glance/images. Once the local filesystem is replaced with the distributed system Ceph, the images already on the system become unusable, so it is advisable to delete the current images and re-upload them all to Ceph once it has been deployed.
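
For example (a hedged sketch; the glance v1 CLI is assumed and <image-id> is a placeholder), the existing locally stored images can be listed and removed before the switch:

# List the registered images, then delete each one by ID
[root@controller_10_1_2_230 ~]# glance image-list
[root@controller_10_1_2_230 ~]# glance image-delete <image-id>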

2. How It Works

Using Ceph's RBD interface goes through libvirt, so libvirt and qemu must be installed on the client machine. Within this Ceph/OpenStack architecture, there are three places in OpenStack that use storage:

1. Glance images: local storage by default, under /var/lib/glance/images;
2. Nova instance storage: local by default, under /var/lib/nova/instances;
3. Cinder volumes: backed by LVM by default.
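
Before going further, a quick sanity check (an extra step, not part of the original walkthrough) is to confirm that the installed qemu was built with rbd support:

# "rbd" should appear in qemu-img's "Supported formats:" line
[root@controller_10_1_2_230 ~]# qemu-img --help | grep rbd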

3. Integrating Glance with Ceph

1. Create a storage pool

1. Ceph creates one pool by default, named rbd:

[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,
[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

2. Create a pool, setting pg_num to 128:

[root@controller_10_1_2_230 ~]# ceph osd pool create images 128
pool 'images' created

3. Check the pool's pg_num and pgp_num values:

[root@controller_10_1_2_230 ~]# ceph osd pool get images pg_num
pg_num: 128
[root@controller_10_1_2_230 ~]# ceph osd pool get images pgp_num
pgp_num: 128

4. List the pools in Ceph:

[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,1 images,
[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on
pool images id 1          # a new pool was added, with id 1
  nothing is going on
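
Why 128? A common rule of thumb (from the Ceph documentation, not the original article) is pg_num ≈ (number of OSDs × 100) / replica count, rounded up to a power of two. This cluster has two OSDs (osd.0 and osd.1 appear in the auth listing below), so a replica count of 2 or 3 yields 100 or roughly 67, and both round up to 128. The pool's actual replica count can be checked directly:

# Report the pool's replica count, which feeds into the pg_num rule of thumb
[root@controller_10_1_2_230 ~]# ceph osd pool get images size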

2. Configure the Ceph client

1. Glance, as a Ceph client (specifically glance-api), needs a copy of the Ceph configuration file; simply copy one over from a Ceph monitor node. In my environment the controller node and the Ceph monitor are the same machine, so nothing needs to be done:

# If the controller node and the Ceph monitor node are separate, copy the file over
[root@controller_10_1_2_230 ~]# scp /etc/ceph/ceph.conf root@controller_10_1_2_230:/etc/ceph/ceph.conf

2. Install the client rpm package:

[root@controller_10_1_2_230 ~]# yum install python-rbd -y
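
To confirm the binding is in place (a hedged extra check, not in the original article), the rbd python module that the glance rbd store loads can be imported directly:

# Exits silently if the python-rbd binding installed correctly
[root@controller_10_1_2_230 ~]# python -c 'import rbd'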

3. Configure Ceph authentication

1. Create the authentication key:

[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==

2. View the authentication list:

[root@controller_10_1_2_230 ~]# ceph auth list
installed auth entries:

osd.0
        key: AQDsx6lWYGehDxAAGwcYP9jDvH2Zaa8JlGwj1Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQD1x6lWQCYBERAAjIKO1LVpj8FvVefDvNQZSA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQCexqlWQL6OGBAA2v5LsYEB5VgLyq/K2huY3A==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQCexqlWUMNRMRAAZEp/UlhQuaixMcNy5d5pPw==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQCexqlWQFfpJBAAfPCx4sTLNztBESyFKys9LQ==
        caps: [mon] allow profile bootstrap-osd
client.glance                     # the credentials glance uses to connect to ceph
        key: AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images

3. Copy the key generated for glance to the client:

[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
# Export the key to the client keyring and make it readable by the glance user
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
[root@controller_10_1_2_230 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller_10_1_2_230 ~]# ll /etc/ceph/ceph.client.glance.keyring
-rw-r--r-- 1 glance glance 64 Jan 28 17:17 /etc/ceph/ceph.client.glance.keyring
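
At this point it is worth confirming (an extra verification step, not from the original article) that the new user can actually authenticate against the cluster; the 'allow r' capability on the monitor is enough to read cluster status:

# Picks up /etc/ceph/ceph.client.glance.keyring automatically based on the --id value
[root@controller_10_1_2_230 ~]# ceph --id glance -s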

4. Configure Glance to use Ceph as the backend store

1. Back up the glance-api configuration file so it can be restored later:

[root@controller_10_1_2_230 ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

2. Modify the glance configuration file to connect to ceph:

[root@controller_10_1_2_230 ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messaging
rabbit_hosts = 10.1.2.230:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_ha_queues = True
rabbit_durable_queues = False
rabbit_userid = glance
rabbit_password = GLANCE_MQPASS
rabbit_virtual_host = /glance
default_store = rbd                        # the backend store glance uses
known_stores = glance.store.rbd.Store      # the rbd store driver
rbd_store_ceph_conf = /etc/ceph/ceph.conf  # ceph config file; contains the monitor addresses, from which the auth info is obtained
rbd_store_user = glance                    # auth user, i.e. the user just created
rbd_store_pool = images                    # the storage pool to connect to
rbd_store_chunk_size = 8                   # chunk size, i.e. the size (in MB) images are split into

3. Restart the glance services:

[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-api restart
Stopping openstack-glance-api:                             [  OK  ]
Starting openstack-glance-api:                             [  OK  ]
[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-registry restart
Stopping openstack-glance-registry:                        [  OK  ]
Starting openstack-glance-registry:                        [  OK  ]
[root@controller_10_1_2_230 ~]# tail -2 /etc/glance/glance-api.conf
# location strategy defined by the 'location_strategy' config option.
#store_type_preference =
[root@controller_10_1_2_230 ~]# tail -2 /var/log/glance/registry.log
-01-28 18:40:25.231 21890 INFO glance.wsgi.server [-] Started child 21896
-01-28 18:40:25.232 21896 INFO glance.wsgi.server [-] (21896) wsgi starting up on http://0.0.0.0:9191/
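
These option names match the glance-api of this era; on newer OpenStack releases the same settings live in a dedicated [glance_store] section. A minimal sketch for newer releases, under the assumption of the same user and pool names:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8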

5. Test the Glance/Ceph integration

[root@controller_10_1_2_230 ~]# glance --debug image-create --name glance_ceph_test --disk-format qcow2 --container-format bare --file cirros-0.3.3-x86_64-disk.img
curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 1396' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: 062af9027a85487997d176c9f1e963f2' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: glance_ceph_test' -d '<open file u'cirros-0.3.3-x86_64-disk.img', mode 'rb' at 0x1ba24b0>' http://controller:9292/v1/images
HTTP/1.1 201 Created
content-length: 489
etag: 133eae9fb1c98f45894a4e60d8736619
location: http://controller:9292/v1/images/348a90e8-3631-4a66-a45d-590ec6413e7d
date: Thu, 28 Jan 10:42:06 GMT
content-type: application/json
x-openstack-request-id: req-b993bc0b-447e-49b4-a8ce-bd7765199d5a

{"image": {"status": "active", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "-01-28T10:42:06", "owner": "ef4b83a909dc4689b663ff2c70022478", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "348a90e8-3631-4a66-a45d-590ec6413e7d", "size": 1396, "virtual_size": null, "name": "glance_ceph_test", "checksum": "133eae9fb1c98f45894a4e60d8736619", "created_at": "-01-28T10:42:04", "disk_format": "qcow2", "properties": {}, "protected": false}}

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | -01-28T10:42:04                      |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 348a90e8-3631-4a66-a45d-590ec6413e7d |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | glance_ceph_test                     |
| owner            | ef4b83a909dc4689b663ff2c70022478     |
| protected        | False                                |
| size             | 1396                                 |
| status           | active                               |
| updated_at       | -01-28T10:42:06                      |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller_10_1_2_230 ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+------+--------+
| 56e96957-1308-45c7-9c66-1afff680b217 | cirros-0.3.3-x86_64 | qcow2       | bare             | 1396 | active |
| 348a90e8-3631-4a66-a45d-590ec6413e7d | glance_ceph_test    | qcow2       | bare             | 1396 | active |   # upload succeeded
+--------------------------------------+---------------------+-------------+------------------+------+--------+
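
As an extra round-trip check (not part of the original test), the image can be downloaded back out of Ceph and compared with the source file; glance's checksum field is the md5 of the image contents:

[root@controller_10_1_2_230 ~]# glance image-download --file /tmp/glance_ceph_test.img 348a90e8-3631-4a66-a45d-590ec6413e7d
[root@controller_10_1_2_230 ~]# md5sum /tmp/glance_ceph_test.img cirros-0.3.3-x86_64-disk.img
# Both sums should match the checksum reported above: 133eae9fb1c98f45894a4e60d8736619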

6. View the data in the Ceph pool

[root@controller_10_1_2_230 ~]# rados -p images ls
rbd_directory
rbd_header.10d7caaf292
rbd_data.10dd1fd73446.0000000000000001
rbd_id.348a90e8-3631-4a66-a45d-590ec6413e7d
rbd_header.10dd1fd73446
rbd_data.10d7caaf292.0000000000000000
rbd_data.10dd1fd73446.0000000000000000
rbd_id.8a09b280-5916-44c6-9ce8-33bb57a09dad

The glance data is now stored in the Ceph cluster.
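
The rbd_id.* objects show that each glance image is stored as an RBD image named after its glance ID; it can be inspected with the rbd tool (an extra step, not in the original article):

# The reported object size should match rbd_store_chunk_size = 8, i.e. "order 23 (8192 kB objects)"
[root@controller_10_1_2_230 ~]# rbd -p images info 348a90e8-3631-4a66-a45d-590ec6413e7d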

4. Summary

Storing OpenStack's glance data in Ceph is an excellent solution. It safeguards the image data, and because glance and nova share the same storage pool, virtual machines can be created rapidly via copy-on-write, bringing VM creation down to seconds.
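
One related setting worth knowing (not covered in this article): for nova and cinder to perform those copy-on-write clones, glance must expose the rbd location of each image, which is controlled in glance-api.conf:

[DEFAULT]
# Lets nova/cinder see an image's rbd location so they can clone it instead of copying it
show_image_direct_url = True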
