This document outlines the planning and implementation of a unified CloveStorage CEPH storage pool, which successfully integrates with TECS, RedHat OpenStack, and VMware. This solution addresses the increasingly prominent issue of insufficient storage space across various integration projects. Each integrated data center (DC) can be deployed according to commercial storage specifications, enhancing the efficiency and quality of integration verification.
1. Background Introduction
Ceph is software-defined storage (SDS). SDS is a data storage approach in which all storage-related control operations are performed in software running outside the physical storage hardware. Ceph’s advantage lies in its ability to provide high performance, high reliability, high capacity, and scalability on inexpensive storage devices without relying on proprietary storage servers, making it well suited to massive storage scenarios.
Ceph achieves high availability and scalability through the CRUSH algorithm, whose decentralized data placement overcomes the limitations of traditional storage devices. Through its external APIs, Ceph offers object storage, file storage, and block storage, while the underlying RADOS (Reliable Autonomic Distributed Object Store) layer ensures reliability and scalability. Because of these advantages, an increasing number of data centers are choosing Ceph as their storage resource.
2. Unified CEPH Storage Pool Planning
The unified CEPH storage pool connects to TECS, RedHat OSP, and VMware clouds through switches or routers, providing storage services to these clouds. The CEPH storage pool is composed of low-cost, high-capacity rack servers.
The initial planning for the Ceph cloud storage center includes 10 ZXCLOUD R5300 servers, 1 9904 switch, and 1 5928 switch, with reserved resources for future expansion to 20 ZXCLOUD R5300 storage devices. The 5928 switch is used for data exchange in the management plane, while the 9904 switch handles data plane exchanges.
Management Plane: Used for configuring server operating systems from the debugging machines and for interactions between the web management system and the storage nodes. This network uses 1G network ports.
Data Plane: Facilitates interactions between server storage nodes using 10G optical ports.
Public Service Plane: A storage service network responsible for interactions between clients (TECS) and the storage cluster, MON to MON, and MON to OSD. This network also uses 10G optical ports.
Hardware Management Plane: Connects to the PIM of hardware servers for easy server operation and maintenance. This network uses 1G network ports.
3. Unified CEPH Storage Pool Construction
The CEPH setup process can be roughly divided into four steps: obtaining the installation package, RAID configuration, OS installation, and cluster deployment.
4. Unified CEPH Storage Pool Application Practice
A Ceph cluster can act as a storage backend for OpenStack, providing volume storage to OpenStack. OpenStack can boot an instance from a volume or attach additional volumes to a running VM. OpenStack uses QEMU to access TECS Storage (Ceph) block devices through librbd.
Practice 1: Integrating TECS with the Storage Pool
4.1 Establishing Routing Between TECS and CEPH
1. Data Center and 9904 Connection Planning
The CEPH storage pool uses the 9904 switch with a 10G switching card to connect to external data centers. Each computing center is connected over a separate VLAN, with layer 3 routing enabled for mutual communication. Each G14 port connects to an ODF fiber aggregation switch, which in turn links to the 9904 switch at D4. Currently, all external network cards of CEPH are connected to the 9904 switch, with the corresponding ports in the SHUTDOWN state. During integration, check the connection status of the G14 port on the rack where TECS resides and ensure that the route to the 9904 switch is established over this fiber.
2. Configuring Routes to the DC on the 9904
The VLAN and interface IP configuration on the 9904 has already been completed; only the specific routes to each data center need to be added on the 9904.
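For reference, adding such a route on the 9904 would look roughly like the line below. This is only a sketch: it assumes a Cisco-like ZXR10 static-route syntax, and the destination segment and next-hop addresses are placeholders; consult the 9904 configuration guide for the exact command.
# placeholder addresses: one TECS data-center segment routed toward its interconnect next hop
ip route 10.10.20.0 255.255.255.0 10.10.1.254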
3. Configuring Routes from CEPH Nodes to TECS
On all CEPH nodes, configure the routes to each TECS address segment.
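On a Linux-based CEPH node this is an ordinary static route; a minimal sketch with placeholder values (the TECS segment 10.10.20.0/24, the gateway 192.168.100.1, and the interface eth2 are assumptions):
# add a route to one TECS address segment via the storage-side gateway (placeholder values)
ip route add 10.10.20.0/24 via 192.168.100.1 dev eth2
# confirm the route is present
ip route show | grep 10.10.20.0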
4.2 Deploying the CEPH Client on TECS
1. Configuring Routes from TECS to CEPH
Establish routes from all TECS nodes (both compute and control nodes) to CEPH, and verify that the IP routes from every TECS node to the designated CEPH network segment are in place.
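The same ip route add command shown above applies on the TECS nodes, with source and destination reversed. To keep the route across reboots, a persistent route file can be used; the sketch below assumes a RHEL-style distribution and placeholder interface and addresses.
# /etc/sysconfig/network-scripts/route-eth2 (placeholder interface and addresses)
192.168.100.0/24 via 10.10.20.254 dev eth2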
2. Deploying the CEPH Client
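The exact package set depends on the CloveStorage client delivery, but on a typical Linux node the client deployment amounts to installing the Ceph client packages and distributing the cluster configuration file; a hedged sketch (package names assume a standard Ceph distribution, and <mon-node> is a placeholder):
# install the Ceph client packages on every TECS node
yum install -y ceph-common python-rbd
# copy the cluster configuration from a CEPH monitor node
scp root@<mon-node>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf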
3. Connecting TECS to CEPH
The first OpenStack cluster can be integrated with the Ceph cluster using scripts. For integrating the second OpenStack cluster with the same Ceph cluster, the steps currently must be performed manually.
4.3 Creating Resource Pools on CEPH
Create the required storage pools for Cinder, Nova, and Glance through the web interface (volumes2, vms2, and images2, respectively), and select the previously created low-, medium-, or high-performance security rules as needed.
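If the web interface is not available, the same pools can also be created from the Ceph command line; a sketch (the placement-group count of 128 is an assumption and must be sized per cluster):
# create the pools used by Cinder, Nova, and Glance (PG counts are placeholders)
ceph osd pool create volumes2 128
ceph osd pool create vms2 128
ceph osd pool create images2 128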
Configure the authentication of the Ceph client and storage pools, generate the corresponding key files, and copy the generated key files to all other OpenStack nodes:
ceph auth get-or-create client.cinder2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes2, allow rwx pool=vms2, allow rx pool=images2' -o /etc/ceph/ceph.client.cinder2.keyring
ceph auth get-or-create client.glance2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images2' -o /etc/ceph/ceph.client.glance2.keyring
1. Generate ceph.client.glance2.keyring and ceph.client.cinder2.keyring under /etc/ceph and copy these two files to all other nodes.
2. Modify the key file owner to the corresponding component user on the OpenStack control node.
3. On the nodes running nova-compute, add the key to libvirt and remove the temporary key file (see the sketch after this list).
4. Run uuidgen on the compute node. uuidgen needs to be run only once; the generated UUID is reused everywhere a UUID is required in the following steps and configuration files.
5. Create a secret.xml file under /etc/ceph on the compute node.
6. Execute the configuration on the compute node.
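Steps 3 through 6 correspond to the standard libvirt secret workflow for RBD; a minimal sketch, assuming the client.cinder2 user created above and a placeholder UUID:
# step 4: generate one UUID and reuse it everywhere a UUID is needed (value below is a placeholder)
uuidgen
# step 5: create /etc/ceph/secret.xml on the compute node
cat > /etc/ceph/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF
# steps 3 and 6: define the secret in libvirt, load the cinder2 key into it, then remove the temporary files
ceph auth get-key client.cinder2 > client.cinder2.key
virsh secret-define --file /etc/ceph/secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder2.key)
rm -f client.cinder2.key /etc/ceph/secret.xml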
4.5 Modifying the OpenStack Configuration Files
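With the keys and libvirt secret in place, the OpenStack services are pointed at the new pools. The snippets below are a sketch of the usual RBD-related options; the backend section name in cinder.conf and the UUID are assumptions, and the exact file names and option sets vary with the OpenStack/TECS release.
# /etc/cinder/cinder.conf (backend section name is an assumption)
[ceph2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes2
rbd_user = cinder2
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
# /etc/nova/nova.conf
[libvirt]
images_type = rbd
images_rbd_pool = vms2
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder2
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
# /etc/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images2
rbd_store_user = glance2
rbd_store_ceph_conf = /etc/ceph/ceph.conf
Restart the Cinder, Nova, and Glance services after editing the files so the changes take effect.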
4.6 Verifying the Integration
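Verification can be as simple as creating a volume and an instance from OpenStack and confirming that the corresponding RBD images appear in the new pools; a sketch using standard CLI commands (names, sizes, and IDs are placeholders, and the exact client commands depend on the OpenStack release):
# create a test volume and boot a test instance
openstack volume create --size 10 ceph-test-vol
openstack server create --flavor m1.small --image <image-id> --nic net-id=<net-id> ceph-test-vm
# confirm the backing RBD images exist in the Ceph pools
rbd ls volumes2
rbd ls vms2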
Practice 2: Integrating VMware with the CEPH Storage Pool
1. Create a VMkernel Network Adapter and Configure Storage Connection Address
- Select a host and navigate to Manage → Networking → VMkernel adapter → Add network.
- Choose VMkernel network adapter, then click Next to proceed.
- Choose "Select an existing network", click Browse, choose the previously created storage connection port group, then click OK.
- Click Next to continue.
- Click Next again.
- Enter the IP address and subnet mask, then click Next to continue.
- Click Finish to complete the VMkernel network adapter configuration.
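The same VMkernel configuration can also be performed from the ESXi command line; a sketch with placeholder names (vmk1, the port-group name, and the addresses are assumptions):
# create the VMkernel adapter on the storage port group and assign a static IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="CephStorage"
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.21 --netmask=255.255.255.0 --type=static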
2. Add Software iSCSI Adapter
- Select a host, click Storage to switch to the storage management interface, then click to add the iSCSI Software Adapter.
- Click OK to complete the addition of the software iSCSI adapter.
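Equivalently, the software iSCSI adapter can be enabled from the command line, and listing it shows the host IQN needed for the client definitions in the next step; a sketch:
# enable the software iSCSI adapter and list it to obtain the host IQN
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list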
3. Configure the Ceph Cluster
Prerequisite: The Ceph cluster must be set up and support iSCSI mounting.
- Create a 100GB block storage volume named vmwarelun (a CLI sketch of this step follows this list).
- Create three clients (hosts) named vmware1, vmware2, and vmware3. Each client’s IQN can be found in the vSphere Center interface by selecting the host and navigating to Configuration → Storage → Storage Adapters → iSCSI Software Adapter.
- Create a mapping group named vmwareGroup.
- Add vmwarelun and the three client hosts (vmware1, vmware2, and vmware3) to the same mapping group vmwareGroup. At this point, vmwarelun becomes shared storage for the three ESXi hosts.
- Enable iSCSI on the server.
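These steps are normally performed in the CloveStorage web interface. For the volume-creation step, the plain RBD equivalent would be roughly the sketch below; the pool name is an assumption, and the iSCSI client and mapping-group configuration remain product-specific.
# create a 100 GB image to serve as the VMware LUN (pool name is a placeholder; older rbd releases take the size in MB)
rbd create rbd/vmwarelun --size 100G
rbd info rbd/vmwarelun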
4. Connect VMware to the Ceph Cluster
Add the Ceph Cluster’s Service Address
On the host’s Manage page, navigate to Storage → Storage Adapters → iSCSI Software Adapter.
Click on Targets, select Dynamic Discovery, and then click Add.
Enter the Ceph cluster’s service address (ensure this is the external service address of the cluster, not the gateway server address mentioned earlier).
Rescan Storage Controllers
1. After adding the service address, click the button to rescan the storage controllers (a CLI sketch of the discovery and rescan steps follows this list).
2. The system will identify the volumes created in the Ceph cluster.
3. Click OK to scan all devices and volumes.
4. In the iSCSI Software Adapters section, you will see the scanned disk devices.
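The discovery and rescan steps can likewise be scripted from the ESXi command line; a sketch with a placeholder adapter name and service address:
# point the software iSCSI adapter at the Ceph cluster's iSCSI service address, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.100.100:3260
esxcli storage core adapter rescan --all
# list the discovered devices
esxcli storage core device list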
5. Verification: Create virtual machines in VMware using the Ceph storage pool and verify the storage functionality to ensure the VMware hosts are effectively utilizing the shared Ceph storage pool.
Practice 3: Integrating RedHat OSP with the Storage Pool
There are two ways to integrate RedHat OSP with the unified storage pool: installing either a custom-developed Ceph client or the RedHat client on the control and compute nodes. Both methods are feasible; this article uses the custom-developed client on the compute nodes. The integration process is similar to the TECS integration described in Practice 1 and is not detailed here. During client installation, some dependency packages might be missing, which can cause installation failures. To resolve this, mount the RedHat ISO image to `/mnt` and use `rpm -ivh xxx.rpm` to install the necessary dependencies.
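For the dependency issue, the usual workaround looks like the sketch below; the ISO path and package name are placeholders.
# mount the RedHat installation ISO and install the missing dependencies from it
mount -o loop /root/rhel-server-dvd.iso /mnt
rpm -ivh /mnt/Packages/<missing-package>.rpm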
Ceph supports multiple storage methods, including object storage, block storage, and file storage. This article successfully verifies that the custom-developed Ceph storage pool can provide storage services for cloud environments from different vendors within various integration projects. The Ceph product offers web-based operation and maintenance, which is simple and efficient.
Conclusion
In this article, we planned, designed, and successfully built a unified Ceph storage pool, achieving seamless integration with TECS, RedHat OpenStack, and VMware. This solution not only addresses the issue of insufficient storage space but also enhances the efficiency and quality of integration validation. As our projects continue to progress, we will keep optimizing and refining this solution to provide robust storage support for more integration projects.