Computing & Cloud

CREODIAS users have different storage options with different prices, IO performance, access speed and data resilience. Users can choose the configurations that best suit their projects. Worth noting are the new, powerful configurations with local storage, which offer more than 10x better IO performance and 10x lower latency compared to the previously available solutions.
 
From this article you will learn about the features and performance of the different types of storage.
 
From the service point of view, on CREODIAS you can choose from:
 
  • Volume storage (SSD or HDD network storage) - if configured, it can be used to boot a VM (see the provisioning sketch after this list);
  • VM related storage (SSD network storage), which you can provision only together with a VM and which is used as the default system disk. This storage is physically identical to SSD Volume Storage;
  • Object Storage, which is a mix of storage space and metadata with a special protocol (S3) to access the data;
  • Local NVMe storage for DS servers (each DS server receives two identical, very fast NVMe PCIe drives);
  • Local ephemeral storage for HMD virtual machines, in which a very fast physical NVMe drive is attached to the VM.
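
As an illustration of provisioning the first option, here is a minimal sketch that creates an HDD-backed volume and attaches it to an existing VM with the OpenStack CLI. The volume type name ("hdd"), the size and the VM name ("my-vm") are placeholders and may differ on your project; check the types actually offered with the first command.

# List the volume types available to your project
openstack volume type list

# Create a 100 GB volume on the HDD-backed pool ("hdd" is an assumed type name)
openstack volume create --type hdd --size 100 my-data-volume

# Attach the volume to an existing VM named "my-vm" (assumed name)
openstack server add volume my-vm my-data-volume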
 
From the physical point of view these storage options correspond to three storage media types:
 
  • Network HDD Ceph storage – this is a cheap, reliable, very resilient and very large storage pool. This storage is available both as block (volumes) and object (S3) storage. The two storage types have different points of access, with different costs and performance.
  • Network SSD Ceph storage – a fast, reliable and resilient storage. It is the default storage medium for VMs. VM related storage and SSD volume storage are kept on this type of media.
  • Local compute storage (usually NVMe) – this storage is located on a very fast disk inside the server that hosts your VM (see the mount sketch after this list). It means that when the compute server experiences a hardware malfunction, the storage medium becomes inaccessible or, on very rare occasions, you may experience data loss. The NVMe drive that hosts data for the HMD configuration is a single, very reliable high-performance drive with MTBF > 2M hours and up to 400k IOPS.
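
On an HMD VM, the local ephemeral drive appears as an additional block device that you format and mount yourself. A minimal sketch, assuming the drive shows up as /dev/vdb (the actual device name may differ, so verify it first):

# Identify the local ephemeral drive (the device name below is an assumption)
lsblk

# Create a filesystem and mount it; remember that the data on it is ephemeral
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/scratch
sudo mount /dev/vdb /mnt/scratch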
 
The two drives in a DS server are usually NVMe SSD drives that use a passthrough mechanism to present the drives to the client OS. We encourage our users to configure the drives as a software RAID1 array (mdadm) to introduce some data protection against hardware malfunctions.
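
A minimal sketch of such a setup, assuming the two passthrough drives appear as /dev/nvme0n1 and /dev/nvme1n1 (check the actual names with lsblk before creating the array):

# Mirror the two local drives (device names are assumptions)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid

# Persist the array configuration so it reassembles after a reboot (Debian/Ubuntu paths)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u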
 
Network drives are more reliable by a few orders of magnitude because they are built from hundreds of storage servers and thousands of disks. The obvious downside of any network storage is the need to transport the data over the network. As a result, in comparison with local solutions, it takes more time to get a response from the network storage medium.
 
In scenarios where the data can be accessed or written in many queues, network storage offers a substantial advantage: hundreds of individual drives can be written to in parallel. This increases the IO and bandwidth performance dramatically, which is why network storage is ideal for parallel operations.
 
Local storage depends on the performance of its physical media and cannot rely on thousands of drives to boost performance. In the HMD and DS solutions we use very fast local NVMe drives. For this reason, those configurations are ideal in scenarios that need very low latency and very high IO.
 
Here are some example results of tests we performed on a VM on CREODIAS.
 
fio --filename=XXX --direct=1 --sync=1 --rw=read(write) --bs=4k(4M) --numjobs=1(16) --iodepth=1(128) --runtime=20 --time_based --group_reporting --name=journal-test
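
The values in parentheses are the variants used for the write, multi-queue and bandwidth runs. For example, the single-queue 4k read test and the multi-queue 4M write test expand to the commands below; the target file path is a placeholder.

# Single-queue 4k read test
fio --filename=/path/to/testfile --direct=1 --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=1 --runtime=20 --time_based --group_reporting --name=journal-test

# Multi-queue 4M write (bandwidth) test
fio --filename=/path/to/testfile --direct=1 --sync=1 --rw=write --bs=4M --numjobs=16 --iodepth=128 --runtime=20 --time_based --group_reporting --name=journal-test
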
Read
Network HDD Storage single queue IOPS performance, 4k blocks - 1120 IOPS
Network HDD Storage multi queue IOPS performance, 4k blocks - 44000 IOPS
Network HDD Storage maximum bandwidth on 4M blocks - 2169 MiB/s
Network SSD Storage single queue IOPS performance, 4k blocks - 1500 IOPS
Network SSD Storage multi queue IOPS performance, 4k blocks - 47000 IOPS
Network SSD Storage maximum bandwidth on 4M blocks - 3269 MiB/s
Local HMD storage single queue IOPS performance, 4k blocks - 34500 IOPS
Local HMD storage multi queue IOPS performance, 4k blocks - 337000 IOPS
Local HMD storage maximum bandwidth on 4M blocks - 2963 MiB/s
 
 
Write
Network HDD Storage single queue IOPS performance, 4k blocks - 96 IOPS
Network HDD Storage multi queue IOPS performance, 4k blocks - 2948 IOPS
Network HDD Storage maximum bandwidth on 4M blocks - 260 MiB/s
Network SSD Storage single queue IOPS performance, 4k blocks - 650 IOPS
Network SSD Storage multi queue IOPS performance, 4k blocks - 6006 IOPS
Network SSD Storage maximum bandwidth on 4M blocks - 550 MiB/s
Local HMD storage single queue IOPS performance, 4k blocks - 25000 IOPS
Local HMD storage multi queue IOPS performance, 4k blocks - 270000 IOPS
Local HMD storage maximum bandwidth on 4M blocks - 1371 MiB/s
 
All the above tests were carried out on an 8 vCPU HMD VM. The multi-queue performance is very CPU dependent, as the number of vCPUs determines the maximum number of concurrent storage and network operations; had we used a larger VM, we would have obtained better results for the network storage. For big blocks and high queue depths, the vCPUs and the network, not the storage medium itself, may be the limiting factor for IOPS/bandwidth performance.
 
It is important to know that Ceph storage is designed in a way that practically eliminates the risk of data loss. With Ceph, natural disasters or human errors are more probable by a few orders of magnitude than any hardware failure leading to data corruption.
 
 
Cloud resource prices, in particular storage prices, are available in our price list.