Computing & Cloud
CREODIAS users have different storage options with different prices, IO performance, access speed and data resilience, so they can choose the configurations that best suit their projects. Worth noting are the new, powerful configurations with local storage, which offer more than 10x better IO performance and 10x lower latency compared to the previously available solutions.
In this article you will learn about the features and performance of the different types of storage.
From the service point of view, on CREODIAS you can choose between:
- Volume storage (SSD or HDD network storage) - if configured, it can be used to boot a VM (see the volume example after this list);
- VM-related storage (SSD network storage), which is provisioned only together with a VM and is used as the default system disk; this storage is physically identical to SSD volume storage;
- Object storage, which is a combination of storage space and metadata accessed through a dedicated protocol (S3);
- Local NVMe storage for DS servers (each DS server receives two identical, very fast NVMe PCIe drives);
- Local ephemeral storage for HMD virtual machines, in which a very fast physical NVMe drive is attached to the VM.
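As a minimal sketch of working with volume storage, the commands below use the standard OpenStack CLI; the volume type, flavor and resource names are placeholders, not guaranteed CREODIAS values:
# Create a 100 GB network volume (volume type "hdd" is a placeholder;
# list the types available in your project with "openstack volume type list")
openstack volume create --size 100 --type hdd my-data-volume
# Attach the volume to an existing VM
openstack server add volume my-vm my-data-volume
# Or boot a new VM directly from a bootable volume (flavor name is a placeholder)
openstack server create --flavor eo1.large --volume my-boot-volume my-new-vm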
From the physical point of view these storage options correspond to three storage media types:
- Network HDD Ceph storage - a cheap, reliable, very resilient and very large storage pool. It is available both as block (volume) and object (S3) storage; the two access methods differ in cost and performance (see the object storage example after this list).
- Network SSD Ceph storage - a fast, reliable and resilient storage. It is the default storage medium for VMs; VM-related storage and SSD volume storage are kept on this type of media.
- Local compute storage (usually NVMe) - this storage is located on a very fast disk inside the server that hosts your VM. This means that when the compute server suffers a hardware malfunction, the storage becomes inaccessible or, on very rare occasions, you may experience data loss. The NVMe drive that hosts data for the HMD configuration is a single, very reliable, high-performance drive with MTBF > 2M hours and up to 400k IOPS.
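A minimal sketch of accessing object storage over the S3 protocol, using the AWS CLI; the endpoint URL, bucket name and file name are placeholders rather than actual CREODIAS values:
# Create a bucket, upload a file and list its contents over S3
# (endpoint URL and bucket name are placeholders)
aws s3 mb s3://my-bucket --endpoint-url https://s3.example.com
aws s3 cp results.tif s3://my-bucket/ --endpoint-url https://s3.example.com
aws s3 ls s3://my-bucket --endpoint-url https://s3.example.com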
The two drives in a DS server are NVMe SSD drives presented to the client OS through a passthrough mechanism. We encourage our users to combine the drives into a RAID1 software RAID (mdadm) to introduce some protection against hardware malfunctions, as shown in the example below.
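A minimal sketch of such a RAID1 setup with mdadm; the device names /dev/nvme0n1 and /dev/nvme1n1, the mount point and the config file path are assumptions and may differ on your DS server:
# Combine the two local NVMe drives into a software RAID1 array
# (device names are examples - check yours with "lsblk")
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# Create a filesystem and mount the array
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data
# Persist the array configuration so it is assembled on reboot
# (the file may be /etc/mdadm.conf on some distributions)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf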
Network drives are more reliable by a few orders of magnitude because they are built from hundreds of storage servers and thousands of disks. The obvious downside of any network storage is the need to transport the data over the network; as a result, compared with local solutions, it takes more time to get a response from the network storage medium.
In scenarios where the data can be read or written in many queues, network storage has the substantial advantage of hundreds of individual drives writing in parallel, which greatly increases IO and bandwidth performance. That is why network storage is ideal for parallel operations.
Local storage depends on the performance of the physical media and cannot rely on thousands of drives to boost performance. In the HMD and DS solutions we use very fast local NVMe drives, which makes those configurations ideal for scenarios that need very low latency and very high IO.
Here are some example results from tests we performed on a VM on CREODIAS, using the following fio command (values in parentheses indicate the alternative settings used for the write, multi-queue and 4M-block variants):
fio --filename=XXX --direct=1 --sync=1 --rw=read(write) --bs=4k(4M) --numjobs=1(16) --iodepth=1(128) --runtime=20 --time_based --group_reporting --name=journal-test
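For clarity, these are the expanded variants of that command; the target /dev/vdb is a hypothetical device path standing in for XXX:
# Single-queue IOPS test, 4k blocks, read
fio --filename=/dev/vdb --direct=1 --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=1 --runtime=20 --time_based --group_reporting --name=journal-test
# Multi-queue IOPS test, 4k blocks, write
fio --filename=/dev/vdb --direct=1 --sync=1 --rw=write --bs=4k --numjobs=16 --iodepth=128 --runtime=20 --time_based --group_reporting --name=journal-test
# Maximum bandwidth test, 4M blocks, read
fio --filename=/dev/vdb --direct=1 --sync=1 --rw=read --bs=4M --numjobs=16 --iodepth=128 --runtime=20 --time_based --group_reporting --name=journal-test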
Read
- Network HDD storage, single-queue IOPS, 4k blocks: 1120 IOPS
- Network HDD storage, multi-queue IOPS, 4k blocks: 44000 IOPS
- Network HDD storage, maximum bandwidth, 4M blocks: 2169 MiB/s
- Network SSD storage, single-queue IOPS, 4k blocks: 1500 IOPS
- Network SSD storage, multi-queue IOPS, 4k blocks: 47000 IOPS
- Network SSD storage, maximum bandwidth, 4M blocks: 3269 MiB/s
- Local HMD storage, single-queue IOPS, 4k blocks: 34500 IOPS
- Local HMD storage, multi-queue IOPS, 4k blocks: 337000 IOPS
- Local HMD storage, maximum bandwidth, 4M blocks: 2963 MiB/s
Write
- Network HDD storage, single-queue IOPS, 4k blocks: 96 IOPS
- Network HDD storage, multi-queue IOPS, 4k blocks: 2948 IOPS
- Network HDD storage, maximum bandwidth, 4M blocks: 260 MiB/s
- Network SSD storage, single-queue IOPS, 4k blocks: 650 IOPS
- Network SSD storage, multi-queue IOPS, 4k blocks: 6006 IOPS
- Network SSD storage, maximum bandwidth, 4M blocks: 550 MiB/s
- Local HMD storage, single-queue IOPS, 4k blocks: 25000 IOPS
- Local HMD storage, multi-queue IOPS, 4k blocks: 270000 IOPS
- Local HMD storage, maximum bandwidth, 4M blocks: 1371 MiB/s
All the above tests were carried out on an 8 vCPU HMD VM. The multi-queue performance is very CPU dependent, as the number of vCPUs corresponds to the maximum number of concurrent storage and network operations; had we used a larger VM, we would have obtained better results for the network storage. For big blocks and high queue depths, the vCPUs and the network, rather than the storage medium itself, may be the factor limiting IOPS and bandwidth.
It is important to know that Ceph storage is designed in a way that practically eliminates the risk of data loss: natural disasters or human errors are more probable by a few orders of magnitude than any hardware failure leading to data corruption.
Find out more in the recorded webinar "How to choose the right computing resources for your project on CREODIAS".
Prices of cloud resources, in particular storage prices, are available in our price list.