Computing & Cloud
VM related storage
VM-related storage is fast SSD network storage connected to individual Virtual Machines. It is directly available to the VM without the need for mounting or connecting network shares. The quantity of VM-related storage reserved for a VM depends on the VM Flavor selected.
VM related storage is fast – it is based on performant Solid State Drives.
VM storage can be used for fast, temporary or permanent data storage within a VM.
VMs come with VM storage included. The quantity of VM storage depends on the VM Flavor.
VM storage is closely associated with a given VM which has exclusive access to this type of storage. Once the VM is terminated, its VM storage disappears.
VM ephemeral NVMe storage in HMD line
VM ephemeral local NVMe storage is used only in HMD VMs ("D" stands for local disk), in which part of a physical, very fast NVMe drive is attached to a single VM. The NVMe drive that hosts data for the HMD configuration is a single, highly reliable, high-performance drive with an MTBF above 2 million hours and up to 400k IOPS. We designed it as ephemeral storage to expressly underline the risk of data loss, and we encourage clients to back up the data stored on such a disk or to treat it as a cache only.
This storage can deliver up to 35k read IOPS and 25k write IOPS at a queue depth of one. Multi-queue performance can reach about 350k IOPS for reads and 250k IOPS for writes. Bandwidth reaches 3000 MiB/s for reads and 1371 MiB/s for writes.
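To sanity-check figures like these on a running VM, a minimal sequential-write probe can be built with the standard library alone. The sketch below is illustrative, not a CREODIAS tool; the target path is an assumption and should sit on the ephemeral NVMe filesystem, and for rigorous IOPS numbers a dedicated benchmark such as fio is the better choice.

```python
import os
import tempfile
import time

def write_throughput(path, total_mib=64, block_kib=1024):
    """Sequentially write total_mib MiB in block_kib-KiB blocks,
    fsync at the end, and return the observed rate in MiB/s."""
    block = os.urandom(block_kib * 1024)
    blocks = total_mib * 1024 // block_kib
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(blocks):
            os.write(fd, block)
        os.fsync(fd)  # make sure data reached the device, not just the page cache
    finally:
        os.close(fd)
    os.unlink(path)
    return total_mib / (time.perf_counter() - start)

if __name__ == "__main__":
    # On an HMD VM, point this at the ephemeral NVMe mount point (assumption).
    with tempfile.TemporaryDirectory() as d:
        rate = write_throughput(os.path.join(d, "bench.dat"))
        print(f"sequential write: {rate:.0f} MiB/s")
```

Because a single fsync-terminated pass measures only sequential bandwidth, results will be well below the multi-queue IOPS ceiling quoted above; that is expected.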
This type of storage should be used in all scenarios that require very high performance. HMDs are designed for applications such as data processing requiring a fast local cache, control nodes for Kubernetes clusters with fast storage for etcd, very fast data entry, handling events from IoT devices, very fast saving of calculation results, and hosting non-relational databases.
HMD VMs come with VM storage included. The quantity of VM storage depends on the VM Flavor.
VM storage is closely associated with a given VM, which has exclusive access to this type of storage. Once the VM is terminated, its VM storage disappears. If the instance, or the compute server on which the instance is running, experiences a failure, is deleted, goes into an error state, or needs to be moved to a different server, all the data may be lost. The data may be lost under any of the following events:
- Physical hard disk failure
- Server (hosting the instance) failure
- Instance termination
- Instance failure or migration
- Server (hosting the instance) reboot
Therefore, do not rely on instances for storing valuable, long-term data.
This type of storage consists of network Volume Storage that can be attached to VMs as block devices to dynamically extend their storage capabilities. Volumes are independent of VMs and can be easily moved from one VM to another. Users may take snapshots of volumes in order to revert to their 'frozen' state later. Volume size is limited only by the available storage space, and volumes can be resized without unmounting. Volume-based VMs can be easily migrated between servers. Volume Storage can be encrypted if a User requests this option. It is also possible to make a live backup copy of a volume.
Volume Storage is implemented as a distributed, redundant, highly available storage cluster with separate HDD and SSD tiers. The SSD tier provides high performance both in terms of transfer bandwidth and IOPS. The HDD tier provides cost-effective high capacity magnetic storage scalable to hundreds of Terabytes and beyond. Storage pools can be local to the computing resources or can be placed in remote locations (Warsaw WAW-2 or Frankfurt).
Volume Storage can be used as high capacity, high availability, scalable long term storage independent of VMs. SSD volumes should be selected for applications that require high performance in random access operations, such as databases and transactional systems.
HDD volumes are best used for high capacity file or media storage applications that are less demanding on random-access performance.
Both SSD and HDD volumes may be used as base (root) storage for VMs.
Volume Storage can be provisioned from the Cloud Dashboard. Users can select the storage tier (SSD or HDD) and setup volume attributes such as name, description and size. They can also select whether the Volume Storage should be encrypted or not. Volumes can be created empty, as a copy of another volume or containing a bootable operating system image. Volume Storage can be also purchased in Fixed Term mode for longer periods of time.
Once a volume has been created, it can be attached to a running VM. If the volume contains an OS image, a Virtual Machine can be booted directly from it.
Volume Storage is billed per GByte of available storage space per month (or a longer period) or per hour. It can be bought either in Per Usage mode or for Fixed Terms.
Figure 1 - Volume Storage
Object Storage is scalable storage for objects/files with an HTTP REST interface. All operations on objects/files are performed via the REST API. Objects/files can be organized into buckets, which act as standard file directories. Users can also define access policies for buckets and objects/files. The API is compatible with Amazon S3, so existing AWS S3 tools can be used to manage objects/files and buckets. Users can also manage objects/files conveniently via the Cloud Dashboard. The storage is accessible from the public Internet and from VMs.
Object Storage can be used when communication between different Projects or Domains is required, or when data is to be made available to the outside world via the Internet.
Object Storage is provisioned like Volume Storage.
Object Storage is billed per used GByte of storage space per month (or a longer period) or per hour. It can be bought either in Per Usage mode or for Fixed Terms. Note that when Object Storage is accessed from other Projects or from the Internet, data transfer fees also apply. The pricing of Object Storage decreases significantly with the total amount of storage used in a given Project or Environment.
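Volume-dependent pricing of this kind is usually computed marginally: each GByte is billed at the rate of the tier it falls into. The sketch below illustrates the mechanism only; the tier boundaries and per-GB rates are invented for illustration and are not CREODIAS prices, which are defined in the Price List.

```python
def monthly_cost(gb, tiers=((10_000, 0.04), (100_000, 0.03), (float("inf"), 0.02))):
    """Marginal tiered pricing: each GB is billed at the rate of the
    tier it falls into. The boundaries (GB) and rates (per GB/month)
    are invented for illustration and are NOT CREODIAS prices."""
    cost, prev_limit = 0.0, 0.0
    for limit, rate in tiers:
        band = min(gb, limit) - prev_limit  # portion of usage inside this tier
        if band <= 0:
            break
        cost += band * rate
        prev_limit = limit
    return cost

# The average per-GB price falls as total usage grows:
print(monthly_cost(5_000) / 5_000)      # 0.04
print(monthly_cost(500_000) / 500_000)  # noticeably lower average rate
```

Under marginal billing, crossing a tier boundary never increases the total bill for existing usage; only the GBytes above the boundary get the cheaper rate.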
- It is highly advisable to put no more than 1 million (1 000 000) objects into one bucket (container).
- With more objects, listing the bucket contents becomes very inefficient.
- A single file should not exceed 5 GB when using S3FS.
We suggest creating many buckets with a small number of objects each, instead of a few buckets with many objects.
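One way to follow this advice programmatically is to shard object keys across a fixed set of buckets using a deterministic hash, so every client independently derives the same bucket for a given key. This is an illustrative sketch; the bucket prefix `eo-data` and the shard count are assumptions, not CREODIAS conventions.

```python
import hashlib

def shard_bucket(key, base="eo-data", shards=64):
    """Map an object key to one of `shards` buckets deterministically.
    With 64 shards, even ~50 million objects average well under the
    1-million-per-bucket guideline. `base` and `shards` are
    illustrative values, not platform defaults."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return f"{base}-{digest % shards:03d}"

# Every client computes the same bucket for a given key:
print(shard_bucket("Sentinel-2/2023/05/tile_34UDC.zip"))
```

Because the mapping depends only on the key, no central registry is needed; the cost is that listing "all objects" now means listing each shard bucket in turn.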
Figure 2 - Containers
The Image service holds VM base OS images and volume images. OS images are used to create a VM; they hold the VM's root filesystem. Many prebuilt OS images are available in the EO Cloud, with different operating systems such as Linux and Windows and with preconfigured tools and libraries. We also keep an archive of earlier OS image versions. Volume images are copies of Volume Storage. A User can create a new Storage Volume or a new VM from an Image, or keep it as a backup.
Images can be managed via the API and the EO Cloud Dashboard. A Project/Tenant may have private Images which are invisible to other Projects/Tenants.
Some software available in the form of images is free (or open source). Some however is used under commercial licenses. The use of such images is then billed according to the price list.
Figure 3 - Images
The CREODIAS Platform infrastructure consists of several Data Centers: the main T-Mobile Piekna DC and separate offsite locations, totally independent from the main DC, allowing secure offsite disk-based backups. All locations are fully secure Tier III+ Data Centers connected by dedicated, redundant Nx10 Gbps WAN connections.
Several services are available to perform backup functionalities:
- Volume Storage in the remote DC - The storage volume service offers HDD volumes located in the backup DC. Such volumes can be used by customers to run a custom backup mechanism of their choice.
- Backup of Volume Storage - Backups of User’s persistent volume data are performed using the OpenStack Cinder Backup module with the Ceph Backup driver.
The functionality offered includes:
- Full and incremental backups of selected volumes;
- User-scriptable filesystem and application quiescing for Linux and Windows guests to guarantee consistent backups at the filesystem and application level (guarantees consistent database backups);
- Snapshot-based backups of live-mounted volumes.
Backups can be launched manually from the Cloud Dashboard Volumes tab or programmatically via the REST API or OpenStack command line.
There is also a scheduler to allow for automated periodic backup schemes and backup rotation.
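A backup rotation scheme of the kind such a scheduler automates can be expressed as a small retention function. The sketch below implements a simple daily/weekly rule; the retention counts are illustrative, not CREODIAS defaults, and the real scheduler's policy options may differ.

```python
from datetime import date, timedelta

def rotate(backup_dates, keep_daily=7, keep_weekly=4):
    """Return the set of backup dates to retain: the newest `keep_daily`
    daily backups, plus the newest backup from each of the newest
    `keep_weekly` ISO weeks. Counts are illustrative defaults."""
    dates = sorted(set(backup_dates), reverse=True)
    keep = set(dates[:keep_daily])
    newest_per_week = {}
    for d in dates:  # dates are newest-first, so the first hit per week wins
        newest_per_week.setdefault(d.isocalendar()[:2], d)
    for week in sorted(newest_per_week, reverse=True)[:keep_weekly]:
        keep.add(newest_per_week[week])
    return keep

# Thirty consecutive daily backups collapse to a handful of retained dates:
history = [date(2024, 1, 1) + timedelta(days=i) for i in range(30)]
print(sorted(rotate(history)))
```

Backups not in the returned set are candidates for deletion; combined with incremental backups, this bounds storage growth while preserving recent and weekly restore points.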
Tenants are billed for the backup storage space used according to the Price List.
Requirements and limitations
The backup system allows for uninterrupted functioning of the User’s VMs, operating systems and applications.
A guest system agent will be preinstalled in every VM in order to allow for filesystem and application quiescing during backup. Standard quiescing consists of flushing all buffered data to disk before performing the snapshot necessary for the backup, and pausing the VM's write activities for the duration of the snapshot. Users may define custom quiescing activities to ensure backup consistency for their applications (e.g. databases). This may cause a freeze of a few seconds in the VM being backed up.
Thanks to the usage of incremental backups, only storage blocks that have changed since the previous backup need to be copied. Blocks are being compressed before being stored. Together, this allows for efficient usage of WAN backup bandwidth and storage space.
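The principle behind incremental backups can be sketched in a few lines: hash fixed-size blocks and ship, compressed, only those whose hash differs from the previous backup. This is a toy illustration of the idea, not the actual implementation used by the Ceph Backup driver.

```python
import hashlib
import zlib

def changed_blocks(prev, curr, block_size=4096):
    """Yield (block_index, compressed_block) for each block of `curr`
    whose content hash differs from the same block of `prev`."""
    for offset in range(0, len(curr), block_size):
        new = curr[offset:offset + block_size]
        old = prev[offset:offset + block_size]
        if hashlib.sha256(new).digest() != hashlib.sha256(old).digest():
            yield offset // block_size, zlib.compress(new)

# Only the second 4 KiB block changed, so only it would be shipped:
prev = b"a" * 8192
curr = b"a" * 4096 + b"b" * 4096
print([index for index, _ in changed_blocks(prev, curr)])  # → [1]
```

Because unchanged blocks are skipped and changed blocks are compressed before transfer, both WAN bandwidth and backup storage scale with the amount of change rather than with volume size.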
The Remote Storage and Backup as a Service services are billed according to GBytes used.
Large amounts of data may be imported/exported to/from CREODIAS Platform either over network or on physical media (disks, NAS).
Simple network services (HTTP/FTP/SFTP, etc.), run directly from a User's VMs, are the simplest and most common mechanism for importing/exporting data between the User's Environment and the external world. Additionally, Object Storage with its REST interface (accessible from the Internet) can also be used for this purpose.
The amount of data transferred to/from the Internet in all the above cases is measured and billed according to the appropriate Price List.
Physical Media Data Import
Physical disks, as received by the CREODIAS Platform Operator, are copied one-to-one onto Volume Storage volumes. Once the copy process finishes (its duration depends on the physical capabilities and size of the imported disk), the volume is made accessible to the User's VM for further use. Standard interfaces (SATA, USB) are supported.
Large data sets imported on NAS devices are copied in file mode into a volume of appropriate size. The User may choose any filesystem supported by Linux (e.g. ext4, xfs, ntfs). After the copy procedure is completed (its duration depends on the amount of data and NAS speed), the volume is made accessible to the User.
A one-time fee is charged per each case of Physical Media Transfer.
Physical Media Data Export
Several types of disks (and USB keys) are available for sale (the list reflects current market trends). A User requesting data transfer on such a medium receives a volume (by means of Volume Storage) of appropriate size. The User then uploads their data to this volume. After that, the volume is copied one-to-one onto the purchased disk, which is then shipped to the desired destination.
Export of large data collections exceeding the capacity of several disks (currently a maximum of 8 TB per disk) can be done by means of NAS devices; this, however, is treated as a special service, arranged on an individual basis.
A one-time fee is charged per each case of Physical Media Transfer. Additionally, physical media for Data Export are sold at prices dependent on the size of the media.