Computing & Cloud

Computing services


Virtual Machines


Virtual Machines (VMs) are fully functional computational instances. They operate as if they were real physical entities, with all the elements of a physical server. Users obtain their VMs with full root access, so they can manage them completely and install any software they need.

In the EO Cloud, Users provision Virtual Machines (VMs) by defining their parameters and characteristics, including machine type (physical or virtual), RAM, CPU (vCores), storage quantity and type, operating system, middleware components, and the Virtual Networks connected to the machine.

Users determine the characteristics of a newly provisioned VM by selecting its flavour and base image. The currently available flavours are presented in the following table:


Figure 1 - VM Flavours

Virtual Machines
Available VMs             #vCores         RAM (GB)    SSD Network Storage (GB)    NVMe Local Storage (GB)
eo1.xsmall                1               1           8                           0
eo1.small                 2               2           16                          0
eo1.xmedium               1               2           8                           0
eo1.medium                2               4           16                          0
eo1.large                 4               8           32                          0
eo2.medium                1               4           16                          0
eo2.large                 2               8           32                          0
eo2.xlarge                4               16          64                          0
eo2.2xlarge               8               32          128                         0
eo2a.medium*              1               4           16                          0
eo2a.large*               2               8           32                          0
eo2a.xlarge*              4               16          64                          0
eo2a.2xlarge*             8               32          128                         0
eo2a.3xlarge*             16              64          256                         0
eo2a.4xlarge*             32              128         512                         0
eo2a.5xlarge*             64              256         1024                        0
hm.medium                 2               16          64                          0
hm.large                  4               32          128                         0
hm.xlarge                 8               64          256                         0
hm.2xlarge                16              128         384                         0
hm.3xlarge                32              256         384                         0
hm.4xlarge                48              496         384                         0
hmd.medium*****           2               16          0                           50
hmd.large*****            4               32          0                           100
hmd.xlarge*****           8               64          0                           200
hmd.2xlarge*****          16              128         0                           400
hmd.3xlarge*****          32              256         0                           800
gpu.medium**              12              117         64
Software ready***
ArcGIS.eo2.xlarge         4               16          64                          0
ArcGIS.eo2.2xlarge        8               32          128                         0
ArcGIS.hm.xlarge          8               64          256                         0
ArcGIS.hm.2xlarge         16              128         384                         0
ArcGIS.ds.large.gpu****   40 (20 cores)   112         128                         2 x 1000


* Flavours with AMD processors.
** gpu Virtual Machines are equipped with a GeForce RTX 2080 Ti (4352 CUDA cores, 11 GB GDDR6).
*** All Software ready Cloud Servers are available only with Windows Server Standard, bundled with a preconfigured Esri ArcGIS Pro Desktop.
**** ArcGIS.ds.large.gpu is equipped with a GeForce RTX 2080 Ti (4352 CUDA cores, 11 GB GDDR6).
***** hmd VMs come with a single local drive backed by a physical NVMe drive located in the compute server.
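To illustrate how the table above can guide flavour selection, the following is a minimal sketch of a helper that picks the smallest flavour satisfying given vCores and RAM requirements. The function name and the (abbreviated) flavour list are illustrative only; they are not part of any EO Cloud API.

```python
# A few flavours transcribed from the table above:
# (name, vCores, RAM GB, SSD network storage GB)
FLAVOURS = [
    ("eo1.xsmall", 1, 1, 8),
    ("eo1.small", 2, 2, 16),
    ("eo2.medium", 1, 4, 16),
    ("eo2.large", 2, 8, 32),
    ("eo2.xlarge", 4, 16, 64),
    ("eo2.2xlarge", 8, 32, 128),
    ("hm.large", 4, 32, 128),
]

def smallest_flavour(vcores, ram_gb):
    """Return the name of the smallest flavour (by RAM, then vCores)
    that satisfies the requested vCores and RAM, or None if none fits."""
    candidates = [f for f in FLAVOURS if f[1] >= vcores and f[2] >= ram_gb]
    if not candidates:
        return None
    return min(candidates, key=lambda f: (f[2], f[1]))[0]

print(smallest_flavour(2, 8))   # eo2.large
```

A real deployment would query the full flavour list from the cloud rather than hard-coding it.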

The list of currently available operating system images is presented below:

  • CentOS 6, 7
  • Ubuntu 14.04 LTS, 16.04 LTS, 18.04 LTS
  • Windows 2016 mini / full
  • RHEL 6, 7 mini / full
  • SLES 12 mini / full
  • OSGeo 11.0
  • App Catalog Image


All VMs come fully configured (based on the selected image) and ready for use, with an administrative User account, network access, and preconfigured toolboxes and software components. Volume Storage may be attached to a running VM to extend the available storage space. VMs can be started, stopped, rebooted, paused, suspended and snapshotted. Live backup functionality is also available, including server quiescing. VMs may also be attached to Virtual Networks, which may be system-defined or User-defined.


System-defined networks include:

  • the Internet network, used to access the global Internet
  • the EO Storage network, available in Projects/Environments that are allowed to access the EO Storage



Users may manage VMs and other cloud Resources using the EO Cloud Dashboard, the REST API, a command line client or OpenStack Orchestration (Heat) scripts. VMs can be connected to the network using virtual interfaces.
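As a sketch of the REST API path, the helper below builds the JSON body for an OpenStack Compute (Nova) "create server" request (POST /v2.1/servers). The field names follow the OpenStack Compute API; the server name and UUIDs are placeholders, not real EO Cloud identifiers.

```python
import json

def create_server_body(name, flavor_id, image_id, network_ids):
    """Build the JSON body for a Nova create-server call.
    All arguments are caller-supplied identifiers (placeholders here)."""
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_id,   # a flavour from the table above
            "imageRef": image_id,     # e.g. an Ubuntu 18.04 LTS image UUID
            "networks": [{"uuid": nid} for nid in network_ids],
        }
    }

body = create_server_body("demo-vm", "eo1.small", "image-uuid", ["net-uuid"])
print(json.dumps(body, indent=2))
```

In practice the same request is sent with an authentication token to the cloud's Compute endpoint, or issued for you by the Dashboard, the CLI client, or a Heat template.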


VMs can be billed in monthly or longer quanta (Fixed Term Mode) or per hour of usage (Per Usage Mode). Users may also temporarily shelve their VMs to persistent storage, paying only for the persistent storage space they occupy.
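The trade-off between the two billing modes is simple arithmetic, sketched below. The hourly and monthly rates are made-up placeholders, not EO Cloud prices; real rates come from the provider's price list.

```python
def per_usage_cost(hours_used, hourly_rate):
    """Per Usage Mode: pay only for the hours the VM actually runs."""
    return hours_used * hourly_rate

def breakeven_hours(hourly_rate, fixed_monthly_rate):
    """Monthly usage above which Fixed Term Mode becomes cheaper."""
    return fixed_monthly_rate / hourly_rate

# With a placeholder rate of 0.05/h and a fixed price of 25.0/month,
# 200 hours of use costs 10.0 per-usage and break-even is at 500 hours.
print(per_usage_cost(200, 0.05))     # 10.0
print(breakeven_hours(0.05, 25.0))   # 500.0
```

For lightly used VMs, Per Usage Mode is typically cheaper; for VMs that run continuously, Fixed Term Mode usually wins.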



Figure 3 - Cloud Dashboard - Instances


The instances view of the Cloud Dashboard is shown in the figure above.


Dedicated Server Virtual Machine


Dedicated Servers (DSs) are special VMs. Each DS is a virtual machine that occupies an entire compute machine (hypervisor server); no other VMs run on that server, so the User of a DS has a full physical server for their own use. Additionally, DSs are equipped with very efficient SSD disks installed in pass-through mode, so the full capacity and speed of those disks can be utilized. DSs are a perfect solution for anyone who wants the efficiency and independence of Baremetal Servers while still utilizing all the elements of the OpenStack cloud platform. For details, see the DS flavour list below.

Figure 4 - DS Flavours

Dedicated Servers*
Available DSs           #vCores         RAM (GB)    SSD NVMe Local Storage (GB)
ds.medium               40 (20 cores)   48          2 x 500
ds.large                40 (20 cores)   112         2 x 1000
ds.2xlarge              40 (20 cores)   368         2 x 1920
ds.3xlarge              48 (24 cores)   496         2 x 1920
ds.large.gpu**          40 (20 cores)   112         2 x 1000

* If the above configurations do not fit your project, please contact our sales team to design a custom solution.
** ds.large.gpu is equipped with a GeForce RTX 2080 Ti (4352 CUDA cores, 11 GB GDDR6).


DSs can be provisioned in exactly the same way as standard VMs. Users may manage DSs and other cloud Resources using the CREODIAS Dashboard, the REST API, a command line client or OpenStack Orchestration (Heat) scripts. DSs can be connected to the network using virtual interfaces.


The DSs are billed in exactly the same way as standard VMs.



Containers


Containers are isolated, portable environments where one can run applications along with the libraries and dependencies they need. Containers are not VMs: in some ways they are similar, but there are even more ways in which they differ. Like VMs, containers share system resources for access to compute, networking and storage. Unlike VMs, all containers on the same host share the same operating system kernel, and keep applications, runtimes and various other services separated from each other using kernel features known as namespaces and control groups (cgroups). Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Container images allow for much more rapid deployment of applications than packaging them in a VM image.


We have prepared an easy-to-follow guide that describes how to install Kubernetes on the CREODIAS OpenStack cloud, with support for adding/removing nodes, persistent volumes and load balancing. This deployment method uses Terraform and Ansible playbooks from the upstream Kubespray project, with a prepared Ansible configuration that enables the required features. We believe it describes the best way to run Kubernetes smoothly and effectively.


Container billing depends on the billing of the underlying VMs. The VMs forming a Bay are billed as described in the section on VMs.