How to register on the CREODIAS Portal?

Go to https://creodias.eu and click on "Register":

main page

Fill out all the required information and click the "Register" button:

registration form

You will be redirected to the LOGIN page:

login page

After logging in, you will see your home page:

your home page

You will also receive a confirmation e-mail:

welcome email

Now you can use CREODIAS services.


Can't ping VM

If you have problems accessing your VM and ping is not responding, try the following:


install arping:

in CentOS:

sudo yum install arping

in Ubuntu:

sudo apt install arping

 

check the name of the interface connected to the private network:

ifconfig

based on the output, find the name of the interface holding the 192.168.x.x address (eth<number> or ens<number>)
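For example, the relevant block of the output could look like the one below (the interface name and addresses are purely illustrative); here the private network interface would be eth0:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.15  netmask 255.255.255.0  broadcast 192.168.0.255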

after that, invoke the following commands (they send gratuitous ARP announcements for the VM's private address, refreshing ARP caches on the network nodes):

in CentOS:

sudo arping -U -c 2 -I eth<number> $(ip -4 a show dev eth<number> | sed -n 's/.*inet \([0-9\.]\+\).*/\1/p')


in Ubuntu:

sudo arping -U -c 2 -I ens<number> $(ip -4 a show dev ens<number> | sed -n 's/.*inet \([0-9\.]\+\).*/\1/p')


Next, ping your external IP address and check whether the problem is resolved.


Cannot attach interface to VM?


If you get an "Unable to attach interface" error message, please first reboot the virtual machine.

To do that, open the drop-down menu to the right of the instance:

Select "Soft Reboot Instance".

If it does not help, try "Hard Reboot Instance".

Now choose the "Attach Interface" option:

and select your network:

e.g. eodata:

Then click "Attach Interface".

In the IP Address column you should see the new address (eodata: 10.111.x.x):

 

If it is still not working, restart the interface: first choose "Detach Interface" and then select "Attach Interface" again.
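If you prefer the command line and have OpenStackClient configured (see the installation FAQ below), the same operations can be sketched with the commands below; the instance name myvm and the network name eodata are placeholders for your own values:

# soft reboot the instance (use --hard for a hard reboot)
openstack server reboot --soft myvm

# attach an interface from the eodata network
openstack server add network myvm eodata

# check that the new address is listed
openstack server show myvm -c addresses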

If you still have problems, please contact CREODIAS support (https://creodias.eu/contact-us).


Bucket sharing using S3 Bucket Policy

S3 Bucket Policy

Ceph, the software-defined storage used in CloudFerro clouds, provides object storage compatible with a subset of the Amazon S3 API. A bucket policy allows selective sharing of access to object storage buckets between users of different projects in the same cloud.

Naming convention used in this document

  • Bucket Owner - an OpenStack tenant who created an object storage bucket in their project and intends to share the bucket, or a subset of objects in the bucket, with another tenant in the same cloud.
  • Bucket User - an OpenStack tenant who wants to gain access to a Bucket Owner's object storage bucket.
  • Bucket Owner's Project - a project in which a shared bucket is created.
  • Bucket User's Project - a project which gets access to Bucket Owner's object storage bucket.
  • Tenant Admin - a tenant's administrator user who can create OpenStack projects and manage users and roles within their domain.
  • In code examples, values typed in all-capital letters, such as BUCKET_OWNER_PROJECT_ID, are placeholders which should be replaced with actual values matching your use-case. 

Limitations

It is possible to grant access at the project level only, not at the user level. In order to grant access to an individual user, a Bucket User's Tenant Admin must create a separate project within their domain, which only selected users will be granted access to.

The Ceph S3 implementation supports setting a bucket policy with the S3 actions listed under Statement.Action below.

The Ceph S3 implementation does not support user, role, or group policies.

Declaring bucket policy

Policy JSON file's sections

A bucket policy is declared using a JSON file. An example policy JSON template follows:

{
 "Version": "2012-10-17",
 "Id": "POLICY_NAME",
 "Statement": [
   {
     "Sid": "STATEMENT_NAME",
     "Effect": "EFFECT",
     "Principal": {
       "AWS": "arn:aws:iam::PROJECT_ID:root"
     },
     "Action": [
       "ACTION_1",
       "ACTION_2"
     ],
     "Resource": [
       "arn:aws:s3:::KEY_SPECIFICATION"
     ]
   }
 ]
}

A description of the bucket policy keys follows:

Key Value
Version "2012-10-07" - This cannot be changed.
Id an arbitrary policy name
Statement a list of statements
Statement.Sid an arbitrary statement name
Statement.Effect one of: "Allow", "Deny"
Statement.Principal

A list of values specifying the account in Amazon's arn format.

"AWS": "arn:aws:iam::PROJECT_ID:root"

or

"AWS": [
    "arn:aws:iam::FIRST_PROJECT_ID:root",
    "arn:aws:iam::SECOND_PROJECT_ID:root"

]

*_PROJECT_ID is an OpenStack project ID which should (or should not) have access to the bucket.

Statement.Action

A list of actions from:

  • s3:AbortMultipartUpload
  • s3:CreateBucket
  • s3:DeleteBucketPolicy
  • s3:DeleteBucket
  • s3:DeleteBucketWebsite
  • s3:DeleteObject
  • s3:DeleteObjectVersion
  • s3:GetBucketAcl
  • s3:GetBucketCORS
  • s3:GetBucketLocation
  • s3:GetBucketPolicy
  • s3:GetBucketRequestPayment
  • s3:GetBucketVersioning
  • s3:GetBucketWebsite
  • s3:GetLifecycleConfiguration
  • s3:GetObjectAcl
  • s3:GetObject
  • s3:GetObjectTorrent
  • s3:GetObjectVersionAcl
  • s3:GetObjectVersion
  • s3:GetObjectVersionTorrent
  • s3:ListAllMyBuckets
  • s3:ListBucketMultiPartUploads
  • s3:ListBucket
  • s3:ListBucketVersions
  • s3:ListMultipartUploadParts
  • s3:PutBucketAcl
  • s3:PutBucketCORS
  • s3:PutBucketPolicy
  • s3:PutBucketRequestPayment
  • s3:PutBucketVersioning
  • s3:PutBucketWebsite
  • s3:PutLifecycleConfiguration
  • s3:PutObjectAcl
  • s3:PutObject
  • s3:PutObjectVersionAcl
Statement.Resource

A list of resources in Amazon arn format:

"arn:aws:s3:::KEY_SPECIFICATION"

KEY_SPECIFICATION defines a bucket and its keys / objects. For example:

  • "arn:aws:s3:::*" - the bucket and its all objects
  • "arn:aws:s3:::mybucket/*" - all objects of mybucket
  • "arn:aws:s3:::mybucket/myfolder/*" - all objects which are subkeys to myfolder in mybucket

Setting a policy on a bucket

The policy may be set on a bucket using the s3cmd setpolicy POLICY_JSON_FILE s3://MY_SHARED_BUCKET command. See S3 Tools s3cmd usage for complete documentation.

After installing s3cmd, initiate its configuration by issuing: s3cmd --configure -c s3cmd-config-file.

A sample s3cmd config file in CloudFerro CF2 cloud:

[default]
access_key = MY_ACCESS_KEY
secret_key = MY_SECRET_KEY
bucket_location = RegionOne
host_base = cf2.cloudferro.com:8080
host_bucket = cf2.cloudferro.com:8080
use_https = True
verbosity = WARNING
signature_v2 = False

The access key and the secret key may be generated using openstack-cli:

openstack ec2 credentials create
+------------+-------------------------------------+
| Field      | Value                               |
+------------+-------------------------------------+
| access     | [access key]                        |
| links      | [link]                              |
| project_id | db39778a89b242f0a8ba818eaf4f3329    |
| secret     | [secret key]                        |
| trust_id   | None                                |
| user_id    | 121fa8dadf084e2fba46b00850aeb7aa    |
+------------+-------------------------------------+

Assuming that the Bucket Owner's s3cmd config file is named 'owner-project-s3cfg', a simple example of setting a policy follows:

s3cmd -c owner-project-s3cfg setpolicy sample-policy.json s3://mysharedbucket

To check the policy set on a bucket, use the following command:

s3cmd -c owner-project-s3cfg info s3://mysharedbucket

 

  • Setting a new policy overrides the policy which was previously applied.
  • The policy JSON file may have a maximum size of 20 KB. The policy file may be compacted with the jq command:

    cat pretty-printed-policy.json | jq -c '.' > compacted-policy.json

     

  • Only s3cmd version 2.1.0 or newer supports the Ceph multitenancy projectid:bucketname naming convention. While older versions of s3cmd allow setting a policy on a bucket, only version 2.1.0 or newer supports accessing another tenant's bucket. You can check your installed version as shown below.
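To check which version of s3cmd is installed:

s3cmd --version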

Deleting the policy from a bucket

The policy may be deleted from a bucket using the s3cmd delpolicy s3://MY_SHARED_BUCKET command. See S3 Tools s3cmd usage for complete documentation.

A simple example of deleting a policy follows:

s3cmd -c owner-project-s3cfg delpolicy s3://mysharedbucket

Sample scenarios

Grant another project access to read and write to the bucket

A Bucket Owner wants to grant a Bucket User read/write access to a bucket.

{
  "Version": "2012-10-17",
  "Id": "read-write",
  "Statement": [
    {
      "Sid": "project-read-write",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::BUCKET_OWNER_PROJECT_ID:root",
          "arn:aws:iam::BUCKET_USER_PROJECT_ID:root"
        ]
      },
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}

To apply the policy, the Bucket Owner should issue:

s3cmd -c owner-project-s3cfg setpolicy read-write-policy.json s3://mysharedbucket

The Bucket Owner should send the Bucket User the Bucket Owner's project ID and the name of the bucket.

After the Bucket User has prepared their s3cmd config file called 'user-project-s3cfg', they can access the bucket. For example, to list the bucket, the Bucket User should issue:

s3cmd -c user-project-s3cfg ls s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket
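Because the policy also grants write access, the Bucket User can likewise upload objects, for example (the file name myfile.txt is just an illustration):

s3cmd -c user-project-s3cfg put myfile.txt s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/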

Grant any user read access to the bucket

A Bucket Owner wants to grant read access to a bucket to anyone.

{
  "Version": "2012-10-17",
  "Id": "policy-read-any",
  "Statement": [
    {
      "Sid": "read-any",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
		   "*"
		]
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}

To apply the policy, the Bucket Owner should issue:

s3cmd -c owner-project-s3cfg setpolicy read-any-policy.json s3://mysharedbucket

The Bucket Owner should publish the Bucket Owner's project id and the name of the bucket.

Users from other projects can access the bucket's contents, for example retrieve pictures/mypic.png from the Bucket Owner's bucket:

s3cmd -c user-project-s3cfg get s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/pictures/mypic.png

Grant one user write access and another user read access to a subfolder of a bucket

A Bucket Owner wants to share a folder in a bucket with read/write permissions for the First Bucket User and read permissions for the Second Bucket User.

{
    "Version": "2012-10-17",
    "Id": "complex-policy",
    "Statement": [
        {
            "Sid": "project-write",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::BUCKET_OWNER_PROJECT_ID:root",
                    "arn:aws:iam::FIRST_BUCKET_USER_PROJECT_ID:root"
                ]
            },
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mysharedbucket/mysharedfolder/*"
            ]
        },
        {
            "Sid": "project-read",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SECOND_BUCKET_USER_PROJECT_ID:root"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mysharedbucket/mysharedfolder/*"
            ]
        }
    ]
}

To apply the policy, the Bucket Owner should issue:

s3cmd -c owner-project-s3cfg setpolicy complex-policy.json s3://mysharedbucket

The Bucket Owner should send the First and Second Bucket Users the Bucket Owner's project ID and the name of the bucket.

To access the bucket, for example to write productlist.db to the bucket, the First Bucket User should issue:

s3cmd -c first-user-project-s3cfg put productlist.db s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/mysharedfolder/

The Second Bucket User can read productlist.db from the bucket:

s3cmd -c second-user-project-s3cfg get s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/mysharedfolder/productlist.db

Access to shared buckets with boto3

The following Python script shows how a Bucket User can list objects in the Bucket Owner's bucket mybucket using the boto3 library.

#!/usr/bin/python3


import boto3
from botocore.session import Session
from botocore.handlers import validate_bucket_name


ACCESS_KEY = "Bucket User's Access Key"
SECRET_KEY = "Bucket User's Secret Key"
ENDPOINT = "https://cf2.cloudferro.com:8080"
BUCKET_OWNER_PROJECT_ID = "Bucket Owner's Project ID"
BUCKET = "mybucket"

bucket_location = BUCKET_OWNER_PROJECT_ID + ":" + BUCKET

# We need to skip bucket name validation due to
# multitenancy Ceph bucket naming: tenantId:bucket.
# Otherwise we will receive an exception:
# "Parameter validation failed: Invalid bucket name".
botocore_session = Session()
botocore_session.unregister('before-parameter-build.s3', validate_bucket_name)
boto3.setup_default_session(botocore_session = botocore_session)

s3 = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    endpoint_url=ENDPOINT,
)

response = s3.list_objects(Bucket=bucket_location)

print(response)

Please note that since the AWS S3 standard does not support colons in bucket names, and here we address bucket names as tenantId:bucket, we have to work around the botocore validator, which checks whether the bucket name matches the regular expression ^[a-zA-Z0-9.\-_]{1,255}$. Hence the extra lines in the code:

botocore_session.unregister('before-parameter-build.s3', validate_bucket_name)
boto3.setup_default_session(botocore_session = botocore_session)

Without these lines, the bucket name validator raises an exception:

botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid bucket name "TENANT_ID:BUCKET_NAME": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
or be an ARN matching the regex "^arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-]{1,63}$
|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$"

 



How to install OpenStackClient (Linux)?

OpenStackClient is a very useful tool that gives you powerful management of your projects from the command line interface. You can run commands from the CLI, or include prepared Python scripts to automate the functionality of your cloud. Moreover, you can access your OpenStack environment from any computer once you enter your credentials (username and password).

In general, OpenStackClient lets you look into your compute facility more deeply and much more precisely.

 

Attention:

It is strongly recommended to use a virtual environment, which does not affect system variables globally. If something goes wrong, everything happens in a separate, isolated space.

 

This FAQ covers installation of OpenStackClient under Ubuntu 18.04 LTS and Python 3.
Installation under other distributions may be similar.

 

Before we start, you might consider running the code below in a virtual environment:

sudo apt install python3-venv
python3 -m venv openstack_cli
source openstack_cli/bin/activate
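When you are done working with OpenStackClient, you can leave the virtual environment with:

deactivate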

 

 

Firstly, update the package lists:

sudo apt update

 

Install python3-pip:

sudo apt install python3-pip

 

Install setuptools:

sudo python3 -m pip install setuptools

 

If you intend to use a newer version of Python, e.g. 3.8.2, you have to install python3.8-dev and reinstall the netifaces Python module, which may otherwise cause problems. Remember to run pip commands with the appropriate Python version (e.g. change python3 -m pip to python3.8 -m pip):

sudo apt install python3.8-dev
sudo python3.8 -m pip install --upgrade netifaces

 

Finally install python-openstackclient:

sudo python3 -m pip install python-openstackclient

 

After this, you should be able to run the openstack command from the console, e.g.:

openstack --help

 

 

If everything seems to work, we can move on to the Horizon panel.

Log in to your account.

Head straight to the button with your e-mail address in the upper right corner and click on it.

 

Choose OpenStack RC File v2 or v3.

Save your RC file on your disk.

If you open it in a text editor such as vim, you will see that it consists of many variables related to your domain and account, similar to the illustrative example below:
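A Horizon-generated RC file typically contains environment variable exports along these lines (all values, including the authentication URL, are illustrative and will differ for your cloud and account):

#!/usr/bin/env bash
export OS_AUTH_URL=https://cf2.cloudferro.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_ID=db39778a89b242f0a8ba818eaf4f3329
export OS_PROJECT_NAME="cloud_xxxxx project_without_eo"
export OS_USER_DOMAIN_NAME="cloud_xxxxx"
export OS_USERNAME="your-login@example.com"
export OS_REGION_NAME="RegionOne"
export OS_INTERFACE=public
# the script will prompt for OS_PASSWORD instead of storing it in the file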

Change your directory to the download destination and source the configuration file:

For example, it will look like: cloud_xxxxx\project_without_eo-openrc.sh

Time to run this sh file:

. cloud_xxxxx\project_without_eo-openrc.sh

 

You will be prompted for your domain password. Type it in and press Enter.

If the password is correct, you will be granted access.
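As an optional sanity check, you can confirm that the OpenStack variables were exported into your shell session:

env | grep OS_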

 

After that you can, for example, check the list of your servers by typing in the console:

openstack server list

The output of this command should contain a table with the ID, Name, Status, Networks, Image and Flavor of your virtual machines.
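For instance, with a single virtual machine the output could look roughly like this (all values are made up for illustration):

+--------------------------------------+------+--------+-------------------------------------+--------------+------------+
| ID                                   | Name | Status | Networks                            | Image        | Flavor     |
+--------------------------------------+------+--------+-------------------------------------+--------------+------------+
| 2b0c9b1e-5e89-4c2a-9a3c-0f6d7c1a2b3c | vm01 | ACTIVE | private_net=192.168.0.15, 185.x.x.x | Ubuntu 18.04 | eo1.xsmall |
+--------------------------------------+------+--------+-------------------------------------+--------------+------------+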

 

 

We recommend checking the OpenStack documentation, which contains lists of available commands. There you may find many scenarios related to managing your cloud.

 

 

v.2020-05-19