Bucket sharing using S3 Bucket Policy

S3 Bucket Policy

Ceph - the software-defined storage used in CloudFerro clouds - provides object storage compatible with a subset of the Amazon S3 API. A bucket policy allows selective sharing of access to object storage buckets between users of different projects in the same cloud.

Naming convention used in this document

  • Bucket Owner - an OpenStack tenant who created an object storage bucket in their project and intends to share the bucket, or a subset of its objects, with another tenant in the same cloud.
  • Bucket User - OpenStack tenant who wants to gain access to a Bucket Owner's object storage bucket.
  • Bucket Owner's Project - a project in which a shared bucket is created.
  • Bucket User's Project - a project which gets access to Bucket Owner's object storage bucket.
  • Tenant Admin - a tenant's administrator user who can create OpenStack projects and manage users and roles within their domain.
  • In code examples, values typed in all-capital letters, such as BUCKET_OWNER_PROJECT_ID, are placeholders which should be replaced with actual values matching your use-case. 

Limitations

It is possible to grant access at the project level only, not at the user level. To grant access to an individual user, the Bucket User's Tenant Admin must create a separate project within their domain to which only the selected users are granted access, as sketched below.
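
A minimal sketch of what a Tenant Admin might run with the OpenStack CLI follows; the project name, user name and domain below are placeholders, and the member role name may differ in your cloud (older deployments use _member_):

# Create a dedicated project used only for bucket sharing
openstack project create --domain MY_DOMAIN shared-bucket-access
# Grant only the selected user the member role in that project
openstack role add --project shared-bucket-access --user SELECTED_USER member

The ID of this project is what the Bucket Owner will later reference in the policy's Principal.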

By setting a bucket policy, the Ceph S3 implementation supports the S3 actions listed under Statement.Action below.

The Ceph S3 implementation does not support user, role or group policies.

Declaring bucket policy

Policy JSON file's sections

Bucket policy is declared using a JSON file. An example policy JSON template:

{
 "Version": "2012-10-17",
 "Id": "POLICY_NAME",
 "Statement": [
   {
     "Sid": "STATEMENT_NAME",
     "Effect": "EFFECT",
     "Principal": {
       "AWS": "arn:aws:iam::PROJECT_ID:root"
     },
     "Action": [
       "ACTION_1",
       "ACTION_2"
     ],
     "Resource": [
       "arn:aws:s3:::KEY_SPECIFICATION"
     ]
   }
 ]
}

A description of the bucket policy keys follows:

Version - "2012-10-17"; this value cannot be changed.
Id - an arbitrary policy name.
Statement - a list of statements.
Statement.Sid - an arbitrary statement name.
Statement.Effect - one of "Allow" or "Deny".
Statement.Principal - a list of values specifying accounts in Amazon's ARN format:

"AWS": "arn:aws:iam::PROJECT_ID:root"

or

"AWS": [
    "arn:aws:iam::FIRST_PROJECT_ID:root",
    "arn:aws:iam::SECOND_PROJECT_ID:root"

]

Each *_PROJECT_ID is an OpenStack project ID which should (or, with a "Deny" effect, should not) have access to the bucket.

Statement.Action - a list of actions from:

  • s3:AbortMultipartUpload
  • s3:CreateBucket
  • s3:DeleteBucketPolicy
  • s3:DeleteBucket
  • s3:DeleteBucketWebsite
  • s3:DeleteObject
  • s3:DeleteObjectVersion
  • s3:GetBucketAcl
  • s3:GetBucketCORS
  • s3:GetBucketLocation
  • s3:GetBucketPolicy
  • s3:GetBucketRequestPayment
  • s3:GetBucketVersioning
  • s3:GetBucketWebsite
  • s3:GetLifecycleConfiguration
  • s3:GetObjectAcl
  • s3:GetObject
  • s3:GetObjectTorrent
  • s3:GetObjectVersionAcl
  • s3:GetObjectVersion
  • s3:GetObjectVersionTorrent
  • s3:ListAllMyBuckets
  • s3:ListBucketMultiPartUploads
  • s3:ListBucket
  • s3:ListBucketVersions
  • s3:ListMultipartUploadParts
  • s3:PutBucketAcl
  • s3:PutBucketCORS
  • s3:PutBucketPolicy
  • s3:PutBucketRequestPayment
  • s3:PutBucketVersioning
  • s3:PutBucketWebsite
  • s3:PutLifecycleConfiguration
  • s3:PutObjectAcl
  • s3:PutObject
  • s3:PutObjectVersionAcl
Statement.Resource - a list of resources in Amazon ARN format:

"arn:aws:s3:::KEY_SPECIFICATION"

KEY_SPECIFICATION defines a bucket and its keys / objects. For example:

  • "arn:aws:s3:::*" - the bucket and its all objects
  • "arn:aws:s3:::mybucket/*" - all objects of mybucket
  • "arn:aws:s3:::mybucket/myfolder/*" - all objects which are subkeys to myfolder in mybucket

Setting a policy on a bucket

The policy may be set on a bucket using the s3cmd setpolicy POLICY_JSON_FILE s3://MY_SHARED_BUCKET command. See S3 Tools s3cmd usage for complete documentation.

After installing s3cmd, initiate its configuration by issuing: s3cmd --configure -c s3cmd-config-file.

A sample s3cmd config file in CloudFerro CF2 cloud:

[default]
access_key = MY_ACCESS_KEY
secret_key = MY_SECRET_KEY
bucket_location = RegionOne
host_base = cf2.cloudferro.com:8080
host_bucket = cf2.cloudferro.com:8080
use_https = True
verbosity = WARNING
signature_v2 = False

The access key and the secret key may be generated using openstack-cli:

openstack ec2 credentials create
+------------+-------------------------------------+
| Field      | Value                               |
+------------+-------------------------------------+
| access     | [access key]                        |
| links      | [link]                              |
| project_id | db39778a89b242f0a8ba818eaf4f3329    |
| secret     | [secret key]                        |
| trust_id   | None                                |
| user_id    | 121fa8dadf084e2fba46b00850aeb7aa    |
+------------+-------------------------------------+
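
If you have generated EC2 credentials before, you can also list the existing ones instead of creating a new pair:

openstack ec2 credentials list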

Assuming that the Bucket Owner's s3cmd config file is named 'owner-project-s3cfg', a simple example of setting a policy follows:

s3cmd -c owner-project-s3cfg setpolicy sample-policy.json s3://mysharedbucket

To check policy on a bucket, use the following command:

s3cmd -c owner-project-s3cfg info s3://mysharedbucket

 

  • Setting a new policy overrides the policy which was previously applied.
  • The policy JSON file may have a maximum size of 20 KB. The policy file may be compacted with the jq command:

    cat pretty-printed-policy.json | jq -c '.' > compacted-policy.json

     

  • Only s3cmd version 2.1.0 or newer supports the Ceph multitenancy projectid:bucketname naming convention. While older versions of s3cmd allow setting a policy on a bucket, only version 2.1.0 or newer supports accessing another tenant's bucket (you can check your installed version as shown below).
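
    To check which version of s3cmd is installed, you can run:

    s3cmd --version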

Deleting the policy from a bucket

The policy may be deleted from a bucket using the s3cmd delpolicy s3://MY_SHARED_BUCKET command. See S3 Tools s3cmd usage for complete documentation.

A simple example of deleting a policy follows:

s3cmd -c owner-project-s3cfg delpolicy s3://mysharedbucket

Sample scenarios

Grant another project access to read and write to the bucket

A Bucket Owner wants to grant a Bucket User read/write access to a bucket.

{
  "Version": "2012-10-17",
  "Id": "read-write",
  "Statement": [
    {
      "Sid": "project-read-write",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::BUCKET_OWNER_PROJECT_ID:root",
          "arn:aws:iam::BUCKET_USER_PROJECT_ID:root"
        ]
      },
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}

To apply the policy, the Bucket Owner should issue:

s3cmd -c owner-project-s3cfg setpolicy read-write-policy.json s3://mysharedbucket

The Bucket Owner should send the Bucket User the Bucket Owner's project ID and the name of the bucket.

After the Bucket User has prepared their s3cmd config file, here called 'user-project-s3cfg', they can access the bucket. For example, to list the bucket, the Bucket User should issue:

s3cmd -c user-project-s3cfg ls s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket
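
Since the policy also allows s3:PutObject, the Bucket User can upload an object in the same way; the file name below is only an example:

s3cmd -c user-project-s3cfg put report.csv s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/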

Grant any user read access to the bucket

A Bucket Owner wants to grant read access to a bucket to anyone.

{
  "Version": "2012-10-17",
  "Id": "policy-read-any",
  "Statement": [
    {
      "Sid": "read-any",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
		   "*"
		]
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}

To apply the policy, the Bucket Owner should issue:

s3cmd -c owner-project-s3cfg setpolicy read-any-policy.json s3://mysharedbucket

The Bucket Owner should publish the Bucket Owner's project id and the name of the bucket.

Users from other projects can then access the bucket's contents, for example retrieve pictures/mypic.png from the Bucket Owner's bucket:

s3cmd -c user-project-s3cfg get s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/pictures/mypic.png

Grant one user write access and another user read access to a subfolder of a bucket

A Bucket Owner wants to share a folder in a bucket with read/write permissions to First Bucket User and read permissions to Second Bucket User.

{
    "Version": "2012-10-17",
    "Id": "complex-policy",
    "Statement": [
        {
            "Sid": "project-write",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::BUCKET_OWNER_PROJECT_ID:root",
                    "arn:aws:iam::FIRST_BUCKET_USER_PROJECT_ID:root"
                ]
            },
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mysharedbucket/mysharedfolder/*"
            ]
        },
        {
            "Sid": "project-read",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SECOND_BUCKET_USER_PROJECT_ID:root"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mysharedbucket/mysharedfolder/*"
            ]
        }
    ]
}

To apply the policy, the Bucket Owner should issue:

s3cmd -c owner-project-s3cfg setpolicy complex-policy.json s3://mysharedbucket

The Bucket Owner should send the First and Second Bucket Users the Bucket Owner's project ID and the name of the bucket.

To access the bucket, for example to write productlist.db to the bucket, the First Bucket User should issue:

s3cmd -c first-user-project-s3cfg put productlist.db s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/mysharedfolder/

The Second Bucket User can read productlist.db from the bucket:

s3cmd -c second-user-project-s3cfg get s3://BUCKET_OWNER_PROJECT_ID:mysharedbucket/mysharedfolder/productlist.db

Access to shared buckets with boto3

The following Python script shows how a Bucket User can list objects in the Bucket Owner's bucket mybucket using the boto3 library.

#!/usr/bin/python3


import boto3
from botocore.session import Session
from botocore.handlers import validate_bucket_name


ACCESS_KEY = "Bucket User's Access Key"
SECRET_KEY = "Bucket User's Secret Key"
ENDPOINT = "https://cf2.cloudferro.com:8080"
BUCKET_OWNER_PROJECT_ID = "Bucket Owner's Project ID"
BUCKET = "mybucket"

bucket_location = BUCKET_OWNER_PROJECT_ID + ":" + BUCKET

# We need to skip bucket name validation due to
# multitenancy Ceph bucket naming: tenantId:bucket.
# Otherwise we will receive an exception:
# "Parameter validation failed: Invalid bucket name".
botocore_session = Session()
botocore_session.unregister('before-parameter-build.s3', validate_bucket_name)
boto3.setup_default_session(botocore_session = botocore_session)

s3 = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    endpoint_url=ENDPOINT,
)

response = s3.list_objects(Bucket=bucket_location)

print(response)

Please note that since the AWS S3 standard does not support colons in bucket names, and here we address buckets as tenantId:bucket, we have to work around the botocore validator, which checks whether the bucket name matches the regular expression ^[a-zA-Z0-9.\-_]{1,255}$. Hence the extra lines in the code:

botocore_session.unregister('before-parameter-build.s3', validate_bucket_name)
boto3.setup_default_session(botocore_session = botocore_session)

Without the code, the bucket name validator raises an exception:

botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid bucket name "TENANT_ID:BUCKET_NAME": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
or be an ARN matching the regex "^arn:(aws).*:s3:[a-z\-0-9]+:[0-9]{12}:accesspoint[/:][a-zA-Z0-9\-]{1,63}$
|^arn:(aws).*:s3-outposts:[a-z\-0-9]+:[0-9]{12}:outpost[/:][a-zA-Z0-9\-]{1,63}[/:]accesspoint[/:][a-zA-Z0-9\-]{1,63}$"

 


How to install OpenStackClient (Linux)?

OpenStackClient is a very useful tool that gives you powerful management of your projects from the command line interface. You can run commands from the CLI, or include them in prepared Python scripts to automate work with your cloud resources. Moreover, you can access your OpenStack projects from any computer once you enter your credentials (username and password).

In general, OpenStackClient lets you inspect and manage your compute resources in more depth and with more precision.

 

Attention:

It is strongly recommended to use virtual environments, which do not affect system variables globally. If something goes wrong, everything happens in a separate, isolated space.

 

This FAQ covers installation of OpenStackClient under Ubuntu 18.04 LTS and Python 3.
Installation under other distributions should be similar.

 

Before we start, you might consider running the commands below in a virtual environment:

sudo apt install python3-venv
python3 -m venv openstack_cli
source openstack_cli/bin/activate

 

 

First, update the package lists:

sudo apt update

 

Install python3-pip:

sudo apt install python3-pip

 

Install setuptools:

sudo python3 -m pip install setuptools

 

If you intend to use a newer version of Python, e.g. 3.8.2, you have to install python3.8-dev and reinstall the netifaces Python module, which may otherwise cause problems. Remember to run pip commands with the appropriate Python version (e.g. change python3 -m pip to python3.8 -m pip):

sudo apt install python3.8-dev
sudo python3.8 -m pip install --upgrade netifaces

 

Finally install python-openstackclient:

sudo python3 -m pip install python-openstackclient

 

After this, you should be able to run the openstack command from the console, e.g.:

openstack --help

 

 

If everything works, we can move on to the Horizon panel.

Log in to your account.

Go to the button with your email address in the upper-right corner and click on it.

 

Choose Openstack RC File v2 or v3.

Save your RC file on your disk.

If you open it in a text editor like vim, you will see that it consists of many variables related to your domain account:

Change your directory to the download destination and execute the configuration file.

For example, it will look like: cloud_xxxxx\project_without_eo-openrc.sh

Time to run this sh file:

. cloud_xxxxx\project_without_eo-openrc.sh

 

You will be prompted for your domain password. Type it in and press Enter.

If the password is correct, you will be granted access.

 

After that you can, for example, check the list of your servers by typing in the console:

openstack server list

The output of this command should contain a table with the ID, Name, Status, Networks, Image and Flavor of your virtual machines.

 

 

We recommend checking the OpenStack documentation, which contains lists of the available commands and covers many management scenarios.

 

 

v.2020-05-19


How to share private container (object storage) to another user

If you want to learn how to use object storage, please refer to the article How to use Object Storage?

If you want to learn about bucket sharing using an S3 bucket policy, please refer to the article Bucket sharing using S3 Bucket Policy.

Another method to access object storage is to use OpenStack Swift commands (https://docs.openstack.org/ocata/cli-reference/swift.html)

To use the CLI you should prepare the Python environment on your desktop (please see: Openstack CLI).

 

You can create your own private containers in the Object Store of your projects and grant other users access to them.
If you want to limit access to specific containers for chosen users, those users have to be members of other projects (one user, or group of users, per project is recommended).
The projects can be in one or more domains.
If the users are members of the same project, they see all containers in that project and you cannot limit access to specific containers.
 
In the example below there are
 
3 projects: "main", "project_1", "project_2"
 
 
3 users:
 
"owner"  - the user with _member_ role in project "main"
"user_1" - the user with _member_ role in project "project_1"
"user_2" - the user with _member_ role in project "project_2"
 
 
"owner" will have 3 containers in her/his project "main"
 
c-main-a
c-main-b
c-main-d
 
 
and the following files in the containers:
 
c-main-a
 
test-main-a1.txt
test-main-a2.txt
 
c-main-b
 
test-main-b.txt
 
c-main-d
 
test-main-d.txt
 
In the example below the user "owner" will grant "read only" access to container "c-main-a"  for "user_1"  
 
At first "owner" should login to her/his domain:
 
 
choose project main
 
 
download "OpenStack RC File v3" for user "owner" and project "main"
 
 
You can see the contents of the file in the Linux terminal:
$ cat main-openrc.sh
main-openrc.sh
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=https://cf2.cloudferro.com:5000/v3
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=ef26caa2cbde426da6d64666dd85cad8
export OS_PROJECT_NAME="main"
export OS_USER_DOMAIN_NAME="cloud_10996"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="6446426185844d558b77ac2c4b6fba60"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="owner"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3

Copy the file main-openrc.sh to your CLI directory (please see: Openstack CLI).
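
For example, assuming the RC file was saved to your Downloads directory and you use the openstack_cli virtual environment directory created earlier (both paths are illustrative):

cp ~/Downloads/main-openrc.sh ~/openstack_cli/
cd ~/openstack_cli/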

The "user_1" should do the same procedure:

  1. login to her/his "project_1"
  2. download "OpenStack RC File v3" for user "user_1" and project "project_1"

project_1-openrc.sh

#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=https://cf2.cloudferro.com:5000/v3
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=a17851a54804450cada382b997421c5b
export OS_PROJECT_NAME="project_1"
export OS_USER_DOMAIN_NAME="cloud_10996"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="6446426185844d558b77ac2c4b6fba60"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="user_1"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3

The "user_2" should do the same procedure as above.

Each user should open her/his terminal and source the openrc file:
 

terminal of user "owner"

$ source main-openrc.sh
Please enter your OpenStack Password for project main as user owner:  <here enter the password for owner>

(owner) $ swift list
c-main-a
c-main-b
c-main-d

terminal of user "user_1"

$ source project_1-openrc.sh
Please enter your OpenStack Password for project project_1 as user user_1:
  <here enter the password for user_1>

(user_1) $ swift list
c-project_1-a
c-project_1-b

terminal of user "user_2"

$ source project_2-openrc.sh
Please enter your OpenStack Password for project project_2 as user user_2: <here enter the password for user_2>

(user_2) $ swift list
c-project_2-a
c-project_2-b

"owner" prepares and uploads test files

(owner) $ touch test-main-a1.txt
(owner) $ touch test-main-a2.txt
(owner) $ swift upload c-main-a test-main-a1.txt
test-main-a1.txt
(owner) $ swift upload c-main-a test-main-a2.txt
test-main-a2.txt
 
(owner) $ touch test-main-b.txt
(owner) $ touch test-main-d.txt
(owner) $ swift upload c-main-b test-main-b.txt
test-main-b.txt
 
(owner) $ swift upload c-main-d test-main-d.txt
test-main-d.txt
 

 

check the id of user_1

(user_1) $ openstack user show --format json "${OS_USERNAME}" | jq -r .id
d6657d163fa24d4e8eaa9697bb22a730

check the id of user_2

(user_2) $ openstack user show --format json "${OS_USERNAME}" | jq -r .id
9f35b35da2764700bba2b21d5021c79c

You can check the status of container "c-main-a"

"Read ACL" and "Write ACL" are not set

(owner) $ swift stat c-main-a
                      Account: AUTH_ef26caa2cbde426da6d64666dd85cad8
                    Container: c-main-a
                      Objects: 2
                        Bytes: 0
                     Read ACL:
                    Write ACL:
                      Sync To:
                     Sync Key:
                  X-Timestamp: 1591219102.38223
X-Container-Bytes-Used-Actual: 0
             X-Storage-Policy: default-placement
                   X-Trans-Id: tx0000000000000019f88aa-005ed8d856-2142535c3-dias_default
       X-Openstack-Request-Id: tx0000000000000019f88aa-005ed8d856-2142535c3-dias_default
                Accept-Ranges: bytes
                 Content-Type: text/plain; charset=utf-8

grant access to container "c-main-a" for user_1

(owner) $ swift post --read-acl "*:d6657d163fa24d4e8eaa9697bb22a730" c-main-a

get the storage URL to access the Object Store in "main"

(owner) $ swift auth | awk -F = '/OS_STORAGE_URL/ {print $2}'
https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8

pass the link:

https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8

to "user_1"

"user_1" should create an environmental variable "SURL"

(user_1) $ SURL=https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8

now the user_1 has access to the "c-main-a" container in "main" project

(user_1) $ swift --os-storage-url="${SURL}" list c-main-a
test-main-a1.txt
test-main-a2.txt

but the user_1 has no access to other containers in "main" project

(user_1) $ swift --os-storage-url="${SURL}" list c-main-b
Container GET failed: https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8/c-main-b?format=json 403 Forbidden  [first 60 chars of response] b'{"Code":"AccessDenied","BucketName":"c-main-b","RequestId":"'
Failed Transaction ID: tx0000000000000019ff870-005ed8dce3-2142535c3-dias_default

A similar procedure can be used to grant "write" permission to "user_1":

(owner) $ swift post --write-acl "*:d6657d163fa24d4e8eaa9697bb22a730" c-main-a
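
Once the write ACL is set, "user_1" can, for example, upload a file to the shared container using the same storage URL (the file name is only an example):

(user_1) $ touch test-user_1.txt
(user_1) $ swift --os-storage-url="${SURL}" upload c-main-a test-user_1.txt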

Can't access eodata

If you have problems with access to eodata try the following:


install arping:

in CentOS:

sudo yum install arping

in Ubuntu:

sudo apt install arping

 

check the name of the interface connected to eodata network:

ifconfig

Based on the response, find the number of the interface with the 10.111.x.x address (eth<number> or ens<number>).
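
As a quick sketch, the interface carrying the 10.111.x.x address can also be listed directly (output format may vary between distributions):

ip -o -4 addr show | grep "10\.111\."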

after that invoke the following commands:

in CentOS:

sudo arping -U -c 2 -I eth<number> $(ip -4 a show dev eth<number> | sed -n 's/.*inet \([0-9\.]\+\).*/\1/p')


in Ubuntu:

sudo arping -U -c 2 -I ens<number> $(ip -4 a show dev ens<number> | sed -n 's/.*inet \([0-9\.]\+\).*/\1/p')


Next ping data.cloudferro.com again. If you receive answers, remount the resource:

sudo umount -lf /eodata

sudo mount /eodata

 

in Windows:

in command line run as administrator:

route add 10.97.0.0/16 10.11.0.1

 

and then run the "mount_eodata" script from the desktop.

 


How to attach a volume and migrate Sen4CAP products

Sen4CAP may need a lot more space than was initially planned. The easiest way to extend space for Sen4CAP applications is to attach the additional volume. First, we need to create a volume. In OpenStack Dashboard go to Project → Volumes → Volumes and select "Create Volume".

Name your new volume and provide a size

 

When the new volume is created navigate to Project → Compute → Instances and attach the new volume.

 

and select the newly created volume from the menu:

Now everything is ready for the instance to mount additional space for Sen4CAP applications. Check that the new volume is available in the system with the "lsblk" command.

Before any other action, it is good to make a backup copy of the instance. You can use the "Create Snapshot" option in the instance menu.

The new volume should be visible in your system. Run the "lsblk" command to see the attached devices and their system names.

[eouser@sen4cap ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  128G  0 disk
└─sda1   8:1    0  128G  0 part /
sdb      8:16   0    5T  0 disk

From the output above we can see that the new volume is visible in the system as sdb. Create a filesystem on it with mkfs. The following command will format the new volume using all of the available space:

[eouser@sen4cap ~]$ sudo mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
167772160 inodes, 1342177280 blocks
67108864 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3489660928
40960 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000, 550731776, 644972544
 
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

We have the volume formatted and ready. We need to mount it somewhere temporarily to move the Sen4CAP data. In this example, we will use the /disk directory:

sudo mkdir /disk
sudo chmod 777 /disk/
sudo mount /dev/sdb /disk/

Stop the Sen4CAP services:

sudo systemctl stop sen2agri-services

Sen4CAP keeps downloaded data in /mnt. We need to sync all data from the /mnt directory to the volume. We can do it with the rsync command, as in the following example:

sudo rsync -rtva /mnt/ /disk/

If there is a lot of data and rsync stops, you can resume the syncing with the same command; rsync will continue where it stopped. When finished, compare the sizes of the directories to check that the sync went properly.

[eouser@sen4cap ~]$ du -hs /mnt/
22G /mnt/
[eouser@sen4cap ~]$ du -hs /disk/
22G /disk/

When the volume is ready, we need to add a proper entry in fstab so that it is mounted automatically on every reboot. Label your new volume with the following command:

[eouser@sen4cap /]$ sudo e2label /dev/sdb volume1
[eouser@sen4cap /]$ lsblk --fs /dev/sdb
NAME FSTYPE LABEL   UUID                                 MOUNTPOINT
sdb  ext4   volume1 930affa6-8971-47c8-8591-e46e9333c34b

The UUID shown by the "lsblk --fs /dev/sdb" command will be used in the next step with fstab.

Edit fstab:

sudo nano /etc/fstab

Add the following line (use your own UUID):

UUID=930affa6-8971-47c8-8591-e46e9333c34b    /mnt            ext4    rw,user,exec 0 0

Delete the content of the /mnt directory

Sen4CAP uses the /mnt directory to keep its data. There should be only two directories in /mnt: "archive" and "upload". If there is anything else in your system, please check the origin of that data. Linux systems use the /mnt directory to mount drives.
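
As a sketch, after double-checking that the rsync copy in /disk is complete, the two data directories can be removed; be careful, this permanently deletes data from the root disk:

ls /mnt
sudo rm -rf /mnt/archive /mnt/upload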

Unmount the volume and mount it back according to the fstab entry:

sudo umount -lf /disk && sudo mount -a

Start the Sen4CAP services:
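
Mirroring the earlier stop command:

sudo systemctl start sen2agri-services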