How to share a private container (object storage) with another user

If you want to learn how to use object storage, please refer to the article How to use Object Storage?

Another method of accessing object storage is to use OpenStack Swift commands (https://docs.openstack.org/ocata/cli-reference/swift.html).

To use the CLI, you should first prepare a Python environment on your desktop (please see: OpenStack CLI).
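
For example, a minimal setup in a Python virtual environment could look like this (a sketch; the package names are the official PyPI clients):

$ python3 -m venv openstack-cli
$ source openstack-cli/bin/activate
$ pip install python-openstackclient python-swiftclient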

 

You can create your own private containers in the Object Store of your project and grant other users access to them.
If you want to limit access to specific containers for chosen users, those users have to be members of other projects (one user or one group of users per project is recommended).
The projects can be in one or more domains.
Otherwise, if users are members of the same project, they see all containers in that project and you cannot restrict access to specific containers.
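
You can verify a user's project membership with the OpenStack CLI, for example (this assumes your role is allowed to list role assignments):

$ openstack role assignment list --user user_1 --project project_1 --names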
 
In the example below there are:
 
3 projects: "main", "project_1", "project_2"
 
 
3 users:
 
"owner" - the user with _member_ role in project "main"
"user_1" - he user with _member_ role in project "project_1"
"user_2" - he user with _member_ role in project "project_2"
 
 
"owner" will have 3 containers in her/his project "main"
 
c-main-a
c-main-b
c-main-d
 
 
and the following files in the containers:
 
c-main-a
 
test-main-a1.txt
test-main-a2.txt
 
c-main-b
 
test-main-b.txt
 
c-main-d
 
test-main-d.txt
 
In the example below, the user "owner" grants read-only access to container "c-main-a" to "user_1".
 
First, "owner" should:

  1. log in to her/his domain
  2. choose the project "main"
  3. download "OpenStack RC File v3" for user "owner" and project "main"
 
 
You can see the contents of the file in the Linux terminal:
$ cat main-openrc.sh
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=https://cf2.cloudferro.com:5000/v3
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=ef26caa2cbde426da6d64666dd85cad8
export OS_PROJECT_NAME="main"
export OS_USER_DOMAIN_NAME="cloud_10996"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="6446426185844d558b77ac2c4b6fba60"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="owner"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3

Copy the file main-openrc.sh to your CLI directory (please see: OpenStack CLI).

The "user_1" should do the same procedure:

  1. login to her/his "project_1"
  2. download "OpenStack RC File v3" for user "user_1" and project "project_1"

project_1-openrc.sh

#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=https://cf2.cloudferro.com:5000/v3
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=a17851a54804450cada382b997421c5b
export OS_PROJECT_NAME="project_1"
export OS_USER_DOMAIN_NAME="cloud_10996"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="6446426185844d558b77ac2c4b6fba60"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="user_1"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3

The "user_2" should do the same procedure as above.

Each user should open her/his terminal and source the openrc file:
 

terminal of user "owner"

$ source main-openrc.sh
Please enter your OpenStack Password for project main as user owner:  <here enter the password for owner>

(owner) $ swift list
c-main-a
c-main-b
c-main-d

terminal of user "user_1"

$ source project_1-openrc.sh
Please enter your OpenStack Password for project project_1 as user user_1:
  <here enter the password for user_1>

(user_1) $ swift list
c-project_1-a
c-project_1-b

terminal of user "user_2"

$ source project_2-openrc.sh
Please enter your OpenStack Password for project project_2 as user user_2: <here enter the password for user_2>

(user_2) $ swift list
c-project_2-a
c-project_2-b

"owner" prepares and uploads test files

(owner) $ touch test-main-a1.txt
(owner) $ touch test-main-a2.txt
(owner) $ swift upload c-main-a test-main-a1.txt
test-main-a1.txt
(owner) $ swift upload c-main-a test-main-a2.txt
test-main-a2.txt
 
(owner) $ touch test-main-b.txt
(owner) $ touch test-main-d.txt
(owner) $ swift upload c-main-b test-main-b.txt
test-main-b.txt
 
(owner) $ swift upload c-main-d test-main-d.txt
test-main-d.txt
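
You can verify the uploads by listing the objects in a container, for example:

(owner) $ swift list c-main-a
test-main-a1.txt
test-main-a2.txt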
 

 

check the id of user_1

(user_1) $ openstack user show --format json "${OS_USERNAME}" | jq -r .id
d6657d163fa24d4e8eaa9697bb22a730

check the id of user_2

(user_2) $ openstack user show --format json "${OS_USERNAME}" | jq -r .id
9f35b35da2764700bba2b21d5021c79c
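
If jq is not available, the id can also be printed using the CLI's built-in output filters:

(user_2) $ openstack user show -f value -c id "${OS_USERNAME}"
9f35b35da2764700bba2b21d5021c79c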

You can check the status of container "c-main-a"; at this point "Read ACL" and "Write ACL" are not set:

(owner) $ swift stat c-main-a
                      Account: AUTH_ef26caa2cbde426da6d64666dd85cad8
                    Container: c-main-a
                      Objects: 2
                        Bytes: 0
                     Read ACL:
                    Write ACL:
                      Sync To:
                     Sync Key:
                  X-Timestamp: 1591219102.38223
X-Container-Bytes-Used-Actual: 0
             X-Storage-Policy: default-placement
                   X-Trans-Id: tx0000000000000019f88aa-005ed8d856-2142535c3-dias_default
       X-Openstack-Request-Id: tx0000000000000019f88aa-005ed8d856-2142535c3-dias_default
                Accept-Ranges: bytes
                 Content-Type: text/plain; charset=utf-8

grant read access to container "c-main-a" for user_1

(owner) $ swift post --read-acl "*:d6657d163fa24d4e8eaa9697bb22a730" c-main-a
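
Running "swift stat" again should now show the read ACL (output abbreviated to the ACL lines):

(owner) $ swift stat c-main-a | grep ACL
                     Read ACL: *:d6657d163fa24d4e8eaa9697bb22a730
                    Write ACL: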

get the storage URL needed to access the Object Store in project "main"

(owner) $ swift auth | awk -F = '/OS_STORAGE_URL/ {print $2}'
https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8
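
The same URL also appears as "StorageURL" in the verbose account status:

(owner) $ swift stat -v | grep StorageURL
                    StorageURL: https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8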

pass the link:

https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8

to "user_1"

"user_1" should create an environmental variable "SURL"

(user_1) $ SURL=https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8

now "user_1" has access to the "c-main-a" container in the "main" project

(user_1) $ swift --os-storage-url="${SURL}" list c-main-a
test-main-a1.txt
test-main-a2.txt
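
Read access also lets "user_1" download objects from the shared container (the timings in the output are illustrative):

(user_1) $ swift --os-storage-url="${SURL}" download c-main-a test-main-a1.txt
test-main-a1.txt [auth 0.3s, headers 0.5s, total 0.5s, 0.000 MB/s]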

but "user_1" has no access to other containers in the "main" project

(user_1) $ swift --os-storage-url="${SURL}" list c-main-b
Container GET failed: https://cf2.cloudferro.com:8080/swift/v1/AUTH_ef26caa2cbde426da6d64666dd85cad8/c-main-b?format=json 403 Forbidden  [first 60 chars of response] b'{"Code":"AccessDenied","BucketName":"c-main-b","RequestId":"'
Failed Transaction ID: tx0000000000000019ff870-005ed8dce3-2142535c3-dias_default

A similar procedure can be used to grant "write" permission to "user_1":

(owner) $ swift post --write-acl "*:d6657d163fa24d4e8eaa9697bb22a730" c-main-a
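
After that, "user_1" can upload to the shared container; the file name below is only an example:

(user_1) $ touch test-user1.txt
(user_1) $ swift --os-storage-url="${SURL}" upload c-main-a test-user1.txt
test-user1.txt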
