CloudFerro Cloud region migration tips
In this article, we will focus on a few important aspects of migration between older CloudFerro Cloud regions, such as WAW3-1 or CF2, and newer ones, such as WAW3-2, FRA1-2 and WAW4-1.
We will cover how to:
- prepare a migration toolbox
- migrate virtual machines
- copy data volumes
- reconnect EO Data access in the new regions
Migration Toolbox
To perform migration effectively, you need access to:
- Horizon GUI
- OpenStack command line client
We also recommend creating a virtual machine in the destination region, dedicated to running migration commands. This instance should have the OpenStack command line interface client installed. Additionally, RC files or Application Credentials for accessing both the source and destination cloud regions should be copied to it. This instance can be based on a low-performance, low-cost flavor such as eo1.small or even eo1.xsmall.
The purposes of keeping such an instance during migration are:
- You can execute time-consuming tasks in batch mode.
- If you attach a volume for storing instance images, the entire network transfer of large files takes place within CloudFerro's high-performance internal infrastructure, without involving your internet access.
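As an illustration, preparing such a toolbox instance could look roughly like this (a sketch assuming an Ubuntu-based VM and RC files downloaded from Horizon for both regions; all file names are examples):
# Install the OpenStack command line client (package name may differ per distribution)
sudo apt update && sudo apt install -y python3-openstackclient
# Copy the RC files for both regions to this instance, then switch between regions as needed
source source-region-openrc.sh
openstack server list
source destination-region-openrc.sh
openstack server list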
VM migration
The VM migration process is described in detail in the CREODIAS documentation:
"OpenStack instance migration using command line on CREODIAS"
https://creodias.docs.cloudferro.com/en/latest/networking/OpenStack-instance-migrationcommand-line-on-Creodias.html
It is worth mentioning a few aspects:
- The instance should be shut down before creating an image.
- Before shutting down the instance, consider whether your workflow requires stopping services manually according to specific requirements, for example a fixed order of shutting down services. If so, do not shut down the instance from the Horizon GUI or from the command line before all your services have been stopped properly.
- Analyze or test how the software on this instance behaves when started without mounted volumes. If this causes issues, consider disabling its autostart before shutting the instance down and creating an image.
- If your instance has attached volumes, note the exact order of attachments and the device names used, for example with the command openstack volume list and, on the instance, with the commands lsblk and mount. This will make recreating the volume attachments in the destination region significantly easier.
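For orientation only, a condensed sketch of the image-based flow from the linked guide might look like this (names such as my-migration-image and migration.qcow2 are examples; please follow the documentation above for the authoritative steps):
# At the source region
source source-region-openrc.sh
openstack server stop INSTANCE_NAME
openstack server image create --name my-migration-image INSTANCE_NAME
openstack image save --file migration.qcow2 my-migration-image
# At the destination region
source destination-region-openrc.sh
openstack image create --disk-format qcow2 --container-format bare --file migration.qcow2 my-migration-image
openstack server create --image my-migration-image --flavor FLAVOR_NAME --network YOUR_PROJECT_NAME --key-name YOUR_KEYPAIR NEW_INSTANCE_NAME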
Volumes migration
- The first possible way of migrating volumes is to:
- Create identical volumes at the destination.
- Attach them to the migrated instance.
- Recreate partitions and file systems exactly as in the source region.
- Then copy data using the rsync command. For this step, please check the documentation “How to Upload and Synchronise Files with SCP/RSYNC?”
https://creodias.docs.cloudferro.com/en/latest/networking/How-To-Upload-And-Synchronise-Files-With-SCP-RSYNC-Creodias.html
The advantage of this workflow is that the rsync command synchronizes all data and verifies the transfer. However, transferring larger volumes with this method would be time-consuming.
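For example, assuming the volume is mounted under /data on both machines and the destination is reachable as eouser@DESTINATION_IP, the copy could look like this (the key path and mount point are examples):
sudo rsync -aHAX --numeric-ids --info=progress2 -e "ssh -i /home/eouser/.ssh/destination_private_key" --rsync-path="sudo rsync" /data/ eouser@DESTINATION_IP:/data/
The --rsync-path="sudo rsync" option runs rsync with root privileges on the destination, which is needed when the copied files are owned by other users.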
- The second possibility starts in a similar way:
- Create identical volumes at the destination.
- Attach them to the migrated instance, but do not recreate partitions and file systems.
- Verify if:
- The source machine has access to the internet
- The destination has a floating IP associated and is accessible via SSH.
- The private key used to access the destination is copied into the $HOME/.ssh/ directory on the source machine.
- Before starting the operation, both volumes (source and destination) are unmounted.
- Then execute the command on the source machine:
sudo dd if=VOLUME_DEVICE_AT_SOURCE bs=10M conv=fsync status=progress | gzip -c -9 | ssh -i .ssh/DESTINATION_PRIVATE_KEY eouser@DESTINATION_IP 'gzip -d | sudo dd of=VOLUME_DEVICE_AT_DESTINATION bs=10M'
This would do the work in a significantly shorter time than using rsync.
If volumes are attached as /dev/sdb on both machines, the command would look like:
sudo dd if=/dev/sdb bs=10M conv=fsync status=progress | gzip -c -9 | ssh -i .ssh/DESTINATION_PRIVATE_KEY eouser@DESTINATION_IP 'gzip -d | sudo dd of=/dev/sdb bs=10M'
- After successful execution of this command, check whether the entire partition table was copied by running lsblk on the destination instance. You should see exactly the same partitions as on the source volume.
- Finally, mount the partitions at the same mount points as on the source.
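For example, assuming the volume appears as /dev/sdb with a single data partition, the verification and mounting on the destination could look like this (device names and mount point are examples):
lsblk /dev/sdb
# should list the same partitions as on the source volume
sudo mkdir -p /data
sudo mount /dev/sdb1 /data
# to make the mount persistent, add a matching /etc/fstab entry, preferably by UUID
sudo blkid /dev/sdb1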
How to update EO Data mounting when a VM was migrated from CF2 or WAW3-1 to WAW3-2 or FRA1-2
New cloud regions such as WAW3-2 and FRA1-2, or any future ones, have EO Data access configured differently than older CF2 or WAW3-1 regions.
Differences between these regions
- Different endpoint names
- At CF2 and WAW3-1, EOData is available at: http://data.cloudferro.com
- At WAW3-2, WAW4-1 and FRA1-2, it is available at: https://eodata.cloudferro.com
- S3fs authorization is required at WAW3-2, WAW4-1 and FRA1-2 regions.
This policy will be continued for any new region provided by CloudFerro.
Credentials for s3fs are distributed by Dynamic Vendor Data at WAW3-2, FRA1-2 and WAW4-1.
In the remainder of this article, we will use the regions CF2 and WAW3-2 as an example.
These differences result in different content of the /etc/systemd/system/eodata.mount file.
Content of /etc/systemd/system/eodata.mount created on a VM at CF2
[Unit]
Before=remote-fs.target
[Mount]
Where=/eodata
What=s3fs#DIAS
Type=fuse
Options=noauto,_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,multipart_size=50,gid=0,url=http://data.cloudferro.com,max_stat_cache_size=60000,list_object_max_keys=10000
[Install]
WantedBy=multi-user.target
Content of /etc/systemd/system/eodata.mount created on a VM at WAW3-2
[Unit]
Before=remote-fs.target
After=dynamic-vendor-call.service
Requires=network-online.target
[Mount]
Where=/eodata
What=s3fs#eodata
Type=fuse
Options=_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,multipart_size=50,gid=0,url=https://eodata.cloudferro.com,passwd_file=/etc/passwd-s3fs-eodata,max_stat_cache_size=60000,list_object_max_keys=10000,sigv2
[Install]
WantedBy=multi-user.target
Preparation to update EO Data mount
Before we start the reconfiguration of EO Data access, it is important to verify that networks are properly configured at the destination project:
- Execute the command: openstack network list
You should get an output table containing a minimum of 3 networks:
- “external”
- YOUR_PROJECT_NAME
- “eodata”
- Execute the command: openstack subnet list
You should get an output table containing a minimum of 4 subnets:
- YOUR_PROJECT_NAME
- “eodata1-subnet”
- “eodata2-subnet”
- “eodata3-subnet”
- After the creation of the migrated instance, check whether it was added to the necessary networks:
openstack server show -c addresses INSTANCE_NAME
You should get an output table containing the following networks:
- YOUR_PROJECT_NAME
- “eodata”
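These checks can also be run together from the migration toolbox instance, for example (INSTANCE_NAME is a placeholder):
# list only the network and subnet names
openstack network list -c Name -f value
openstack subnet list -c Name -f value
# show the networks the migrated instance is attached to
openstack server show -c addresses -f value INSTANCE_NAME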
EOData mount update: Manual Procedure
- Log in via SSH to the migrated VM.
- Edit the file with the EOData mount configuration:
cd /etc/systemd/system
sudo YOUR_EDITOR_OF_CHOICE eodata.mount
- Replace the CF2 content of this file with the WAW3-2 content shown above in the “Differences between these regions” chapter.
- Save this file.
- Execute cd /etc
- Execute
curl -s http://169.254.169.254/openstack/latest/vendor_data2.json | jq '.nova.vmconfig.mountpoints[0]'
and save/note the values of:
- s3_access_key
- s3_secret_key
- Execute
sudo YOUR_EDITOR_OF_CHOICE passwd-s3fs-eodata
- Paste the saved values here in the format:
s3_access_key:s3_secret_key
- Execute
sudo chmod go-rwx passwd-s3fs-eodata
- Activate EOData access by restarting the VM:
sudo reboot
- Execute tests verifying that your services using EOData work properly.
After taking those steps, the VM migrated from CF2 should be able to access EOData as before the migration.
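A quick smoke test after the reboot could, for example, look like this:
systemctl status eodata.mount
# the unit should be active (mounted)
df -h /eodata
# an s3fs filesystem should be mounted at /eodata
ls /eodata
# the listing should return the familiar EO Data top-level directories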
EOData mount update: Automation
The entire procedure explained above can be automated by executing a script remotely over SSH:
- Save the script listed below to a file named
eodata_mount_update.sh
- Execute the command:
ssh -i .ssh/YOUR_PRIVATE_KEY -t eouser@IP_OF_VM "sudo bash -s" < eodata_mount_update.sh
The content of the automation script “eodata_mount_update.sh” is:
#!/bin/bash
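# Overwrite the EOData mount unit with the configuration used at WAW3-2, FRA1-2 and WAW4-1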
cat <<EOF > /etc/systemd/system/eodata.mount
[Unit]
Before=remote-fs.target
After=dynamic-vendor-call.service
Requires=network-online.target
[Mount]
Where=/eodata
What=s3fs#eodata
Type=fuse
Options=_netdev,allow_other,use_path_request_style,uid=0,umask=0222,mp_umask=0222,multipart_size=50,gid=0,url=https://eodata.cloudferro.com,passwd_file=/etc/passwd-s3fs-eodata,max_stat_cache_size=60000,list_object_max_keys=10000,sigv2
[Install]
WantedBy=multi-user.target
EOF
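# Read the s3fs credentials from Dynamic Vendor Data and store them in the passwd file referenced by the mount unit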
S3_ACCESS=$(curl -s http://169.254.169.254/openstack/latest/vendor_data2.json | jq -r '.nova.vmconfig.mountpoints[0].s3_access_key')
S3_SECRET=$(curl -s http://169.254.169.254/openstack/latest/vendor_data2.json | jq -r '.nova.vmconfig.mountpoints[0].s3_secret_key')
echo "$S3_ACCESS":"$S3_SECRET" > /etc/passwd-s3fs-eodata
chmod go-rwx /etc/passwd-s3fs-eodata
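# Give the remote SSH session time to finish, then reboot to activate the new mount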
sleep 30s
reboot
We provide support for CREODIAS services
Our team of consultants will answer all your questions related to the use of the CREODIAS platform. You can also read our documentation, where you will find FAQs and guides for CREODIAS services.