We have previously seen how to install and configure rclone and rclone-changer to use clouds as backup storage for Bacula, using Amazon S3 as the example.
Now, let's look at how the same installation can be used to write to Oracle Cloud storage, which has been in high demand due to its competitive pricing.
Rclone Configuration
At this point you should already have rclone and rclone-changer installed, as described in this post.
As root, create the rclone configuration file (/root/.config/rclone/rclone.conf), providing the login and password used to access the Oracle Cloud console. The storage and authentication URLs can be found on the Oracle Cloud web console, as shown in Figure 1 (Dashboard – Service Details). The required authentication token will be updated later through a script.
[oracle]
type = swift
env_auth = false
user = hfaria@bacula.com.br
key = oracle_access_password
storage_url = https://bacula.br.storage.oraclecloud.com/v1/Storage-bacula
auth = https://bacula.br.storage.oraclecloud.com/auth/v1.0
auth_token = AUTH_xxxx
Figure 1 – Details of the authentication URL and storage service
Create a script to keep the authentication token in the previous configuration up to date (for example, /etc/ora_token.sh). Oracle Cloud tokens expire after 30 minutes and this lifetime cannot be extended. Bacula must already be installed so that the /etc/bacula/ directory exists. Adjust the Oracle storage name, user, password, and authentication URL to match your environment.
#!/bin/bash
ORASTORAGE=Storage-bacula
USER=heitor@bacula.com.br
PASSWD=oracle_access_password
AUTH_URL=https://bacula.br.storage.oraclecloud.com/auth/v1.0
TOKEN=$(curl -v -X GET -H "X-Storage-User: $ORASTORAGE:$USER" -H "X-Storage-Pass: $PASSWD" $AUTH_URL 2>&1 | grep X-Auth-Token | cut -f 3 -d " ") && sed -i "s/auth_token.*/auth_token = $TOKEN/g" /root/.config/rclone/rclone.conf
cp /root/.config/rclone/rclone.conf /etc/bacula/rclone.conf
chown bacula /etc/bacula/rclone.conf
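Remember to make the script executable, otherwise neither you nor cron will be able to run it:
chmod +x /etc/ora_token.sh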
Add the following line to /etc/crontab to refresh the token every minute, preventing the connection to the storage from being lost.
* * * * * root /etc/ora_token.sh
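After a minute or two, you can confirm that cron is actually refreshing the token by checking the auth_token line in both configuration files (a quick sanity check, not part of the original procedure):
grep auth_token /root/.config/rclone/rclone.conf
grep auth_token /etc/bacula/rclone.conf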
Run the /etc/ora_token.sh script. Then run a few rclone commands to verify that the connection works and create a container named bacula.
# List containers/buckets
rclone -v lsd oracle:
# Create a container
rclone mkdir oracle:bacula
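If you want an extra sanity check before involving Bacula, a quick round trip with a throwaway file confirms that rclone can write to and read from the new container (the file name below is just an example):
echo "test" > /tmp/rclone-test.txt
rclone copy /tmp/rclone-test.txt oracle:bacula
rclone ls oracle:bacula
rclone delete oracle:bacula/rclone-test.txt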
Bacula Configuration
Create the directories and the virtual tape file pointing to the storage, if you have not already done so:
mkdir /mnt/vtapes && touch /mnt/vtapes/tape && chown -R bacula /mnt/vtapes/
mkdir /mnt/bacula-spool && chown bacula /mnt/bacula-spool
Add a device similar to the following to the Bacula Storage Daemon configuration file (bacula-sd.conf). In Bacula Enterprise this can be done through the graphical interface (Bweb). If you are using Archive Storage, which has a latency of a few hours for restores (similar to Glacier), you may need to increase Maximum Changer Wait (seconds) even further.
Autochanger {
  Name = "rclone_ora"
  Device = Drive-1
  Changer Device = 'oracle:bacula'   # rclone remote configuration and write bucket. You can also specify a directory inside the bucket
  Changer Command = "/usr/sbin/rclone-changer %c %o %S %a"   # don't change this
}
Device {
  Name = Drive-1
  Media Type = ora
  Maximum Changer Wait = 18000       # may need to change according to the size of the volumes
  Archive Device = /mnt/vtapes/tape  # must match the file created in the previous step
  Autochanger = yes
  LabelMedia = yes;
  Random Access = Yes
  AutomaticMount = no
  RemovableMedia = no
  AlwaysOpen = no
  Spool Directory = /mnt/bacula-spool
  Maximum Spool Size = 524288000
}
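After editing bacula-sd.conf, restart the Storage Daemon so the new Autochanger and Device are loaded. The service name may vary between distributions; on systemd-based systems it is usually:
systemctl restart bacula-sd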
Now tie the new Storage Daemon device to its Director and also create a Pool for it:
Autochanger {
  Name = OracleCloud
  Address = hfaria-asus-i5           # Do not use "localhost" here
  SDPort = 9103
  Password = "O0yFMuPy2jZM8L7eMg9TQccW4SvXdVl-n"
  Device = rclone_ora
  Media Type = ora
  Maximum Concurrent Jobs = 10
  Autochanger = OracleCloud          # point to ourselves
}
Pool {
  Name = Offsite
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Storage = OracleCloud
  Maximum Volume Bytes = 5G          # recommended
  Volume Retention = 4 weeks
}
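After saving the Director configuration, reload it so the new Storage and Pool become available. A simple way to do this from the shell, piping the command into bconsole:
echo "reload" | bconsole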
It is important to keep the developer's considerations in mind if you want to change the Maximum Volume Bytes:
It is recommended that you use many smallish volumes rather than large ones. Try to stay at a size that your average file fits into without getting too tiny. The reasoning here is in a recovery situation if you're working on random files you want to minimize the amount of unrelated data you need to copy locally in order to recover. If you have a 10G virtual tape and only need 500M of it, you still need to wait for the full 10G to download before you can begin recovering data. Hence, if your typical file size is averaging 700M or so, 1G volumes are probably a good idea. [https://github.com/travisgroth/rclone-changer]
There is a soft-coded limit of 8192 tape "slots" in rclone-changer, but this can be changed manually by editing the script if required. To change it to 100 slots, for example, run the following command:
sed -i 's/slots = 8192/slots = 100/g' /usr/sbin/rclone-changer
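You can confirm the change took effect (assuming the variable is still named slots in your version of the script):
grep "slots = " /usr/sbin/rclone-changer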
Then, create virtual tape Bacula labels. Use bconsole for that:
label barcodes storage=OracleCloud pool=Offsite
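Once labeling finishes, a quick check in bconsole should show the new virtual tapes in the Offsite pool, for example:
list volumes pool=Offsite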
If you have trouble labeling the virtual tapes, review all the previous commands and permission grants. To reset the VTL and try again, delete the vtape and the rclone-changer state file:
rm -f /mnt/vtapes/tape*
rm /var/lib/bacula/rclone-changer.state
touch /mnt/vtapes/tape && chown -R bacula /mnt/vtapes/
All done. Run a test Backup Job in the Offsite Pool and create Schedule routines for the cloud, according to your needs. The virtual tapes will be changed and loaded in the cloud automagically, as in Figures 2 and 3.
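A minimal way to start a test job from bconsole, assuming an existing Job named BackupClient1 (adjust to a Job actually defined in your Director):
run job=BackupClient1 pool=Offsite storage=OracleCloud yes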
Figures 2 and 3 – Successful backup and list of virtual tapes (files) on the Oracle web console
Rclone-changer Troubleshooting
These are the places where rclone and rclone-changer save their log messages:
cat /tmp/rclone.log
cat /var/log/bacula/rclone-changer.log
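Following both logs while a job runs is the quickest way to watch the changer loading and unloading virtual tapes:
tail -f /tmp/rclone.log /var/log/bacula/rclone-changer.log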