# Backups

## Backups from webapp

You can perform manual backups, and download, upload and restore either the full set of tables or individual tables of the system.

To do that, go to Administration and then, in the Config section, you can manage backups.

You can also enable an automated cron task, scheduled at any hour, to run the backup automatically.

By default the backups can be found on disk at /opt/isard-local/backups, or at the path set in the BACKUP_DIR environment variable.

## Backups from system

You can enable backups from the system (preferred in production). You should dump the databases, the config and also the disks (usually at least /opt/isard/templates; backing up the full /opt/isard path is recommended).

To ease the backup process we have added a new container: isard-backupninja.

This will create backups of the database and the disks, using Borg Backup and backupninja. It allows you to set up the backup behaviour in isardvdi.cfg.example like this:

# ------ Backups -------------------------------------------------------------

## Automated backups (https://0xacab.org/liberate/backupninja)
# If BACKUP_NFS_ENABLED is not enabled it will use this directory to create backups
# If BACKUP_NFS_ENABLED is enabled then this variable should be commented
#BACKUP_DIR=/opt/isard-local/backup

# If nfs enabled you need to set server and folder also
#BACKUP_NFS_ENABLED=false
#BACKUP_NFS_SERVER=172.16.0.10
#BACKUP_NFS_FOLDER=/remote/backupfolder

#BACKUP_DB_ENABLED=false
#BACKUP_DB_WHEN="everyday at 01"
#BACKUP_DB_PRUNE="--keep-weekly=8 --keep-monthly=12 --keep-within=14d --save-space"

#BACKUP_REDIS_ENABLED=false
#BACKUP_REDIS_WHEN="everyday at 01"
#BACKUP_REDIS_PRUNE="--keep-weekly=8 --keep-monthly=12 --keep-within=14d --save-space"

#BACKUP_STATS_ENABLED=false
#BACKUP_STATS_WHEN="everyday at 01"
#BACKUP_STATS_PRUNE="--keep-weekly=8 --keep-monthly=12 --keep-within=14d --save-space"

#BACKUP_CONFIG_ENABLED=false
#BACKUP_CONFIG_WHEN="everyday at 01"
#BACKUP_CONFIG_PRUNE="--keep-weekly=8 --keep-monthly=12 --keep-within=14d --save-space"

#BACKUP_DISKS_ENABLED=false
#BACKUP_DISKS_WHEN="everyday at 01"
#BACKUP_DISKS_PRUNE="--keep-weekly=4 --keep-monthly=3 --keep-within=7d --save-space"
#BACKUP_DISKS_TEMPLATES_ENABLED=false
#BACKUP_DISKS_GROUPS_ENABLED=false
#BACKUP_DISKS_MEDIA_ENABLED=false

The variables are self-explanatory. By default no backup is enabled and no isard-backupninja container is started. If you enable any of them (database, disks, ...) by uncommenting the variable and setting it to true, the container will be built (./build.sh) and will come up when you execute docker-compose up -d again.
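
For example, a minimal change to the config (assuming you only want daily database backups; the schedule and prune values below are illustrative) could be:

```
# isardvdi.cfg: enable only the database backup
BACKUP_DB_ENABLED=true
BACKUP_DB_WHEN="everyday at 01"
BACKUP_DB_PRUNE="--keep-weekly=8 --keep-monthly=12 --keep-within=14d --save-space"
```

Then rebuild and bring the stack up again so the isard-backupninja container is created and started:

```
./build.sh
docker-compose up -d
```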

WARNING: The BACKUP_DIR (or the default backup dir /opt/isard-local/backup) will hold new db and disks folders that are created on the first run. If these folders already exist, they must be empty the first time, as backupninja will initialize them with borg. If they are not empty, the backup will fail.
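
As a quick sanity check before the first run (a minimal sketch, assuming the default backup directory), you can verify that those folders are empty or do not exist yet:

```
# The db and disks folders must be absent or empty so borg can initialize them
ls -la /opt/isard-local/backup/db /opt/isard-local/backup/disks 2>/dev/null
```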

If you have an external NAS with an NFS4 export (note: only NFS v4 is supported) you can set the BACKUP_NFS_... vars and it will be mounted before executing the backup and unmounted at the end. You can check whether it works: the first time, the container will create the db, disks, etc. folders based on what you activated, and the container logs will show the successful NFS mount. The export at the server should allow the DOCKER_NET (172.31.255.X/24) and/or the isard-backupninja host (.88) to access the exported path. (Release 13.2.2)
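
For reference, the server-side export could look roughly like this (a sketch for a generic Linux NFS server; the path and options are hypothetical and the subnet is the DOCKER_NET mentioned above, so adapt them to your NAS):

```
# /etc/exports on the NFS server, allowing the whole Docker network
/remote/backupfolder  172.31.255.0/24(rw,sync,no_subtree_check)
```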

Note: with disk backups enabled, at least one of the disk folders (templates/groups/media) must also be enabled. See the example below.
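
For instance, to back up only the template disks you would enable both variables (illustrative values):

```
BACKUP_DISKS_ENABLED=true
BACKUP_DISKS_TEMPLATES_ENABLED=true
```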

The folders created, depending on which backups are enabled, will be:

4,0K    /opt/isard-local/backup/extract     # files will be output here when using extract option
952K    /opt/isard-local/backup/db          # rethinkdb-dump files from /opt/isard/database/rethinkdb
12M     /opt/isard-local/backup/redis       # redis dump files from /opt/isard/redis/
128K    /opt/isard-local/backup/disks       # disks backups from /opt/isard/{templates/groups/media}
98M     /opt/isard-local/backup/stats       # stats data from /opt/isard/stats
952K    /opt/isard-local/backup/config      # the rest of the folders in /opt/isard

We have also included a script that eases the process of restoring backups, listing archives, checking integrity, etc.:

  • List disks|db backups:

# docker exec -ti isard-backupninja run.sh list disks
2021-11-18T10:35:58
2021-11-18T10:37:05
2021-11-18T10:38:13

  • Show repository info:
# docker exec -ti isard-backupninja run.sh info disks
Warning: Attempting to access a previously unknown unencrypted repository!
Do you want to continue? [yN] yes (from BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK)
Repository ID: 1f195947728add72fd71a5d633f66fd2a17948bc4d474225045ea879274ce709
Location: /backup/disks
Encrypted: No
Cache: /root/.cache/borg/1f195947728add72fd71a5d633f66fd2a17948bc4d474225045ea879274ce709
Security dir: /root/.config/borg/security/1f195947728add72fd71a5d633f66fd2a17948bc4d474225045ea879274ce709
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
All archives:                2.20 GB              1.65 GB            330.91 MB

                       Unique chunks         Total chunks
Chunk index:                     169                  830

  • Check backup integrity:
# docker exec -ti isard-backupninja run.sh check-integrity disks 2021-11-18T10:55:58
opt/isard/templates
opt/isard/groups
opt/isard/groups/default
opt/isard/groups/default/default
opt/isard/groups/default/default/local
opt/isard/groups/default/default/local/admin-admin
opt/isard/groups/default/default/local/admin-admin/downloaded_zxspectrum.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_tetros.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_slax93.qcow2
  • Only list files:
# docker exec -ti isard-backupninja run.sh show-files disks 2021-11-18T10:55:58
opt/isard/templates
opt/isard/groups
opt/isard/groups/default
opt/isard/groups/default/default
opt/isard/groups/default/default/local
opt/isard/groups/default/default/local/admin-admin
opt/isard/groups/default/default/local/admin-admin/downloaded_zxspectrum.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_tetros.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_slax93.qcow2
  • Extract all files:
# docker exec -ti isard-backupninja run.sh extract disks 2021-11-18T10:55:58
opt/isard/templates
opt/isard/groups
opt/isard/groups/default
opt/isard/groups/default/default
opt/isard/groups/default/default/local
opt/isard/groups/default/default/local/admin-admin
opt/isard/groups/default/default/local/admin-admin/downloaded_zxspectrum.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_tetros.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_slax93.qcow2

# tree /opt/isard-local/backup/extract/
/opt/isard-local/backup/extract/
└── opt
    └── isard
        ├── groups
        │   └── default
        │       └── default
        │           └── local
        │               └── admin-admin
        │                   ├── downloaded_slax93.qcow2
        │                   ├── downloaded_tetros.qcow2
        │                   └── downloaded_zxspectrum.qcow2
        └── templates
  • Extract one file only:
# rm -rf /opt/isard-local/backup/extract/*

# docker exec -ti isard-backupninja run.sh extract disks 2021-11-18T10:55:58 opt/isard/groups/default/default/local/admin-admin/downloaded_slax93.qcow2
opt/isard/groups/default/default/local/admin-admin/downloaded_slax93.qcow2

# tree /opt/isard-local/backup/extract/
/opt/isard-local/backup/extract/
└── opt
    └── isard
        └── groups
            └── default
                └── default
                    └── local
                        └── admin-admin
                            └── downloaded_slax93.qcow2
  • Do a manual backup:
# docker exec -ti isard-backupninja run.sh execute-now disks
------------------------------------------------------------------------------
Archive name: 2021-11-19T09:33:39
Archive fingerprint: 3c0b9403e34830b2baca706bc54ad5e663b5d621f96087ce018fe47b0a77adb7
Time (start): Fri, 2021-11-19 09:33:39
Time (end):   Fri, 2021-11-19 09:33:42
Duration: 2.58 seconds
Number of files: 8
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:              440.85 MB            330.91 MB                510 B
All archives:                2.65 GB              1.99 GB            330.91 MB

                       Unique chunks         Total chunks
Chunk index:                     170                  996
------------------------------------------------------------------------------
  • Check NFS mount:
# docker exec -ti isard-backupninja run.sh check-nfs-mount
BACKUP NFS FOLDER MOUNTED:
192.168.0.200:/storage /backup nfs4 rw,relatime,vers=4.1,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.31.255.88,local_lock=none,addr=192.168.0.200 0 0
BACKUP NFS UnMOUNTED

## Break Lock

If a backup went wrong, you will see something like this in the container logs:

isard-backupninja  | Jul 30 01:49:49 Info: >>>> starting action /usr/local/etc/backup.d/92-disks-borg.borg (because of --now)
isard-backupninja  | Jul 30 01:49:49 Info: Repository was already initialized
isard-backupninja  | Jul 30 01:49:51 Error: Failed to create/acquire the lock /backup/disks/lock.exclusive (timeout).
isard-backupninja  | Jul 30 01:49:51 Fatal: Failed backing up source.
isard-backupninja  | Jul 30 01:49:51 Fatal: <<<< finished action /usr/local/etc/backup.d/92-disks-borg.borg: FAILED
isard-backupninja  | Jul 30 01:49:51 Info: FINISHED: 1 actions run. 1 fatal. 1 error. 0 warning.

you'll need to run break-lock on that backup folder:

docker exec -ti isard-backupninja borg break-lock /backup/disks
docker exec -ti isard-backupninja borg list /backup/disks

Warning: Attempting to access a previously unknown unencrypted repository!
Do you want to continue? [yN] y

2024-03-31T01:00:23                  Sun, 2024-03-31 01:00:23 [db1fd1ebf613d8701a98a98f42d2e9dbfed59ad647c195f03a3a8282186b946a]
2024-04-30T01:00:17                  Tue, 2024-04-30 01:00:18 [5d7b6dc13c4c7f56dc4f243448ee36afbf81e51ec793fe66f59e48ecb5f9aba2]
2024-05-31T01:00:18                  Fri, 2024-05-31 01:00:19 [cc0d01366c6753f6e40dc8a9a07a907f47f99829b8dac566d699a0377f6d6224]
2024-06-30T01:00:19                  Sun, 2024-06-30 01:00:19 [82c62f48761ca4a6d22d97f8ae4a64e63d2bc09c51347ef980bae90054f1c335]
2024-07-07T01:00:19                  Sun, 2024-07-07 01:00:19 [3ef207cb18718bd0833afef16c4f07e23a229235b26f1689d88a49682cc85675]
2024-07-14T01:00:18                  Sun, 2024-07-14 01:00:19 [773aa658c1e09f1d7ffb3358a3b9392122f3bd6fe889d3f75a0b78dc3dbaca6e]
2024-07-21T01:00:19                  Sun, 2024-07-21 01:00:19 [253380bf29e60d48ce49061fe06dcb5e61ad94e89e6bf566a645e6b16e03ddab]
2024-07-22T01:00:19                  Mon, 2024-07-22 01:00:20 [089f91294d6ab345aa6b5ebb9b629f8aed11e8acbb05594c64a428a51797a496]
2024-07-23T01:00:16                  Tue, 2024-07-23 01:00:17 [2c02f63226ea34a4118eb618f1876c194c17acf3a70ab725743cc997fecb7e6e]
2024-07-24T01:00:15                  Wed, 2024-07-24 01:00:15 [c47ee3559b1c4c0c306b28c6e4dab9c0334277a7d3b07346f3a85d3924e6602c]
2024-07-25T01:00:16                  Thu, 2024-07-25 01:00:16 [dd852d5275aa5177a4b12c1006723f1720daed942b05c8cab018f11c58f7c319]
2024-07-26T01:00:04                  Fri, 2024-07-26 01:00:04 [05ed4c26bec7fe420f2e9b817fc3a6e60374ea89fadf02c4da88163f07007c28]
2024-07-27T01:00:03                  Sat, 2024-07-27 01:00:03 [c4afe05324c7dee73edbbc2067e412eb0669a76f4266a781c0927d7ad8aa865a]
2024-07-28T01:00:03                  Sun, 2024-07-28 01:00:03 [eec87c6be908aa5bfd382bdfb4c3d9899e609b29b85ec161aae245cdabe01e69]
2024-07-29T01:00:03                  Mon, 2024-07-29 01:00:04 [e2a9911f65f236347ef3d8a342392cc30467ccff8503889b5806db3f7949d921]

## Infrastructure backups

Skip this section if you run isard-backupninja in your all-in-one infrastructure.

In a multi-node infrastructure we keep one internal network between nodes. The node where the backup is being done (ONLY IF IT'S NOT THE ALL-IN-ONE) needs access to the nodes where the database, redis and prometheus are running. So we've got this firewalld zone for this network:
```
isard-infra (active)
  target: default
  icmp-block-inversion: no
  interfaces:
  sources: 172.31.2.0/24
  services: high-availability
  ports: 443/tcp 4443/udp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
        port=2022:proto=tcp:toport=2022:toaddr=172.31.255.17
        port=3100:proto=tcp:toport=3100:toaddr=172.31.255.67
        port=9090:proto=tcp:toport=9090:toaddr=172.31.255.68
        port=6379:proto=tcp:toport=6379:toaddr=172.31.255.12
        port=28015:proto=tcp:toport=28015:toaddr=172.31.255.13
        port=5900-7899:proto=tcp:toport=5900-7899:toaddr=172.31.255.17
```

NOTE: Check that you've got this source blacklisted in the configs!

```
firewall-cmd --add-forward-port="port=28015:proto=tcp:toport=28015:toaddr=172.31.255.13" --zone=isard-infra --permanent
firewall-cmd --add-forward-port="port=28015:proto=tcp:toport=28015:toaddr=172.31.255.13" --zone=isard-infra
```

And in the backupninja config on the node where it is running, we have to set these variables correctly to reach the all-in-one server:

RETHINKDB_HOST=172.31.2.10
REDIS_HOST=172.31.2.10
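
A quick way to verify that the backup node can actually reach those services through the forwarded ports is a hypothetical netcat check (ports taken from the forward-ports list above):

```
# From the node running isard-backupninja
nc -zv 172.31.2.10 28015   # RethinkDB
nc -zv 172.31.2.10 6379    # Redis
```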
