Have you ever wondered what would happen if your Enhance Backup server failed?

It’s a scenario no one wants to face, but preparing for it is essential. At cPFence, we faced the same question and decided to take action. Today, we’re sharing with the community the script we developed to back up our own Enhance Backup servers. We hope you find it useful.

This script handles three tasks (a rough sketch of the underlying flow follows the list):

  1. Synchronizing the live state of your backups (backup-subvolume) to a secondary server.
  2. Sending all snapshots (snapshot-*) securely to the destination.
  3. Deleting outdated snapshots from the secondary server that no longer exist on the source.
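
For those curious how these steps fit together, here is a minimal, simplified sketch of that flow. It is an illustration only, not the actual script: the destination hostname and paths are placeholders, the live subvolume is synced with rsync purely as an example, and the real script handles locking, logging, and other edge cases omitted here.

  #!/bin/bash
  # Simplified illustration only -- not the cPFence script.
  # Assumes /backups is a btrfs filesystem on both servers and that
  # passwordless root SSH to the destination host is already configured.
  DEST="backup2.example.com"   # placeholder destination host
  SRC="/backups"
  DST="/backups"

  # 1. Sync the live state (backup-subvolume) to the secondary server.
  rsync -a --delete "$SRC/backup-subvolume/" "root@$DEST:$DST/backup-subvolume/"

  # 2. Send any snapshots the destination does not have yet.
  #    (A production script would typically use incremental sends,
  #    btrfs send -p <parent>, so shared data is not retransmitted each time.)
  for snap in "$SRC"/snapshot-*; do
      [ -d "$snap" ] || continue
      name=$(basename "$snap")
      if ! ssh "root@$DEST" test -d "$DST/$name"; then
          btrfs send "$snap" | ssh "root@$DEST" btrfs receive "$DST/"
      fi
  done

  # 3. Delete snapshots on the destination that no longer exist on the source.
  ssh "root@$DEST" "ls -d $DST/snapshot-* 2>/dev/null" | while read -r remote; do
      name=$(basename "$remote")
      if [ ! -d "$SRC/$name" ]; then
          ssh -n "root@$DEST" btrfs subvolume delete "$remote"
      fi
  done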

You can also use the script to restore your backups in case of a disaster with just two steps:

  1. Run the script on the backup server.
  2. Enable restore_mode=on to make all backups ready for immediate use (see the note after this list for what that step typically involves).
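
For background, snapshots written by btrfs receive arrive as read-only subvolumes, so making backups “ready for immediate use” generally means flipping them back to writable. The snippet below is only a guess at that step, not necessarily what restore_mode actually does internally:

  # Illustration only -- the script's restore_mode may do this differently.
  # Snapshots written by `btrfs receive` arrive read-only; clear the flag
  # so the backups can be used in place.
  for snap in /backups/snapshot-*; do
      btrfs property set -ts "$snap" ro false
  done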

📄 Get the script here: cPFence Backup Script

For more detailed instructions, visit our blog post: How to Backup Your Enhance Backups.

⚠ Disclaimer:
Please note that this guide is provided as-is for experienced server admins only, and we do not offer support for the backup process. Use it at your own risk, and don’t forget to test it in a non-production environment first!

We’d love to hear your feedback or any suggestions for improvement. Give it a try and let us know how it works for you.

    @cPFence Thank you. Excellent. We just replaced our much less robust "attempt" [a.k.a. near failure] with this one, and your script is working flawlessly so far.

    One scenario that your script has greatly improved is when we have dedicated server customers who require their backups to be located on a separate drive array within their server(s)... For unstated reasons, they will not permit backups from their server to our cluster-wide Enhance backup servers. We now have a very easy offsite backup to a dedicated VM for them thanks to your script. Your script also helped me understand what was wrong with our BTRFS knowledge.

    Great work, many thanks for helping us sleep better during the holidays!!!

      8Dweb

      You’re very welcome. Glad we could help! And yes, BTRFS can be confusing at first, but once you get the hang of it, it’s not that complicated and is incredibly efficient and powerful.

      Should the file sizes be consistent on the source/destination? I decided to try this today and it's transferred 3x the data that's in the /backup drive on the source server. This is prior to even being halfway completed:

      Source:
      Data, single: total=57.01GiB, used=53.62GiB
      System, DUP: total=8.00MiB, used=16.00KiB
      Metadata, DUP: total=5.00GiB, used=4.73GiB
      GlobalReserve, single: total=158.73MiB, used=0.00B

      Destination:
      Data, single: total=165.01GiB, used=162.53GiB
      System, DUP: total=8.00MiB, used=48.00KiB
      Metadata, DUP: total=4.00GiB, used=3.02GiB
      GlobalReserve, single: total=310.58MiB, used=0.00B

        rrgreene

        Did you make any changes to the script? The backup sizes should be nearly identical between source and destination. The only scenario where the size would balloon significantly (10–20 times larger) is if the destination server (/backups) is not using BTRFS.
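
        If you haven’t already, a quick way to rule that out is to check the filesystem type of /backups on the destination, for example:

        # Run on the destination server; this should print "btrfs"
        findmnt -no FSTYPE /backups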

        I changed:

        SOURCE_HOSTNAME=$(hostname) # Gets the hostname of the source server
        DEST_BACKUPS="/backups/$SOURCE_HOSTNAME" # Path to backups on the destination server - Store backups in a separate folder for each source backup server

          I tried the script as is. It's still running, but the destination is already double the size of the source.

          Source:
          Data, single: total=57.01GiB, used=53.63GiB
          System, DUP: total=8.00MiB, used=16.00KiB
          Metadata, DUP: total=5.00GiB, used=4.82GiB
          GlobalReserve, single: total=158.73MiB, used=0.00B

          /dev/sda4 btrfs 759942144 66511820 693198276 9% /backups

          Destination:
          Data, single: total=102.01GiB, used=99.34GiB
          System, DUP: total=8.00MiB, used=16.00KiB
          Metadata, DUP: total=1.00GiB, used=708.45MiB
          GlobalReserve, single: total=150.95MiB, used=0.00B

          /dev/sdb btrfs 2147483648 108331360 2039047936 6% /backups

            rrgreene

            Sorry, but I can’t replicate your issue. The size difference you’re seeing isn’t expected, and I’m not sure what’s going wrong in your setup.

            To help troubleshoot, try this modified version of the script, which allows you to back up only specific sites instead of everything. This might help narrow down where the issue is occurring:

            https://gist.github.com/cPFence/56c523f81f2ae7cd5dad3a31e7146815

            Just add your target site IDs in the TARGET_SITES variable to back up only those specific sites. Once the backup is completed, compare the size of the test site on both servers using:

            btrfs filesystem du -s /backups/YOUR_SITE_ID/

            Example output from our test servers below.

            The source server:

            root@testsrv3:~# btrfs filesystem du -s /backups/4dc46ca7-db66-40cb-8d71-f593bc8c0ce3/
                 Total   Exclusive  Set shared  Filename
               8.15GiB    66.48MiB   292.62MiB  /backups/4dc46ca7-db66-40cb-8d71-f593bc8c0ce3/

            The destination server:

            root@testsrv4:~# btrfs filesystem du -s /backups/4dc46ca7-db66-40cb-8d71-f593bc8c0ce3/
                 Total   Exclusive  Set shared  Filename
               8.29GiB     8.29GiB       0.00B  /backups/4dc46ca7-db66-40cb-8d71-f593bc8c0ce3/

            As you can see, there’s a small difference in total size, but that’s expected. If your backup size is way bigger, something is going wrong in your process, though I'm not sure what. Try this test and see what you get.
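
            For reference, the only change needed in that gist is the site list; it might look something like the line below (the exact format the script expects may differ, so check the gist, and the ID here is just the one from our test server):

            # Hypothetical example -- verify the exact format against the gist.
            TARGET_SITES="4dc46ca7-db66-40cb-8d71-f593bc8c0ce3"   # site ID(s) to back up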
