slimx Hi, actually mine is also on the new method. All Ext4. Issue persists.

Probably your metadata file or 'current' symlink has issues. Did you try running:

v12-upgrade fix-backup-permissions

It now fixes these issues automatically. If you’ve already done this and the issue persists, open a ticket with the Enhance team and let them know about the issue. Also, make sure you are on the latest version for all servers before running the command above.

nqnoc

You no longer need this, Enhance has made it plug and play. Just add the backup role, and you’re all set, no need to manually create the directory.

You can now even add the backup role to multiple servers in your cluster easily. Some may even skip a dedicated backup server and use DNS/DB servers for backups, though that’s not recommended.

nqnoc I've heard a lot on these forums that overall Enhance RAM usage is lower in v12.
For now I just need to back up 16 websites, each around 5GB, so 80GB total, with 2 backups per day (every 12h). I have an unused storage VPS with 1 CPU + 1.5GB RAM + 2TB.

Do you think I can use this VPS? Or will 1.5GB RAM not be enough?
Thanks!

Just grab a 1TB storage VPS / 2GB RAM from alphavps.com for €4, it’ll serve your needs well. Don’t worry if the first backup run spikes your VPS load—it’s normal. Subsequent backups won’t cause this since they’re incremental.

nqnoc Do you think I can use this VPS? Or will 1.5GB RAM not be enough?
Thanks!

I'm pretty sure it will be totally fine. The base system doesn't need 2GB, the backup role uses next to nothing, it literally only needs to receive ssh/rsync commands and write stuff to a disk.

If you add a 1.5GB swapfile just in case, you'll have a great backup server.
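
Creating one is the standard Linux recipe (nothing Enhance-specific; the /swapfile path is just a common convention):

```shell
# Create and enable a 1.5GB swapfile (run as root).
fallocate -l 1536M /swapfile
chmod 600 /swapfile              # swap must not be readable by other users
mkswap /swapfile                 # format the file as swap space
swapon /swapfile                 # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```

Running `free -h` afterwards should show the extra 1.5GB under "Swap".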

The reported lower RAM usage is for the web servers. But the funny thing is, a year ago it was explained that the Docker overhead could only be minimal, like 150MB RAM per website worst case. Now suddenly everyone is reporting much lower RAM usage. Anyway, it doesn't apply to the backup role.

slimx Hi, actually mine is also on the new method. All Ext4. Issue persists.

I can confirm that the fix coming for this solves the de-duplication issue. Let's hope it will be released today 🙂

It will work without issue on separate block devices with BTRFS, as expected, so people migrating from the V11 system should be safe without overhauling their backup servers.

@cPFence's tool also shows full de-duplication on the sites that had the fix applied.

Something I have noticed with website restore: it does not replace everything I have; instead it only pulls back what was changed, and it still keeps the changed file. So I end up with two copies, the one that had changed and the one that was restored.

E.g., if I rename public_html to something like public_html_1 and then do a website restore, I end up with two folders: public_html_1 (kept) and the restored public_html.

I'd have thought a restore should replace everything I have now with what was backed up, instead of leaving two copies, the correct one and the wrong one... Not sure if anyone else has seen this.
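
This matches rsync's default behaviour without `--delete` (I can't confirm Enhance's restore is rsync-based, so treat this as an analogy): files present at the destination but absent from the source are left alone. A quick local demo:

```shell
# Without --delete, rsync leaves extra destination files in place.
mkdir -p /tmp/restore-demo/backup /tmp/restore-demo/site
echo hello > /tmp/restore-demo/backup/public_html   # file in the "backup"
echo stale > /tmp/restore-demo/site/public_html_1   # renamed file on the "live site"

rsync -a /tmp/restore-demo/backup/ /tmp/restore-demo/site/
ls /tmp/restore-demo/site     # both public_html and public_html_1 remain

rsync -a --delete /tmp/restore-demo/backup/ /tmp/restore-demo/site/
ls /tmp/restore-demo/site     # only public_html remains
rm -rf /tmp/restore-demo
```

(Plain files are used for simplicity; with `-a`, directories behave the same way.)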

    wenani This is normal behaviour... if anything, preferred behaviour.

    wenani

    Yes, we observed the same issue but got busy and didn’t have time to report it to Enhance. V11 didn’t behave like this.

    It should be an easy fix, please open a ticket and report it to Adam.

    I noticed that S3 backups are not being created according to the scheduled time. I have set them for nighttime, but instead they are being executed between 12 and 1 PM during the day. Does anyone else have a similar issue? Of course, a ticket has been submitted to Adam.

      DracoBlue For me, none of the backups follow the scheduled time. They follow the "Minimum backup age", and by changing that I shifted the backup hours, but it's just a bad workaround.

      I have an open ticket for this issue (and another one, always related to backups) but after 11 hours I still have no answer.

        DracoBlue

        Vendoz

        Working fine on our servers. Check the following:

        1. Ensure the timezone is consistent across all your servers. Sometimes a reboot may be necessary to ensure that the services recognize the new settings. We always perform a reboot after changing the timezone.
        2. If you want one backup every 24 hours, set minimum to 23 hours and maximum to 24 hours.
          • For a backup every 12 hours, set minimum to 11 and maximum to 12, and so on.
        3. On your backup server, run this to find the time of the last backup for each site:
           for site in /backups/*/; do
               latest_snapshot=$(ls -d "$site"snapshot-* 2>/dev/null | sort -V | tail -n 1)
               if [[ -n "$latest_snapshot" ]]; then
                   timestamp=${latest_snapshot##*-}   # millisecond timestamp after the last '-'
                   timestamp=$((timestamp / 1000))    # convert milliseconds to seconds
                   site=${site%/}                     # drop the trailing slash
                   echo "$(date -d @$timestamp '+%Y-%m-%d %H:%M:%S') --> ${site##*/}"
               fi
           done | sort
        4. Now that you know when the backups were actually taken last time, let’s say backups should start at 4 AM (assuming this aligns with both the last backup time and the minimum time setting).

          • Make sure the "Allowed Backup Hours" setting allows backups to start from 4 AM to 6 AM, assuming backups take about 2 hours to complete for your setup.
        5. If backups don’t start at the expected time, for some odd reason, we’ve noticed that manually triggering a backup for any website kickstarts all backups. Not sure why.

          • This only needs to be done once, after that, everything runs like clockwork.

        This is the approach we have taken, and currently we don’t see any bugs in the backup system, except for the restore function, which currently restores old files without deleting existing ones first, causing issues.
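
        To double-check point 1, here is a quick sketch for comparing timezones across a cluster (the hostnames are placeholders; it assumes SSH key access to each server, with `date +%Z` as a fallback where `timedatectl` is unavailable):

        ```shell
        # Compare the timezone reported by each server in the cluster.
        # Replace web1/db1/backup1 with your own hostnames.
        for host in web1 db1 backup1; do
            printf '%s: ' "$host"
            ssh "$host" 'timedatectl show -p Timezone --value 2>/dev/null || date +%Z'
        done
        ```

        If any line differs, fix that server's timezone (and reboot, as noted above) before chasing backup-schedule issues.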

          12.0.24
          28th February 2025

          Added:
          New ecp CLI tool to synchronise http enabled status across all servers.

          This should fix the de-duplication issue. The release note makes it seem like no action needs to be taken by the admin, but I can't confirm this yet.

          cPFence Did you test if the btrfs backup system is working smoothly since the new patch?

            pratik_asabe

            No, we’re done with BTRFS—all our clusters are now running on EXT4 only. It was a no-brainer for us since the new backup system was designed to actually move away from BTRFS complications. Plus, the Enhance team will likely focus most of their future features and testing on EXT4.

              cPFence So, I guess one way or another we have to switch to ext4 as well! We have a few sites, so I think we will start fresh with ext4 next week.. excited to upgrade to v12!!

              I hope the new backup system on ext4 is stable now?!

                pratik_asabe Working well for us, except that it does not strictly follow the set backup schedule. Sometimes backups are taken earlier or later than the specified backup time. Also, the restore function currently restores old files without deleting existing ones first. Other than that, all is well.

                  wenani Thanks for the heads up, I'm sure you already notified Adam about this issue so it will get fixed in the coming days..

                    pratik_asabe I hope the new backup system on ext4 is stable now?!

                    Apart from the restore function not removing old files first (already reported), everything is working like a charm, efficient, fast, and stable.

                    As for backup size, we’ve only been running it for four days, but the increase in size has been minimal. We’re hoping to see a small drop in total size once we reach 30 days.

                    wenani Working well for us, except that it does not strictly follow the set backup schedule. Sometimes backups are taken earlier or later than the specified backup time.

                    Try this fix:
                    Enhance Backups Running at Random Times? Here’s the Fix

                    Keep in mind that while you may initially start your backups at a set time, e.g. each day at 9am, as you add more websites to your cluster, other factors, such as the number of concurrent backups you choose and even network speed, may cause your backup timing to creep. Enhance backups are very smart in that they are not designed to run at the same time each day, but rather to hit the targets you set (minimum backup age, maximum backup age, etc.).

                    For us, we are not so worried about when our backups run, just that they stick to the minimum and maximum backup age. We don't see any load on our backup server when backups run now. All our backups hit their targets, but the times they run are not the same each day... that should, in my opinion, not be something you give any thought to. Ideally, I'd prefer to see backups running all through the day (at random times) with no particular noticeable impact on our cluster.

                    (Yes, there is a setting in Enhance to only allow backups within a given window. I don't recommend this be something you worry about in a production cluster. Loads and traffic vary for many reasons, and you will actually increase loads/traffic if you force your backups to only run during a particular time window. If you think you need to do this, perhaps you don't have enough capacity on your network/cluster.)

                    To sum up:
                    Backups running at random times is not a failure or a bug, and it doesn't need to be fixed. It's a very clever feature in Enhance. Even if you do try to "fix" it, you'll just be back (probably when you have a decent number of websites) to "random times" eventually anyway.
