What provider are you using for your S3-compatible storage?

    Darren Vultr/Google Cloud. Vultr is the main issue right now: transfer from the VM to S3 is extortionate. Storage is very cheap once the backup is on S3, and ingress on the S3 side is very cheap too. It's just the bandwidth transfer from the VM to S3 that hurts.

    A lot of these cloud services can have hidden costs like this attached, so it's always worth factoring that in. Do Vultr have their own cloud storage you could utilise? They may offer internal bandwidth within the same region for free. Something worth considering.

    Realistically, for a lot of very large backups you're going to be better off with bare-metal storage or a block device, so that you can use the btrfs snapshot backups, which are differential.
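
    For what it's worth, here is a rough illustration of the kind of differential transfer btrfs makes possible. The btrfs and ssh commands are standard tooling, but the paths and the backup-host name are my own placeholders, and this is not a description of Enhance's actual mechanism:

    ```python
    # Sketch: ship only the delta between two btrfs snapshots to a backup host.
    # Paths and host are illustrative placeholders.
    import subprocess

    # Take a new read-only snapshot of the data subvolume.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r",
                    "/data", "/snapshots/today"], check=True)

    # "btrfs send -p parent child" emits only the blocks that changed
    # between the two snapshots, which is why these backups stay small.
    send = subprocess.Popen(["btrfs", "send", "-p", "/snapshots/yesterday",
                             "/snapshots/today"], stdout=subprocess.PIPE)
    subprocess.run(["ssh", "backup-host", "btrfs", "receive", "/backups"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()
    ```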

    Unlike Enhance backups, S3 backups are not incremental. They will consume a lot more storage/bandwidth, as an entire copy of the website is transmitted and stored every time the backups run.

    You can mitigate this by reducing the backup frequency: increase the minimum backup age under settings->service->backups.

    Vultr has some of the worst bandwidth policies, I think because they use 10G ports. A lot of other providers have much more user-friendly policies. At Linode, for example, you get to pool bandwidth from all your machines into one big bucket, which is useful since some machines consume more than others and it helps balance your monthly total. There are also providers with unlimited/unmetered bandwidth that could be worth checking, like OVHCloud.

    I'd note, for people wanting Enhance to improve the S3 backups with something like incremental support: JetBackup has incremental backups on S3, and do you know what it does? It runs up insane costs from all the operations executed on S3's side as part of the incremental/checking mechanism... If you think bandwidth costs are high, you've not seen anything yet lol. I'd hope Enhance could figure out an approach to incremental S3 backups that uses compute on our side to avoid those costs at S3.
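
    To make that concrete, here is a back-of-the-envelope model of where the request charges come from. The object count and per-request price are my assumptions based on typical published S3 rates, not JetBackup's actual behaviour:

    ```python
    # Hypothetical request-cost model for per-object change checks on S3.
    # All figures are assumptions; real pricing varies by provider.
    objects = 5_000_000          # chunks/objects in the backup set (assumed)
    runs_per_month = 30
    head_price_per_1k = 0.0004   # $ per 1,000 GET/HEAD-class requests (assumed)

    # If every run issues one HEAD per object to decide whether it changed:
    requests = objects * runs_per_month
    cost = requests / 1000 * head_price_per_1k
    print(f"{requests:,} requests -> ${cost:,.2f}/month")  # 150,000,000 -> $60.00
    ```

    Note that this cost scales with object count and run frequency, not with how much data actually changed.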

      twest Yeah, Vultr has pooled bandwidth too. Still, it's eaten through my entire 15TB allowance in roughly 6 days.

      twest I'd note, for people wanting Enhance to improve the S3 backups with something like incremental support: JetBackup has incremental backups on S3, and do you know what it does? It runs up insane costs from all the operations executed on S3's side as part of the incremental/checking mechanism... If you think bandwidth costs are high, you've not seen anything yet lol. I'd hope Enhance could figure out an approach to incremental S3 backups that uses compute on our side to avoid those costs at S3.

      As far as I know, to achieve incremental backups with S3, we would need to store all of the backup data locally and then send only the incremental changes to the S3 bucket. It's definitely possible, but most customers using S3 are doing so because they are running a small or single-server cluster and don't want to expend server resources to store backups.

        Adam Isn't it possible to create a full or incremental backup locally and, after it has been successfully sent to S3, have it automatically deleted?

        There has to be some data to compare the changes against, so either we have to store some kind of index in the S3 bucket or maintain a local copy of the files. It's very complex but not impossible.
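
        As a purely illustrative sketch of the index approach (the bucket name, manifest location, and boto3 usage are my assumptions, not Enhance's implementation):

        ```python
        # Manifest-based incremental upload: keep a local index of file hashes
        # and only PUT objects whose content changed since the last run.
        import hashlib
        import json
        from pathlib import Path

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "my-backup-bucket"       # placeholder bucket
        MANIFEST = Path("manifest.json")  # local index from the previous run

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def incremental_backup(root: Path) -> None:
            old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
            new = {}
            for path in root.rglob("*"):
                if not path.is_file():
                    continue
                key = str(path.relative_to(root))
                new[key] = sha256(path)
                # Unchanged files cost neither bandwidth nor PUT requests.
                if old.get(key) != new[key]:
                    s3.upload_file(str(path), BUCKET, key)
            MANIFEST.write_text(json.dumps(new))  # index for the next run

        incremental_backup(Path("/var/www/site"))
        ```

        The trade-off described above is visible here: the manifest has to live somewhere, either on the app server or in the bucket itself.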

        If you have large data volumes then I recommend using the built-in Enhance backups. They are incremental by design, consume no space on the app server and are very fast to back up and restore.

        It is impractical to use local backups as an intermediary step, as it would require the server to always keep a safe amount of free space available. It is a waste of storage, in my opinion; storage that could and should be sold to generate income.

        I know some people chase S3 or similar cloud storage providers as a backup option, but in my opinion, they do not have a complete picture of what they are requesting and are not comparing apples with apples.

        There is no better way, at the moment, to back up (considering savings on bandwidth + storage + restore options) than the backup server provided by Enhance.

        Buying a storage VM is cheaper than using the same amount of storage with any cloud provider. Also, most of them come with unlimited bandwidth, or if not, with more than enough.

        Looking to S3 or similar options for backups with Enhance is like looking for trouble 🙂 What a waste of time.

          Isaia-Arknet_PTY_LTD I can only second this. We have a cheap auction server from Hetzner with two 10TB disks in it. We back up daily, keep 30 days of backups, and it uses less than 2TB for 585 websites running on Enhance. I would never use S3 when the Enhance backup solution is rock solid.

            What I'd really like to see is a native way to replicate the backup servers.
            The backup server sending incrementals to S3 would be ideal.

            You guys just have 1 backup of your client sites?

            Oh and SFTP support would be great too.

              hwcltjn I actually really like Isaia's idea. Using S3 seems like a huge waste of bandwidth and money when we can just run a second backup server in another data center; then you have two Enhance backups running, both with space- and bandwidth-saving incremental backups... Perfect solution! No more huge S3 costs, and I bet backup runs will be super fast! My cPanel backups with JetBackup, the equivalent of the Enhance S3 backup, usually take about 3-4 hours for 700GB. I haven't gotten to the point of testing Enhance backups yet, but this is an exciting development 🙂

                It's odd that I don't think I've heard anyone talk about backups to Google Drive (maybe I've missed it). That's the best backup-related feature of Plesk (and it's incremental, of course). I back up to both my Plesk server and Google Drive seamlessly. Zero issues.

                https://imageupload.io/UQxzlDpNw2t1pXj

                  Restic does it well with deduplication and compression; hopefully Enhance has explored it already.
                  Why not rustic?
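
                  For reference, restic's incremental, deduplicated backups to an S3 bucket take only a couple of commands. This sketch drives its standard CLI from Python; the bucket, path, and credentials are placeholders, and this is plain restic, not anything Enhance ships:

                  ```python
                  # Sketch: restic repository on S3, initialised once, then
                  # backed up incrementally. Credentials/bucket are placeholders.
                  import os
                  import subprocess

                  env = {
                      **os.environ,
                      "AWS_ACCESS_KEY_ID": "PLACEHOLDER",
                      "AWS_SECRET_ACCESS_KEY": "PLACEHOLDER",
                      "RESTIC_PASSWORD": "PLACEHOLDER",  # encrypts the repository
                  }
                  repo = "s3:s3.amazonaws.com/my-backup-bucket"

                  # One-time repository initialisation.
                  subprocess.run(["restic", "-r", repo, "init"], env=env, check=True)

                  # Each later run uploads only new or changed chunks (deduplicated).
                  subprocess.run(["restic", "-r", repo, "backup", "/var/www"],
                                 env=env, check=True)
                  ```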

                  I don't get the hate for S3; there are many very cheap S3 providers, and if backups had been implemented correctly (incrementally) there would be no complaints.

                  S3 without incremental support is not what the real feature request was, and I bet nobody will use it in the long run. I mean, you shouldn't.

                  standtech Backups are not redundancy for your content; rather, they are a copy of the data stored in a different place (preferably a different DC).

                  You can also create a backup of the backup, and so on, for additional redundancy, but I think it is a waste (this is definitely my own opinion only).

                  The "backup" word (in itself) is a way of restoring original content. It implies that the data is already sitting on the Original server and the backup server (if one of the 2 fails, you fix it and continue backing up)

                  S3 and other cloud providers offer some type of RAID, with data cloned in probably multiple places, but do not think it is bulletproof. They fail too, and sometimes the data sitting there can also get corrupted.

                  I am not promoting anything or going against S3 or other providers, but I do consider it a waste of time to develop tools for backing up data when we all know Enhance's S3 backups are very far behind JetBackup's backups to the same cloud providers.

                  You should look at how each of them works; then you would probably be able to assess and figure out how far apart they are 🙂

                  Read this: https://docs.jetbackup.com/manual/whm/Destinations/destinationsOverview.html
                  Enhance to S3 is not the same as JetBackup to S3.

                  I do not want to make this post too long. Still, in simple words: a backup that is compressed and archived is a waste of bandwidth and storage, as it requires local storage on the production server for the archive to be created and compressed before being sent to S3 or another cloud. If the file created is 100 GB, then every night you will be moving 100 GB to S3 with the Enhance backup (by now you should have figured out how bad this is!).

                  The missing term here is "incremental". If that were possible and implemented, then S3 or any other destination would make perfect sense for achieving your desired backup redundancy.
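
                  To put numbers on it (the 100 GB figure comes from the example above; the daily change rate is my assumption):

                  ```python
                  # Full-archive vs incremental monthly transfer, back-of-the-envelope.
                  site_gb = 100     # archive size from the example above
                  days = 30
                  churn = 0.02      # assumed fraction of data changing per day

                  full_monthly = site_gb * days                          # full copy nightly
                  incr_monthly = site_gb + site_gb * churn * (days - 1)  # full + deltas

                  print(f"full archives: {full_monthly} GB/month")      # 3000 GB/month
                  print(f"incremental:   {incr_monthly:.0f} GB/month")  # 158 GB/month
                  ```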
