More backup settings for each hosting package, individually determined [LOGGED]
prasad0889 Thanks prasad. I forgot they had this offer.
I already use them for my DNS, so not sure I will give them my backups.
Also seeing the whole world using CF makes me a little worried sometimes in case of an outage.
Adrien Cloudflare outages are very rare, maybe once every year or two, and usually very short, around 5-10 minutes.
Other alternatives to S3 are Backblaze and Wasabi.
It's worth looking at all the options.
josedieguez Yeah BB is excellent.
Wasabi can be a great option since egress will be minimal; only occasionally will someone want a restore.
Nothing compares, price-wise, with a storage server from Hetzner, OVH, or other dedicated server providers, where you can have 24 or 48 TB or more for backups.
Yes, hardware might fail once every 10 years, but I guess it is just backup, not production, so you bring another backup server up, and you start backing up again.
The chance of production and backup failing at the same time is probably so small that you can call it 0. The fact that Enhance offers a solution that is cost-free, included in the package, and tested and maintained is superb.
Dedicated servers = your storage, your traffic, and no limits on how much data you restore. Not sure what more anyone could ask for!
I am not sure why you would need to save backups anywhere else, unless it is cheaper (and there is nothing cheaper than a dedicated storage server, unless it is oversold).
Isaia-Arknet_PTY_LTD In my experience, dedicated servers with HDDs receiving backups from more than 2-3 servers are just plain slow.
But again, that's my experience.
josedieguez We are still using the same dedicated servers as 5 years ago; they are actually our old shared hosting servers, which we now use as backup servers.
Isaia-Arknet_PTY_LTD Thanks a lot for sharing! I'm going to follow your advice but I have a question:
Would an NVMe SSD bare-metal server at Hetzner do a better job than their Storage Box?
There is no winner in the HDD vs SSD vs NVMe debate here, because the target for backups is lots of cheap storage.
If you have 2-3 servers to back up and the HDD cannot keep up, imagine what that does to your production hosting server (you kill it while backing up), and that cannot be good.
In reality, I do not think disk speed matters much, as the limiting factor will be the network!
Are there plans to add this feature to the roadmap?
Adrien We hooked up an auction server with a lot of disk space, and just like Isaia-Arknet_PTY_LTD says, there is no win in SSD/NVMe. We also host it in Helsinki instead of Germany, so it is physically in another datacentre.
As others have said, we also have large 25-50TB storage servers running RAID10. As mentioned to @Adam in the past, one of the elements that causes unnecessary slowness is how Enhance invokes rsync (it does not fully utilise the available bandwidth).
I think they expected everyone to have backups locally on the same server, which I thought was silly as I can't imagine anyone in the industry actually doing that.
No need for NVMe or SSD; distribute backups accordingly. If you're smaller, aim for 4x 10TB rather than 1x 40TB, and spread the load.
josedieguez In my experience, dedicated servers with HDDs receiving backups from more than 2-3 servers are just plain slow.
I am curious: how did you test this? I am not trying to be smart; I am interested in getting smarter.
Maybe it was rsync between servers in the same DC, and you were backing up 3 SSD or NVMe servers to a target with HDDs, where the HDD write speed was capping below the network speed. But even then, that is a good thing: the transfer goes slower and does not kill the SSD/NVMe (production) servers.
I do not see an issue with a backup server maxing out its write speed for the duration of the backup. But if you back up 3 servers at the same time, you probably got the schedule wrong.
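The scheduling point above can be sketched as staggered cron entries, one per web server, so the backup target only ever receives from one source at a time. Hostnames, times, and the script path are all hypothetical:

```shell
# Hypothetical staggered backup windows (one crontab entry per web server,
# or one line each in a central scheduler). The goal is that the HDD-based
# backup server never receives from more than one web server at once.
# web1 pushes at 01:00
0 1 * * * /usr/local/bin/run-backup.sh
# web2 pushes at 03:00
0 3 * * * /usr/local/bin/run-backup.sh
# web3 pushes at 05:00
0 5 * * * /usr/local/bin/run-backup.sh
```

The gaps need to be wide enough for the slowest window to finish; if a backup ever runs past its slot, the windows overlap and you are back to the "3 servers at once" problem.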
Regarding Enhance backups and backup servers: it is software, and maybe it is not very efficient at this point in time (I am only guessing; it works for me), but it can be optimised in the future. It could be made smarter, for example blocking overlapping backups, doing per-server backups, or running other automation in the background to achieve the best results. There is no limit to how smart it can get if the Enhance team has the time to work on the backup process, and I believe they will do a good job with it in the future.
On another note: S3, faster than a dedicated server? ...OK.
digitalexpanse I think they expected everyone to have backups locally on the same server, which I thought was silly as I can't imagine anyone in the industry actually doing that.
https://enhance.com/docs/backup-role/
TIP
We recommend installing the backup role on a completely separate server to your web servers and hosting it in a different data centre.
I think they did OK, according to the documentation!
rsync between servers in different DCs (I wouldn't keep backups in the same DC); I tried some configs but never had a good enough experience. Maybe I just didn't find the right config for this.
I just prefer faster backups, without the server's write I/O sitting high or maxed out while clients might also be performing on-demand backups or restores. So right now, each server that runs a backup uses a good amount of its port speed but finishes within 2-3 hours, which is pretty fast for the amount of data.
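Capping the transfer so production I/O is not saturated can be sketched with rsync's `--bwlimit` option. The host, paths, and limit below are illustrative only; on a 1 Gbit/s port (roughly 125 MB/s), a limit around 80 MB/s leaves headroom for on-demand restores:

```shell
# Throttled push to a remote backup server; hostname and paths are examples.
# --bwlimit is in KiB per second: 80000 ~= 80 MB/s, leaving headroom
# on a 1 Gbit port for client-triggered backups and restores.
rsync -a --delete --bwlimit=80000 \
  /var/backups/ backup@backup1.example.com:/srv/backups/web1/
```

`-a` preserves permissions and timestamps, and `--delete` keeps the remote copy in sync with what still exists locally; drop `--delete` if you want removed files to survive on the backup side.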
I'd very much like to see per package retention settings.
Per-package is very much required, and if possible, can we also implement real-time backup as an option?
Has this been roadmapped?