Adam is aware of some issues I have had with servers. I think a reboot happened while a backup was being taken (which can happen!), which left the backup malformed and caused a retry 'loop': the system kept re-trying the backup until my backup volume became full. There are multiple entries in ECP for the website (it's 10GB).
Because this volume became full and there's currently no option to delete a backup, I naturally just deleted /backups/<ID>, but then the backup system would no longer create a backup for that website at all. I recreated the /backups/<ID> folder and the same thing happened. It wasn't until I also added a /backups/<ID>/backup-subvolume folder (and chmod'ed both folders) that backups for this website started to work again.
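For anyone hitting the same thing, the workaround was roughly the following. This is a hedged sketch: `BACKUP_ROOT` and `SITE_ID` are placeholders (on the real server the root is /backups and the ID is the website's actual ID), and the permission mode is an assumption, so match whatever the other site folders on your server use.

```shell
# Placeholders so this is safe to run anywhere; on the server, use the real values.
BACKUP_ROOT="./backups-demo"   # real path: /backups
SITE_ID="example-id"           # real value: the website's <ID>

# Recreate the folder layout the backup system seems to expect.
mkdir -p "$BACKUP_ROOT/$SITE_ID/backup-subvolume"

# Assumed mode: copy the permissions from a site whose backups still work.
chmod 755 "$BACKUP_ROOT/$SITE_ID" "$BACKUP_ROOT/$SITE_ID/backup-subvolume"
```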
I think the backups functionality just needs a bit more attention. In an ideal world, a backup should happen no matter what (disk space permitting): if an incremental check fails, or the expected folders don't exist, the system should remove the stale files/folders and start again from scratch so a backup CAN happen.
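To make the suggestion concrete, the "reset and retry" behaviour I have in mind would look something like this. Everything here is hypothetical: `run_backup` is a stand-in for whatever the panel's real backup command is, and the paths are local placeholders rather than the real /backups layout.

```shell
# Local placeholders; on the server this would be /backups/<ID>.
BACKUP_ROOT="./backup-reset-demo"
SITE_ID="example-id"
SITE_DIR="$BACKUP_ROOT/$SITE_ID"

run_backup() {
  # Stand-in for the real backup command: fail if the expected
  # layout is missing, which mirrors the behaviour I saw.
  [ -d "$SITE_DIR/backup-subvolume" ] || return 1
  touch "$SITE_DIR/backup-subvolume/snapshot"
}

if ! run_backup; then
  # Incremental check failed or folders were missing:
  # wipe the site's backup folder and start from scratch.
  echo "backup failed; resetting $SITE_DIR and retrying" >&2
  rm -rf "$SITE_DIR"
  mkdir -p "$SITE_DIR/backup-subvolume"
  run_backup
fi
```

With logic like this, a malformed or missing backup folder would cost one full (non-incremental) backup instead of silently stopping backups for the site.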
For me, once the S3-compatible storage feature is released, that will be my go-to, since the space is effectively unlimited and cheaper.
Happy to provide any details if needed!