If I end up running Enhance seriously, it will be managing backups for terabytes of data, and the backups will need to run pretty much continuously all day to hit the tri-hourly schedule I'm hoping for, which minimizes data loss in an emergency.
With so many servers running backups all the time, plus the need for extra CPU in an emergency (wanting as many threads as possible dedicated to the redeploy), I've rolled out a beefy server with 28 cores / 56 threads, 128GB of RAM, 4TB of enterprise NVMe, and a 10Gbps port. It's in a different datacenter from my main cluster as an added layer of protection (if the main DC has issues, there's still the option of redeploying at the other DC). The big port allows faster up/down speeds, which can save considerable time in many disaster scenarios. In a worst case we could be looking at 10TB of sites/data to restore, so optimizing as many bottlenecks as possible is critical to minimizing recovery time. Often just transferring data from backups takes hours, so optimizing that path is important.
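For a rough feel of what those transfer times look like, here's a back-of-envelope sketch (the 10TB figure comes from the worst case above; the link speeds and the ~70% efficiency factor are my own assumptions, not measurements):

```python
# Rough restore-time estimate. Real throughput will be lower than line rate
# due to protocol overhead, disk contention, compression, etc.

RESTORE_TB = 10        # assumed worst-case data to restore
EFFICIENCY = 0.7       # assume ~70% of line rate is actually achievable

def restore_hours(link_gbps: float) -> float:
    """Hours to move RESTORE_TB over a link of link_gbps (gigabits/sec)."""
    bytes_total = RESTORE_TB * 10**12                  # TB -> bytes (decimal)
    bytes_per_sec = link_gbps * 10**9 / 8 * EFFICIENCY
    return bytes_total / bytes_per_sec / 3600

for gbps in (0.05, 1, 10):
    print(f"{gbps:>5} Gbps link: ~{restore_hours(gbps):.1f} hours")
```

By that math, 10TB over a 10Gbps port is in the region of a few hours, over 1Gbps it's more than a day, and over a 50Mbps uplink it's closer to a month.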
I would be fine reducing costs and going with SATA SSD instead of NVMe, but I figured with a 10Gbps port I might as well rule out SATA SSD as a bottleneck (they top out around ~500MB/s read/write, while NVMe dwarfs that and pushes the bottleneck elsewhere).
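To make that bottleneck argument concrete, a quick sketch comparing ballpark throughput figures (the MB/s values are assumptions; your drives and real-world overhead will vary):

```python
# Effective throughput is capped by the slowest leg of the path.
# Ballpark figures: SATA ~500 MB/s, enterprise NVMe ~3000 MB/s,
# 10 Gbps line rate ~1250 MB/s before overhead.

LEGS_MB_S = {
    "SATA SSD": 500,
    "NVMe SSD": 3000,
    "10Gbps link": 1250,
}

def bottleneck(*legs: str) -> tuple[str, int]:
    """Return the slowest component among the given legs."""
    name = min(legs, key=LEGS_MB_S.get)
    return name, LEGS_MB_S[name]

print(bottleneck("SATA SSD", "10Gbps link"))   # SATA caps a 10G port
print(bottleneck("NVMe SSD", "10Gbps link"))   # now the network is the cap
```

With SATA the drives would cap the 10G port at well under half its rate; with NVMe the network becomes the limit, which is where I'd rather have it.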
Backups aren't an area where I'd ever try cost-saving measures. I'd highly recommend building out the backup server based on your needs and disaster recovery plans: how important fast recovery is, and so on. It might be good to test your setup out: get a 5GB test site, try restoring it, and see how it goes. Then get ten 5GB test sites and try restoring those to see what's what. When shtf and it's go-time, it's always good to have a mental picture of what to expect from your recovery procedure. I've seen providers shit themselves when trying to recover because their bottleneck was some dinky 50Mbps uplink (among other horrific scenarios) and they were looking at days just to transfer the backups lol
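If you want to make that test repeatable, here's a minimal drill-timer sketch. The paths, archive names, and the rsync invocation are placeholders; swap in however your panel actually pulls and restores sites:

```python
# Restore-drill timer: copy the test archives and report effective throughput,
# so you know what to expect before it's an actual emergency.
import subprocess
import time

TEST_SITES = ["site-a.tar.gz", "site-b.tar.gz"]   # hypothetical ~5GB test archives
SRC = "backup-server:/backups/test/"              # hypothetical source
DST = "/var/restore-drill/"                       # hypothetical destination

start = time.monotonic()
for archive in TEST_SITES:
    # rsync over SSH is one common transfer path; -a preserves attributes
    subprocess.run(["rsync", "-a", SRC + archive, DST], check=True)
elapsed = time.monotonic() - start

total_gb = 5 * len(TEST_SITES)                    # assumed ~5GB per test site
print(f"Restored {total_gb} GB in {elapsed/60:.1f} min "
      f"(~{total_gb * 1024 / elapsed:.0f} MB/s effective)")
```

Run it a few times at different times of day; the effective MB/s number tells you whether you're anywhere near your link speed or stuck behind some other bottleneck.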