Adam: Generally with bare metal servers, the provider can swap the disks into a new chassis in the event of a hardware failure, which would always be preferable to a restore from backup.
Swapping disks is often a shit show. Today's best practice is to spin up a new, instantly deployable server and get things back online. Once the data center resolves the disk issues, or whatever the F is wrong, we'd normally transition back from the temporary deployment.
Server down, critical issue:
- deploy an instant server: deployed in 2 minutes, restored and back online in 15 minutes
- beg the datacenter to swap hardware and resolve the drive issues: 4 hours to 3 days depending on SLA, and you still have to restore afterwards
Adrien: So you want it on the same server? Why would you want to mix production and staging on the same server?
- ability for the owner to instantly create a staging copy
- ability for the owner to instantly push staging back to live
- zero network traffic or congestion
- easy to swap domains
- website owners screw up a lot (DIYers), so staging needs to be easy to recreate
- risk of losing website owners who expect this, given how common it is
- not spending hours internally dealing with this, or educating owners on it
note on network traffic
We use Enhance to optimize density. If staging sites are constantly flowing over the network, even within the same data center, that's still network capacity being consumed that should be available for production traffic.
staging sites
on average use near-zero resources and have no traffic to speak of; owners are just trying to manage development and test changes in a safe environment, and the sites are containerized in Enhance (safe)
owners expect local staging
I've been doing this for 10 years and have worked in more hosting dashboards than I can count. I can't recall a single scenario where the owner or I created a staging site and had to switch to a different IP or server to access it.
We have WordPress sites ranging from 1 GB to 125 GB, and dealing with staging has been a shit show so far. Given that staging sites have essentially zero traffic, my only real concern is storage. This may be fixable with improved rsync transfers.
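As a rough illustration of what "improved rsync transfers" could look like (a minimal sketch of one possible approach, not Enhance's actual mechanism): an incremental sync with rsync's --link-dest hard-links unchanged files against the previous staging copy, so refreshing a 125 GB site only transfers and stores the files that actually changed. The paths and wrapper function below are hypothetical, and this covers only the file side; the database would still need a separate dump and import.

```python
# Hypothetical sketch: refresh a local staging copy of a site with rsync,
# hard-linking unchanged files against the previous staging snapshot via
# --link-dest so they cost (almost) no extra storage or transfer.
# Paths and names are assumptions, not Enhance's actual layout.
import subprocess
from pathlib import Path

def refresh_staging(live_dir: str, staging_dir: str, previous_snapshot: str) -> None:
    """Incrementally sync the live docroot into a fresh staging docroot."""
    Path(staging_dir).mkdir(parents=True, exist_ok=True)
    cmd = [
        "rsync",
        "-aH",                               # archive mode, preserve hard links
        "--delete",                          # drop files that were removed from live
        f"--link-dest={previous_snapshot}",  # hard-link files unchanged since the last snapshot
        f"{live_dir}/",
        f"{staging_dir}/",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    refresh_staging(
        live_dir="/var/www/example.com/public_html",               # assumed path
        staging_dir="/var/www/staging.example.com/new",            # assumed path
        previous_snapshot="/var/www/staging.example.com/current",  # assumed path
    )
```

On the same server this makes a staging refresh mostly a metadata operation: the storage cost is roughly the changed files plus directory entries, which is what would keep local staging cheap even for the 125 GB sites.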