Check /var/log/mysql/error.log or /var/log/mysql/mariadb.log
DracoBlue that file doesn't exist. I see no log files at all for mariadb. No /var/log/mariadb folder at all and nothing in /var/log/mysql
Probably a systemd setting to make a service restart on failure. Maybe @Adam can fix this for everyone in the next patch?
Nickske00 yeah that would be nice. Previously, this didn't happen because I think Docker auto restarted stuff. Now I'm having issues on multiple sites across multiple servers. Frustrating.
The problem with auto-restarting mariadb when it's killed for OOM is that it will likely be killed again. Repeated killing of MariaDB can cause data corruption. It's better to address the cause of the OOM: set hard memory limits on your websites, set a sensible innodb_buffer_pool_size, or add more RAM to the system.
If you want the old behaviour back, you can add a systemd override for the mariadb service and set Restart=always. That should do it.
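In case it helps anyone, a minimal sketch of what that override could look like (the drop-in filename is arbitrary, and RestartSec is my own addition to avoid hammering restarts):
# /etc/systemd/system/mariadb.service.d/override.conf
[Service]
Restart=always
RestartSec=5
Then run systemctl daemon-reload so systemd picks it up.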
I ended up just setting the OOMScoreAdjust value for it to -1000. We'll see if that works.
Also edit my.cnf by going to Enhance Main CP → Server → Database → Edit my.cnf, then add:
max_user_connections=25
After that, restart MariaDB:
systemctl restart mariadb
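I'm not sure whether the Enhance editor puts it in the right place for you, but if you're editing my.cnf directly, the setting needs to sit under the [mysqld] section, i.e.:
[mysqld]
max_user_connections = 25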
You can also use this script to analyze your setup and identify areas for optimization. However, make sure MariaDB has been up and running for at least 24 hours first, otherwise the numbers won't be representative.
https://gist.github.com/cPFence/98c359cfade030fd62adb6681312a97a
This is an issue to ponder if you ever had OOM kills of mariadb on v11. Ubuntu's OOM killer will usually target mariadb/mysql since it is likely using the largest chunk of server RAM. We all also have slightly different setups, i.e. swap vs no swap, different amounts of RAM, etc.
cPFence max_user_connections=25
This may be a good way to add some stability in a shared server environment. I also think that systemd should try to restart mariadb/mysql if it is OOM killed. I have been talking to Adam more in a ticket and he also has some thoughts on this.
Adam If you want the old behaviour back, you can add a systemd override for the mariadb service and set Restart=always. That should do it.
Edit: there is also this in /etc/systemd/system/multi-user.target.wants/mariadb.service
# Kernels like killing mariadbd when out of memory because its big.
# Let's temper that preference a little.
# OOMScoreAdjust=-600
...as @thekendog noted above. How is that going, @thekendog? Did you drop in a .conf file to /etc/systemd/system/mariadb.service.d?
@Adam will this conf file persist now on v12?
If you need MariaDB to restart automatically on any failure and ensure persistence, you can set this up with a systemd override file. However, be cautious with such changes to avoid issues like infinite restart loops if a persistent error causes repeated failures.
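If you do go that route, one way to hedge against a runaway loop is to cap the retries so systemd eventually gives up and leaves the unit in a failed state where you can actually see it. A rough sketch (the filename and numbers are just illustrative):
# /etc/systemd/system/mariadb.service.d/restart.conf
[Unit]
StartLimitIntervalSec=300
StartLimitBurst=3

[Service]
Restart=on-failure
RestartSec=10
Follow it with systemctl daemon-reload.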
What Enhance is doing now follows standard practices used by professional control panels like cPanel and DirectAdmin and should not be changed.
What really should be done is to address the root cause of the failures through proper resource allocation, MySQL optimization, and preventing long, inefficient queries from killing your server. And if that’s already done, then adding more RAM to the server is the next step.
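On the "long, inefficient queries" part, the slow query log is the usual way to find them. A minimal my.cnf sketch, assuming /var/log/mysql exists and is writable by the mysql user (the 2-second threshold is just an example):
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
Restart MariaDB, then review the log with mysqldumpslow or pt-query-digest if you have them installed.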
Pro tip (since it tripped me up):
If overriding (and it will persist) with files in
/etc/systemd/system/mariadb.service.d/
you need to ensure the .conf file includes the correct section header, i.e.
[Service]
OOMScoreAdjust=-600
Also, don't forget: systemctl daemon-reload
to ensure your override is loaded. You can check with systemctl show mariadb.service
(to see the loaded values) and systemctl status mariadb.service
to see the current status.
Disclaimer: Don't blindly create a .conf file with the sample values I have above.
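For reference only, and keeping that disclaimer in mind, the full sequence might look something like this (-600 is just the sample value from the stock unit file, pick your own):
mkdir -p /etc/systemd/system/mariadb.service.d
cat > /etc/systemd/system/mariadb.service.d/oom.conf <<'EOF'
[Service]
OOMScoreAdjust=-600
EOF
systemctl daemon-reload
systemctl show mariadb.service -p OOMScoreAdjust
The last command should print back the value you set if the override was loaded.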
cPFence What really should be done is to address the root cause of the failures through proper resource allocation, MySQL optimization, and preventing long, inefficient queries from killing your server. And if that’s already done, then adding more RAM to the server is the next step.
this IS the correct way to attack this issue
Man, this keeps happening on multiple servers. I'm having to adjust the OOM scores across the board for MariaDB. This didn't happen on v11 and it didn't happen with the same sites/servers when I was using SpinupWP. I see that Enhance is setting the innodb_buffer_pool_size to 512M while the default is 128M. I wonder if that is the issue? I have another server that uses cPanel and it is wayyyyyyyyyy beefier and has way more RAM and CPUs than the ones on Enhance. The default innodb_buffer_pool_size for it is still only 128M.
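If it helps to compare servers, you can check what MariaDB is actually running with (rather than what the config says) with something like this, assuming root socket auth works on the box. The value is reported in bytes (536870912 = 512M):
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';"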
xyzulu You can impose a memory limit on MariaDB to prevent it from using all the RAM.
Run: sudo systemctl edit mariadb.service
or sudo systemctl edit mariadb
and add:
[Service]
MemoryMax=1G (your value)
Or
[Service]
OOMScoreAdjust=-1000
It should help with OOM.
Reload the systemd daemon configuration by executing the following command:
systemctl daemon-reload
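To confirm the override actually loaded, something like this should do (property names as systemctl reports them; only the one you set will differ from the default):
systemctl show mariadb -p MemoryMax -p OOMScoreAdjust
systemctl restart mariadb
The restart matters for OOMScoreAdjust in particular, since it only applies to newly started processes.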
thekendog interesting.
Why doesn't Enhance provide all these settings for database servers in the UI? Even with templates, where we could make a template with settings and apply it across all database servers in the cluster, or a group of servers.
I can see this requires a lot of sysadmin work on database servers to keep them performing well.
@ "thekendog"#p19875
It's generally not ideal to give MySQL optimization recommendations without knowing your exact setup, but since you didn’t provide details, here are some general tips:
Apply these settings:
max_user_connections=25
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 128M
wait_timeout = 300
interactive_timeout = 300
Restart MariaDB, let it run for 24 hours, then run this script and post the output:
https://gist.github.com/cPFence/98c359cfade030fd62adb6681312a97a
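And to spot-check that the new values actually took effect after the restart (root socket auth assumed), something like:
mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('max_user_connections','tmp_table_size','wait_timeout');"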
If you want to mimic how v11 handled this, you can create a systemd override file with auto-restart blindly enabled, similar to what Docker was doing. When a spike happens, it will just keep restarting MariaDB over and over (obviously, MySQL will be down while this is happening). Whether it lasts a few minutes or a few hours, only time (or luck) will tell.
Then the admin checks in, sees the MySQL service is running, and everything looks fine, but in reality, it isn't. In my humble opinion, this is the wrong approach.
If you insist on using a systemd override to mimic Docker behavior, make sure you set up proper monitoring for mission-critical sites so you actually know what’s happening, when, and how it’s performing.
The problem I see is not a lack of admin skills, but the fact that, as he mentioned, the same issue didn't occur in other panels or in v11. I believe this is something Enhance should look into.
gmakhs cPanel will set the InnoDB buffer pool to 128MB max by default, and to less on servers with lower RAM. However, it is not optimizing the InnoDB buffer pool size, as this MOSTLY depends on the use case of the server and should be tuned individually per server.
Without reading the logs from before the OOM is triggered, we are just guessing. We have no idea what processes are running on the different servers.
But if the server has a lot of RAM and heavy database usage, 128MB does not make sense. You want to keep that RAM in use, otherwise you are wasting resources.
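If anyone wants a rough read on whether their current pool is big enough for the working set, the read counters give a quick hint once the server has been up for a while:
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
If Innodb_buffer_pool_reads (reads that had to hit disk) stays tiny compared to Innodb_buffer_pool_read_requests, the pool is probably large enough; if it keeps climbing under load, a bigger value starts to make sense.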