2/6 servers went through fine; on the other 4 the databases are offline. Seems mysql didn't change over/install correctly, so now I'm cooked. Any ideas anyone? I don't want to install mysql manually if it's going to mess up data migration, but I'm thinking of trying that to see what happens. I don't recall anyone having this issue during their upgrades, RIP.

root@s894:~# v12-upgrade upgrade-mysql
Changing existing mysql root user to unix socket auth
Replacing MySQL/MariaDB Docker with updated package
Adding mysql user
useradd: user 'mysql' already exists
Couldn't change mysql root user to socket auth: Error { kind: Internal, msg: "Command \"mysql\" with args CommandArgs { inner: [\"-e\", \"INSTALL PLUGIN auth_socket SONAME \'auth_socket.so\'; ALTER USER \'root\'@\'localhost\' IDENTIFIED WITH auth_socket;\"] } failed after 1 attempts. Stdout: \"ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/local/enhance/mysqlcd-run/mysqld.sock' (2)\n\"" }, this may need to be done manually
thread 'main' panicked at /builds/enhance/ecp-core/ecp-cli/src/upgrade.rs:707:18:
Could not stop mariadb Docker: Error { kind: Internal, msg: "Command \"docker\" with args CommandArgs { inner: [\"stop\", \"mysql\"] } failed after 1 attempts. Stdout: \"Error response from daemon: No such container: mysql\n\"" }
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

Maybe your mysql socket is configured in a different way? Or have you ever tinkered with it in the past?

Can you check the socket location by running: mysql --print-defaults | grep socket
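
If it helps, this is roughly the sanity check I'd run - just a sketch, the socket path is copied from your error above so adjust it to whatever your setup actually uses:

mysql --print-defaults | grep -i socket
ls -l /var/local/enhance/mysqlcd-run/mysqld.sock   # path taken from the error above, may differ on your box
ss -xl | grep -i mysql                             # is any mysqld even listening on a unix socket right now?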

idk, this looks messy and might need some time to figure out... I'm sure you already informed Adam about it. I'll see if I find anything, keep us posted here..

It ended up failing due to a broken repository for a 3rd party app I use for server monitoring; it made apt update fail during the Enhance upgrade... But trying to run the upgrade again just throws that error, it won't try again, sheesh...
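
For anyone following along, the generic fix for the repo part is just to move the dead .list file out of the way and refresh apt before re-running the upgrade - the filename below is made up, use whatever the abandoned repo's file is actually called:

ls /etc/apt/sources.list.d/                                                      # see what's actually there
sudo mv /etc/apt/sources.list.d/monitoring-tool.list /root/monitoring-tool.list.disabled   # hypothetical name
sudo apt update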

    twest ah, a third party repo.. a nightmare, I know... these are the worst kind of issues.. I think Adam will come up with something, I'm sure. This should be resolved step by step by Adam since he knows Enhance's structure...

    Yeah I knew the repo had been abandoned too, just never bothered to remove it, it seemed harmless, but now we know it kills the upgrade process lol... I tested installing mysql manually on a standby server, and it's serving websites fine. So next I'm running a backup of my databases:
    sudo cp -r /var/lib/mysql /root/mysql_backup_$(date +%F)
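
    (Side note in case anyone copies this: cp -r doesn't keep ownership, and copying while mysqld is still writing can give an inconsistent snapshot. Since the databases are down anyway, I'm assuming something along these lines is the safer version, adjust for your setup:)

    # if mysqld is somehow still running, stop it first so the datadir copy is consistent
    sudo systemctl stop mysql 2>/dev/null || true
    # -a keeps ownership and permissions, which matter if you ever restore this datadir
    sudo cp -a /var/lib/mysql /root/mysql_backup_$(date +%F)
    # or, if the server is actually reachable, a logical dump is the safer copy:
    # sudo mysqldump --all-databases --single-transaction > /root/all_dbs_$(date +%F).sql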

    Then I'm going to manually install mysql:
    sudo apt install mysql-server=8.0.41-1ubuntu22.04 --no-install-recommends

    Well, that failed. The strange thing is, I created a new site on the standby server and it worked, but when I migrated it to one of the servers where I installed mysql manually (the ones that aren't working), that test site doesn't work either. Not sure why the manual mysql install worked on the test server and not on the others.

      If I had backups etc I would try:

      • Decommission the server(s) in question
      • Deploy a new server, and restore the sites from the borked mysql servers.

      You do need to install MySQL via Enhance (CLI or UI) for it to “integrate” correctly.

      twest maybe it's the nature of the default mysql connection via 'localhost': something from Enhance's POV wasn't right for running the queued tasks, so Enhance panicked and later everything failed.. idk, there could be a lot of reasons until we're sure by digging in, but like xyzulu said I also think deploying a new server might help, though data freshness will be a problem for your clients if the backups weren't as recent as possible...

      so I think the long route is your only option I guess: rsync and dump everything in /var/local/enhance/mysqlcd-data/data/ on the old server over to the new mysql location on the new server after installing Enhance there, and then do the same for the files as well. Rough sketch below. I hope you don't have 100+ websites 😅...
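
      Something like this for the rsync part - only a sketch, and the destination path is a guess, so check where Enhance on the new server actually keeps its mysql data before copying anything:

      # run on the old server; stop mysql on both ends first if you can, so nothing is mid-write
      rsync -aHv /var/local/enhance/mysqlcd-data/data/ root@NEW_SERVER:/var/local/enhance/mysqlcd-data/data/
      # then, on the new server, make sure ownership matches whatever user its mysql runs as
      chown -R mysql:mysql /var/local/enhance/mysqlcd-data/data/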

      idk how it will go, but bring Adam into this and consult the options with him....

      Well I made progress, the manually installed mysql works now, and the databases are connecting to sites. I just needed to change the webserver from ELS to Apache/OLS.

      That at least brought sites back online so I can proceed with the rest of the update... I need to figure out why ELS is screwing up though; I tried resetting the config to default but it had no effect.

        Just to be safe, imo after the successful upgrade set up a new server and migrate everyone to that server to avoid long-term issues..

        Well, actually I need to change to straight Apache; some servers/sites aren't working on OLS.

          What a mess lol, I messed up one of my servers this weekend… been trying to harden the security and somehow ended up in a mess… have fun

          twest

          We had a similar issue on one of our clusters, and manually installing MariaDB fixed it. Are things working fine now, or are you still experiencing issues?

            twest Very sorry to hear this @twest
            Our ELS servers updated without issue. I do know all about the borked/abandoned 3rd party repositories... bad history there. Wishing all the best as you recover from the update!!!

            cPFence I think everything is cool now, just need to wait until tonight when we can try switching everything back over to LiteSpeed. Adam shed some light on why the webserver change probably worked: as part of the switch, Enhance restarts all the sites' PHP containers, which would reconnect them to the new mysql socket. That makes sense, so presumably it should be safe to switch back to LiteSpeed.

            There was a lingering issue of sites crashing when switching back to LiteSpeed, but we couldn't access the settings panel to troubleshoot. That issue was due to UFW blocking the port; I guess that config gets wiped during the update. Hoping when we switch back over to LiteSpeed tonight it will be smooth sailing, and I can troubleshoot why some sites won't load on it - maybe they just need a config fix.
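
            For anyone else hitting that, re-allowing the port in UFW is the basic fix - 8443 below is just a placeholder, use whichever port your settings panel actually listens on:

            sudo ufw status numbered
            sudo ufw allow 8443/tcp    # placeholder port, substitute the real panel port
            sudo ufw reload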

            Before manually installing mysql I pushed a backup of /var/lib/mysql/, and today I compared the backup folder to the live folder. Most servers had very similar sizes between the live mysql folder and the backup, except one server where the backup is 90+GB and the live folder is 45GB. That discrepancy is a bit concerning, so I will need to figure that out.
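
            This is roughly how I'm comparing them, in case it's useful - just du on both trees to see which directory accounts for the difference (old binlogs are my first suspect, but that's only a guess):

            sudo du -sh /var/lib/mysql /root/mysql_backup_*          # overall totals
            sudo du -sh /var/lib/mysql/* | sort -h | tail -20        # biggest items in the live datadir
            sudo du -sh /root/mysql_backup_*/* | sort -h | tail -20  # biggest items in the backup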

            It doesn't seem like there's any lasting damage to anything from the update failure, everything is technically running smooth as butter despite the chaotic update lol. Memory and CPU consumption looks good 😃

              twest

              Adam's explanation makes a lot of sense. In our case, after manually installing MariaDB, we also had to force all PHP containers to restart using:

              apt install mariadb-server
              mv /root/.my.cnf /root/.my.cnf_bkp
              killall -9 lsphp php-fpm

              You should be perfectly fine switching back to ELS anytime.

              Congrats on finally breaking out of the Docker cage!

                cPFence did you run into any issues with phpMyAdmin not loading on some sites? On one site in particular I noticed that going to the 'databases' tab showed no databases. The sites are still loading/working, but the Enhance UI shows nothing. I tested in an incognito browser to make sure my cache isn't messed up.

                  twest

                  Yes, that happened on the same server where we manually installed MariaDB, but since it was a test cluster, I never really bothered to fix it. Just checked now, and the issue is gone, so I guess a restart can sort it out. Try this:

                  systemctl restart mysql.service
                  systemctl restart orchd.service

                  Welp, hopefully this is my last update on this thread and it can go away, lessons learned lol

                  My backups server was super screwed with runaway storage use. I think it's because V12 rolls out full new backups to replace the old ones, since it uses a different system, so you essentially end up with 2 full backup sets until your old batch gets removed - not good for me with 3TB of data and only 4TB of storage. That's my guess anyway, idk, because while I've read all the threads on the backups saga, it seems many people had many different issues. For me the conversion @xyzulu posted was a great fix - it got all the old backups cleared out so the new ones could roll out. That drive is now formatted ext4 too, so it should be good and stable long term. It was interesting watching servers pump out new backups at 700Mbps for hours, with enormous load (10-20); luckily it only took a few hours to finish - everything is looking good now.

                  I switched back over to ELS. It switched over flawlessly - just had to update to the latest version of ELS in the settings panel and reset our config, and it's rocking and rolling now.

                  Some weird quirks remain, like I can't see backup info in the website backups tab, such as the size info - guessing that's gonna be fixed in an update soon, like the logs issue.

                  The only user-reported issue was connecting to FTP. I remembered someone mentioned something about FTP accounts needing to be assigned to the public_html folder now? I tried reassigning it to that folder, but it still wouldn't finish connecting all the way. I just gave the user their SSH login for now, which works fine.

                  The issue of my panel showing no databases for some users went away too, and I didn't even restart anything. Maybe it just needed time to settle - gave me one last bit of anxiety 🤯

                  Thank you everyone who stuck with me through this with encouragement and ideas, I really appreciate it. You all know how it is when a server goes down - end of the world. I hope if someone upgrading runs into my issue this thread will help them; good lesson to keep your apt repos clean.

                  I've got 1 more cluster to update, feeling real confident it will go off without a hitch lol 💪
