ApisCP cgroup2 blank admin panel

Enabled cgroup2, server rebooted and blank admin portal now. No sites yet, fresh install.

[Thu May 23 00:20:08 2024] EXCEPTION: Class "Opcenter\System\Cgroup\v1\Controllers\Io" not found

     0B. Error_Reporter::get_debug_bt()
     1B. Error_Reporter::handle_error(128, "Class "Opcenter\System\Cgroup\v1\Controllers\Io" not found", "/usr/local/apnscp/lib/Opcenter/System/Cgroup/BaseController.php", 72, Error)
     2B. Error_Reporter::handle_exception(Error)

EXCEPTION: Class "Opcenter\System\Cgroup\v1\Controllers\Io" not found

     0B. Opcenter\System\Cgroup\BaseController::make(Opcenter\System\Cgroup\Group, "io")
     1B. Opcenter\System\Cgroup::charge(Opcenter\System\Cgroup\Group)
     2B. ListenerService\Daemon->process_backend_data(<binary>)
     3B. ListenerService\Daemon->client_processing_loop()
     4B. ListenerService\Daemon->spawn()
     5B. ListenerService\Daemon->spawn_workers()
     6B. ListenerService\Daemon->start()
     7B. ListenerService\Daemon->__construct()
     8B. ListenerService\Daemon::init()

[Thu May 23 00:20:10 2024] apnscpd restart. Shutting down job runner
[Thu May 23 00:20:19 2024] [last message repeated 1 times]
[2024-05-23 00:20:19][4] Processing: Lararia\Jobs\BootstrapperTask
[Thu May 23 00:20:26 2024] EXCEPTION: Class "Opcenter\System\Cgroup\v1\Controllers\Io" not found

EditDomain --reconfig --all is required after reboot. If you're on the edge release, all files under /etc/cgconfig.d are removed automatically; if not, remove those files first, then run upcp -sb system/cgroup before running EditDomain --reconfig --all.
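The cleanup step above can be sketched as a small helper; the clean_cgconfig function name and the parameterized directory are my own for illustration, not part of ApisCP:

```shell
# Sketch of the suggested recovery, assuming a stock ApisCP layout.
# The directory is parameterized so the cleanup can be rehearsed on a
# scratch directory before pointing it at /etc/cgconfig.d for real.
clean_cgconfig() {
    dir="${1:-/etc/cgconfig.d}"
    # Remove only regular files at the top level; the directory itself
    # (and any subdirectories) stays in place.
    find "$dir" -maxdepth 1 -type f -delete
}

# After clearing stale files, rebuild the cgroup scaffolding and
# reconfigure sites (commands from the post above):
#   upcp -sb system/cgroup
#   EditDomain --reconfig --all
```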

The /etc/cgconfig.d directory is empty, and there are no sites on this server yet. Running EditDomain --reconfig --all returns: WARNING: unknown(): No sites found in domainmap.tch - corrupted map?

I tried to add a domain and it fails: AddDomain -c siteinfo,domain=boxbytes.com -c siteinfo,admin_user=boxbytes

I also tried reverting to v1, and it didn't help. Ran upcp -sb as well.

What does upcp -sb system/cgroup report? And what's the system flavor: VM or dedicated/bare metal? I handled a similar case yesterday on a Hetzner dedicated server that booted from multiple UEFI boot entries at random via PXE. Everything else on the system was mirrored with mdadm (mdadm with v1.0 metadata is compatible with GRUB 2).

grep cgroup2 /proc/mounts

If this is empty, then check:

grep unified /etc/default/grub
# Sample response: GRUB_CMDLINE_LINUX="crashkernel=no rootflags=usrquota,grpquota,prjquota selinux=0 systemd.unified_cgroup_hierarchy=1 fips=0 boot=UUID=a9751d94-c1e6-4999-bbb8-b379b62c93c1"

If the parameter is present but cgroup2 isn't mounted, the server is vacillating between boot kernels, which is where things get complicated. On the other hand, if it's not present, run upcp -sb system/kernel; it'll reboot with the new kernel flags.
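The mount check above can be wrapped up for reuse; cgroup_mode is a hypothetical helper of mine that reads mounts-formatted text on stdin, so it can be tried against canned data (point it at /proc/mounts on a live host):

```shell
# Report which cgroup hierarchy is live, based on what is mounted at
# /sys/fs/cgroup. Reads /proc/mounts-formatted text on stdin.
cgroup_mode() {
    if grep -q '^cgroup2 /sys/fs/cgroup ' ; then
        echo v2
    else
        echo v1
    fi
}

# On a live system:
#   cgroup_mode < /proc/mounts
```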

I am the one you helped yesterday: jrcham on the forum = spaceysine on Discord. I've had the Discord handle for years, unlike the forum nickname. Also, the dedicated server is an OVHcloud server, not Hetzner.

I didn't realize cgroup2 added kernel params (makes sense now that you mention it). To fix it (the same as I already did for quotas), I mounted the unmounted boot volume at /mnt/temp and added the params using grub2-mkconfig -o /mnt/temp/EFI/almalinux/grub.cfg a bit ago, then rebooted. It's working fine now.

[root@lin1 ~]# grep cgroup2 /proc/mounts
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
cgroup2 /.socket/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0

I had to rebuild the server: I tried switching the boot array to mdadm metadata 1.0 as you suggested and finally gave up. I have the instructions I tried if you want to look at them. At the end of the article you sent, another person said they had issues too and found a different approach (theirs was cloning the partitions). My thought at this point is to write a script that mounts the unmounted boot partition and syncs the files as needed so they stay the same. It's that or stop using mdraid, which I would prefer to keep. Forcing boot to one drive through the BIOS caused more issues. I'm also planning to duplicate the setup in my local environment so I can experiment further when time permits. For now, keeping the files in sync seems to work and is the easiest solution.

Do you see any issues with my thoughts before I continue?

If you run the following command, then reboot:

grubby --update-kernel=ALL --args systemd.unified_cgroup_hierarchy=1

Is cgroup2 the active mount type in /proc/mounts? Likewise, if you run again with:

grubby --update-kernel=ALL --args systemd.unified_cgroup_hierarchy=0

Is cgroup the active mount type in /proc/mounts?

Edit: also verify that /boot/efi/ alternates between boot drives before and after a reboot. This ensures that grubby is setting the boot parameters on both drives rather than just one.
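To sanity-check that the grubby toggle actually landed after the reboot, the running kernel's cmdline can be parsed; unified_flag is a hypothetical helper that takes the cmdline as an argument, so it can be checked against canned strings (pass "$(cat /proc/cmdline)" on a live host):

```shell
# Report whether the running kernel was booted with the unified cgroup
# hierarchy requested, forced off, or left at the distro default.
unified_flag() {
    case " $1 " in
        *" systemd.unified_cgroup_hierarchy=1 "*) echo on ;;
        *" systemd.unified_cgroup_hierarchy=0 "*) echo off ;;
        *) echo unset ;;
    esac
}

# On a live system:
#   unified_flag "$(cat /proc/cmdline)"
```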

I abandoned this plan and set up KVM with a VM instead of running it directly on the dedicated server. Seems much better now!