Most shared hosting platforms use CloudLinux with CageFS to isolate users from each other. It is the industry standard. cPanel ships with it, Plesk supports it, and nearly every host running shared servers has it installed. It works. We chose a different approach.
This post explains what CageFS does, what we built instead using Podman containers, why we made that choice, and where each approach has real tradeoffs. We are not here to tell you CloudLinux is bad. It solves a specific problem well. But the problem we wanted to solve was different, and containers turned out to be a better fit for how we run things.
What CageFS actually does
CageFS is a virtualized filesystem layer from CloudLinux. It gives each user on a shared server their own view of the filesystem. When a user logs in via SSH or when their PHP process runs, they see a restricted version of the server. They cannot see other users’ home directories, they cannot see system binaries they should not have access to, and they cannot read sensitive configuration files.
Under the hood, CageFS works by creating per-user mount namespaces using a combination of hard links, bind mounts, and a skeleton directory structure. Each user gets a copy of essential system files (libraries, binaries, locale data) but shares the underlying storage. The skeleton is built once and then cloned for each user. When a user runs a PHP script or opens a shell, the CageFS PAM module activates and drops them into their virtualized environment.
It also integrates with CloudLinux’s LVE (Lightweight Virtual Environment) for resource limiting. LVE uses kernel-level cgroups to enforce CPU, memory, I/O, and process limits per user. This is the part that prevents one site from consuming all the server resources.
Together, CageFS and LVE provide two things: filesystem isolation and resource control. Both are critical for shared hosting. If either is missing, one compromised or misbehaving site can affect every other site on the server.
The cPanel + CloudLinux stack
In practice, most hosts run this as a full stack: cPanel for management, CloudLinux for isolation, and Apache with mod_lsapi or LiteSpeed for serving PHP. The stack is well-understood, well-documented, and you can set it up in an afternoon.
Here is how it typically works:
- cPanel creates a system user for each hosting account
- CloudLinux’s CageFS wraps that user in a virtual filesystem
- LVE Manager sets resource limits (CPU, RAM, I/O, processes)
- Apache or LiteSpeed runs PHP through the user’s LVE container
- Each PHP process executes under the user’s UID, inside CageFS
This gives you user isolation at the filesystem level and resource isolation at the kernel level. A user inside CageFS cannot see /home/otheruser, cannot read /etc/shadow, and cannot access another user’s MySQL data directory.
The biggest advantage of this stack is that it is turnkey. You install CloudLinux, run the CageFS setup, and it works. The ecosystem is mature. There are admin panels, monitoring tools, and documentation for every edge case. If you are running a traditional shared hosting business and need to get 500 accounts on a server quickly, this is the obvious choice.
Where CageFS has limitations
CageFS is filesystem isolation, not process isolation. The user’s PHP processes still run on the host kernel, in the host’s process namespace, using the host’s network stack. LVE adds resource limits on top, but the processes themselves are not truly contained.
Shared kernel surface. Every user’s code runs against the same kernel. A kernel vulnerability that allows privilege escalation affects every account on the server equally. CageFS does not add a meaningful barrier here because it operates at the filesystem level, not the syscall level. A process inside CageFS can still make the same system calls as one outside it.
Limited network isolation. All users share the server’s network stack. There is no per-user network namespace. A PHP process running as user A can, in theory, open sockets and communicate with services that user B’s processes are using, as long as the port is open. In practice, most hosting setups mitigate this with firewall rules, but it is not isolation in the container sense.
Dependency on a proprietary kernel. CloudLinux ships a modified kernel. You need their kernel for LVE to work. You need their kernel for CageFS to function correctly. This means you are locked into their release cycle, their patching schedule, and their compatibility matrix. When upstream kernel security patches land, you wait for CloudLinux to rebase and release. This is usually fast, but it is a dependency you do not control.
PHP version management. CloudLinux provides alt-PHP packages for running multiple PHP versions. This works, but all versions run on the host, sharing the same system libraries. If you need PHP 7.4 for one legacy site and PHP 8.3 for another, you install both on the server. Each version’s extensions, configurations, and libraries coexist on the same filesystem, so a routine library update for one version can surface conflicts in another.
Skeleton drift. The CageFS skeleton directory needs maintenance. When system packages are updated, the skeleton needs rebuilding. When new binaries are added or removed, the skeleton needs updating. This is automated in most cases, but edge cases exist. A user reporting that a specific command-line tool “disappeared” after a system update is usually a skeleton sync issue.
What we built instead
We use Podman containers. Each user gets their own container for SSH access and their own container (or set of containers) for PHP processing. These are real Linux containers with their own process namespace, mount namespace, and cgroup slice.
Here is what the architecture looks like from the outside:
For SSH access: When a user connects via SSH, they land inside a container. The container has their home directory mounted in, a minimal set of tools, and access to the MySQL socket. It does not have access to other users’ directories, other containers, or the host filesystem outside of what is explicitly mounted.
For PHP processing: Each website (or group of websites per user) runs PHP-FPM inside its own container. Nginx communicates with PHP-FPM through Unix sockets. The container has the user’s home directory, read-only access to the MySQL socket, and the specific PHP version that site needs. Nothing else.
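As a sketch, a PHP container like the one described above might be launched as follows. The image name, site user, and paths are illustrative assumptions, not our exact production invocation:

```shell
# Illustrative sketch only: a read-only root filesystem, with just the
# user's home directory, the FPM socket directory, and the MySQL socket
# mounted in. Image name, user, and paths are assumptions.
podman run -d --name php83-site1 \
  --read-only \
  --tmpfs /tmp \
  -v /home/site1:/home/site1 \
  -v /run/php-sockets/site1:/run/php \
  -v /run/mysqld/mysqld.sock:/run/mysqld/mysqld.sock:ro \
  localhost/php-fpm:8.3
```

The key property is what is absent: nothing outside these mounts exists inside the container at all.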
For resource limits: Each user gets a systemd cgroup slice. All of that user’s containers (SSH, PHP) run inside their slice. CPU, memory, I/O bandwidth, and IOPS limits are enforced at the slice level. If a user’s total resource consumption exceeds their allocation, the kernel enforces it, regardless of which container is doing the work.
How container isolation differs from CageFS
The fundamental difference is the isolation boundary.
CageFS gives you a filtered view of the host filesystem. The process still runs on the host. It shares the host’s process table, network stack, and (to a degree) its syscall surface. The isolation is at the mount level.
A Podman container gives you a separate process namespace, mount namespace, and cgroup hierarchy. The process inside the container cannot see processes outside it. It cannot see the host’s process table. It cannot see other containers. The only things it can access are what we explicitly mount into it.
Here is what that means in practice:
Process isolation. Inside a CageFS environment, a user can run `ps` and potentially see process IDs from other users (depending on configuration and kernel settings). Inside our containers, `ps` shows only the processes running in that container. There is nothing else to see.
Filesystem isolation. CageFS hides paths using mount tricks. The underlying files still exist on the host filesystem, and the isolation depends on the CageFS PAM module activating correctly. In our setup, the container’s root filesystem is a read-only image. The only writable paths are the user’s home directory and a few temporary mounts. There is no host filesystem to “break out” into because it was never mounted.
System information. We mount fake /proc files into SSH containers. When a user checks how much RAM the server has or how many CPUs are available, they see values that reflect their allocation, not the actual server hardware. CageFS does not do this by default. A user inside CageFS can typically see the real /proc/meminfo and /proc/cpuinfo.
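One way to present per-user values, similar in spirit to how lxcfs works with Docker, is to generate files reflecting the user’s allocation and bind-mount them over the relevant /proc entries. This is a hedged sketch of the idea, not our exact mechanism; the paths and values are assumptions:

```shell
# Sketch only: write a meminfo that reflects the user's 2 GB allocation,
# then bind-mount it over /proc/meminfo inside the SSH container.
# Directory layout and image name are assumptions.
printf 'MemTotal:       %8d kB\nMemFree:        %8d kB\n' 2097152 1048576 \
  > /var/lib/fakeproc/site1/meminfo

podman run -d \
  -v /var/lib/fakeproc/site1/meminfo:/proc/meminfo:ro \
  localhost/ssh-shell:latest
```

Tools like `free` read /proc/meminfo, so they report the allocation, not the host’s actual RAM.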
PHP version isolation. Each PHP container runs exactly one PHP version with exactly the extensions configured for that site. The PHP binary, its modules, and its configuration exist only inside that container’s image. Updating PHP 8.3 does not touch the PHP 8.1 container. There is no shared library path, no extension conflict, no version interaction. Each container is its own world.
Capability restriction. Our containers drop unnecessary Linux capabilities. The SYS_ADMIN capability is explicitly removed. The `no-new-privileges` flag prevents any process inside the container from gaining additional permissions, even through setuid binaries. CageFS does not restrict capabilities because it is not a container. The processes run with whatever capabilities the user’s session provides.
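In Podman terms, this hardening corresponds to standard flags. The exact capability set we drop beyond SYS_ADMIN is not spelled out here, so treat this as a sketch:

```shell
# Sketch of the hardening flags described above. --cap-drop and
# --security-opt are standard podman options; the precise capability
# list is an assumption.
podman run -d \
  --cap-drop=SYS_ADMIN \
  --security-opt no-new-privileges \
  --read-only \
  localhost/php-fpm:8.3
```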
Resource control: LVE vs cgroups
CloudLinux’s LVE is, at its core, a cgroup wrapper with a kernel module for enforcement and a management UI. It tracks CPU, memory, I/O, and process counts per user and enforces hard limits.
We do the same thing, but without the proprietary kernel module. Each user gets a systemd slice with resource limits configured directly:
- CPU: A quota (for example, 150% for 1.5 cores) and a weight for fair scheduling when the server is under contention
- Memory: A hard maximum. When a user hits it, the kernel’s OOM killer handles it. Swap is disabled per-slice to prevent I/O thrashing
- I/O: Bandwidth limits (MB/s) and IOPS limits, both read and write, per block device
- Processes: Counted through systemd’s TasksAccounting and capped at the slice level
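The limits above map onto stock systemd resource-control directives. The slice name, block device, and values below are illustrative assumptions, not our production defaults:

```ini
# /etc/systemd/system/user-site1.slice
# Illustrative values only.
[Slice]
# Hard CPU cap (1.5 cores) plus a fair-share weight under contention
CPUQuota=150%
CPUWeight=100
# Hard memory ceiling; the kernel OOM killer enforces it. No swap.
MemoryMax=2G
MemorySwapMax=0
# Per-device bandwidth and IOPS limits (device path is an assumption)
IOReadBandwidthMax=/dev/sda 50M
IOWriteBandwidthMax=/dev/sda 50M
IOReadIOPSMax=/dev/sda 1000
IOWriteIOPSMax=/dev/sda 1000
# Process/thread cap, tracked via TasksAccounting
TasksMax=200
```

Containers join the slice via Podman’s `--cgroup-parent` flag, so every container a user owns draws from the same budget regardless of which one is doing the work.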
The practical difference is small. Both approaches use cgroups v2 under the hood. LVE adds a kernel module for finer-grained accounting and a management interface. Our approach uses stock kernel cgroups through systemd, which means we run on any standard Linux kernel. No proprietary patches, no modified kernel, no vendor dependency for resource enforcement.
The advantage of LVE is its management tooling. CloudLinux’s LVE Manager gives you a web UI to adjust limits, view historical usage, and set package-level defaults. We handle this through our control panel and orchestrator, which covers everything we need for managing limits across the fleet.
How PHP-FPM works in containers
Each PHP container runs PHP-FPM with pools configured per website. Even when multiple sites share a container (in pooled mode), each site gets its own FPM pool with its own process limits.
The process manager uses on-demand mode. Workers spawn when a request arrives and die after an idle timeout. This matters for resource efficiency. A server hosting 200 sites does not need 200 PHP-FPM master processes running at all times. Most sites are idle most of the time. On-demand mode means you pay (in memory) only for what is actively serving requests.
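A minimal pool fragment matching this on-demand setup might look like the following. The pool name, user, socket path, and limits are assumptions:

```ini
; Illustrative PHP-FPM pool configuration
[site1]
user = site1
group = site1
; Unix socket shared with Nginx
listen = /run/php/site1.sock
; Spawn workers only when a request arrives; reap them when idle
pm = ondemand
pm.max_children = 10
pm.process_idle_timeout = 10s
```

With `pm = ondemand`, an idle site holds no worker processes at all, which is what makes hundreds of mostly-idle sites cheap to host.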
Nginx talks to PHP-FPM through Unix sockets, not TCP. The socket files live in a directory that both Nginx and the PHP container can access. There is no network overhead, no TCP handshake, no port management. This is faster than TCP-based setups and eliminates an entire class of port-conflict issues.
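On the Nginx side, pointing `fastcgi_pass` at the shared socket is all it takes. The paths here are illustrative:

```nginx
# Illustrative server-block fragment (socket path is an assumption)
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Unix socket shared with the PHP-FPM container: no TCP, no ports
    fastcgi_pass unix:/run/php/site1.sock;
}
```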
When a PHP version needs updating, we rebuild the container image, recreate the container, and the site picks up the new version on the next request. The old container is removed. There is no “install PHP 8.3 alongside 8.2 and hope the symlinks are right” situation.
SSH access in containers
When a user connects via SSH, they are connecting to a container, not to the host. Each user gets their own SSH container running on a unique port, bound only to the loopback interface. The orchestrator routes the connection to the right container based on the user.
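A sketch of what that binding might look like, with the port number and image name as assumptions:

```shell
# Sketch: each user's SSH container listens on a unique port bound
# only to loopback; the orchestrator proxies inbound SSH to it.
podman run -d --name ssh-site1 \
  -p 127.0.0.1:2201:22 \
  -v /home/site1:/home/site1 \
  localhost/ssh-shell:latest
```

Because the port is loopback-only, the container is never directly reachable from outside the host; only the routing layer can connect to it.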
Inside the container, the user has:
- Their home directory (read-write)
- MySQL access through the host gateway
- A bash shell with standard tools
- Temporary directories that are wiped on container restart
They do not have:
- Access to any other user’s files
- Visibility into other processes on the server
- Access to the host filesystem
- The ability to see real server hardware information
- The ability to escalate privileges through setuid binaries
If a user manages to find a vulnerability in their SSH container and gets root inside it, they are root inside a container with dropped capabilities, a read-only root filesystem, restricted proc, and no network access beyond the loopback. The blast radius is their own container. The host and other users are unaffected.
In a CageFS setup, if a user escapes the virtual filesystem (which has happened with past CageFS CVEs), they land on the actual host. The blast radius is the entire server.
SELinux and additional hardening
Containers alone are not enough. We run SELinux in enforcing mode on all servers. Container processes run under SELinux contexts that restrict what system calls they can make, what files they can access, and what network operations they can perform, even if the container configuration has a gap.
This is defense in depth. The container provides namespace isolation. SELinux provides mandatory access control. Capability dropping prevents privilege escalation. The `no-new-privileges` flag prevents setuid tricks. Each layer catches what the others might miss.
CloudLinux relies primarily on CageFS for isolation and LVE for resource control. SELinux can run alongside it, but it is not part of the isolation model. The isolation model is the virtual filesystem.
The tradeoffs
We are not going to pretend the container approach is strictly better. There are real tradeoffs.
Complexity. The cPanel + CloudLinux stack is simpler to operate. Install, configure, done. Our container setup requires managing container images, systemd services, socket directories, cgroup slices, and the orchestration to tie it all together. We built tooling to handle this, but the tooling itself is something we maintain.
Resource overhead. Containers have a small per-instance overhead. Each PHP-FPM container has its own master process. Each SSH container has its own sshd. On a server with 200 users, that is 200+ container master processes. With on-demand PHP-FPM, the actual memory impact is small (idle containers consume very little), but it is not zero. CageFS has almost no per-user overhead because there are no additional processes. It is just mount namespace trickery.
Ecosystem. cPanel + CloudLinux has decades of ecosystem. Thousands of plugins, integrations, and third-party tools assume this stack. Migration tools, backup systems, and monitoring software all speak cPanel. Our stack is custom. It does exactly what we need, but there is no marketplace of third-party add-ons.
Maturity. CageFS has been running in production on hundreds of thousands of servers for over a decade. Podman and container isolation have a different lineage but are equally proven technology. Containers run everything from single-server setups to the largest infrastructure on the planet. The tooling is different, the isolation model is stronger.
Operational tooling. CloudLinux ships with LVE Manager, CageFS management tools, PHP selector UI, and more. These are off-the-shelf products designed for broad adoption. We built our own orchestrator and control panel, purpose-built for our infrastructure. Different approach, same end result.
When CageFS makes more sense
If you are running a traditional shared hosting business with hundreds or thousands of cPanel accounts per server, CloudLinux + CageFS is the practical choice. The tooling is mature, the ecosystem is established, and the support infrastructure exists. You are buying a solved problem.
If your primary concern is preventing users from seeing each other’s files and you do not need deep process isolation, CageFS handles it. It is lighter weight, simpler to operate, and good enough for the vast majority of shared hosting scenarios.
If you need to run at the margins of server density, packing 500+ accounts onto a single machine, the lower per-user overhead of CageFS matters. Every MB of RAM consumed by container overhead is RAM that is not serving PHP requests.
When containers make more sense
If you want true process isolation, not just filesystem hiding, containers provide it. Separate process namespaces, restricted capabilities, and a read-only root filesystem create a fundamentally different security boundary than a virtual filesystem overlay.
If you need clean PHP version management without library conflicts, containers solve it by definition. Each container is a self-contained environment. There is no shared state to conflict.
If you want to run on a stock Linux kernel without proprietary patches, containers work on any modern Linux distribution. No vendor kernel, no patching dependency, no licensing.
If you want defense in depth with SELinux, capability dropping, and namespace isolation working together, the container model lends itself to layered security more naturally than a filesystem-level approach.
If your infrastructure is already container-oriented and you are building your own management tooling anyway, adding CageFS on top would be an odd architectural choice. Containers already provide everything CageFS does, plus more.
What this means for websites on our platform
For site owners, the isolation model is mostly invisible. Your site works the same way regardless of what is running underneath. You upload files, your PHP runs, your database works, your emails send.
The differences show up at the edges:
- If one site on the server gets compromised, the blast radius is that site’s container. The attacker cannot pivot to other accounts because there is nothing to pivot to. No shared process space, no shared filesystem (beyond what is explicitly mounted read-only), no shared network namespace.
- If your site needs a specific PHP version with specific extensions, it gets its own container image with exactly that configuration. There is no “the server has these PHP versions installed, pick one” situation.
- If your site misbehaves and starts consuming excessive resources, the cgroup limits enforce a hard ceiling. Your site slows down. Other sites on the same server are unaffected. This is the same outcome as LVE, achieved through the same underlying kernel mechanism.
- Your SSH session is inside a container. You can install packages, modify system files, run whatever you want inside it. You cannot break anything outside your container. If something goes wrong, the container can be recreated from its image in seconds.
The end result is that each account on our servers operates as if it is on its own machine, with its own processes, its own filesystem, its own resource budget, and its own blast radius. That is what container isolation provides, and it is why we chose it over the traditional shared hosting stack.
Summary
| | CageFS + LVE | Podman containers |
|---|---|---|
| Filesystem isolation | Virtual filesystem (mount tricks) | Container root filesystem + explicit mounts |
| Process isolation | Shared host process space | Separate process namespace per container |
| Network isolation | Shared network stack | Host networking with loopback-only binding |
| Resource limits | LVE (proprietary kernel module) | systemd cgroup slices (stock kernel) |
| PHP versions | alt-PHP packages on host | Per-container images |
| Kernel dependency | CloudLinux modified kernel | Any standard Linux kernel |
| Capability restriction | None (not a container) | Dropped capabilities, no-new-privileges |
| System info hiding | Not by default | Fake /proc mounts |
| Overhead per user | Minimal (mount namespace only) | Small (container master processes) |
| Ecosystem | Mature, large | Custom |
| Escape blast radius | Host server | Container (restricted) |
Both approaches work. CageFS is the right tool for traditional shared hosting at scale with cPanel. Containers are the right tool if you want deeper isolation, stock kernel compatibility, and a layered security model. We picked containers because they match how we think about the problem and they let us build the isolation model we wanted without depending on a proprietary kernel.