Running out of disk space is one of the most common problems on Linux servers. When a disk fills up, websites stop loading, databases crash, email delivery fails, and logs stop recording. The worst part is that disk space problems tend to appear without warning – everything works fine until you hit 100%, and then several things break at once.
Linux gives you two essential commands for checking disk usage: `df` shows how much space is available on each mounted filesystem, and `du` shows how much space individual directories and files are consuming. Between these two commands and a few supporting tools, you can diagnose virtually any disk space problem on a server.
This guide covers both commands in detail, explains how to find what is consuming your disk space, and walks through the less obvious causes of “disk full” errors, including inode exhaustion.
## Checking filesystem disk usage with `df`
The `df` command (short for "disk free") shows the total, used, and available space on every mounted filesystem.

```shell
df -h
```

The `-h` flag makes the output human-readable, converting bytes to KB, MB, and GB:
```
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   32G   16G  67% /
/dev/sda2       200G  145G   45G  77% /home
tmpfs            16G  256M   16G   2% /dev/shm
/dev/sdb1       500G  412G   63G  87% /backups
```
The columns:
- Filesystem – the disk partition or device
- Size – total capacity of the filesystem
- Used – how much space is consumed
- Avail – how much free space remains
- Use% – percentage of the total capacity that is in use
- Mounted on – where the filesystem is accessible in the directory tree
Pay attention to the Use% column. Anything above 90% is a warning sign, and above 95% is urgent. Some systems start behaving unpredictably when a filesystem passes 95% because certain operations need temporary space to complete.
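To spot filesystems past a threshold at a glance, you can filter the `df` output with `awk`. A quick sketch; the 90 here is an arbitrary cutoff, and `0+$5` simply coerces the `Use%` string (e.g. `95%`) into a number:

```shell
# Print the mount point and usage of any filesystem at or above 90% full
df -h | awk 'NR > 1 && 0+$5 >= 90 {print $6, $5}'
```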
### Check a specific filesystem
If you only care about one filesystem, pass the path:
```shell
df -h /home
```

```
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       200G  145G   45G  77% /home
```
This is useful in scripts and quick checks. If your websites live under `/home`, this single command tells you whether you are running low.
### Show filesystem type
```shell
df -hT
```
The `-T` flag adds a column showing the filesystem type (ext4, xfs, tmpfs). This is useful when you have multiple partition types and need to verify what you are working with:
```
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda1      ext4    50G   32G   16G  67% /
/dev/sda2      xfs    200G  145G   45G  77% /home
tmpfs          tmpfs   16G  256M   16G   2% /dev/shm
```
### Exclude pseudo-filesystems
By default, `df` lists tmpfs, devtmpfs, and other virtual filesystems that are not actual disks. To show only real disk partitions:

```shell
df -h --exclude-type=tmpfs --exclude-type=devtmpfs
```
This cleans up the output when you only want to see physical disks.
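GNU `df` also accepts the short form `-x` for the same option, which is less typing at an interactive prompt:

```shell
# Equivalent to --exclude-type, using the short flag
df -h -x tmpfs -x devtmpfs
```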
## Checking directory sizes with `du`
While `df` tells you that a filesystem is filling up, `du` (disk usage) tells you which directories are responsible. This is the command you use to track down what is eating your space.
```shell
du -sh /home/user1
```

The flags: `-s` gives a summary (a total for the directory instead of listing every subdirectory), and `-h` makes it human-readable:

```
4.2G    /home/user1
```
### Check multiple directories at once
```shell
du -sh /home/*
```

```
4.2G    /home/user1
1.8G    /home/user2
12G     /home/user3
256M    /home/user4
```
This immediately shows which user accounts are consuming the most space. On a shared hosting server, this is typically the first command you run when disk usage is high.
### List and sort directories by size
To find the largest directories under a specific path:
```shell
du -h --max-depth=1 /home/user1 | sort -hr
```

```
4.2G    /home/user1
2.1G    /home/user1/public_html
1.5G    /home/user1/backups
380M    /home/user1/logs
220M    /home/user1/tmp
```
The `--max-depth=1` limits the output to the immediate subdirectories. Without it, `du` would recursively list every nested directory, producing hundreds or thousands of lines. The `sort -hr` sorts by size in descending order, with the largest at the top.
You can increase the depth to drill deeper:
```shell
du -h --max-depth=2 /home/user1/public_html | sort -hr
```
### Check the size of a single file
```shell
du -h /var/log/syslog
```
Or use `ls -lh` for a quick check:

```shell
ls -lh /var/log/syslog
```

```
-rw-r----- 1 syslog adm 1.4G Apr  5 14:22 /var/log/syslog
```
## Finding large files
When disk space is low, you need to find the biggest files quickly. The `find` command combined with `sort` handles this efficiently.
### Find files larger than a specific size
```shell
find / -type f -size +100M -exec ls -lh {} \; 2>/dev/null | sort -k5 -hr | head -20
```
This searches the entire filesystem for files larger than 100 MB, lists them with human-readable sizes, sorts by the size column, and shows the top 20. The `2>/dev/null` suppresses permission-denied errors for directories you cannot read.
```
-rw-r--r-- 1 mysql  mysql 2.8G /var/lib/mysql/ibdata1
-rw-r----- 1 mysql  mysql 1.1G /var/lib/mysql/ib_logfile0
-rw-r----- 1 mysql  mysql 1.1G /var/lib/mysql/ib_logfile1
-rw-r----- 1 syslog adm   890M /var/log/syslog.1
-rw-r--r-- 1 root   root  450M /home/user1/public_html/backup-2024-03.tar.gz
-rw-r--r-- 1 root   root  320M /tmp/largefile.sql
```
### Find files modified recently that are large
```shell
find / -type f -size +50M -mtime -7 -exec ls -lh {} \; 2>/dev/null
```
This finds files larger than 50 MB that were modified in the last 7 days. Useful for spotting recent log growth, database dumps someone forgot to clean up, or backup files that should not be there.
### Common locations where large files accumulate
On a typical web server, disk space is most commonly consumed by:
- `/var/log` – log files, especially if log rotation is misconfigured
- `/var/lib/mysql` – database files (ibdata1, binary logs, slow query logs)
- `/home/*/public_html` – user website files, especially old backups and media uploads
- `/tmp` – temporary files that were never cleaned up
- `/var/spool/mail` – email queues that are stuck or backed up
- `/root` – files dumped in the root home directory during maintenance
## What to do when the disk is full
When a filesystem hits 100%, your priority is to free enough space to restore normal operation. Here is a systematic approach.
### 1. Check what is consuming space
```shell
df -h
du -h --max-depth=1 / | sort -hr | head -15
```
This gives you an immediate picture of which top-level directories are the biggest consumers.
### 2. Check log files first
Log files are the most common cause of unexpected disk growth. Check their sizes:
```shell
du -sh /var/log/* | sort -hr | head -10
```
If a log file has grown to several gigabytes, you can truncate it without deleting it (which preserves the file handle for any process writing to it):
```shell
> /var/log/large-logfile.log
```
This empties the file to zero bytes. The greater-than sign with no command in front of it redirects nothing into the file, effectively clearing it. This is safer than `rm` because deleting a log file while a process still has it open will not actually free the space until that process releases the file handle.
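If you prefer an explicit command over shell redirection, coreutils ships `truncate`, which does the same thing:

```shell
# Shrink the file to zero bytes in place; any process writing to it keeps its handle
truncate -s 0 /var/log/large-logfile.log
```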
### 3. Find and remove old backups
Old database dumps, site backups, and `.tar.gz` files in user directories are common space wasters:
```shell
find /home -name "*.sql" -size +100M -exec ls -lh {} \;
find /home -name "*.tar.gz" -size +100M -exec ls -lh {} \;
find /home -name "*.zip" -size +100M -exec ls -lh {} \;
```
Verify each file before deleting. An old backup might be the only copy of something important.
### 4. Clean up package manager cache
On Rocky Linux and other RHEL-based systems:
```shell
dnf clean all
```
On Ubuntu and Debian:
```shell
apt-get clean
apt-get autoremove
```
Package managers cache downloaded packages, and this cache can grow to several gigabytes over time.
### 5. Check for deleted files still held open
A process that has a file open holds a reference to it. If you delete the file, the space is not freed until the process releases the handle. This is a common cause of confusion when `df` shows a disk as full but `du` reports less usage than expected.
To find deleted files still consuming space:
```shell
lsof +L1
```
This lists all open files with zero link count (deleted but still held open). Restarting the process that holds the file will free the space.
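You can reproduce the effect safely in a sandbox: hold a file open with a background process, delete it, and inspect the process's file descriptors through `/proc`. A Linux-specific sketch:

```shell
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=10 status=none   # create a 10 MB file
tail -f "$tmpfile" > /dev/null &                           # background process holds it open
holder=$!
rm "$tmpfile"                                              # deleted, but the space is not freed yet
ls -l "/proc/$holder/fd" | grep deleted                    # the descriptor shows "(deleted)"
kill "$holder"                                             # releasing the handle frees the space
```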
## Inode exhaustion: disk full but space is available
Sometimes `df -h` shows plenty of free space but you still cannot create files. The error message might say "No space left on device" even though gigabytes are available. This is inode exhaustion.
### What are inodes?
Every file and directory on a Linux filesystem has an inode – a data structure that stores metadata about the file (permissions, ownership, timestamps, location on disk). The number of inodes is fixed when the filesystem is created. Each file, no matter how small, uses one inode. When you run out of inodes, you cannot create new files even if there is plenty of disk space.
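You can see the inode behind any file with `ls -i` or `stat` (shown here on a throwaway file; any path works):

```shell
touch demo.txt                                # any file works; each one consumes an inode
ls -i demo.txt                                # prints the inode number, then the path
stat -c 'inode: %i  hard links: %h' demo.txt
rm demo.txt
```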
### Check inode usage
```shell
df -i
```
```
Filesystem       Inodes    IUsed     IFree IUse% Mounted on
/dev/sda1       3276800  3276800         0  100% /
/dev/sda2      13107200   245678  12861522    2% /home
```
In this example, the root filesystem has used all of its inodes. The disk has free space, but no more files can be created.
### What causes inode exhaustion
The typical culprit is a directory containing an enormous number of small files. Common scenarios:
- Session files – PHP session files in `/tmp` or `/var/lib/php/sessions` that are never cleaned up
- Cache files – poorly configured caches that create millions of tiny files
- Email queue – stuck mail queues with thousands of individual message files
- WordPress temp files – plugin or theme updates that leave behind temporary files
### Find directories with the most files
```shell
find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20
```
This counts the number of files in each directory and shows the top 20. The `-xdev` flag prevents `find` from crossing filesystem boundaries, keeping the search on a single partition.
```
1843200 /var/spool/postfix/maildrop
 245000 /tmp/sess_
  12456 /var/lib/php/sessions
   8934 /home/user1/public_html/wp-content/cache
```
If a single directory contains hundreds of thousands or millions of files, that is your inode problem. Clean up the files in that directory:
```shell
find /var/spool/postfix/maildrop -type f -delete
```
The `find -delete` approach handles directories with too many files for `rm *` to process (which would fail with an "argument list too long" error).
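On systems whose `find` predates `-delete`, piping null-delimited names to `xargs` achieves the same thing without hitting the argument-list limit:

```shell
# -print0 / -0 use NUL separators, so filenames with spaces or newlines are safe
find /var/spool/postfix/maildrop -type f -print0 | xargs -0 rm -f
```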
## Monitoring disk usage over time
Checking disk space reactively, after something breaks, is not ideal. A better approach is periodic monitoring.
### Quick check via SSH
If you manage servers via SSH, a fast way to check all critical filesystems is to run a command remotely:
```shell
ssh user@server "df -h | grep -E '(/$|/home|/var)'"
```
This connects, checks disk usage on the most important filesystems, and disconnects.
### Watch command for real-time monitoring
If you are actively investigating disk growth, `watch` runs a command repeatedly:
```shell
watch -n 5 'df -h'
```
This refreshes the `df -h` output every 5 seconds, so you can see space being consumed in real time. Useful when you suspect a runaway log file or a process that is writing large amounts of data.
### Set up a simple alert
A basic shell script that checks disk usage and sends a warning:
```shell
#!/bin/bash
THRESHOLD=90

# --output=pcent,target prints only the Use% and mount point columns;
# tail -n +2 skips the header line
df -h --output=pcent,target | tail -n +2 | while read usage mount; do
    percent=${usage%\%}    # strip the trailing % sign
    if [ "$percent" -ge "$THRESHOLD" ]; then
        echo "WARNING: $mount is at ${usage} capacity"
    fi
done
```
This checks every mounted filesystem and prints a warning if usage exceeds 90%. You can run this as a cron job and redirect the output to email or a monitoring system.
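To run it automatically, save the script somewhere like `/usr/local/bin/check-disk.sh` (the name and path here are just examples), make it executable, and add a crontab entry. A sketch that runs every 30 minutes; cron mails any output to the crontab's owner, provided the system can deliver mail:

```
*/30 * * * * /usr/local/bin/check-disk.sh
```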
## Checking disk usage on Hostney
On Hostney’s managed WordPress hosting, you do not need SSH access to check disk usage. The control panel displays your current storage usage on the dashboard, including a breakdown by website. If a specific site is consuming more space than expected, the file manager lets you browse the directory structure and identify large files.
For customers who prefer the command line, SSH access is available on all plans. You can use all of the commands covered in this guide to investigate disk usage, find large files, and clean up unnecessary data. The Linux find command is especially useful here for locating old backups and oversized log files that have accumulated over time.
If you are running out of disk space and need to identify what is consuming it, the approach is straightforward: start with `df -h` to see which filesystem is full, use `du` to drill into the directories consuming the most space, and then use `find` to locate specific large files. In most cases, the problem is log files that were never rotated, old backups that were never cleaned up, or a database that has grown beyond expectations.
## Quick reference
| Task | Command |
|---|---|
| Check all filesystem usage | `df -h` |
| Check a specific path | `df -h /home` |
| Show filesystem type | `df -hT` |
| Check inode usage | `df -i` |
| Directory size summary | `du -sh /path` |
| Top-level directory sizes | `du -h --max-depth=1 /path \| sort -hr` |
| Find files over 100 MB | `find / -type f -size +100M -exec ls -lh {} \;` |
| Find recently modified large files | `find / -type f -size +50M -mtime -7 -exec ls -lh {} \;` |
| Count files per directory | `find / -xdev -printf '%h\n' \| sort \| uniq -c \| sort -rn \| head -20` |
| Truncate a log file | `> /var/log/filename.log` |
| List deleted files still open | `lsof +L1` |
| Clean package cache (RHEL) | `dnf clean all` |
| Clean package cache (Ubuntu) | `apt-get clean && apt-get autoremove` |