You might be wondering why you're seeing a difference in disk usage between the df and du commands.
One potential cause is that some process is still holding open file descriptors for files that have already been deleted. While this may be puzzling at first, it's pretty easy to identify and clear those file descriptors.
Below I will show you how to recreate the scenario and correct it.
Create a large file and keep it open after it's deleted
The current disk usage according to df is only at 15%.
[ec2-user@linuxbucket ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        484M   56K  484M   1% /dev
tmpfs           494M     0  494M   0% /dev/shm
/dev/xvda1      7.8G  1.2G  6.6G  15% /
The ec2-user's home folder is only at 28K.
[ec2-user@linuxbucket ~]$ du -sh ~/
28K     /home/ec2-user/
Using the fallocate command I can fill up the disk.
[ec2-user@linuxbucket ~]$ fallocate -l 7G death_star_plans.tiff
fallocate: death_star_plans.tiff: fallocate failed: No space left on device
[ec2-user@linuxbucket ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        484M   56K  484M   1% /dev
tmpfs           494M     0  494M   0% /dev/shm
/dev/xvda1      7.8G  7.7G     0 100% /
[ec2-user@linuxbucket ~]$ du -sh ~/
6.6G    /home/ec2-user/
Now keep the file death_star_plans.tiff open with the tail -f command and then delete it.
[ec2-user@linuxbucket ~]$ tail -f death_star_plans.tiff &
[2] 2818
[ec2-user@linuxbucket ~]$ rm death_star_plans.tiff
[ec2-user@linuxbucket ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        484M   56K  484M   1% /dev
tmpfs           494M     0  494M   0% /dev/shm
/dev/xvda1      7.8G  7.7G     0 100% /
[ec2-user@linuxbucket ~]$ du -sh ~/
28K     /home/ec2-user/
Above, the tail command running in the background with PID 2818 is keeping the file open even though we've already deleted it. This can be confirmed by running the following:
[ec2-user@linuxbucket ~]$ lsof | grep "(deleted)"
tail      2818  ec2-user    3r   REG  202,1  7009644544  2469  /home/ec2-user/death_star_plans.tiff (deleted)
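As a side note, many versions of lsof can do this without the grep: the +L1 option lists open files whose link count is below one, which is exactly the deleted-but-still-open case. Treat this as a sketch, since the exact output columns vary by lsof version:

# List open files with a link count below one (deleted but still held open by a process).
# Run with sudo to include file descriptors belonging to other users' processes.
lsof +L1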
Removing unwanted file descriptors
There are a few ways to free the space back up.
Redirect NULL to the file descriptor
The descriptor is located at /proc/2818/fd/3. We know this from the PID (2818) and file descriptor number (3r) shown in the lsof output above.
[ec2-user@linuxbucket ~]$ ls -l /proc/2818/fd | grep deleted
lr-x------ 1 ec2-user ec2-user 64 Jun 11 23:28 3 -> /home/ec2-user/death_star_plans.tiff (deleted)
[ec2-user@linuxbucket ~]$ > /proc/2818/fd/3
tail: cannot watch ‘death_star_plans.tiff’: No such file or directory
[2]-  Exit 1                 tail -f death_star_plans.tiff
[ec2-user@linuxbucket ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        484M   56K  484M   1% /dev
tmpfs           494M     0  494M   0% /dev/shm
/dev/xvda1      7.8G  1.2G  6.6G  15% /
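One caveat: if the process holding the descriptor belongs to another user (a daemon running as root, for example), the redirection itself has to run with that user's privileges. Prefixing the command with sudo alone isn't enough, because the redirection is performed by your own shell before sudo ever runs. A minimal sketch, reusing the PID and descriptor number from the example above:

# Run the truncating redirection inside a root shell; a plain 'sudo > /proc/2818/fd/3'
# would fail because the unprivileged calling shell performs the redirection, not sudo.
sudo sh -c '> /proc/2818/fd/3'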
Restart the service holding the file open
One common scenario is that a service like syslog or apache has a file descriptor held open on a log file that was not properly cleaned up by logrotate. In these cases restarting the service or forcing log rotation can release the open descriptor. Depending on the version of Linux you're running, you might use the service or systemctl command to restart the service. Log rotation can usually be forced by running logrotate -f /etc/logrotate.d/syslog. Make sure you replace syslog with the logrotate config for your target service.
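Here is a rough sketch of both approaches, assuming a systemd-based system and that rsyslog is the service holding the descriptor; substitute your own service name and logrotate config:

# Restart the service so it closes the deleted descriptors and reopens its log files.
sudo systemctl restart rsyslog      # on older init systems: sudo service rsyslog restart

# Or force log rotation; a well-behaved logrotate config signals the service to reopen its logs.
sudo logrotate -f /etc/logrotate.d/syslog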
Conclusion
Now that the Death Star Plans have been deleted, we have enough space to start work on the Starkiller Base plans. Don't forget to encrypt the data this time. You never know when a janitor is going to go snooping around.
