Introduction
Recently, I needed to manage several Linux machines in my department.
I had to keep track of disk usage on every machine, and it is not efficient to log in to each one and type the same commands every day.
In this tutorial, I will show you how to develop a disk usage monitor as a Bash script and run it automatically every day as a Cron job.
Prerequisites
Before starting this tutorial, you will need the following:
- A Linux-like operating system.
- Bash installed on that operating system.
Disk Usage Monitor Development
First, open a terminal and create the script with vim:
vim disk_monitor.sh
Then think through the development steps and approach:
- Scan all available disk devices and find where they are mounted.
To scan all available disk devices, we can use the lsblk command. A sample of its output looks like this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 243M 0 part /boot
├─sda2 8:2 0 1K 0 part
├─sda3 8:3 0 74.5G 0 part /data
└─sda5 8:5 0 148.8G 0 part
├─lubuntu--vg-root 253:0 0 141G 0 lvm /
└─lubuntu--vg-swap_1 253:1 0 7.8G 0 lvm [SWAP]
We can then use the awk command to extract all mounted paths from the MOUNTPOINT column:
lsblk | awk '{print $7}'
And the result will be:
MOUNTPOINT
/boot
/data
/
[SWAP]
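As a side note, lsblk can also print just the column we need and skip the header line, which avoids having to filter out the MOUNTPOINT header later; this optional alternative to the awk pipeline uses the standard --noheadings and --output options:
lsblk --noheadings --output MOUNTPOINT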
- Check every mounted path and make sure its disk usage does not exceed a given usage percentage.
- If a disk exceeds the given percentage, print a warning message.
- Collect these messages and store them in a text file.
- Make the text file unique for each day.
We can use a for loop to traverse all mount point paths and run df -lh /path to check the current disk usage percentage of each one. If the usage percentage is greater than the given threshold, the warning message is stored in a unique disk_usage_{date}.txt file.
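For reference, the fifth column (Use%) of the df -lh output is the value we compare against the threshold. The following sample is purely illustrative (the sizes are made up) and shows the root mount point from the lsblk output above:
df -lh /
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/lubuntu--vg-root  141G   63G   71G  47% /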
Putting it all together, the Bash script is as follows:
#!/bin/bash
# Date suffix that makes the report file unique per day.
today_date=$(date "+%Y_%m_%d")
# Mount points taken from the MOUNTPOINT column of lsblk.
mounted_disk_devices=$(lsblk | awk '{print $7}')
# Usage threshold (in percent), passed as the first argument.
disk_usage_percent=$1

for mounted_disk_device in ${mounted_disk_devices}; do
    # Skip empty fields, the column header, and swap space.
    if [[ ${mounted_disk_device} = '' ]]; then
        continue
    fi
    if [[ ${mounted_disk_device} = 'MOUNTPOINT' ]]; then
        continue
    fi
    if [[ ${mounted_disk_device} = '[SWAP]' ]]; then
        continue
    fi
    # Take the Use% column from df and strip the trailing '%'.
    disk_usage=$(df -lh ${mounted_disk_device} | tail -n 1 | awk '{print $5}' | sed -e 's/%//g')
    if [[ ${disk_usage} -gt ${disk_usage_percent} ]]; then
        message="Disk mount point: ${mounted_disk_device} should be considered."
        echo ${message}
        echo ${message} >> "disk_usage_${today_date}.txt"
    fi
done
Remember to make the script executable with chmod:
chmod 755 ./disk_monitor.sh
This gives the owner read, write, and execute permission, and gives the group and others read and execute permission.
That's it! The script is used as follows:
./disk_monitor.sh 10
This checks whether the disk usage of any mounted path is greater than 10%.
If a mount point meets this condition, the script prints a warning message and appends it to the day's text file.
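For example, if the /data mount point is more than 10% full, the script prints and appends a line such as:
Disk mount point: /data should be considered.
To run the check automatically every day, as promised in the introduction, the script can be registered as a Cron job with crontab -e. The entry below is only a sketch; it assumes the script lives in /home/user and uses a 10% threshold, so adjust the path, schedule, and threshold to your setup:
# Run the disk usage monitor every day at 07:00.
0 7 * * * cd /home/user && ./disk_monitor.sh 10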
Advanced Development
For more advanced usage of ./disk_monitor.sh, we can use the scp command to copy the script into the remote user's home folder on each remote machine:
scp -P {ssh_port_number} './disk_monitor.sh' user@user_host_name:/home/user/disk_monitor.sh
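Because the collection loop below connects over SSH non-interactively, key-based authentication should be set up first. The commands below are a minimal sketch, assuming no key pair exists yet; the port and host placeholders are the same as in the scp command above:
# Generate a key pair (if you do not already have one) and install it on the remote machine.
ssh-keygen -t ed25519
ssh-copy-id -p {ssh_port_number} user@user_host_name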
Once every remote machine has the ./disk_monitor.sh script, we can run the following Bash script with Cron at a specific time every day. We assume that we have ten machines, named machine1 to machine10, and that disk_monitor.sh has been copied to every one of them:
#!/bin/bash
numbers=$(seq 1 10)
today_date=$(date "+%Y_%m_%d")
for number in ${numbers}; do
    # Run the monitor remotely with a 10% threshold; the report is written to the remote home folder.
    ssh user@machine${number} './disk_monitor.sh 10'
    # Copy back the report file that disk_monitor.sh created for today.
    scp user@machine${number}:/home/user/disk_usage_${today_date}.txt "./disk_usages_machine${number}_${today_date}.txt"
done
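To run this collection automatically every day, the loop itself can also be registered as a local Cron job. This entry is only a sketch; it assumes the loop above has been saved as /home/user/collect_disk_usages.sh and made executable:
# Collect the disk usage reports from machine1 to machine10 every day at 08:00.
0 8 * * * /home/user/collect_disk_usages.sh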
Conclusion
In this tutorial, you've learned the following:
- How to create a Bash script that monitors disk usage.
- How to use a for loop to traverse remote machines, run the disk usage monitor on each one, and copy the resulting report file back to the local machine.