I'm a bit ashamed to make this post. Even though I have quite a bit of experience maintaining Linux-based servers, I have never added storage to a running server before! Well, I did once very early in my career, but my hands were being held the entire time by a senior engineer; I never had to make crucial decisions or research the process on my own. Ever since I moved into the cloud space, adding more storage hasn't been a typical task due to the ephemeral nature of cloud-based resources.
Well, today that changes! I will add a 1 TB disk to my virtualization server. It will serve as the home for my VMs, separate from the disk hosting the OS.
Prerequisites
- Linux Host (I will be using Fedora Server 30)
- Disk storage separate from the OS disk
Let's Begin!
To get started, shut down your host server and physically install the additional disk. If you are on a proper server, it might support hot-swappable drives, in which case you may not need to shut it down at all. For me, I'm installing a 1 TB SSD in my gen 8 NUC and I don't have that luxury! Once it's installed, boot up your host and connect to it over SSH.
1. Check the current disk space usage. To do this, execute the df -h command. The -h option outputs the values in "human readable" format. You should get an output like below.
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 1.3M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 15G 2.2G 13G 15% /
tmpfs 16G 4.0K 16G 1% /tmp
/dev/sdb3 976M 169M 740M 19% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/1000
Notice my 1 TB drive is not showing up.
2. Check what the current block devices are by executing the lsblk -p command.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sdb 8:16 1 57.9G 0 disk
├─/dev/sdb1 8:17 1 4M 0 part
├─/dev/sdb2 8:18 1 1M 0 part
├─/dev/sdb3 8:19 1 1G 0 part /boot
├─/dev/sdb4 8:20 1 53.5G 0 part
│ ├─/dev/mapper/fedora-root 253:0 0 15G 0 lvm /
│ └─/dev/mapper/fedora-swap 253:1 0 5.5G 0 lvm [SWAP]
├─/dev/sdb5 8:21 1 250M 0 part
├─/dev/sdb6 8:22 1 250M 0 part
├─/dev/sdb7 8:23 1 110M 0 part
├─/dev/sdb8 8:24 1 286M 0 part
└─/dev/sdb9 8:25 1 2.5G 0 part
Here it shows /dev/sdb as having a size of 57.9 GB. That is my 64 GB OS drive! The host doesn't recognize the new disk that I installed!
3. Now let's take a look at the partition table with the fdisk -l command. You should get a similar output.
Disk /dev/sda: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SATA SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 57.9 GiB, 62109253632 bytes, 121307136 sectors
Disk model: Thumb Drive 1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 88BE9F4C-38E2-47D0-9974-A4F5D0781A7B
** ADDITIONAL LINES CUT FOR BREVITY **
Notice /dev/sda: that is the additional 1 TB disk I added. Cool, so Fedora does recognize the disk; however, it's currently in an unusable state.
4. Use the fdisk /dev/sda command to start the partitioning process. The prompt will change to something like below.
Command (m for help):
5. Enter p to check the partition table. From the output below, /dev/sda has no partitions.
Command (m for help): p
Disk /dev/sda: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SATA SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
6. Create a new GPT partition table by entering g.
Command (m for help): g
7. Enter n to create a new partition. You can go ahead and accept the defaults like I did. If your disk was used before, fdisk may also ask about an old filesystem signature; below, mine shows a VMFS signature left over from this disk's previous life with ESXi.
Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-2000409230, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-2000409230, default 2000409230):
Created a new partition 1 of type 'Linux filesystem' and of size 953.9 GiB.
Partition #1 contains a VMFS_volume_member signature.
Do you want to remove the signature? [Y]es/[N]o: n
8. Now list the partitions again with p and you should see the new partition. Take note that it is /dev/sda1; we will need that later when we create the filesystem.
Command (m for help): p
Disk /dev/sda: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SATA SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 53817ED4-974F-5E47-B4E2-6FCFF28EE3AB
Device Start End Sectors Size Type
/dev/sda1 2048 2000409230 2000407183 953.9G Linux filesystem
9. Alright, the new partition is in place; time to save the changes by entering w. This will kick you out of the fdisk prompt.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
10. Create a filesystem on the new partition by issuing the mkfs.ext4 /dev/sda1 command. When you enter that command you will see something like below.
mke2fs 1.44.6 (5-Mar-2019)
/dev/sda1 contains a VMFS_volume_member file system
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 250050897 4k blocks and 62513152 inodes
Filesystem UUID: f5a5557a-dcb8-43b1-ae98-d5c1eff2f170
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
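Incidentally, mkfs.ext4 also works on a regular file, which makes it easy to rehearse this step without touching a real disk. A small sketch (the -L volume label "appl" here is just an example, not something the steps above require):

```shell
# Rehearse on a small file-backed image (assumes e2fsprogs is installed)
truncate -s 64M /tmp/practice-fs.img

# -F forces mkfs to operate on a non-block device; -L sets a volume label
mkfs.ext4 -q -F -L appl /tmp/practice-fs.img

# Inspect the result: blkid reports the label, UUID, and TYPE="ext4"
blkid /tmp/practice-fs.img
```

On the real partition you would of course skip -F and run mkfs.ext4 against /dev/sda1 as shown above.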
11. I will create a new directory onto which I will mount the /dev/sda1 filesystem. To do this, use the mkdir /appl command. "Appl" is short for applications; it will hold all of my new applications and VMs.
12. Mount /dev/sda1 onto /appl with the mount /dev/sda1 /appl command.
13. Check the disk usage again with the df -h command. You should see the following.
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 1.4M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 15G 2.2G 13G 15% /
tmpfs 16G 12K 16G 1% /tmp
/dev/sda1 938G 77M 891G 1% /appl
Nice! /dev/sda1 is now showing up at the bottom.
14. If you've gotten this far, you're 99% complete. All that is left is to ensure the mount persists after a reboot of the host. First we will need to grab the UUID of the filesystem with the blkid command.
/dev/sda1: UUID="f5a5557a-dcb8-43b1-ae98-d5c1eff2f170" TYPE="ext4" PARTUUID="212dc680-c149-0e48-9788-dd104e51d3cd"
/dev/sdb1: SEC_TYPE="msdos" LABEL_FATBOOT="ESXi" LABEL="ESXi" UUID="5BDF-84AB" TYPE="vfat" PARTUUID="a8fa26fb-96c0-4128-b9dc-43cf1d1403c9"
/dev/sdb2: PARTUUID="1b7d3424-0118-49d9-8b46-e582071efc51"
/dev/mapper/fedora-root: UUID="9baa5936-1f34-4372-8816-856ceb3ac183" TYPE="xfs"
As you can see on the first line, the UUID for /dev/sda1 is f5a5557a-dcb8-43b1-ae98-d5c1eff2f170.
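If you'd rather not eyeball the output (say, you're scripting this), blkid can print just the UUID value by itself:

```shell
# Print only the UUID for a given device, with no labels or quotes
blkid -s UUID -o value /dev/sda1
```

That prints the bare UUID string, ready to paste into the fstab entry below.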
15. Open up the fstab file with vim /etc/fstab and add the following line at the bottom of the file.
UUID=f5a5557a-dcb8-43b1-ae98-d5c1eff2f170 /appl ext4 defaults 0 0
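For reference, here is what each field in that line means (the two zeroes at the end cover dump backups and fsck ordering):

```shell
# UUID=f5a5...     device to mount, referenced by UUID so the entry still
#                  works even if the kernel renames /dev/sdX across boots
# /appl            mount point
# ext4             filesystem type
# defaults         default mount options (rw, suid, dev, exec, auto, nouser, async)
# 0                dump flag: 0 = skip this filesystem in dump(8) backups
# 0                fsck pass: 0 = don't fsck at boot (2 is common for data disks)
```

Before rebooting, you can catch typos with sudo mount -a, which mounts everything listed in /etc/fstab that isn't already mounted; if /appl still shows in df -h afterward, the entry is good.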
16. Reboot the host and verify the mount stays in place after the reboot by using the df -h command.