Btrfs w/ multi-device raid Part 1
If you are currently using Fedora 14 you may wish to play with the new and exciting btrfs file-system, and today we will begin to show you how. I wrote this because even after a lot of reading I was still left with questions about a few irregularities, such as not being able to use fstab to mount a multi-device btrfs raid on boot.
This new copy-on-write file-system allows for file-system-level checksumming, live pool resizing, and live snapshots.
Copy-on-write can be summarized by the following entry found at http://en.wikipedia.org/wiki/Copy-on-write
‘The fundamental idea is that if multiple callers ask for resources which are initially indistinguishable, they can all be given pointers to the same resource. This function can be maintained until a caller tries to modify its “copy” of the resource, at which point a true private copy is created to prevent the changes becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that if a caller never makes any modifications, no private copy need ever be created’
With COW in mind, btrfs was developed with a rich set of enterprise features such as subvolumes, snapshots, online pool resizing, and even online addition or removal of volume devices. An ext3/4 partition can easily be converted to btrfs because btrfs does not anchor metadata to fixed locations on the device. Btrfs also contains early solid-state disk support in the form of file-system optimizations, though the feature is still under heavy development from release to release.
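Copy-on-write is easy to see from user space with cp's --reflink flag, which on btrfs clones a file instantly by sharing its data blocks rather than duplicating them. The sketch below uses --reflink=auto so it also runs (as a plain copy) on ext3/4; the /tmp paths are just examples.

```shell
#!/bin/sh
# Clone a file with copy-on-write where the file-system supports it.
# On btrfs, the clone shares data blocks with the source; blocks are only
# duplicated when one of the files is later modified.
# --reflink=auto falls back to a normal copy on ext3/4, so this runs anywhere.
echo "hello copy-on-write" > /tmp/cow-demo-src
cp --reflink=auto /tmp/cow-demo-src /tmp/cow-demo-clone
cmp /tmp/cow-demo-src /tmp/cow-demo-clone && echo "contents identical"
```

On a btrfs mount you can swap in --reflink=always, which fails loudly instead of silently copying when cloning is not supported.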
The following are some amazing btrfs articles and resources to help answer any questions you may have.
If you are already using Fedora 14 and wish to try btrfs on separate devices to ensure you don't lose data, then you will need to install the 'btrfs-progs' package via YUM. This package contains the user-space utilities to work with btrfs on your devices (the btrfs kernel module itself ships with the Fedora kernel).
yum install btrfs-progs
Now you can either reboot or just check whether the btrfs module was loaded and, if not, load it manually. Loading is usually done on demand, for example when you attempt to mount a btrfs pool.
lsmod | grep btrfs
If no module is loaded, then the following command will load the btrfs module into the running kernel.
modprobe btrfs
Now we can begin to use btrfs as a file-system in a preexisting Fedora 14 system.
If you would like to install Fedora 14 to a btrfs device, you will need an ext3/4 '/boot' partition, as GRUB currently does not support booting from a btrfs volume. Fedora also does not support btrfs multi-device setups from the installer at this time, so there is no bootable btrfs raid0, raid1, or raid10 volume support yet. To enable btrfs in the Fedora Anaconda installer you will need to supply the kernel switch 'btrfs' when you reach the Fedora installer boot menu. This also ensures that the 'btrfs-progs' package is installed by default. Instructions can be found here for Fedora 14
In our first example we will create a single btrfs volume using the device /dev/sdb
mkfs.btrfs /dev/sdb
Simple huh? Just like your old ext3/4 method. Now let's create a btrfs volume with a label of VMstor.
mkfs.btrfs -L VMstor /dev/sdb
If you would like to view the file-system to ensure it was created, get the UUID of the disk, or see which disks are being used by which btrfs volumes, you can run the command 'btrfs filesystem show'. You can also specify a device at the end to see which btrfs file-systems are on a particular drive. In my example I have a raid0 setup across two disks.
btrfs filesystem show
My output example below:
# btrfs filesystem show /dev/sdb
failed to read /dev/sr0
Label: none  uuid: 2dc59e45-5db8-4e3a-936b-5fcf2fd923cd
        Total devices 2 FS bytes used 15.61GB
        devid    2 size 69.25GB used 69.01GB path /dev/sdc
        devid    1 size 69.25GB used 69.03GB path /dev/sdb

Btrfs Btrfs v0.19
You can ignore the error 'failed to read /dev/sr0'. It just means that the program could not scan the CD-ROM device, because btrfs-progs v0.19 does not currently ship with any udev rules. Once udev rules are added, the CD-ROM device will no longer be scanned.
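For the curious, such a rule would look something like the sketch below. This is a hypothetical example of the kind of udev rule a later btrfs-progs release could ship, so that newly appearing block devices are registered with btrfs automatically; the file name and rule details are illustrative, not an actual shipped rule.

```
# /etc/udev/rules.d/64-btrfs.rules  (hypothetical sketch, not shipped yet)
# When a block device carrying a btrfs signature appears, register it with
# the kernel so multi-device volumes can assemble without a manual scan.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="btrfs", RUN+="/sbin/btrfs device scan $env{DEVNAME}"
```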
Now let's take it a step further and create a raid1 btrfs volume with the label iso-store.
mkfs.btrfs -m raid1 -d raid1 -L iso-store /dev/sdc /dev/sdd
Note the '-m raid1' and '-d raid1' switches. These tell btrfs which profiles to apply to the metadata and the data. In this case we will mirror both the metadata and the data.
-d, --data      data profile: raid0, raid1, raid10, or single
-m, --metadata  metadata profile: values like the data profile
You can use these to create combinations of how the metadata and data are stored across the disks in your btrfs volume.
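As a sketch of the possible combinations, the commands below mix the two profiles in different ways. The device names are examples, and the mkfs calls are wrapped in a dry-run helper so nothing is overwritten by accident; swap the echo for the real command when you mean it.

```shell
#!/bin/sh
# Dry-run helper: prints the destructive mkfs commands instead of running them.
run() { echo "would run: $*"; }

# mirrored metadata with striped data: fast data, resilient metadata
run mkfs.btrfs -m raid1 -d raid0 /dev/sdc /dev/sdd
# everything mirrored, as in the iso-store example
run mkfs.btrfs -m raid1 -d raid1 /dev/sdc /dev/sdd
# single copy of data and metadata spanning both devices: no redundancy
run mkfs.btrfs -m single -d single /dev/sdc /dev/sdd
```

Note that raid10 needs at least four devices, so it is omitted from this two-disk sketch.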
Remember the btrfs disk we called VMstor? Let's re-create the disk and set up a few subvolumes (or pools); this will allow us to snapshot only the subvolume we want instead of the entire disk. This means that if we have multiple pools, we can resize one online while performing a snapshot on another, without them conflicting.
mkfs.btrfs -L disk1 /dev/sdb
Now let's mount this disk to /mnt as disk1
mkdir /mnt/disk1 && mount /dev/sdb /mnt/disk1
The 'mount' command should now show the btrfs device as mounted.
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/sdb on /mnt/disk1 type btrfs (rw)
Now we can create the actual subvolume (or pool) that will be called VMstor.
btrfs subvolume create /mnt/disk1/VMstor
Now we will mount the subvolume as a test before deciding to add it to /etc/fstab.
mkdir /vmstor && mount -t btrfs -o subvol=VMstor /dev/sdb /vmstor
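With the subvolume mounted, snapshots can target just that subvolume rather than the whole disk. The sketch below assumes the VMstor paths from the examples above and skips itself gracefully when /mnt/disk1 is not actually mounted.

```shell
#!/bin/sh
# Snapshot a single subvolume. A snapshot is itself a subvolume; thanks to
# copy-on-write it is created instantly and shares all unmodified blocks
# with its source until either side is changed.
if grep -q " /mnt/disk1 " /proc/mounts; then
    btrfs subvolume snapshot /mnt/disk1/VMstor /mnt/disk1/VMstor-snap
    btrfs subvolume list /mnt/disk1
else
    echo "/mnt/disk1 is not mounted - skipping snapshot demo"
fi
```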
You can write some data to the /vmstor folder if you would like. But now we need to put these volumes and subvolumes in /etc/fstab so that they are automatically mounted at boot. Unfortunately btrfs currently requires a manual 'btrfs device scan' for each btrfs device after the btrfs kernel module is loaded. This is due to the lack of any udev rules and will be fixed in future releases.
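For reference, the entries would look something like the sketch below, using the labels and devices from the examples above (the /mnt/iso-store mount point is just an example). On this btrfs-progs release these lines alone are not enough for multi-device volumes: the device scan still has to run first.

```
# /etc/fstab sketch -- labels and devices taken from the examples above.
# Multi-device volumes will NOT mount from these lines until
# 'btrfs device scan' has been run after the btrfs module loads.
LABEL=iso-store  /mnt/iso-store  btrfs  defaults       0 0
/dev/sdb         /vmstor         btrfs  subvol=VMstor  0 0
```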
Part 2 will be available shortly and will show how to set up btrfs device scans and mount the volumes at boot.