Sep. 19th, 2008
10:19 am - More LVM love
I must say again that I love LVM. I attached my new hard drives to my fileserver while it was running. I then had to reboot because the 3ware 9650SE-4LPML card seems to require the 3ware BIOS setup utility to configure a RAID array and doesn't seem to come with a Linux utility for the job. Still, being able to add the drives while the server was up saved me about 45 minutes of downtime.
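For anyone wanting to try the same trick on a plain SATA/SCSI controller, here's a minimal sketch of getting the Linux kernel to notice hot-added drives without a reboot. The hostN entries vary from system to system (and a RAID card like the 3ware may hide raw drives behind its own logical units), so treat this as the generic mechanism rather than a recipe:

    # Ask each SCSI host adapter to rescan its bus for new drives.
    for host in /sys/class/scsi_host/host*; do
        echo "- - -" > "$host/scan"
    done

    # Newly found drives should show up in the kernel log.
    dmesg | tail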
After the reboot, I started using LVM in earnest. I added the new array to the volume group as a physical volume, used pvmove to migrate all the filesystems onto it (while they were in active use, mind you), and then removed the old RAID array from the volume group. Standard LVM stuff, but I was able to do it all while the system was up and running.
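The whole dance is only a handful of commands. A sketch, with /dev/sda standing in for the new array, /dev/sdb for the old one, and vg0 for the volume group (all placeholders, not my actual names):

    # Turn the new array into an LVM physical volume and add it
    # to the existing volume group.
    pvcreate /dev/sda
    vgextend vg0 /dev/sda

    # Migrate every allocated extent off the old array. The
    # filesystems stay mounted and in use the whole time.
    pvmove /dev/sdb

    # Remove the now-empty old array from the volume group.
    vgreduce vg0 /dev/sdb
    pvremove /dev/sdb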
Also, I had been using 64MiB allocation chunks (what LVM calls physical extents), but with 3.18TiB of storage that works out to somewhere around 52,000 of them, which gets unwieldy. But LVM now has a feature that lets you change the extent size in place. In my case, all of my physical volumes had an even number of chunks, and all the filesystems also occupied a contiguous region with an even number of chunks (at least after I moved them they did), so I was able to switch to a more manageable 128MiB chunk size.
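The extent-size change itself is a one-liner; a sketch, again with vg0 as a placeholder. It refuses to run unless every physical volume and every logical volume divides evenly into the new size, which is why the even chunk counts mattered:

    # Double the physical extent size from 64MiB to 128MiB.
    vgchange --physicalextentsize 128M vg0

    # Confirm the new extent size and counts.
    vgdisplay vg0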
All this change only required about 15 minutes of server downtime. I'll likely have another 45 minutes or so as I switch the old array to RAID 0 (to make sure the wipe works thoroughly), wipe it, and then remove all the disks and the card. Most of that will likely be taken up by getting the new disks into cages. That makes a total of about 50-60 minutes of downtime.
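For the wipe itself nothing fancy is required; a sketch, with /dev/sdb standing in for the old array once it's reconfigured as a single RAID 0 device:

    # Overwrite the entire old array with zeros. With the disks
    # striped as RAID 0, this touches every sector of every drive
    # (give or take a little controller metadata).
    dd if=/dev/zero of=/dev/sdb bs=1M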
If 3ware had a RAID management utility that worked in Linux, I could've done all of this with zero downtime so far, and likely only about 30 minutes of downtime in the future for physically moving the drives into cages.
If I had 8 hot-swap cages (4 for the old drives, 4 for the new), I could do it with no downtime.