You have this Linux system that doesn't use RAID. You start to worry about the loss of files (those changed since the last backup; you do backups, right?) and the downtime should the disk fail. Maybe it is a good idea to have RAID. But how do you retrofit RAID1 without the long downtime of backing up, reformatting the disks and restoring the data?
I suspected there might be a way to start off with a degraded RAID1 array on the second, new disk, copy the data on the old disk onto it, change the partition type on the old disk to a RAID member, add it to the array and let it resync. Sure enough it can be done, and François Marier has blogged it. In fact he goes further and shows how to reinstall the boot loader; I didn't have to do this because the partition I was mirroring is /home, not the boot partition. The critical tip is the use of the keyword missing in place of the second device to create the degraded array without issues.
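As a rough sketch of that sequence (the device names /dev/sda1 for the old partition, /dev/sdb1 for the new one and /dev/md0 for the array are only illustrative; see François Marier's post for the full procedure):

    # create the RAID1 array in degraded mode, with "missing" standing in
    # for the second member that will be added later
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

    # put a filesystem (or LVM, see below) on /dev/md0 and copy the data
    # over, then change the old partition's type to a RAID member and add it
    mdadm --add /dev/md0 /dev/sda1

    # watch the resync progress
    cat /proc/mdstat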
In my case the decision to go RAID1 was made after a failed disk caused loss of files. Not using RAID1 in the first place was an unwise decision by the system builder.
I've varied the procedure a little. Instead of putting ext4 directly on the RAID device, I made it an LVM physical volume, created a logical volume on it, and then made the ext4 filesystem inside that. This allows me to migrate the content to a larger disk, should expansion be needed in future, using logical volume operations, with little downtime.
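A sketch of that layering, assuming the array is /dev/md0 and using illustrative names for the volume group and logical volume:

    # turn the RAID device into an LVM physical volume
    pvcreate /dev/md0

    # create a volume group on it and a logical volume using all the space
    vgcreate vg0 /dev/md0
    lvcreate -n home -l 100%FREE vg0

    # create the ext4 filesystem inside the logical volume
    mkfs.ext4 /dev/vg0/home

A future migration could then be done by adding the larger disk's array as another physical volume and moving the extents across with pvmove while the filesystem stays mounted.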
There's one thing you should do if you decide to use logical volumes on the RAID: after you have assembled the RAID array, run vgscan. This reinitialises the cache in /etc/lvm/cache/.cache. Otherwise the cache will still contain entries for the components of the array and the volume group will fail to assemble later on with a mysterious (to me at first) duplicate PV error, because LVM thinks the array components themselves are candidate physical volumes. LVM is normally configured to ignore components of RAID arrays, but only if the cache is up to date. See here for more details.
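Concretely, once the array is up:

    # rebuild the LVM cache so it refers to /dev/md0 and not the
    # underlying array components
    vgscan

    # check that only the md device is listed as a physical volume
    pvs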
A couple of caveats: on other Linux systems mdadm.conf may be in /etc rather than /etc/mdadm. Also, the output of mdadm --detail --scan used to generate the mdadm.conf line will contain a spares=1 directive if it is run while the array is still resyncing. Remove it, or you will have problems on the next boot.
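For example (adjust the path to wherever your distribution keeps the file):

    # append the array definition to mdadm.conf
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # then edit the new ARRAY line and delete any spares=1 entry if the
    # array was still resyncing when the command was run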