![Hard Drive](http://static.howstuffworks.com/gif/adding-a-hard-disk-1-1.jpg)Well, I have just received the RMA replacement for my failed Samsung HD204UI 2TB drive. Now it is time to put the drive back into my Linux server and rebuild the RAID-1 (mirror) array so that the data resumes its highly available state. Here are the steps I took:
- Install the hard drive. Since I am not using hot-swap drive bays, I will have to shut down the server to attach the drive to the SATA-II controller.
- Locate the disk (the new disk is /dev/sdd). I found this by looking at "fdisk -l". My replacement is the unpartitioned drive that has taken over the device name of my failed unit.
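As a sketch of what to look for in the "fdisk -l" output (the sample below is stubbed inline for illustration, since the real command needs root and the actual disks), the replacement stands out as the drive reported without a valid partition table:

```shell
# Stubbed excerpt of "fdisk -l" output; device names follow this article's
# setup (/dev/sdc surviving, /dev/sdd replacement) and are assumptions.
fdisk_sample="Disk /dev/sdc: 2000.4 GB
/dev/sdc1   63 3907024064 fd Linux raid autodetect
Disk /dev/sdd: 2000.4 GB
Disk /dev/sdd doesn't contain a valid partition table"

# The unpartitioned replacement is the one flagged by fdisk:
echo "$fdisk_sample" | grep "valid partition table"
```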
- A Linux RAID (type fd) partition must be created on the new drive. Create the partition table on the new drive, /dev/sdd, identical to that of the drive already in the array, using the "sfdisk" command:
```
sfdisk -d /dev/sdc > sdc_partition.out
sfdisk /dev/sdd < sdc_partition.out
Checking that no-one is using this disk right now ... OK

Disk /dev/sdd: 243201 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdd: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdd1            63 3907024064 3907024002  fd  Linux raid autodetect
/dev/sdd2             0         -          0   0  Empty
/dev/sdd3             0         -          0   0  Empty
/dev/sdd4             0         -          0   0  Empty
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
```
- Verify that the disk was partitioned properly and matches the surviving drive. In the example, I am comparing the partition tables of the drive currently in the RAID array with my new drive. Notice that the newly created partition is /dev/sdd1 on my replacement drive /dev/sdd.
```
sfdisk -l /dev/sdc

Disk /dev/sdc: 243201 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdc1          0+ 243200  243201- 1953512001  fd  Linux raid autodetect
/dev/sdc2          0       -       0          0    0  Empty
/dev/sdc3          0       -       0          0    0  Empty
/dev/sdc4          0       -       0          0    0  Empty

[root@aquila ~]# sfdisk -l /dev/sdd

Disk /dev/sdd: 243201 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdd1          0+ 243200  243201- 1953512001  fd  Linux raid autodetect
/dev/sdd2          0       -       0          0    0  Empty
/dev/sdd3          0       -       0          0    0  Empty
/dev/sdd4          0       -       0          0    0  Empty
```
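Rather than eyeballing the two tables, the comparison can be scripted. This is a sketch with the "sfdisk -d" dump lines stubbed inline (on the real server they would come from `sfdisk -d /dev/sdc` and `sfdisk -d /dev/sdd`); it strips the device names and compares only the partition geometry:

```shell
# Stubbed single-partition dumps; real dumps come from "sfdisk -d <device>".
src_dump='/dev/sdc1 : start=63, size=3907024002, Id=fd'
new_dump='/dev/sdd1 : start=63, size=3907024002, Id=fd'

# Drop everything up to and including the colon (the device name),
# then compare the remaining start/size/Id fields.
if [ "${src_dump#*:}" = "${new_dump#*:}" ]; then
  echo "partition layouts match"
fi
```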
My degraded raidset is /dev/md3. Here is its current status:
```
mdadm -v -D /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Sat Jun  4 21:34:09 2011
     Raid Level : raid1
     Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jul 12 20:27:30 2011
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 5de5eb25:d02318e7:da699fd5:65330895
         Events : 0.13444

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed
```
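The same degraded state also shows up in compact form in /proc/mdstat, where "[2/1] [U_]" means only 1 of the 2 mirror members is up. A sketch, with the /proc/mdstat contents stubbed inline (on the server you would simply run `cat /proc/mdstat`):

```shell
# Stubbed /proc/mdstat excerpt for a degraded two-way mirror.
mdstat='md3 : active raid1 sdc1[0]
      1953511936 blocks [2/1] [U_]'

# Pull out the "[members configured/members active]" counter.
echo "$mdstat" | grep -o '\[[0-9]/[0-9]\]'
```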
- Now I need to reconstruct the degraded RAID array with the partition on the new drive. Since the replacement drive is now properly partitioned, it can simply be added to the array, /dev/md3, using the mdadm --manage command.
```
mdadm -v --manage /dev/md3 --add /dev/sdd1
mdadm: added /dev/sdd1
```
Now taking a look at the /dev/md3 raidset, the new partition has been added as a spare and the array has automatically started recovering. This is a 2TB drive, so it will take a while to come fully in sync.
```
mdadm -D /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Sat Jun  4 21:34:09 2011
     Raid Level : raid1
     Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jul 12 20:27:30 2011
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           UUID : 5de5eb25:d02318e7:da699fd5:65330895
         Events : 0.13444

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       2       8       49        1      spare rebuilding   /dev/sdd1
```
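While waiting, the rebuild progress can be tracked from /proc/mdstat (for example with `watch cat /proc/mdstat`). As a sketch, with a recovery line stubbed inline for illustration, the percentage can be pulled out like this:

```shell
# Stubbed /proc/mdstat recovery line; the figures here are illustrative.
mdstat='[>....................]  recovery =  0.3% (6029312/1953511936) finish=312.5min speed=103862K/sec'

# Extract the first "NN.N%" token, i.e. the completed percentage.
echo "$mdstat" | grep -oE '[0-9]+\.[0-9]+%' | head -1
```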
Here is what the healthy mdadm status of the 2TB mirror raidset looks like (after about 8-10 hours with my /proc/sys/dev/raid settings):
```
mdadm -D /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Sat Jun  4 21:34:09 2011
     Raid Level : raid1
     Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Wed Jul 13 08:20:22 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 5de5eb25:d02318e7:da699fd5:65330895
         Events : 0.13498

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
```
That was all there was to it. If you follow these steps on your own server, your mileage may vary, so be careful. Take care of your data first and make a consistent, recoverable backup before you start. Remember, a backup that has never been restored is not a backup.