Well, I have run out of storage for backups and shared media on my home-grown Linux NAS server. My existing 1TB mirror is full. Now that 3TB drives have started to trickle out, 2TB capacity devices are dropping in price. I picked up a couple of Samsung HD204UI 2TB drives for $80 each over at Newegg. These are 5400 RPM drives, but that is adequate for a backup/media-sharing logical volume. Be kind, my NAS server is an old Dell 600SC with a 4-port SATA card in a PCI bus slot.
Anyway, this is probably a bit disjointed, but I just thought I would document and share how I am setting up my new software raidset and LVM volume group. To add a little more danger to the mix, I will be doing it live. Well, relatively live. I need to power off the server in order to add the two new disk drives to the SATA controller.
I am migrating the data on an existing 1TB raidset to a mirrored pair of 2TB drives that I will create. The new drives are attached to ports 3 and 4 of the SATA card and show up as devices /dev/sdc and /dev/sdd.
First I will create the partitions on each drive and form the mirror set (RAID 1). The partition type is ‘fd’ (Linux raid autodetect), and I created a single, large partition on each.
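As an aside, the same single ‘fd’ partition could be scripted instead of created interactively in fdisk. A minimal sketch, assuming the classic one-line sfdisk input format that util-linux releases of this vintage accept (empty start and size fields mean "use the whole disk"):
# echo ",,fd" | sfdisk /dev/sdc
# echo ",,fd" | sfdisk /dev/sdd
I did it the interactive way below, which also makes it easy to double-check the result with the ‘p’ command before writing anything.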
Partition the first drive, /dev/sdc:
# fdisk /dev/sdc
The number of cylinders for this disk is set to 243201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-243201, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201):
Using default value 243201
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 243201 1953512001 fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Now partition the second drive, /dev/sdd:
# fdisk /dev/sdd
The number of cylinders for this disk is set to 243201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-243201, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201):
Using default value 243201
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 243201 1953512001 fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
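Before forming the mirror, a quick sanity check that both partition tables came out the same; fdisk -l will happily take more than one device on the command line:
# fdisk -l /dev/sdc /dev/sdd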
With the Linux raid partitions created, the software RAID mirror must be formed; the mdadm command is used here. The first thing to do is to look at the existing raidsets and pick a free device identifier for the new array:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
248896 blocks [2/2] [UU]
md2 : active raid1 sdb1[1] sda1[0]
976759936 blocks [2/2] [UU]
md1 : active raid1 hdb3[1] hda3[0]
36812864 blocks [2/2] [UU]
unused devices: <none>
The devices md0 through md2 are taken, so the new raidset will be created as md3. Now that I know what device to assign to my software RAID group, I can create the mirror (RAID 1) device using my /dev/sdc1 and /dev/sdd1 partitions.
# /sbin/mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 \
/dev/sdc1 /dev/sdd1
mdadm: size set to 1953511936K
mdadm: array /dev/md3 started.
To check on the array and its status:
# /sbin/mdadm --detail /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Sat Nov 20 13:23:21 2010
Raid Level : raid1
Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Sat Nov 20 13:23:21 2010
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Rebuild Status : 0% complete
UUID : 96302901:9b7adda2:0aec7b78:38f51c78
Events : 0.1
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
The array sync was taking quite some time. To speed it up, I adjusted the minimum RAID sync speed. Here is the original value, and what I changed it to:
# sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 1000
# sysctl -w dev.raid.speed_limit_min=60000
dev.raid.speed_limit_min = 60000
This setting improved the array sync speed from about 2800K/s to 38000K/s and will cause it to finish in less than a day.
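To keep an eye on the resync while it runs, I just watch /proc/mdstat refresh every minute or so (assuming the watch utility is installed, which it is on pretty much any distribution):
# watch -n 60 cat /proc/mdstat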
Now, with the mirror created, /dev/md3 can be made a physical volume and added to a logical volume group. Here is a list of the existing volume groups on the system:
# vgdisplay
--- Volume group ---
VG Name datavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.51 GB
PE Size 4.00 MB
Total PE 238466
Alloc PE / Size 238466 / 931.51 GB
Free PE / Size 0 / 0
VG UUID pqSnGs-aoM1-YgU7-fHxV-5dsM-tBDR-yWu3Ip
--- Volume group ---
VG Name rootvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 19
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 7
Open LV 7
Max PV 0
Cur PV 1
Act PV 1
VG Size 35.11 GB
PE Size 4.00 MB
Total PE 8987
Alloc PE / Size 8987 / 35.11 GB
Free PE / Size 0 / 0
VG UUID Xedg1q-JbxX-OOHn-f9th-CTyk-Xr4H-De1zJI
# lvdisplay /dev/datavg/lvdata01
--- Logical volume ---
LV Name /dev/datavg/lvdata01
VG Name datavg
LV UUID q1scUB-8cD3-bIV1-JkRL-3mh8-uqA3-Mc7a8F
LV Write Access read/write
LV Status available
# open 1
LV Size 931.51 GB
Current LE 238466
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
The goal will be to create a physical volume from /dev/md3 and then add it to the datavg volume group. Once a member of the datavg volume group, the logical volume datavg-lvdata01 will be moved to the new physical volume.
Establish /dev/md3 as an LVM physical volume using the pvcreate command, then extend the datavg volume group onto it with vgextend:
# pvcreate /dev/md3
Physical volume "/dev/md3" successfully created
# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 rootvg lvm2 a- 35.11G 0
/dev/md2 datavg lvm2 a- 931.51G 0
/dev/md3 lvm2 -- 1.82T 1.82T
# vgextend -v datavg /dev/md3
Checking for volume group "datavg"
Archiving volume group "datavg" metadata (seqno 6).
Wiping cache of LVM-capable devices
Adding physical volume '/dev/md3' to volume group 'datavg'
Volume group "datavg" will be extended by 1 new physical volumes
Creating volume group backup "/etc/lvm/backup/datavg" (seqno 7).
Volume group "datavg" successfully extended
# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 rootvg lvm2 a- 35.11G 0
/dev/md2 datavg lvm2 a- 931.51G 0
/dev/md3 datavg lvm2 a- 1.82T 1.82T
# vgs -v
Finding all volume groups
Finding volume group "datavg"
Finding volume group "rootvg"
VG Attr Ext #PV #LV #SN VSize VFree VG UUID
datavg wz--n- 4.00M 2 1 0 2.73T 1.82T pqSnGs-aoM1-YgU7-fHxV-5dsM-tBDR-yWu3Ip
rootvg wz--n- 4.00M 1 7 0 35.11G 0 Xedg1q-JbxX-OOHn-f9th-CTyk-Xr4H-De1zJI
Now, with /dev/md3 in the datavg volume group, the logical volume /dev/datavg/lvdata01 needs to be moved off of the /dev/md2 physical volume so that /dev/md2 can be removed (reduced) from the volume group. Here is the current status of the lvdata01 logical volume:
# lvdisplay -vm /dev/datavg/lvdata01
Using logical volume(s) on command line
--- Logical volume ---
LV Name /dev/datavg/lvdata01
VG Name datavg
LV UUID q1scUB-8cD3-bIV1-JkRL-3mh8-uqA3-Mc7a8F
LV Write Access read/write
LV Status available
# open 1
LV Size 931.51 GB
Current LE 238466
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Segments ---
Logical extent 0 to 238465:
Type linear
Physical volume /dev/md2
Physical extents 0 to 238465
The logical volume lvdata01 is on /dev/md2, and it needs to be moved to /dev/md3 so that I can remove the old 1TB drives from the volume group and repurpose them elsewhere. The pvmove command will be used to move the extents of the logical volume from one physical volume to the other.
This will move the logical volume lvdata01 from /dev/md2 to /dev/md3 in the background (-b option).
# pvmove -b -n /dev/datavg/lvdata01 /dev/md2 /dev/md3
Here is the detailed logical volume display while the move is in progress:
# lvdisplay -vm /dev/datavg/lvdata01
Using logical volume(s) on command line
--- Logical volume ---
LV Name /dev/datavg/lvdata01
VG Name datavg
LV UUID q1scUB-8cD3-bIV1-JkRL-3mh8-uqA3-Mc7a8F
LV Write Access read/write
LV Status available
# open 1
LV Size 931.51 GB
Current LE 238466
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Segments ---
Logical extent 0 to 238465:
Type linear
Logical volume pvmove0
Logical extents 0 to 238465
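While the move runs, the copy progress can also be checked with lvs; the copy_percent column on the temporary pvmove0 volume shows how far along it is (assuming your LVM2 build exposes that output field, as mine does):
# lvs -a -o +devices,copy_percent datavg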
The pvmove process is slow and took several hours. If you can afford the downtime, using dd to copy the blocks would have been a lot faster. In any event, about 20 hours later, pvmove completed. The lvdisplay and pvs commands now show the following:
# lvdisplay -m /dev/datavg/lvdata01
--- Logical volume ---
LV Name /dev/datavg/lvdata01
VG Name datavg
LV UUID q1scUB-8cD3-bIV1-JkRL-3mh8-uqA3-Mc7a8F
LV Write Access read/write
LV Status available
# open 0
LV Size 931.51 GB
Current LE 238466
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Segments ---
Logical extent 0 to 238465:
Type linear
Physical volume /dev/md3
Physical extents 0 to 238465
# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 rootvg lvm2 a- 35.11G 0
/dev/md2 datavg lvm2 a- 931.51G 931.51G
/dev/md3 datavg lvm2 a- 1.82T 931.50G
The lvdata01 logical volume is now on the physical volume /dev/md3, and the /dev/md2 physical volume is unused.
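Before reducing the volume group, it does not hurt to confirm that /dev/md2 really has no allocated extents left (pvs already shows PFree equal to PSize above; pvdisplay will show Allocated PE as 0):
# pvdisplay /dev/md2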
Now that /dev/md2 is free, it can be removed from the datavg volume group.
# vgs -v datavg
Using volume group(s) on command line
Finding volume group "datavg"
VG Attr Ext #PV #LV #SN VSize VFree VG UUID
datavg wz--n- 4.00M 2 1 0 2.73T 1.82T pqSnGs-aoM1-YgU7-fHxV-5dsM-tBDR-yWu3Ip
# vgreduce datavg /dev/md2
Removed "/dev/md2" from volume group "datavg"
# vgs -v datavg
Using volume group(s) on command line
Finding volume group "datavg"
VG Attr Ext #PV #LV #SN VSize VFree VG UUID
datavg wz--n- 4.00M 1 1 0 1.82T 931.50G pqSnGs-aoM1-YgU7-fHxV-5dsM-tBDR-yWu3Ip
# pvs
PV VG Fmt Attr PSize PFree
/dev/md1 rootvg lvm2 a- 35.11G 0
/dev/md2 lvm2 -- 931.51G 931.51G
/dev/md3 datavg lvm2 a- 1.82T 931.50G
The final step is to fix up mdadm.conf and remove my old 1TB RAID drives. To get the mdadm detail, issue the command /sbin/mdadm --detail --scan.
# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=e886a2b3:5aa859a7:ada5fcb2:f0d38e04
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=8386a2b3:6ba89a87:cdb5edb5:f3d43e29
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=96302901:9b7adda2:0aec7b78:38f51c78
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=d359e71c:fe7bf6d1:6951dda0:b4f6f242
Edit /etc/mdadm.conf to add the new /dev/md3 entry and to drop the /dev/md2 entry, since that device will be removed from the system. I am removing my old 1TB mirror from the system altogether so that I can repurpose that storage on my VMware ESX host. Here is my /etc/mdadm.conf after editing:
# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=d359e71c:fe7bf6d1:6951dda0:b4f6f242
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=e886a2b3:5aa859a7:ada5fcb2:f0d38e04
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=96302901:9b7adda2:0aec7b78:38f51c78
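Rather than typing the UUID by hand, the ARRAY line can also be appended straight from mdadm and then tidied up in an editor; just verify the result before rebooting:
# mdadm --detail --brief /dev/md3 >> /etc/mdadm.conf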
I now shut down the system so that I can safely remove my old 1TB drives. Well, I guess this is only an almost “live” upgrade. With the system shut down, I remove my old 1TB drives and restart. After the system comes back up, here is what my /proc/mdstat file shows:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
248896 blocks [2/2] [UU]
md3 : active raid1 sdb1[1] sda1[0]
1953511936 blocks [2/2] [UU]
md1 : active raid1 hdb3[1] hda3[0]
36812864 blocks [2/2] [UU]
unused devices: <none>
Notice how the partitions for md3 are no longer sdc1 and sdd1, but have now become sda1 and sdb1 after removing the other drives from the system. One final check of my /dev/md3 mirror and the datavg volume group shows:
# /sbin/mdadm -D /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Sat Nov 20 13:23:21 2010
Raid Level : raid1
Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Mon Nov 22 21:55:40 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 96302901:9b7adda2:0aec7b78:38f51c78
Events : 0.4
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
# vgs
VG #PV #LV #SN Attr VSize VFree
datavg 1 1 0 wz--n- 1.82T 831.50G
rootvg 1 7 0 wz--n- 35.11G 0
# vgdisplay -v datavg
Using volume group(s) on command line
Using volume group(s) on command line
Finding volume group "datavg"
--- Volume group ---
VG Name datavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 12
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TB
PE Size 4.00 MB
Total PE 476931
Alloc PE / Size 264066 / 1.01 TB
Free PE / Size 212865 / 831.50 GB
VG UUID pqSnGs-aoM1-YgU7-fHxV-5dsM-tBDR-yWu3Ip
--- Logical volume ---
LV Name /dev/datavg/lvdata01
VG Name datavg
LV UUID q1scUB-8cD3-bIV1-JkRL-3mh8-uqA3-Mc7a8F
LV Write Access read/write
LV Status available
# open 1
LV Size 1.01 TB
Current LE 264066
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Physical volumes ---
PV Name /dev/md3
PV UUID LhFMqe-Y6UA-FhQ1-FZdQ-aCi3-Pwkl-tBpkJO
PV Status allocatable
Total PE / Free PE 476931 / 212865
Everything looks like it is in good shape. One last item of business is to expand the logical volume, lvdata01, to take advantage of the larger datavg volume group. The steps here are to extend the logical volume with lvextend, and then use resize2fs to grow the filesystem to fill the larger lvdata01 footprint. For example, I will add 2GB of space to the lvdata01 logical volume:
# lvextend -L +2G /dev/datavg/lvdata01
Extending logical volume lvdata01 to 1.01 TB
Logical volume lvdata01 successfully resized
# fsck /dev/datavg/lvdata01
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/dev/datavg/lvdata01 is mounted.
WARNING!!! Running e2fsck on a mounted filesystem may cause
SEVERE filesystem damage.
Do you really want to continue (y/n)? yes
/dev/datavg/lvdata01: recovering journal
/dev/datavg/lvdata01: clean, 106266/67608576 files, 226453522/270403584 blocks
# resize2fs /dev/datavg/lvdata01
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/datavg/lvdata01 is mounted on /data; on-line resizing required
Performing an on-line resize of /dev/datavg/lvdata01 to 270927872 (4k) blocks.
The filesystem on /dev/datavg/lvdata01 is now 270927872 blocks long.
#
When no size is supplied to the resize2fs command, it expands the filesystem to fill the logical volume. I also ran fsck on the filesystem to make sure there were no issues prior to the resize2fs operation.
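If the goal is simply to hand all of the remaining free space in datavg to lvdata01, lvextend can do that in one shot using a percentage of the free extents (assuming your LVM2 supports the %FREE syntax, which releases of this era do), followed by the same resize2fs:
# lvextend -l +100%FREE /dev/datavg/lvdata01
# resize2fs /dev/datavg/lvdata01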
Anyway, that was my weekend, and my whirlwind tour through the 2TB RAID upgrade of my home-grown NAS. Hopefully there is something useful in here.