d23: Submirror of d3
State: Okay Tue 07 Jun 2011 06:04:41 PM SGT
Size: 56754405 blocks
Stripe 0:
Device Start Dbase State Hot Spare Time
c1t1d0s3 0 No Okay Tue 07 Jun 2011 06:04:22 PM SGT
d13: Submirror of d3
State: Needs maintenance Tue 16 Dec 2014 11:35:15 PM SGT
Invoke: metareplace d3 c1t0d0s3 <new device>
Size: 56754405 blocks
Stripe 0:
Device Start Dbase State Hot Spare Time
c1t0d0s3 0 No Maintenance Tue 16 Dec 2014 11:35:15 PM SGT
# metadb -i
flags first blk block count
a m p luo 16 1034 /dev/dsk/c1t1d0s5
a p luo 16 1034 /dev/dsk/c1t1d0s6
a p luo 16 1034 /dev/dsk/c1t1d0s7
M p unknown unknown /dev/dsk/c1t0d0s6
M p unknown unknown /dev/dsk/c1t0d0s7
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
From this output we can deduce that disk c1t0 has failed.
Delete any state database replicas on the failed disk
In the example above, the database replicas on /dev/dsk/c1t0d0s6 and /dev/dsk/c1t0d0s7 have an "M" flag beside them, indicating "replica had problem with master blocks". Delete those replicas, then run metadb -i again to verify they have been removed.
# metadb -d /dev/dsk/c1t0d0s6
# metadb -d /dev/dsk/c1t0d0s7
# metadb -i
flags first blk block count
a m p luo 16 1034 /dev/dsk/c1t1d0s5
a p luo 16 1034 /dev/dsk/c1t1d0s6
a p luo 16 1034 /dev/dsk/c1t1d0s7
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
Remove the failed disk
Use either luxadm or cfgadm (depending on the disk type) to prepare the disk for physical removal.
Create replicas of the state database on the new disk. In our example there was one replica each on c1t0d0s5, c1t0d0s6 and c1t0d0s7 before the disk failed:
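As a rough sketch (the controller path c1 and disk c1t0d0 are taken from this example; verify yours with cfgadm -al before running anything), a SCSI/SATA disk is typically taken offline with cfgadm, while an FC-AL disk uses luxadm:

# cfgadm -al
# cfgadm -c unconfigure c1::dsk/c1t0d0

or, for an FC-AL disk:

# luxadm remove_device /dev/rdsk/c1t0d0s2

After the replacement disk is inserted, cfgadm -c configure (or luxadm insert_device) brings it back under the same target.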
# metadb -af /dev/dsk/c1t0d0s5
# metadb -af /dev/dsk/c1t0d0s6
# metadb -af /dev/dsk/c1t0d0s7
Verify the replicas are healthy on the replaced disk
# metadb -i
Initialise the new submirrors on the replaced disk
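A sketch of this last step, assuming the replacement disk occupies the same slot (c1t0) and should mirror the partition layout of the healthy disk c1t1: first copy the VTOC across, then re-enable the failed component that metastat flagged earlier (metareplace -e re-syncs the existing component name in place, so no new device argument is needed):

# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
# metareplace -e d3 c1t0d0s3

Repeat metareplace -e for each mirror that reported "Needs maintenance", and monitor the resync progress with metastat until every submirror returns to the Okay state.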