Manager (SVM) control when booted from a CDROM. Our host details:
HOST: snorkle
PROMPT: cdrom [0]
OS: Solaris 10 u8 x86
SVM ROOT DEVICE: d2
PHYSICAL ROOT SLICE: c1t1d0s0

NOTES: The following is applicable to Solaris 9 and 10, x86 and SPARC.
Also, while a boot from CDROM is used for the example, booting from
jumpstart would work as well. Though the following details accessing
the SVM managed root disk,
after step 3, any SVM volume could instead be mounted and managed.
Regardless of what volume you intend to manage, the purpose is to show
how to work with SVM volumes while booted from the CDROM in a sane manner
(see note 1).
Step 0) Start with booting to single user:
CDROM Boot:

x86:
  [0] /usr/sbin/reboot
  # set cdrom boot from the BIOS and select 'option 6' from the
  # Solaris installation menu to get a single user shell

sparc:
  [0] /usr/sbin/reboot -- 'cdrom -s'
or
  {0} ok boot cdrom -s

Step 1) After booting to single user from the CDROM, 'fsck' the physical
root slice (c1t1d0s0) to ensure it is stable:
cdrom [0] /usr/sbin/fsck -n /dev/rdsk/c1t1d0s0
** /dev/rdsk/c1t1d0s0 (NO WRITE)
** Last Mounted on /
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLS
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
157521 files, 4394691 used, 3862820 free (5412 frags, 482176 blocks, 0.1% fragmentation)

Step 2) Mount 'c1t1d0s0' to '/a', copy the SVM config file (see note 2)
over to the current ramdisk FS, and umount 'c1t1d0s0':
cdrom [0] /usr/sbin/mount -o ro /dev/dsk/c1t1d0s0 /a
cdrom [0] /usr/bin/cp /a/kernel/drv/md.conf /kernel/drv/md.conf
cdrom [0] /usr/sbin/umount /a

Step 3) Now that the SVM config (md.conf) has been copied over, force
a reload of the 'md' driver and verify the state of our metadevices
(see notes 3 and 4):
cdrom [0] /usr/sbin/update_drv -f md
devfsadm: mkdir failed for /dev 0x1ed: Read-only file system
cdrom [0] /usr/sbin/metastat
d8: Mirror
    Submirror 0: d7
      State: Okay
<snip...>

Device Relocation Information:
Device   Reloc  Device ID
c1t1d0   Yes    id1,sd@f0c6dd7544cf411d1000620200000

If necessary, sync any metadevices that need it. As an example:

cdrom [0] /usr/sbin/metasync d2

Step 4) Mount the SVM root device 'd2' to '/a', verify the mount, and
do your work:
cdrom [0] /usr/sbin/mount -F ufs -o rw /dev/md/dsk/d2 /a
cdrom [0] /usr/bin/df -h /a
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d2         7.9G   4.2G   3.6G    54%    /a
cdrom [0] /usr/sbin/mount | /usr/bin/grep /a
/a on /dev/md/dsk/d2 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1540002 on Wed Dec  8 12:15:42 2010
cdrom [0] /usr/bin/ls /a
a        dev      kernel      opt       tmp
b        devices  lib         platform  usr
bin      etc      lost+found  proc      var
boot     export   mnt         sbin      vol
cdrom    home     net         system
cdrom [0] /usr/bin/cat /a/etc/nodename
snorkle

Step 5) After you've finished working, umount 'd2' from '/a', verify
the volume is stable, and reboot:
cdrom [0] /usr/sbin/umount /a
cdrom [0] /usr/sbin/fsck -n /dev/md/rdsk/d2
** /dev/md/rdsk/d2 (NO WRITE)
** Last Mounted on /a
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLS
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
157521 files, 4394691 used, 3862820 free (5412 frags, 482176 blocks, 0.1% fragmentation)
cdrom [0] /usr/sbin/reboot
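Taken together, steps 1 through 3 are just a short preamble you run before any volume work. The sketch below collects them into one function; since these are Solaris-specific commands, the function defaults to a DRYRUN=echo preview (clear DRYRUN to execute for real from the CDROM single-user shell), and ROOTSLICE reflects this example host only.

```shell
# Sketch: steps 1-3 above as one function. DRYRUN=echo (the default
# here) prints each command instead of running it; set DRYRUN= (empty)
# to execute for real. ROOTSLICE is this example host's root slice.
DRYRUN=${DRYRUN:-echo}
ROOTSLICE=${ROOTSLICE:-c1t1d0s0}

svm_cdrom_prep() {
    $DRYRUN /usr/sbin/fsck -n "/dev/rdsk/$ROOTSLICE" &&           # step 1
    $DRYRUN /usr/sbin/mount -o ro "/dev/dsk/$ROOTSLICE" /a &&     # step 2
    $DRYRUN /usr/bin/cp /a/kernel/drv/md.conf /kernel/drv/md.conf &&
    $DRYRUN /usr/sbin/umount /a &&
    $DRYRUN /usr/sbin/update_drv -f md &&                         # step 3
    $DRYRUN /usr/sbin/metastat                                    # verify
}

svm_cdrom_prep
```

With DRYRUN left at its echo default, running the function simply lists the commands in order, which is a cheap way to review the sequence before doing it for real.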
NOTES
Note 1: Typically, the point of working with an SVM volume from a CDROM
is for volume maintenance or recovery purposes and not general
administration tasks. Also, if you muck with the actual SVM
configuration, you run an increased risk of having problems with SVM
when booted back up into multiuser. To explain, '/kernel/drv/md.conf'
is used by SVM at startup to identify the state databases. Copying
over the version on the root disk to the ramdisk FS allows you to
work with the local SVM volumes. The ramdisk SVM config files
under '/etc/lvm' will remain generic unless you modify the SVM
configuration. Changes to the SVM configuration while booted from
the CDROM are based upon the environment presented by the CDROM, thus
might be different than the normal runtime environment of the host.
To illustrate, in the following I've deliberately removed one of the
metadb slices:
cdrom [0] /usr/bin/cat /kernel/drv/md.conf
#
#pragma ident   "@(#)md.conf    2.2     04/04/02 SMI"
#
# Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# The parameters nmd and md_nsets are obsolete.  The values for these
# parameters no longer have any meaning.
name="md" parent="pseudo" nmd=128 md_nsets=4;
# Begin MDD database info (do not edit)
mddb_bootlist1="sd:261:16:id1,sd@f0c6dd7544cf411d1000620200000/f sd:261:8208:id1,sd@f0c6dd7544cf411d1000620200000/f sd:261:16400:id1,sd@f0c6dd7544cf411d1000620200000/f sd:262:16:id1,sd@f0c6dd7544cf411d1000620200000/g sd:262:8208:id1,sd@f0c6dd7544cf411d1000620200000/g sd:262:16400:id1,sd@f0c6dd7544cf411d1000620200000/g";
# End MDD database info (do not edit)
#
cdrom [0] /usr/bin/cat /etc/lvm/mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id       checksum
cdrom [0] /usr/sbin/metadb -d c1t1d0s6
cdrom [0] /usr/bin/cat /kernel/drv/md.conf
#
#pragma ident   "@(#)md.conf    2.2     04/04/02 SMI"
#
# Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# The parameters nmd and md_nsets are obsolete.  The values for these
# parameters no longer have any meaning.
name="md" parent="pseudo" nmd=128 md_nsets=4;
# Begin MDD database info (do not edit)
mddb_bootlist1="sd:133:16:id1,sd@f0c6dd7544cf411d1000620200000/f sd:133:8208:id1,sd@f0c6dd7544cf411d1000620200000/f sd:133:16400:id1,sd@f0c6dd7544cf411d1000620200000/f";
# End MDD database info (do not edit)
#
cdrom [0] /usr/bin/cat /etc/lvm/mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id       checksum
sd      133     16      id1,sd@f0c6dd7544cf411d1000620200000/f  -2977
sd      133     8208    id1,sd@f0c6dd7544cf411d1000620200000/f  -11169
sd      133     16400   id1,sd@f0c6dd7544cf411d1000620200000/f  -19361

Following the delete of the metadbs on c1t1d0s6 above, 'md.conf'
and 'mddb.cf' were both updated on the ramdisk. The update not
only removed the deleted metadbs, but also updated the minor device
number relevant to the CDROM environment. The device the CDROM
has identified as c1t1d0s5, based on minor number, is actually
c1t2d0s5 within the normal runtime environment. Rebooting back into
multiuser, 'metadb -i' shows only 3 metadb copies after the delete, as
expected, since the database copies were removed from the disk slice.
Both 'md.conf' and 'mddb.cf', however, still show 6 metadb copies
in the configuration. This leaves SVM in a potentially precarious
position as the data on disk no longer matches the config data.
You could copy the appropriate configuration files back over, manually
editing them as necessary; those edits would need to be based on the
details of the runtime environment rather than the CDROM environment.
The above situation isn't too bad to resolve,
though it is meant to illustrate that you can potentially wreck your
SVM configuration if you are not careful.
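The 'mddb_bootlist1' entries shown above follow a fixed driver:minor:offset:device-id/slice-letter layout, where the trailing letter maps a=slice0, b=slice1, and so on (hence /f is slice 5 and /g slice 6, matching note 2). A minimal POSIX shell sketch decoding sample entries copied from the md.conf above:

```shell
# Decode mddb_bootlist1 entries from md.conf. Each space-separated
# entry is driver:minor:block_offset:device_id/slice-letter; the
# letter maps a=slice0, b=slice1, ... (so /f is slice 5, /g slice 6).
bootlist='sd:261:16:id1,sd@f0c6dd7544cf411d1000620200000/f sd:262:16:id1,sd@f0c6dd7544cf411d1000620200000/g'

for entry in $bootlist; do
    driver=${entry%%:*};  rest=${entry#*:}
    minor=${rest%%:*};    rest=${rest#*:}
    offset=${rest%%:*};   devid=${rest#*:}
    letter=${devid##*/}
    # slice index = alphabetical position of the letter, zero-based
    slice=$(awk -v l="$letter" 'BEGIN { printf "%d", index("abcdefgh", l) - 1 }')
    echo "driver=$driver minor=$minor offset=$offset slice=$slice devid=${devid%/*}"
done
```

Decoding the list this way makes it easier to compare the replicas md.conf will hand to the driver against what 'metadb -i' reports once booted back into multiuser.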
Note 2: For the curious, the following details the contents of
'/etc/lvm/mddb.cf' and similarly, the MDD DB info section of
'/kernel/drv/md.conf'. The contents are from before any of the work above:

snorkle [0] cat /etc/lvm/mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id       checksum
sd      261     16      id1,sd@f0c6dd7544cf411d1000620200000/f  -2977
sd      261     8208    id1,sd@f0c6dd7544cf411d1000620200000/f  -11169
sd      261     16400   id1,sd@f0c6dd7544cf411d1000620200000/f  -19361
sd      262     16      id1,sd@f0c6dd7544cf411d1000620200000/g  -2977
sd      262     8208    id1,sd@f0c6dd7544cf411d1000620200000/g  -11169
sd      262     16400   id1,sd@f0c6dd7544cf411d1000620200000/g  -19361

sd:               specifically, the device driver presenting the device,
                  in this case SCSI
261:              the minor number of the device; a long listing following
                  the symlink of /dev/dsk/c1t1d0s5 would return the major
                  number of 32 (sd if we look at /etc/name_to_major) and a
                  minor of 261 to identify slice 5 on c1t1d0
262:              same as 261, though identifying c1t1d0s6
16, 8208, 16400:  block offset of each SVM state DB
id1,sd@f0c6d..00: unique device id for the disk device registered with the
                  OS by the sd driver at attach time
/f:               slice 5 on the device identified by 'device id'
/g:               slice 6 on the device identified by 'device id'
-2977,...:        checksum of the DB entry maintained by SVM

You can see the 'device id' correlated to a ctd from various commands,
such as 'iostat' and 'prtconf'. An 'iostat' example:
snorkle [0] /usr/bin/iostat -niE c1t1d0 | /usr/bin/grep 'Device Id:'
c1t1d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: VBOX     Product: HARDDISK         Revision: 1.0  Device Id: id1,sd@f0c6dd7544cf411d1000620200000
Size: 11.01GB <11005853184 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0

Note 3: The following error message after executing 'update_drv' can
be safely ignored:
devfsadm: mkdir failed for /dev 0x1ed: Read-only file system

Note 4: Since the 'md' driver wasn't unloaded, running 'metainit -r'
shouldn't be necessary; however, if 'metastat' returns no metadevices,
run it to rescan the metadbs listed in 'md.conf':
cdrom [0] /usr/sbin/metainit -r
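Note 4 boils down to one guard: rescan only when 'metastat' reports nothing. In the sketch below, METASTAT and METAINIT are variables of my own invention standing in for the real /usr/sbin binaries, purely so the guard logic reads (and can be exercised) in isolation:

```shell
# Run 'metainit -r' only when 'metastat' returns no metadevices
# (Note 4). METASTAT/METAINIT default to the real Solaris paths;
# they are variables only so the guard logic is easy to follow.
METASTAT=${METASTAT:-/usr/sbin/metastat}
METAINIT=${METAINIT:-/usr/sbin/metainit}

maybe_rescan() {
    if [ -z "$($METASTAT 2>/dev/null)" ]; then
        $METAINIT -r    # rescan the metadbs listed in md.conf
    fi
}
```

Calling maybe_rescan from the CDROM shell after step 3 is then a no-op whenever metastat already shows your metadevices.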