INTRO
The information detailed below is to assist with proactive planning for
the potential demise and recovery of a host back to the state it was
in prior to demise. Circumstances in which this will likely be of
benefit include the patching of a host, application upgrades by a
vendor, trying new and untested configurations, etc. The value is that
should something go catastrophically wrong after the "updates," the
issue can be immediately resolved simply by rebooting and recovering to
an "untainted" (boot) device wherein the system appears as though no
changes were ever made. Though written specifically for the handling of
root disks mirrored with hardware RAID, the procedure below could also
be tweaked and used with non-root disks. Caveat: one should be familiar
with Solaris, RAID 1, Solaris bootup, and OBP before attempting the
steps detailed below. Since the following works with the root disk,
most of the actual work is performed within OpenBoot, as there is no
way to safely do this from within Solaris while the root disk is in
use. Before proceeding, the following points are of interest or
consideration:
- Before using the hardware RAID features of the Sun Fire T2000
  server, ensure that the following patches have been applied (see
  the preflight check sketch after this list):
* 119850-13 mpt and /usr/sbin/raidctl patch
* 122165-01 LSI1064 PCI-X FCode 1.00.39
- the sample platform detailed is a Sun Fire T2000, though the
steps involved should be the same or similar for any SPARC
based host with a hardware RAID controller
- OS: Solaris 10
- Kernel Revision: Generic_141414-01
- Shell Prompt: prefect [0]
- OBP Prompt: {0} ok
- Solaris ID'd Disks: c0t0d0
c0t1d0
* refers to both logical volumes and physical disks
- HW RAID Ctlr Disks: 0.0.0
0.1.0
- Initial RAID 1 Vol: c0t0d0
- Final RAID 1 Vol: c0t1d0
- SCSI Ctlr 0 Path: /pci@780/pci@0/pci@9/scsi@0
- Root FS: / (s0)
- Commands Used:
-> Solaris:
+ /usr/sbin/raidctl
+ /usr/bin/df
+ /usr/sbin/init
+ /usr/sbin/format
+ /usr/sbin/mount
+ /usr/sbin/fsck
+ /usr/bin/vi
+ /usr/bin/cat
+ /usr/sbin/umount
+ /usr/bin/touch
+ /usr/sbin/devfsadm
+ /usr/bin/cp
+ /usr/bin/sed
+ /usr/sbin/eeprom
+ /usr/sbin/dumpadm
+ echo
-> OBP:
+ setenv
+ reset-all
+ probe-scsi-all
+ show-disks
+ select
+ show-volumes
+ delete-volume
+ unselect-dev
+ devalias
+ boot
+ create-im-volume
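As mentioned in the first point above, the prerequisite patches should
be in place before relying on the hardware RAID features. The following
is a minimal preflight sketch, not part of the original procedure: it
checks for the two patch IDs (any revision) with showrev and records
the current layout to an arbitrarily named file for later reference:
    # preflight sketch: confirm prerequisite patches are present (any rev)
    # and capture the current root/RAID layout for reference
    for p in 119850 122165
    do
        /usr/bin/showrev -p | /usr/bin/grep "Patch: ${p}-" > /dev/null \
            || echo "patch ${p}-xx appears to be missing"
    done
    /usr/bin/df -h /            >  /var/tmp/premirror.state
    /usr/sbin/raidctl -l        >> /var/tmp/premirror.state
    /usr/sbin/raidctl -l c0t0d0 >> /var/tmp/premirror.state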
DETAILS
Before breaking our mirror, we need to determine the current setup of
our root device:
prefect [0] /usr/bin/df -h /
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 11G 4.1G 6.7G 38% /
prefect [0] /usr/sbin/raidctl -l
Controller: 0
Volume:c0t0d0
Disk: 0.0.0
Disk: 0.1.0
prefect [0] /usr/sbin/raidctl -l c0t0d0
Volume Size Stripe Status Cache RAID
Sub Size Level
Disk
----------------------------------------------------------------
c0t0d0 68.3G N/A OPTIMAL OFF RAID1
0.0.0 68.3G GOOD
0.1.0 68.3G GOOD
prefect [0] /usr/sbin/format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 [LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136]
/pci@780/pci@0/pci@9/scsi@0/sd@0,0
Specify disk (enter its number): ^D
In the above, we've identified / as residing on c0t0d0s0, which is
currently a RAID 1 volume composed of physical disks 0.0.0 and 0.1.0.
As verified by 'format', the only device Solaris sees is the logical
RAID 1 volume presented by the RAID controller. At this point, we need
to bring the host down to the OBP prompt:
prefect [1] /usr/sbin/init 0
{0} ok setenv fcode-debug? true
fcode-debug? = true
{0} ok setenv auto-boot? false
auto-boot? = false
{0} ok reset-all
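As an aside, before running the 'init 0' above it can be handy to
record the current values of these OBP variables from within Solaris,
so they are on hand when everything is set back at the end of this
exercise. A minimal sketch (the output file name is arbitrary):
    # sketch (run before 'init 0'): save the current OBP settings
    /usr/sbin/eeprom | /usr/bin/egrep '^(fcode-debug|auto-boot|boot-device)' \
        > /var/tmp/eeprom.before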
With the box down, we enable 'fcode-debug?' so we can muck with the
mirror from the OBP. Disabling 'auto-boot?' prevents the box from
attempting OS bootups before we are ready. The 'reset-all' ensures the
new settings take effect as we cycle back through POST. Once back at
the OBP, we validate the disks available with 'probe-scsi-all' and
select the base device, '/pci@780/pci@0/pci@9/scsi@0' (previously seen
in the output of 'format'):
{0} ok probe-scsi-all
/pci@780/pci@0/pci@9/scsi@0
MPT Version 1.05, Firmware Version 1.09.00.00
Target 0 Volume 0
Unit 0 Disk LSILOGICLogical Volume 3000 143243264 Blocks, 73 GB
{0} ok show-disks
a) /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom
b) /pci@7c0/pci@0/pci@1/pci@0/ide@8/disk
c) /pci@780/pci@0/pci@9/scsi@0/disk
q) NO SELECTION
Enter Selection, q to quit: c
/pci@780/pci@0/pci@9/scsi@0/disk has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
for creating devalias mydev for /pci@780/pci@0/pci@9/scsi@0/disk
{0} ok select /pci@780/pci@0/pci@9/scsi@0
With our volume's base SCSI device selected, 'show-volumes' will display
our current volume, so that it can be deleted:
{0} ok show-volumes
Volume 0 Target 0 Type IM (Integrated Mirroring)
Optimal Enabled
2 Members 143243264 Blocks, 73 GB
Disk 1
Primary Online
Target 4 FUJITSU MAY2073RCSUN72G 0501
Disk 0
Secondary Online
Target 1 SEAGATE ST973401LSUN72G 0556
{0} ok 0 delete-volume
The volume and its data will be deleted
Are you sure (yes/no)? [no] yes
Volume 0 has been deleted
In the above command, '0 delete-volume', the 0 refers specifically to
'Volume 0'. You must answer yes to the question to continue.
* NOTE, only volumes set up as RAID 1 can be handled in this manner
  via the HW RAID controller, as deleting the volume simply splits
  the mirror apart, leaving the data on each disk in place.
  Performing a 'delete-volume' with other RAID levels will destroy
  the volume and any contained data.
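Given that this split-and-keep behavior only applies to RAID 1, it is
worth confirming from within Solaris, before ever taking the host
down, that the root volume really is a healthy RAID 1. A minimal
sketch using the 'raidctl' output seen earlier (volume name assumed to
be c0t0d0):
    # sketch (run before 'init 0'): warn unless the root volume is an
    # OPTIMAL RAID 1
    /usr/sbin/raidctl -l c0t0d0 | /usr/bin/grep RAID1 | \
        /usr/bin/grep OPTIMAL > /dev/null \
        || echo "c0t0d0 is not an OPTIMAL RAID 1 volume - stop here"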
Verify the volume was removed and reset the system so the two original
physical devices are now visible:
{0} ok show-volumes
No volumes to show
{0} ok unselect-dev
{0} ok reset-all
[snip...]
{0} ok probe-scsi-all
/pci@780/pci@0/pci@9/scsi@0
MPT Version 1.05, Firmware Version 1.09.00.00
Target 0
Unit 0 Disk FUJITSU MAY2073RCSUN72G 0501 143374738 Blocks, 73 GB
SASAddress 500000e01361c882 PhyNum 0
Target 1
Unit 0 Disk SEAGATE ST973401LSUN72G 0556 143374738 Blocks, 73 GB
SASAddress 5000c500021551cd PhyNum 1
Verify that aliases are set up for our devices, wherein physical disk
(PhyNum) 0 is 'disk0' and physical disk (PhyNum) 1 is 'disk1'. Then
perform a reconfiguration boot of the system to 'single user' from
disk0:
{0} ok devalias
ttya /pci@7c0/pci@0/pci@1/pci@0/isa@2/serial@0,3f8
nvram /virtual-devices/nvram@3
net3 /pci@7c0/pci@0/pci@2/network@0,1
net2 /pci@7c0/pci@0/pci@2/network@0
net1 /pci@780/pci@0/pci@1/network@0,1
net0 /pci@780/pci@0/pci@1/network@0
net /pci@780/pci@0/pci@1/network@0
ide /pci@7c0/pci@0/pci@1/pci@0/ide@8
cdrom /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom@0,0:f
disk3 /pci@780/pci@0/pci@9/scsi@0/disk@3
disk2 /pci@780/pci@0/pci@9/scsi@0/disk@2
disk1 /pci@780/pci@0/pci@9/scsi@0/disk@1
disk0 /pci@780/pci@0/pci@9/scsi@0/disk@0
disk /pci@780/pci@0/pci@9/scsi@0/disk@0
scsi /pci@780/pci@0/pci@9/scsi@0
virtual-console /virtual-devices/console@1
name aliases
{0} ok printenv boot-device
boot-device = disk net
{0} ok boot disk -rsmverbose
Boot device: /pci@780/pci@0/pci@9/scsi@0/disk@0 File and args: -rsmverbose
ufs-file-system
Loading: /platform/SUNW,Sun-Fire-T200/boot_archive
[snip...]
[ milestone/single-user:default starting (single-user milestone) ]
Requesting System Maintenance Mode
SINGLE USER MODE
Root password for system maintenance (control-d to bypass):
single-user privilege assigned to /dev/console.
Entering System Maintenance Mode
Oct 15 12:16:21 su: 'su root' succeeded for root on /dev/console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
Ensure that we can now see both disks from within Solaris and fsck the
filesystems on disk1 (the mirror that we are not booted from):
prefect [0] /usr/sbin/mount -a
mount: /tmp is already mounted or swap is busy
prefect [0] /usr/bin/df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 11G 4.1G 6.7G 38% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 1.4M 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
11G 4.1G 6.7G 38% /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
11G 4.1G 6.7G 38% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c0t0d0s3 5.9G 954M 4.9G 16% /var
swap 14G 0K 14G 0% /tmp
swap 14G 0K 14G 0% /var/run
/dev/dsk/c0t0d0s4 42G 43M 42G 1% /space
prefect [0] /usr/sbin/format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 [LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136]
/pci@780/pci@0/pci@9/scsi@0/sd@0,0
1. c0t1d0 [LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136]
/pci@780/pci@0/pci@9/scsi@0/sd@1,0
Specify disk (enter its number): ^D
prefect [1] for i in 0 3 4 ; do /usr/sbin/fsck -y /dev/rdsk/c0t1d0s${i}; done
** /dev/rdsk/c0t1d0s0
** Last Mounted on /
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
151110 files, 4236458 used, 7112386 free (6194 frags, 888274 blocks, 0.1% fragmentation)
** /dev/rdsk/c0t1d0s3
** Last Mounted on /var
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
21170 files, 970681 used, 5219373 free (1213 frags, 652270 blocks, 0.0% fragmentation)
** /dev/rdsk/c0t1d0s4
** Last Mounted on /space
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
2 files, 9 used, 44356384 free (8 frags, 5544547 blocks, 0.0% fragmentation)
Assuming that the slices come back clean, which they should, we need
to mount disk1's / and /var, set up a reconfiguration boot, and clean
up the device tree. The 'devfsadm' command is run specifically against
/mnt, where disk1's / is mounted. The parameters '-Cv' tell devfsadm
to clean up stale devices, add those newly found, and be verbose about
what it is doing:
prefect [0] /usr/sbin/mount /dev/dsk/c0t1d0s0 /mnt
prefect [0] /usr/sbin/mount /dev/dsk/c0t1d0s3 /mnt/var
prefect [0] /usr/bin/touch /mnt/reconfigure
prefect [0] /usr/sbin/devfsadm -r /mnt -Cv
devfsadm[181]: verbose: no devfs node or mismatched dev_t for /mnt/devices/scsi_vhci:devctl
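As a quick sanity check, the device links that devfsadm just created
under the alternate root can be listed. A minimal sketch, assuming
disk1 appears as c0t1d0 as seen in 'format':
    # sketch: confirm device links for c0t1d0 exist under /mnt/dev
    /usr/bin/ls /mnt/dev/dsk | /usr/bin/grep c0t1d0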
Since on our next reboot we will be booting off of disk1, disk1's
vfstab needs to be updated from the original copy. The original copy
mounts filesystems from the logical volume, c0t0d0; it needs to be
updated to reference disk1's slices instead, thus c0t1d0:
prefect [0] /usr/bin/cp /mnt/etc/vfstab /mnt/etc/vfstab.orig
prefect [0] /usr/bin/sed -e 's;c0t0d0s;c0t1d0s;g' /mnt/etc/vfstab.orig > /mnt/etc/vfstab
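A quick check of the edit helps avoid a failed boot later: the diff
should show only the cXtYdZ substitutions, and the grep should return
nothing. A minimal sketch:
    # sketch: review the vfstab change and make sure no references to
    # the old c0t0d0 volume remain
    /usr/bin/diff /mnt/etc/vfstab.orig /mnt/etc/vfstab
    /usr/bin/grep c0t0d0 /mnt/etc/vfstab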
To stand in for the "updates" (patching, creating new files or
configs, etc.), the file /mnt/willitstay is created; any of the
mentioned actions could otherwise be performed instead. The file is
used purely for illustration, since it does not exist on disk0 but
will shortly exist on disk1:
prefect [0] echo "wonder if this will stay" >> /mnt/willitstay
prefect [0] /usr/bin/cat /mnt/willitstay
wonder if this will stay
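As a more realistic "update", a patch could instead be applied to
disk1's root using patchadd against the alternate root mounted at
/mnt. A sketch only, with a hypothetical patch ID (patchadd is not in
the command list above):
    # hypothetical example: patch the alternate root on disk1 rather
    # than the live root on disk0
    /usr/sbin/patchadd -R /mnt 123456-01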
Unmount disk1 and fsck the boot slice (s0). Once done, perform a
reconfiguration boot to 'single user' from disk1.
* NOTE, as long as a reconfiguration boot using disk1 is performed,
  the host could instead be booted to 'multi-user' and brought up
  normally to allow the changes made to disk1 to be tested and used.
  For illustration purposes, the following details a 'single user'
  boot from disk1:
prefect [0] /usr/sbin/umount /mnt/var
prefect [0] /usr/sbin/umount /mnt
prefect [0] /usr/sbin/fsck -y /dev/rdsk/c0t1d0s0
** /dev/rdsk/c0t1d0s0
** Last Mounted on /mnt
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
151126 files, 4236474 used, 7112370 free (6178 frags, 888274 blocks, 0.1% fragmentation)
prefect [0] reboot -- 'disk1 -rsmverbose'
syncing file systems... done
rebooting...
[snip...]
Boot device: /pci@780/pci@0/pci@9/scsi@0/disk@1 File and args: -rsmverbose
ufs-file-system
Loading: /platform/SUNW,Sun-Fire-T200/boot_archive
[snip...]
[ milestone/single-user:default starting (single-user milestone) ]
Requesting System Maintenance Mode
SINGLE USER MODE
Root password for system maintenance (control-d to bypass):
single-user privilege assigned to /dev/console.
Entering System Maintenance Mode
Oct 15 14:16:44 su: 'su root' succeeded for root on /dev/console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
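To double-check which physical device the kernel was actually booted
from, the OBP 'bootpath' property can be inspected; it should now
reference disk@1. A minimal sketch (prtconf is not in the command list
above):
    # sketch: the boot path should now point at disk@1
    /usr/sbin/prtconf -vp | /usr/bin/grep bootpath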
The following is simply validation of the changes that were made to
disk1 prior to booting off of it:
prefect [0] /usr/bin/cat /willitstay
wonder if this will stay
prefect [0] /usr/sbin/mount -a
mount: /tmp is already mounted or swap is busy
prefect [1] /usr/bin/df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t1d0s0 11G 4.1G 6.7G 38% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 1.4M 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
11G 4.1G 6.7G 38% /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
11G 4.1G 6.7G 38% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c0t1d0s3 5.9G 954M 4.9G 16% /var
swap 14G 0K 14G 0% /tmp
swap 14G 0K 14G 0% /var/run
/dev/dsk/c0t1d0s4 42G 43M 42G 1% /space
prefect [0] /usr/sbin/raidctl -l
Controller: 0
Disk: 0.0.0
Disk: 0.1.0
prefect [0] /usr/sbin/raidctl -l -g 0.0.0 0
Disk Vendor Product Firmware Capacity Status HSP
----------------------------------------------------------------------------
0.0.0 FUJITSU MAY2073RCSUN72G 0501 68.3G GOOD N/A
GUID:500000e01361c880
prefect [0] /usr/sbin/raidctl -l -g 0.1.0 0
Disk Vendor Product Firmware Capacity Status HSP
----------------------------------------------------------------------------
0.1.0 SEAGATE ST973401LSUN72G 0556 68.3G GOOD N/A
prefect [0] /usr/sbin/raidctl -l 0
Controller Type Version
----------------------------------------------------------------
c0 LSI_1064EE 1.09.00.00
As the changes to disk1 have been tested and validated, the system
needs to be set up to perform a reconfiguration boot at the next
bootup. Once the host is down and the system reset, the new logical
volume, c0t1d0, will be created, using disk1 as the primary mirror and
syncing disk0 from disk1:
prefect [0] /usr/bin/touch /reconfigure
prefect [0] /usr/sbin/init 0
prefect [0] svc.startd: The system is coming down. Please wait.
svc.startd: 55 system services are now being stopped.
Oct 15 14:33:42 prefect syslogd: going down on signal 15
svc.startd: The system is down.
syncing file systems... done
Program terminated
{0} ok reset-all
[snip...]
Sun Fire T200, No Keyboard
Copyright 2009 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.30.3, 8064 MB memory available, Serial #75372526.
Ethernet address 0:14:4f:7e:17:ee, Host ID: 847e17ee.
Below, the SCSI devices are identified, the controller is selected,
and we verify there are no current volumes:
{0} ok probe-scsi-all
/pci@780/pci@0/pci@9/scsi@0
MPT Version 1.05, Firmware Version 1.09.00.00
Target 0
Unit 0 Disk FUJITSU MAY2073RCSUN72G 0501 143374738 Blocks, 73 GB
SASAddress 500000e01361c882 PhyNum 0
Target 1
Unit 0 Disk SEAGATE ST973401LSUN72G 0556 143374738 Blocks, 73 GB
SASAddress 5000c500021551cd PhyNum 1
{0} ok show-disks
a) /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom
b) /pci@7c0/pci@0/pci@1/pci@0/ide@8/disk
c) /pci@780/pci@0/pci@9/scsi@0/disk
q) NO SELECTION
Enter Selection, q to quit: c
/pci@780/pci@0/pci@9/scsi@0/disk has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
for creating devalias mydev for /pci@780/pci@0/pci@9/scsi@0/disk
{0} ok select /pci@780/pci@0/pci@9/scsi@0
{0} ok show-volumes
No volumes to show
To set up our mirror, the disks need to be specified in the order of
primary then secondary. Specifying them in the wrong order would make
disk1 the secondary and destroy its data, overwriting it with the
contents of disk0. As we've already modified and intend to use the
data on disk1, our primary disk is disk1. The parameters to
create-im-volume, the mirrored volume creation command, are therefore
'1 0': disk1 followed by disk0.
* NOTE, the cXtYdZ notation of the resulting logical volume is based
upon the values of the primary physical disk. As seen above,
the probe-scsi-all reveals that controller 0, target 1, unit 0, is
disk1. (This could be further verified with a review of 'devalias'
output.) Given the above, the new logical volume will be c0t1d0:
{0} ok 1 0 create-im-volume
Target 1 size is 143243264 Blocks, 73 GB
Target 0 size is 143243264 Blocks, 73 GB
The volume can be any size from 1 MB to 69943 MB
* NOTE, when prompted for the size, it seems that accepting the
  default does not work; the value must instead be typed in (even if
  it is the same value):
What size do you want? [69943] 69943
Volume size will be 143243264 Blocks, 73 GB
PhysDisk 0 has been created for target 1
PhysDisk 1 has been created for target 0
Volume has been created
A quick check of our new mirrored volume shows that it is still
syncing. (For this particular box, going from volume creation to an
'OPTIMAL' state, in multi-user mode, took about 30 minutes.) At this
point, unselect the current device, and reboot the host to disk1 with
a reconfiguration boot:
{0} ok show-volumes
Volume 0 Target 1 Type IM (Integrated Mirroring)
Degraded Enabled Resync In Progress
2 Members 143243264 Blocks, 73 GB
Disk 0
Primary Online
Target 4 SEAGATE ST973401LSUN72G 0556
Disk 1
Secondary Online Out Of Sync
Target 0 FUJITSU MAY2073RCSUN72G 0501
{0} ok unselect-dev
{0} ok reset-all
[snip...]
{0} ok boot disk1 -rmverbose
[snip...]
Boot device: /pci@780/pci@0/pci@9/scsi@0/disk@1 File and args: -rmverbose
ufs-file-system
Loading: /platform/SUNW,Sun-Fire-T200/boot_archive
Loading: /platform/sun4v/boot_archive
ramdisk-root hsfs-file-system
Loading: /platform/SUNW,Sun-Fire-T200/kernel/sparcv9/unix
[snip...]
[ network/ssh:default starting (SSH server) ]
[ application/management/sma:default starting (net-snmp SNMP daemon) ]
prefect console login: [ milestone/multi-user:default starting (multi-user milestone) ]
prefect console login: root
Password:
Oct 15 15:22:00 prefect login: ROOT LOGIN /dev/console
Last login: Thu Oct 15 14:24:22 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
Once the box is back up to multi-user, a quick check shows that we
are booted off of disk1 and that our updates to the system still held
(/willitstay). A further look shows that we are actually booted off of
the logical volume c0t1d0 and that the volume is still syncing (as
stated earlier, it remains in the SYNC state for roughly 30 minutes
before going OPTIMAL; the first disk listed in the 'raidctl' volume
output is the primary):
prefect [0] /usr/bin/df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t1d0s0 11G 4.1G 6.7G 38% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 1.5M 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
11G 4.1G 6.7G 38% /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
11G 4.1G 6.7G 38% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c0t1d0s3 5.9G 954M 4.9G 16% /var
swap 14G 0K 14G 0% /tmp
swap 14G 16K 14G 1% /var/run
/dev/dsk/c0t1d0s4 42G 43M 42G 1% /space
prefect [0] /usr/bin/cat /willitstay
wonder if this will stay
prefect [0] /usr/sbin/raidctl -l
Controller: 0
Volume:c0t1d0
Disk: 0.0.0
Disk: 0.1.0
prefect [0] /usr/sbin/raidctl -l c0t1d0
Volume Size Stripe Status Cache RAID
Sub Size Level
Disk
----------------------------------------------------------------
c0t1d0 68.3G N/A SYNC OFF RAID1
0.1.0 68.3G GOOD
0.0.0 68.3G GOOD
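Since the resync takes a while (roughly 30 minutes on this box), a
simple loop can be used to wait for the volume to move from SYNC to
OPTIMAL. A minimal sketch (the polling interval is arbitrary):
    # sketch: poll the new volume until the resync completes
    while /usr/sbin/raidctl -l c0t1d0 | /usr/bin/grep SYNC > /dev/null
    do
        /usr/bin/sleep 300
    done
    /usr/sbin/raidctl -l c0t1d0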
At this point, the system has been remirrored after testing of the
updates and brought back online. As a final step, a few other settings
need to be put back or updated to make this a hassle-free solution:
prefect [0] /usr/sbin/eeprom fcode-debug?=false
prefect [0] /usr/sbin/eeprom auto-boot?=true
prefect [0] /usr/sbin/eeprom boot-device="disk1 net"
prefect [0] /usr/sbin/dumpadm -d /dev/dsk/c0t1d0s1
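And a last look to confirm the settings took. A minimal sketch:
    # sketch: confirm the OBP variables and dump device are now what
    # we expect
    /usr/sbin/eeprom | /usr/bin/egrep '^(fcode-debug|auto-boot|boot-device)'
    /usr/sbin/dumpadm | /usr/bin/grep 'Dump device'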
see also:
Breaking and Syncing an SVM Root Mirror