LiveUpgrade


You will need some special patches; see InfoDoc 206844 (formerly 72099).

Prepare the server

# pkgrm SUNWlucfg SUNWluu SUNWlur [SUNWluzone]
# cd <directory_of_Sol10_lu_packages>
# pkgadd -d . SUNWlucfg SUNWluu SUNWlur SUNWluzone

SUNWlucfg is new in Solaris 10 8/07.

Don't forget the latest Live Upgrade patch: 121430-xx (SunOS 5.9/5.10 Live Upgrade Patch)
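
A quick sanity check before creating the BE (just a sketch; substitute the current revision for -xx and the real patch location):

# pkginfo -l SUNWlucfg SUNWluu SUNWlur
# showrev -p | grep 121430
# patchadd /var/tmp/121430-xx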


Create the new Boot Environment

# lucreate -c <old_BE_name> [-C /dev/dsk/c1t0d0s0] \
-m /:/dev/dsk/<target_disk_slice>:ufs \
-m -:/dev/dsk/c1t1d0s1:swap \
-m /var:/dev/dsk/c1t1d0s4:ufs \
-m /opt:/dev/dsk/c1t1d0s5:ufs \
-m /usr:/dev/dsk/c1t1d0s6:ufs \
-n <new_BE_name>
### -> "-C [current boot-dev]" needed @ costumer with error message:
### ERROR: Cannot determine the physical boot device for the current
### boot environment <s10_u3>.
### Use the <-C> command line option to specify the physical boot 
### device for the current boot environment <s10_u3>.
### ERROR: Cannot create configuration for primary boot environment.
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <s10_u3>.
Creating initial configuration for primary boot environment <s10_u3>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10_u3> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <s10_u3> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
 100% complete                           
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <s10_u5>.
Source boot environment is <s10_u3>.
Creating boot environment <s10_u5>.
Creating file systems on boot environment <s10_u5>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c1t1d0s0>.
Creating <ufs> file system for </opt> in zone <global> on </dev/dsk/c1t1d0s4>.
Creating <ufs> file system for </var> in zone <global> on </dev/dsk/c1t1d0s3>.
Mounting file systems for boot environment <s10_u5>.
Calculating required sizes of file systems for boot environment <s10_u5>.
Populating file systems on boot environment <s10_u5>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </opt>.
Populating contents of mount point </var>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <s10_u5>.
Creating compare database for file system </var>.
Creating compare database for file system </opt>.
Creating compare database for file system </>.
Updating compare databases on boot environment <s10_u5>.
Making boot environment <s10_u5> bootable.
Setting root slice to </dev/dsk/c1t1d0s0>.
Population of boot environment <s10_u5> successful.
Creation of boot environment <s10_u5> successful.

root@server # lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10_u3                     yes      yes    yes       no     -         
s10_u5                     yes      no     no        yes    -     

Upgrade the Boot Environment

root@server # luupgrade -u -n s10_u5 -s /cdrom/sol_10_508_sparc/s0
218128 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </DVD/Solaris_10/Tools/Boot>
Validating the contents of the media </DVD>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10_u5>.
Determining packages to install or upgrade for BE <s10_u5>.
Performing the operating system upgrade of the BE <s10_u5>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <s10_u5>.
Package information successfully updated on boot environment <s10_u5>.
Adding operating system patches to the BE <s10_u5>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot 
environment <s10_u5> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot 
environment <s10_u5> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment <s10_u5>. Before you activate boot 
environment <s10_u5>, determine if any additional system maintenance is 
required or if additional media of the software distribution must be 
installed.
The Solaris upgrade of the boot environment <s10_u5> is complete.

root@server # lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10_u3                     yes      yes    yes       no     -         
s10_u5                     yes      no     no        yes    -         
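
If the server has no DVD drive, the same upgrade can be driven from an ISO image through a lofi device (just a sketch; the ISO path is an example, use wherever your image lives):

root@server # lofiadm -a /export/iso/sol-10-u5-ga-sparc-dvd.iso
/dev/lofi/1
root@server # mount -F hsfs -o ro /dev/lofi/1 /mnt
root@server # luupgrade -u -n s10_u5 -s /mnt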

Patch the Boot Environment

root@server # lumount s10_u5
root@server # df -h | grep alt
/dev/dsk/c1t1d0s0       20G   6.6G    13G    34%    /.alt.s10_u5
/dev/dsk/c1t1d0s3      9.8G   410M   9.3G     5%    /.alt.s10_u5/var
/dev/dsk/c1t1d0s4       20G   4.8G    15G    24%    /.alt.s10_u5/opt
swap                    19G     0K    19G     0%    /.alt.s10_u5/var/run
swap                    19G     0K    19G     0%    /.alt.s10_u5/tmp
root@server # patchadd -R /.alt.s10_u5/ -M . ./patch_order
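
When the patching is done, unmount the alternate BE again before activating it (sketch):

root@server # luumount s10_u5
root@server # lustatus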

Activate the new BE

# luactivate s10_u5
# shutdown -g0 -i0 -y
{ok} boot <new_BE> [-sx]
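
After the reboot it is worth checking that the new BE really is the active one, and once you are happy with it the old BE can be removed (sketch):

# lustatus
# ludelete <old_BE_name>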

LU for ZFS/UFS Migration

Create the new root pool. Format the disk with format -e, choose an SMI label (not EFI), and put all space into one slice.
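
A quick way to double-check the label (sketch): on an SMI-labelled disk prtvtoc shows the traditional VTOC with slice 2 spanning the whole disk.

root@server # prtvtoc /dev/rdsk/c0t1d0s2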

root@server # zpool create <rpool-name> c0t1d0s0
root@server # lucreate -c ufsBE -n zfsBE -p <rpool-name>
[...]
root@server # lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         

root@server # zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE           5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  - 
rpool/swap                 1.95G      -  1.95G  - 

root@server # luactivate zfsBE
--> known bug: '/usr/sbin/luactivate: /etc/lu/DelayUpdate/: cannot create'
--> workaround: export BOOT_MENU_FILE="menu.lst"
--> then run: luactivate zfsBE
root@server # init 6
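
After the reboot the root file system should live in the pool; once everything looks fine the old UFS BE can be deleted to free its slices (sketch):

root@server # df -h /
root@server # lustatus
root@server # ludelete ufsBE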

Mirror the rpool after the UFS to ZFS Migration

# metaclear [-r] dx [dn]
# metadb -d cxtxdxsx [2n]
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# zpool attach rpool <first> <mirror>
[sparc] # installboot -F zfs \
/usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
[x86] # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
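
The attach kicks off a resilver; watch it and wait until it has finished before trusting the mirror (sketch):

# zpool status rpool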

LU to a mirror

root@server # lucreate -c "Solaris_8" \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s6:attach \
-m /:/dev/dsk/c0t3d0s6:attach \
-n "Solaris_9"

LU using a Mirror while preserving Data

In this example I am upgrading to Solaris 10 Update 9; the upgrade image is mounted on /mnt2, and the -k keyword file disables the auto-registration prompt that is new in Update 9.

root@ldg4 # lucreate -c sol10u8 \
root@ldg4 > -m /:/dev/md/dsk/d100:ufs,mirror \
root@ldg4 > -m /:/dev/dsk/c0d0s0:detach,attach,preserve \
root@ldg4 > -n sol10u9
root@ldg4 # echo "autoreg=disable" > /tmp/tt
root@ldg4 # luupgrade -k /tmp/tt -u -n sol10u9 -s /mnt2
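
From here it continues like a normal Live Upgrade: activate the new BE and reboot (sketch):

root@ldg4 # luactivate sol10u9
root@ldg4 # init 6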


Issues

A failed upgrade from Solaris 10 Update 8 to Update 9.

root@server # lucreate  -n sol10u9
Analyzing system configuration.
Comparing source boot environment <sol10u8> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <sol10u9>.
Source boot environment is <sol10u8>.
Creating boot environment <sol10u9>.
Creating file systems on boot environment <sol10u9>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/sol10u9>.
/usr/lib/lu/lumkfs: test: unknown operator zfs
Populating file systems on boot environment <sol10u9>.
ERROR: Unable to mount ABE <sol10u9> from ICF file </etc/lu/ICF.2>.
ERROR: Contents of ICF file </etc/lu/ICF.2>:
****
sol10u9:-:/dev/zvol/dsk/rpool/swap:swap:8388608
sol10u9:/:rpool/ROOT/sol10u9:zfs:16906934
sol10u9:/export:rpool/export:zfs:0
sol10u9:/export/home:rpool/export/home:zfs:0
sol10u9:/rpool:rpool:zfs:0
sol10u9:/var:rpool/ROOT/sol10u9/var:zfs:5665233
****
ERROR: Output from </usr/lib/lu/lumount -c / -Z -i /etc/lu/ICF.2>:
****
ERROR: cannot open 'rpool/ROOT/sol10u9/var': dataset does not exist
ERROR: cannot open 'rpool/ROOT/sol10u9/var': dataset does not exist
ERROR: cannot mount mount point </.alt.tmp.b-G4b.mnt/var> device <rpool/ROOT/sol10u9/var>
ERROR: failed to mount file system <rpool/ROOT/sol10u9/var> on </.alt.tmp.b-G4b.mnt/var>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
****
ERROR: Unable to copy file systems from boot environment <sol10u8> to BE <sol10u9>.
ERROR: Unable to populate file systems on boot environment <sol10u9>.
ERROR: Cannot make file systems for boot environment <sol10u9>.
root@server # 

Found on solarisinternals.com:

If you have a separate /var dataset in a previous Solaris 10 release and you want to use luupgrade to upgrade to the Solaris 10 10/09 release, and you have applied patch 121430-42 (on SPARC systems) or 121431-43 (on x86 systems), apply IDR143154-01 (SPARC) or IDR143155-01 (x86) to avoid the issue described in CR 6884728. Symptoms of this bug are that entries for the ZFS BE and ZFS /var dataset are incorrectly placed in the /etc/vfstab file. The workaround is to boot in maintenance mode and remove the erroneous vfstab entries.

root@server # vi /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/zvol/dsk/rpool/swap        -       -       swap    -       no      -
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap - /tmp tmpfs - yes size=512m
--> #rpool/ROOT/sol10u8     -       /       zfs     1       no      -
--> #rpool/ROOT/sol10u8/var -       /var    zfs     1       no      -

root@server # init 6
[...]
Rebooting with command: boot
Boot device: /pci@1f,700000/scsi@2/disk@0,0:a  File and args:
SunOS Release 5.10 Version Generic_142900-07 64-bit
Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Indicator SYS.ACT is now ON
Hostname: server
ERROR: svc:/system/filesystem/minimal:default failed to mount /var/run  (see 'svcs -x' for details)
Nov  4 12:19:17 svc.startd[7]: svc:/system/filesystem/minimal:default: Method "/lib/svc/method/fs-minimal" failed with exit status 95.
Nov  4 12:19:17 svc.startd[7]: system/filesystem/minimal:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
Requesting System Maintenance Mode
(See /lib/svc/share/README for more information.)
Console login service(s) cannot run

Root password for system maintenance (control-d to bypass):
single-user privilege assigned to /dev/console.
Entering System Maintenance Mode
root@server # zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   11.7G  55.3G    96K  /rpool
rpool/ROOT              8.09G  55.3G    18K  legacy
rpool/ROOT/sol10u8      8.09G  55.3G  5.39G  /
rpool/ROOT/sol10u8/var  2.70G  55.3G  2.70G  legacy
rpool/dump              2.00G  55.3G  2.00G  -
rpool/export            1.08G  55.3G    22K  /export
rpool/export/home       1.08G  55.3G  1.08G  /export/home
rpool/swap               503M  55.3G   503M  -
root@server # zfs set mountpoint=/var rpool/ROOT/sol10u8/var
root@server # zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   11.7G  55.3G    96K  /rpool
rpool/ROOT              8.09G  55.3G    18K  legacy
rpool/ROOT/sol10u8      8.09G  55.3G  5.39G  /
rpool/ROOT/sol10u8/var  2.70G  55.3G  2.70G  /var
rpool/dump              2.00G  55.3G  2.00G  -
rpool/export            1.08G  55.3G    22K  /export
rpool/export/home       1.08G  55.3G  1.08G  /export/home
rpool/swap               503M  55.3G   503M  -
root@server # reboot
[...]
root@server # lucreate -c sol10u8 -n sol10u10 -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <sol10u8>.
Creating initial configuration for primary boot environment <sol10u8>.
The device </dev/dsk/c6t5000C5000F240803d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <sol10u8> PBE Boot Device </dev/dsk/c6t5000C5000F240803d0s0>.
Comparing source boot environment <sol10u8> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <sol10u10>.
Source boot environment is <sol10u8>.
Creating boot environment <sol10u10>.
Cloning file systems from boot environment <sol10u8> to create boot environment <sol10u10>.
Creating snapshot for <rpool/ROOT/sol10u8> on <rpool/ROOT/sol10u8@sol10u10>.
Creating clone for <rpool/ROOT/sol10u8@sol10u10> on <rpool/ROOT/sol10u10>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/sol10u10>.
Creating snapshot for <rpool/ROOT/sol10u8/var> on <rpool/ROOT/sol10u8/var@sol10u10>.
Creating clone for <rpool/ROOT/sol10u8/var@sol10u10> on <rpool/ROOT/sol10u10/var>.
Setting canmount=noauto for </var> in zone <global> on <rpool/ROOT/sol10u10/var>.
Creating snapshot for <rpool/ROOT/sol10u8/zones> on <rpool/ROOT/sol10u8/zones@sol10u10>.
Creating clone for <rpool/ROOT/sol10u8/zones@sol10u10> on <rpool/ROOT/sol10u10/zones>.
Setting canmount=noauto for </zones> in zone <global> on <rpool/ROOT/sol10u10/zones>.
Creating snapshot for <rpool/ROOT/sol10u8/zones/zone1> on <rpool/ROOT/sol10u8/zones/zone1@sol10u10>.
Creating clone for <rpool/ROOT/sol10u8/zones/zone1@sol10u10> on <rpool/ROOT/sol10u10/zones/zone1-sol10u10>.
Population of boot environment <sol10u10> successful.
Creation of boot environment <sol10u10> successful.
root@server #
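
With the bogus vfstab entries removed and the /var mountpoint fixed, lucreate works again, and the upgrade continues exactly like the normal flow above (sketch; /tmp/tt and /mnt2 are the same assumptions as in the earlier example, i.e. the autoreg keyword file and the mounted upgrade image):

root@server # luupgrade -k /tmp/tt -u -n sol10u10 -s /mnt2
root@server # luactivate sol10u10
root@server # init 6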