In the previous post we took a broad look at how to create a ZFS pool, and I mentioned some of its advantages over traditional UFS. This time we are going to see how to migrate a system running on UFS to ZFS.
For this I need a spare disk on which to create the pool that will hold the system. In my case I am running Solaris 10 on x86, so first I have to initialize the disk with fdisk:
fdisk c1t1d0p0
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
Next, I create a slice on the disk using format:
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 2606e
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 1 - 2606 19.96GB (2606/0/0) 41865390
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0
partition> label
Ready to label disk, continue? y
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
Since the disk is going to hold the operating system, I have to make it bootable:
fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/c1t1d0p0
Total disk size is 2610 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 2609 2609 100
SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 5
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 272 sectors starting at 50 (abs 16115)
We create the pool:
zpool create rpool c1t1d0s0
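Before handing the pool over to Live Upgrade, it does not hurt to confirm that it was created and reports ONLINE; a quick, optional check (the output depends on your disk layout):
# rpool should show state ONLINE with c1t1d0s0 as its only device
zpool status rpool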
Now we are going to use Live Upgrade to carry out the migration. The first step is to create a boot environment using the lucreate command:
lucreate -c BE_ufs_1009 -n BE_zfs_1009 -p rpool
These parameters mean the following:
- -c BE_ufs_1009: the name we give to the current environment; it is entirely up to you. For this particular name I am following a very common convention: BE for boot environment, ufs for the current file system, and 1009 for the Solaris release (10/09).
- -n BE_zfs_1009: the name of the new environment.
- -p rpool: the pool that will hold the new environment.
When we run it, some errors and warnings will appear; they are related to the fact that this is the first time a boot environment is being defined.
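If you are curious about what lucreate actually built, you can list the datasets in the pool; this is only a sanity check, and the exact names (the root dataset under rpool/ROOT plus swap and dump volumes) depend on your configuration:
# List every dataset lucreate created under rpool
zfs list -r rpool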
Once this process finishes, we review the boot environments with lustatus:
lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE_ufs_1009 yes yes yes no -
BE_zfs_1009 yes no no yes -
Note that the ZFS environment shows up as complete; it is very important that it has this status. Now we activate the new environment with luactivate:
luactivate BE_zfs_1009
Generating boot-sign, partition and slice information for PBE <BE_ufs_1009>
A Live Upgrade Sync operation will be performed on startup of boot environment <BE_zfs_1009>.
Generating boot-sign for ABE <BE_zfs_1009>
NOTE: File </etc/bootsign> not found in top level dataset for BE <BE_zfs_1009>
Generating partition and slice information for ABE <BE_zfs_1009>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c1t0d0s0 /mnt
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <BE_zfs_1009> successful.
More errors and warnings appear, and it even gives us instructions on what to do in case of failure. We reboot the operating system:
shutdown -i6 -g0 -y
In the GRUB menu we choose the entry that corresponds to our new boot environment and, if nothing goes wrong, the operating system should come up correctly. Looking at the mounted filesystems we see the following:
df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/BE_zfs_1009
20G 4.1G 13G 24% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.3G 348K 2.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
17G 4.1G 13G 24% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 2.3G 40K 2.3G 1% /tmp
swap 2.3G 24K 2.3G 1% /var/run
/dev/dsk/c1t0d0s5 2.9G 3.0M 2.9G 1% /export/home
rpool 20G 35K 13G 1% /rpool
rpool/ROOT 20G 21K 13G 1% /rpool/ROOT
Only /export/home is still on UFS, so we migrate it manually:
umount /export/home
rmdir /export/home
zfs create -o mountpoint=/export/home rpool/home
cd /export/home
ufsdump 0cf - /dev/dsk/c1t0d0s5 | ufsrestore rf -
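For reference, the /export/home entry to remove from /etc/vfstab should look roughly like the line below; the device names come from this example layout and may differ on your system:
/dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5  /export/home  ufs  2  yes  -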
We remove that line from /etc/vfstab and that's it; we can now delete the previous boot environment:
ludelete -f BE_ufs_1009
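After the delete, only the ZFS environment should remain, marked as active now and on reboot; a quick way to confirm it:
# BE_ufs_1009 should no longer be listed
lustatus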
We reboot the operating system once more and verify that it comes up without problems.
With that, the migration to ZFS is complete. Since I now have a free hard disk, I am going to mirror my system. It is very simple: with the format command I delete all the slices on the disk that was freed (disk 0), create a single slice spanning all the available space, and then attach it to the data pool (a non-interactive way to copy the slice layout is sketched after the attach command):
zpool attach -f rpool c1t1d0s0 c1t0d0s0
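As a side note, if you would rather not go through the interactive format session, the slice layout of the pool disk can be copied onto the new mirror disk before running the attach; this is just a sketch that assumes both disks are the same size, as they are here (c1t0d0 already has its Solaris fdisk partition, since it was the original boot disk):
# Copy the VTOC (slice table) from the pool disk to the disk being added as a mirror
prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2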
And as a final step, we reinstall GRUB on c1t0d0s0:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
We can monitor the resilvering progress with zpool status:
zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h4m, 48.94% done, 0h4m to go
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0 2.23G resilvered
errors: No known data errors