You need an SMI label.
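If the disk currently carries an EFI label, you can switch it to SMI in format's expert mode. A rough sketch, assuming the target disk is c0t1d0 (the exact prompts can vary by release):

# format -e c0t1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y
format> quit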
Create a ZFS root pool on a slice.

# zpool create rpool c0t1d0s5
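As a quick sanity check afterwards (output omitted here), the pool should come up ONLINE on that single slice:

# zpool status rpool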
# lucreate -c c1t0d0s0 -n new-zfsBE -p rpool
Analyzing system configuration.
Comparing source boot environment <c1t0d0s0> file systems with the
file system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <c1t0d0s0>.
Creating boot environment <new-zfsBE>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <new-zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <new-zfsBE>.
Making boot environment <new-zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-aJb.mnt
updating /.alt.tmp.b-aJb.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      yes    yes       no     -
new-zfsBE                  yes      no     no        yes    -
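Between this listing and the next one, the new BE evidently gets activated and the machine rebooted. A sketch of that step; luactivate prints lengthy fallback instructions that are omitted here, and Live Upgrade wants init 6 rather than a plain reboot:

# luactivate new-zfsBE
...
# init 6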
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t0d0s0                   yes      no     no        yes    -
new-zfsBE                  yes      yes    yes       no     -
# ludelete c1t0d0s0
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <c1t0d0s0> deleted.
# zpool attach -f rpool c1t1d0s0 c1t0d0s0
cannot attach c1t0d0s0 to c1t1d0s0: device is too small
You could soon come work for us.
If you want to put the whole disk into the zpool, use "zpool attach -f rpool c1t1d0 c1t0d0" (i.e. without the s0 at the end). The error message could come from the fact that slice 0 here really does still hold a partition that is too small (prtvtoc will show it).
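To see what is actually sitting in slice 0 before retrying, something like this (reading the VTOC off the backup slice s2):

# prtvtoc /dev/rdsk/c1t0d0s2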
# zpool destroy -f rpool
# zpool attach -f rpool2 c1t0d0s0 c1t1d0s0
# zpool status
  pool: rpool2
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h2m, 22.90% done, 0h8m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool2        ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
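One follow-up worth mentioning: attaching a second disk to a ZFS root pool does not copy the boot block over, so you have to install it yourself once the resilver is done. For the sun4u/SPARC setup visible in the lucreate output above, that would be:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0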