host (cn40), we now get to start checking out the handling and management
of SmartOS OS VMs (SOSVMs). Jumping in where we left off in part 3, we
start with the configuration for our image repository hosted on "serv1"
(our services host):
cn40 [0] /usr/bin/getent hosts serv1.admin.none
10.0.7.10       serv1.admin.none
cn40 [0] cat /var/db/imgadm/sources.list
https://datasets.joyent.com/datasets
cn40 [0] cat /var/db/imgadm/sources.list
http://serv1.admin.none/datasets/
cn40 [0]

In the above, I've checked that we can resolve the FQDN for "serv1"
via 'getent'. I've also updated the 'imgadm' sources file, replacing the
Joyent.com default with the configuration for "serv1". (Remember, this is
a completely sandboxed environment, so there is no routing to the outside
world, including to joyent.com.) In the course of my testing, I found
that the URL in "sources.list" above needs to include an FQDN; an IP
address won't work. Also, 'imgadm' seems to fail on "update" if there
isn't a trailing slash (/), which is why one is added in the updated
"sources.list" entry.
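If you're scripting that change rather than editing the file by hand, a
minimal sketch might look like the following (the hostname and URL are
specific to this lab; keep the FQDN and trailing-slash caveats in mind):

# keep the Joyent default around, then point imgadm at our local repository
/bin/cp /var/db/imgadm/sources.list /var/db/imgadm/sources.list.orig
echo 'http://serv1.admin.none/datasets/' > /var/db/imgadm/sources.list
# sanity check: the FQDN must resolve before 'imgadm update' will succeed
/usr/bin/getent hosts serv1.admin.none || echo 'serv1.admin.none does not resolve'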
"source.list" entry. The failure of no trailing slash can be seen below:
cn40 [0] imgadm update
updating local images database...
Get http://serv1.admin.none/datasets...

undefined:1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
^
SyntaxError: Unexpected token <
    at Object.parse (native)
    at IncomingMessage.cacheUpdate (/usr/img/node_modules/imgadm.js:343:38)
    at IncomingMessage.EventEmitter.emit (events.js:126:20)
    at IncomingMessage._emitEnd (http.js:366:10)
    at HTTPParser.parserOnMessageComplete [as onMessageComplete] (http.js:149:23)
    at Socket.socketOnData [as ondata] (http.js:1367:20)
    at TCP.onread (net.js:403:27)
cn40 [1]

With our side tangent resolved and our "sources.list" file updated
appropriately, 'imgadm update' now functions and adds the image
references from our image repository on "serv1" to our local image
database. While we still don't have any locally imported images
('imgadm list' reports nothing), we can see what images our repository
has available via 'imgadm avail':
cn40 [0] /usr/sbin/imgadm update
updating local images database...
Get http://serv1.admin.none/datasets/...
done
cn40 [0] /usr/sbin/imgadm list
cn40 [0] /usr/sbin/imgadm avail
UUID                                  OS       PUBLISHED   URN
fdea06b0-3f24-11e2-ac50-0b645575ce9d  smartos  2012-12-05  sdc:sdc:base64:1.8.4
84cb7edc-3f22-11e2-8a2a-3f2a7b148699  smartos  2012-12-05  sdc:sdc:base:1.8.4
aa583f78-3d83-11e2-9188-fff9b605718d  smartos  2012-12-03  sdc:sdc:base64:1.8.2
ef22b182-3d7a-11e2-a7a9-af27913943e2  smartos  2012-12-03  sdc:sdc:base:1.8.2
b00acc20-14ab-11e2-85ae-4f03a066e93e  smartos  2012-10-12  sdc:sdc:mongodb:1.4.0
1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc  smartos  2012-10-11  sdc:sdc:nodejs:1.4.0
dc1a8b5e-043c-11e2-9d94-0f3fcb2b0c6d  smartos  2012-09-21  sdc:sdc:percona:1.6.0
a0f8cf30-f2ea-11e1-8a51-5793736be67c  smartos  2012-08-30  sdc:sdc:standard64:1.0.7
3390ca7c-f2e7-11e1-8818-c36e0b12e58b  smartos  2012-08-30  sdc:sdc:standard:1.0.7
cn40 [0]

The images on our image repository contain compressed ZFS snapshots.
Once imported, we'll see that they've been added to our local ZFS
datasets. Below, I've checked our reported file systems via 'df' and
imported the images for sdc:sdc:base64:1.8.4, sdc:sdc:mongodb:1.4.0,
and sdc:sdc:nodejs:1.4.0 by their respective UUIDs (as reported by
'imgadm avail').
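(As an aside, if you'd rather not shuffle UUIDs around by hand, the UUID
for a given URN can be scraped out of 'imgadm avail'. A hedged sketch,
relying on the URN being the last column as in the listing above;
'urn_import' is just a hypothetical helper name:)

# import an image by URN rather than UUID
urn_import() {
  uuid=$(/usr/sbin/imgadm avail | /usr/bin/nawk -v urn="$1" '$NF == urn { print $1 }')
  [ -n "${uuid}" ] && /usr/sbin/imgadm import "${uuid}"
}
urn_import sdc:sdc:base64:1.8.4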
The locally imported images are then listed via 'imgadm list':
cn40 [0] /bin/df -h | /bin/grep '/zones'
zones         16G   650K    13G     1%    /zones
zones/cores   10G    31K    10G     1%    /zones/global/cores
zones/config  16G    38K    13G     1%    /etc/zones
cn40 [0] for i in fdea06b0-3f24-11e2-ac50-0b645575ce9d \
> b00acc20-14ab-11e2-85ae-4f03a066e93e 1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc ; do \
> /usr/sbin/imgadm import ${i} ; done
fdea06b0-3f24-11e2-ac50-0b645575ce9d doesnt exist. continuing with install
fdea06b0-3f24-11e2-ac50-0b645575ce9d successfully installed
image fdea06b0-3f24-11e2-ac50-0b645575ce9d successfully imported
b00acc20-14ab-11e2-85ae-4f03a066e93e doesnt exist. continuing with install
b00acc20-14ab-11e2-85ae-4f03a066e93e successfully installed
image b00acc20-14ab-11e2-85ae-4f03a066e93e successfully imported
1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc doesnt exist. continuing with install
1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc successfully installed
image 1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc successfully imported
cn40 [0] /usr/sbin/imgadm list
UUID                                  OS       PUBLISHED   URN
b00acc20-14ab-11e2-85ae-4f03a066e93e  smartos  2012-10-12  sdc:sdc:mongodb:1.4.0
1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc  smartos  2012-10-11  sdc:sdc:nodejs:1.4.0
fdea06b0-3f24-11e2-ac50-0b645575ce9d  smartos  2012-12-05  sdc:sdc:base64:1.8.4
cn40 [0] /bin/df -h | /bin/grep '/zones'
zones                                       16G   654K    11G     1%    /zones
zones/cores                                 10G    31K    10G     1%    /zones/global/cores
zones/config                                16G    38K    11G     1%    /etc/zones
zones/fdea06b0-3f24-11e2-ac50-0b645575ce9d  16G   372M    11G     4%    /zones/fdea06b0-3f24-11e2-ac50-0b645575ce9d
zones/b00acc20-14ab-11e2-85ae-4f03a066e93e  16G   774M    11G     7%    /zones/b00acc20-14ab-11e2-85ae-4f03a066e93e
zones/1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc  16G   1.0G    11G     9%    /zones/1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc
cn40 [0] /usr/sbin/zfs list -r zones | /bin/grep -- '-'
zones/1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc  1.02G  10.6G  1.02G  /zones/1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc
zones/b00acc20-14ab-11e2-85ae-4f03a066e93e   774M  10.6G   774M  /zones/b00acc20-14ab-11e2-85ae-4f03a066e93e
zones/dump                                   838M  10.6G   838M  -
zones/fdea06b0-3f24-11e2-ac50-0b645575ce9d   372M  10.6G   372M  /zones/fdea06b0-3f24-11e2-ac50-0b645575ce9d
zones/swap                                  2.06G  12.7G    16K  -

Following the 'imgadm' listing of imported images, our 'df' output above
now reports the imported ZFS datasets related to each image. A 'zfs
list' further supports our 'df' output. Since these templates (images)
are really just unconfigured zones, their respective ZFS datasets will
be cloned during the creation of new SOSVMs.
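(Jumping ahead a bit: once a VM has been created from an image, ZFS
itself can confirm the clone relationship. A hedged sketch, using the
UUID of the "vmnonet" VM created later in this write-up; the snapshot
names are whatever 'zfs list -t snapshot' reports on your system:)

# list the snapshots that came along with the imported images
/usr/sbin/zfs list -t snapshot -r zones
# ask ZFS which snapshot a VM's dataset was cloned from
/usr/sbin/zfs get -H -o value origin zones/c7885cb0-4489-484d-9cb9-3e048c1f0ed5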
To further illustrate this, we can have a look at an image's manifest
via 'imgadm show':
cn40 [0] /usr/sbin/imgadm show 1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc
{
  "name": "nodejs",
  "version": "1.4.0",
  "type": "zone-dataset",
  "description": "Node.js Image with MongoDB",
  "published_at": "2012-10-11T14:40:57.936Z",
  "os": "smartos",
  "files": [
    {
      "path": "nodejs-1.4.0.zfs.bz2",
      "sha1": "d2c40995ff693994e0343a661fb2279794546748",
      "size": 269634973,
      "url": "http://10.0.7.10/datasets/1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc/nodejs-1.4.0.zfs.bz2"
    }
  ],
  "requirements": {
    "networks": [
      {
        "name": "net0",
        "description": "public"
      }
    ]
  },
  "users": [
    {
      "name": "root"
    },
    {
      "name": "admin"
    },
    {
      "name": "mongodb"
    }
  ],
  "generate_passwords": true,
  "uuid": "1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc",
  "creator_uuid": "352971aa-31ba-496c-9ade-a379feaecd52",
  "vendor_uuid": "352971aa-31ba-496c-9ade-a379feaecd52",
  "creator_name": "sdc",
  "platform_type": "smartos",
  "cloud_name": "sdc",
  "urn": "sdc:sdc:nodejs:1.4.0",
  "created_at": "2012-10-11T14:40:57.936Z",
  "updated_at": "2012-10-11T14:40:57.936Z",
  "_url": "http://serv1.admin.none/datasets/"
}

Now that we have some images locally imported and at our disposal,
let's create a simple OS VM using the "base64:1.8.4" image. If you're
familiar with 'zoneadm' and 'zonecfg', you could potentially use them
directly, but 'vmadm' makes things a lot simpler. Below, I've created
a new SOSVM, "vmnonet", by passing JSON attributes to 'vmadm' (see
vmadm(1m) for details on the available properties).
Additionally, a 'vmadm list' and a 'zoneadm list' both confirm our new
SOSVM is now running:
cn40 [0] /usr/sbin/vmadm create <<EOF
> { "alias": "vmnonet", "brand": "joyent", "image_uuid": "fdea06b0-3f24-11e2-ac50-0b645575ce9d" }
> EOF
Successfully created c7885cb0-4489-484d-9cb9-3e048c1f0ed5
cn40 [0] /usr/sbin/vmadm list
UUID                                  TYPE  RAM  STATE    ALIAS
c7885cb0-4489-484d-9cb9-3e048c1f0ed5  OS    256  running  vmnonet
cn40 [0] /usr/sbin/zoneadm list
global
c7885cb0-4489-484d-9cb9-3e048c1f0ed5
cn40 [0] zoneadm list -v
  ID NAME                                  STATUS   PATH                                         BRAND    IP
   0 global                                running  /                                            liveimg  shared
   1 c7885cb0-4489-484d-9cb9-3e048c1f0ed5  running  /zones/c7885cb0-4489-484d-9cb9-3e048c1f0ed5  joyent   excl
cn40 [0] /bin/df -h | /bin/grep c7885cb0-4489-484d-9cb9-3e048c1f0ed5
zones/c7885cb0-4489-484d-9cb9-3e048c1f0ed5        10G  379M  10.0G  4%  /zones/c7885cb0-4489-484d-9cb9-3e048c1f0ed5
zones/cores/c7885cb0-4489-484d-9cb9-3e048c1f0ed5  10G   31K    10G  1%  /zones/c7885cb0-4489-484d-9cb9-3e048c1f0ed5/cores
cn40 [0]

The 'df' output above identifies two ZFS datasets for our new VM: one
for the VM's normal usage, and the other, "cores", for any core dumps.
Of note, all SOSVMs are identified in SmartOS via UUID. While not a
required attribute, including an "alias" property during SOSVM creation
can help reduce confusion in identifying your VMs via 'vmadm' ('zoneadm'
will not show aliases).
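The alias also gives you something to filter on. A hedged sketch; my
vmadm build accepts field=value filters and an '-o' column list, but
check vmadm(1m) on yours:

# look up a VM by the alias we assigned at creation time
/usr/sbin/vmadm lookup alias=vmnonet
# or list just the columns of interest, filtered by alias
/usr/sbin/vmadm list -o uuid,alias,state alias=vmnonet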
"vmnonet", we can only log on locally, as seen below via 'zlogin':
cn40 [0] /usr/sbin/zlogin -l root c7885cb0-4489-484d-9cb9-3e048c1f0ed5
[Connected to zone 'c7885cb0-4489-484d-9cb9-3e048c1f0ed5' pts/3]
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-|  `-' '  ' `-'
                   /  ;  SmartMachine base64 1.8.4
                   `-'   http://wiki.joyent.com/jpc2/SmartMachine+Base
[root@c7885cb0-4489-484d-9cb9-3e048c1f0ed5 ~]# /usr/sbin/ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
[root@c7885cb0-4489-484d-9cb9-3e048c1f0ed5 ~]# exit
logout

[Connection to zone 'c7885cb0-4489-484d-9cb9-3e048c1f0ed5' pts/3 closed]
cn40 [0]

Alright, one VM down, let's create another, this time incorporating
networking. For our new SOSVM, "vm100", I've opted to use an input file:
cn40 [0] cat /var/tmp/vm100
{
  "alias": "vm100",
  "brand": "joyent",
  "image_uuid": "1fc068b0-13b0-11e2-9f4e-2f3f6a96d9bc",
  "dns_domain": "world.none",
  "hostname": "vm100",
  "resolvers": [ "10.0.8.10" ],
  "nics": [
    {
      "nic_tag": "world",
      "ip": "10.0.8.100",
      "netmask": "255.255.255.0",
      "gateway": "10.0.8.37",
      "primary": true
    }
  ]
}

The image we'll be using is our locally imported nodejs:1.4.0 image.
Added to the above are our domain, name server(s), and the configuration
for our SOSVM's network interface. We've identified the global NIC to
attach to using the NIC_TAG we previously obtained from 'sysinfo -p'
in part 3.
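(Incidentally, NICs don't have to be defined at creation time. If I'm
reading vmadm(1m) correctly, an 'update' payload can add one later; a
hedged sketch, using "vm100"'s eventual UUID and a made-up second
address on the same "world" tag:)

/usr/sbin/vmadm update 7b7f6343-2584-46a4-a077-707281108449 <<EOF
{
  "add_nics": [
    {
      "nic_tag": "world",
      "ip": "10.0.8.101",
      "netmask": "255.255.255.0"
    }
  ]
}
EOF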
Below, we create the new VM using 'vmadm', list our running VMs, and,
using 'dladm', see that the vNIC (net0) for "vm100" is attached to our
"world" NIC (e1000g1, in the global zone):
cn40 [0] /usr/sbin/vmadm create -f /var/tmp/vm100
Successfully created 7b7f6343-2584-46a4-a077-707281108449
cn40 [0] /usr/sbin/vmadm list
UUID                                  TYPE  RAM  STATE    ALIAS
7b7f6343-2584-46a4-a077-707281108449  OS    256  running  vm100
c7885cb0-4489-484d-9cb9-3e048c1f0ed5  OS    256  running  vmnonet
cn40 [0] /usr/sbin/dladm show-link
LINK       CLASS   MTU   STATE  BRIDGE    OVER
e1000g0    phys    1500  up     vmwarebr  --
e1000g1    phys    1500  up     vmworld   --
vmwarebr0  bridge  1500  up     --        e1000g0
vmworld0   bridge  1500  up     --        e1000g1
net0       vnic    1500  ?      --        e1000g1
cn40 [0] /usr/sbin/dladm show-vnic
LINK  OVER     SPEED  MACADDRESS         MACADDRTYPE  VID  ZONE
net0  e1000g1  0      92:bd:41:a6:c7:74  fixed        0    7b7f6343-2584-46a4-a077-707281108449
cn40 [0]

In part 3, a piece of our configuration included importing an SMF
service to automatically create a bridge to support our SOSVMs. Without
that bridge linked to our "world" NIC, our VMs wouldn't be reachable
outside of our compute node (cn40). Since the bridge "vmworld" has been
set up, we can now remotely access our VM (vm100). By default,
SmartOS-created VMs are configured to allow only ssh key-based
authentication, with no keyboard-interactive logins. This means
generating an ssh key via 'ssh-keygen' with a blank passphrase on our
workstation host (glados):
# on glados:
troy@glados [0] ssh-keygen -f ~/.ssh/vms-smartos
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/troy/.ssh/vms-smartos.
Your public key has been saved in /home/troy/.ssh/vms-smartos.pub.
The key fingerprint is:
a2:0d:81:d1:87:29:ee:49:1c:40:e0:20:c8:eb:f6:05 troy@glados.vbox.none
The key's randomart image is:
+--[ RSA 2048]----+
|Xo.. o           |
|=.oo+ .          |
| +oo..           |
| .+E .           |
|.o .o . S        |
| oo = .          |
|. . o .          |
| .               |
|                 |
+-----------------+
troy@glados [0]

Once you've generated a key to use, add the contents of the public key
(vms-smartos.pub) to the root user's "authorized_keys" file for your
new VM (vm100) on your compute node (cn40). The path in the case of
this write-up is:
/zones/7b7f6343-2584-46a4-a077-707281108449/root/root/.ssh/authorized_keys

(The "7b7f6343-2584-46a4-a077-707281108449" part of the path is the
UUID of our VM.)
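From the global zone on "cn40", that copy can be scripted once the .pub
file has been transferred over; a minimal sketch (note the doubled
"root" in the path: the zone's filesystem root, followed by root's home
directory):

# run in the global zone on cn40
U=7b7f6343-2584-46a4-a077-707281108449
/bin/mkdir -p /zones/${U}/root/root/.ssh
/bin/cat vms-smartos.pub >> /zones/${U}/root/root/.ssh/authorized_keys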
Back on our workstation, we can log into "vm100" by passing our private
key (vms-smartos) to 'ssh':
troy@glados [0] ssh -i ~/.ssh/vms-smartos -l root vm100
The authenticity of host 'vm100 (10.0.8.100)' can't be established.
RSA key fingerprint is 13:ad:45:d0:87:e9:79:cb:08:32:2c:b5:05:c5:9c:52.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'vm100,10.0.8.100' (RSA) to the list of known hosts.
Last login: Tue Jan 29 04:42:00 2013 from 192.168.56.1
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-|  `-' '  ' `-'
                   /  ;  SmartMachine (nodejs 1.4.0)
                   `-'   http://wiki.joyent.com/jpc2/SmartMachine+Node.JS
[root@vm100 ~]#

Excellent, we're logged in to our networked nodejs VM. As part of a
cursory review of the VM, I've checked the output of 'ifconfig' and
'df', validated host resolution via 'host', reviewed "resolv.conf",
and checked our routing table via 'netstat':
[root@vm100 ~]# /usr/sbin/ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
net0: flags=40001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT> mtu 1500 index 2
        inet 10.0.8.100 netmask ffffff00 broadcast 10.0.8.255
        ether 92:bd:41:a6:c7:74
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
[root@vm100 ~]# /bin/df -h
Filesystem                                  size  used  avail  capacity  Mounted on
zones/7b7f6343-2584-46a4-a077-707281108449   11G  1.0G  10.0G       10%  /
/dev                                          0K    0K     0K        0%  /dev
/lib                                        251M  219M    32M       88%  /lib
/lib/svc/manifest                            11G  656K    11G        1%  /lib/svc/manifest
/lib/svc/manifest/site                       11G  1.0G  10.0G       10%  /lib/svc/manifest/site
/sbin                                       251M  219M    32M       88%  /sbin
/usr                                        376M  355M    22M       95%  /usr
/usr/ccs                                     11G  1.0G  10.0G       10%  /usr/ccs
/usr/local                                   11G  1.0G  10.0G       10%  /usr/local
proc                                          0K    0K     0K        0%  /proc
ctfs                                          0K    0K     0K        0%  /system/contract
mnttab                                        0K    0K     0K        0%  /etc/mnttab
objfs                                         0K    0K     0K        0%  /system/object
lxproc                                        0K    0K     0K        0%  /system/lxproc
swap                                        256M   49M   207M       20%  /etc/svc/volatile
/usr/lib/libc/libc_hwcap2.so.1              376M  355M    22M       95%  /lib/libc.so.1
fd                                            0K    0K     0K        0%  /dev/fd
swap                                        256M   49M   207M       20%  /tmp
swap                                        256M   49M   207M       20%  /var/run
[root@vm100 ~]# /usr/sbin/host serv1.world.none
serv1.world.none has address 10.0.8.10
[root@vm100 ~]# /bin/cat /etc/resolv.conf
nameserver 10.0.8.10
[root@vm100 ~]# /usr/bin/netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.0.8.37            UG        3        458 net0
10.0.8.0             10.0.8.100           U         4          5 net0
127.0.0.1            127.0.0.1            UH        2          0 lo0

Routing Table: IPv6
  Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------- -----
::1                         ::1                         UH      2       0 lo0
[root@vm100 ~]#

By default, our SOSVM is created fairly secure, with only one external
socket open: that of 'sshd' on port 22. Being curious about the open
port on 127.0.0.1 below, I turned to 'pfiles':
[root@vm100 ~]# /usr/bin/netstat -f inet -na

TCP: IPv4
   Local Address        Remote Address    Swind Send-Q Rwind Recv-Q    State
-------------------- -------------------- ----- ------ ----- ------ -----------
127.0.0.1.27017            *.*                0      0 128000      0 LISTEN
      *.22                 *.*                0      0 128000      0 LISTEN
10.0.8.100.22        192.168.56.1.50078   64128      0 128872      0 ESTABLISHED
[root@vm100 ~]# /usr/bin/pfiles /proc/* | /usr/bin/egrep "^[0-9]|sockname"
5866:   zsched
5933:   /sbin/init
5973:   /lib/svc/bin/svc.startd
5975:   /lib/svc/bin/svc.configd
6020:   /lib/inet/ipmgmtd
6054:   /usr/lib/pfexecd
6219:   /usr/sbin/nscd
        sockname: AF_ROUTE
6287:   mongod --fork -f /opt/local/etc/mongodb.conf --pidfilepath /var/mongod
        sockname: AF_INET 127.0.0.1  port: 27017
        sockname: AF_UNIX /tmp/mongodb-27017.sock
6293:   /usr/sbin/cron
6295:   /usr/lib/inet/inetd start
        sockname: AF_UNIX /var/run/.inetd.uds
6299:   /usr/sbin/rsyslogd -c5 -n
6300:   /usr/lib/saf/sac -t 300
6305:   /usr/lib/utmpd
6306:   /usr/lib/saf/ttymon
6307:   /usr/lib/saf/ttymon -g -d /dev/console -l console -T vt100 -m ldterm,t
6331:   /usr/lib/ssh/sshd
        sockname: AF_INET6 ::  port: 22
6498:   /usr/lib/ssh/sshd
        sockname: AF_INET6 ::ffff:10.0.8.100  port: 22
6499:   /usr/lib/ssh/sshd
        sockname: AF_INET6 ::ffff:10.0.8.100  port: 22
6502:   -bash
[root@vm100 ~]# exit
logout
Connection to vm100 closed.
troy@glados [0]

Awesome, port 27017 appears to have been opened by 'mongod', PID 6287.
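Since mapping a port to its owning process is a recurring chore, that
pfiles pipe can be wrapped into a tiny helper; a hedged sketch
('port2proc' is a hypothetical name, and the substring match is
deliberately loose):

# print the process whose socket matches the given port
port2proc() {
  /usr/bin/pfiles /proc/* 2>/dev/null | /usr/bin/nawk -v p="port: $1" \
    '/^[0-9]+:/ { proc = $0 } index($0, p) { print proc }'
}
port2proc 27017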
After identifying that, we exit our VM. Next, I wanted to test whether
SmartOS is mindful of VM state across reboots, and to ensure that our
"world-nic" service from part 3 would operate appropriately following
one. With that in mind, I stopped "vmnonet" and verified it was stopped.
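(If I understand vmadm's behavior correctly, this state tracking hinges
on the "autoboot" property, which stop/start keep in sync on my build;
a hedged way to peek at it before rebooting, using the 'json' tool
shipped in the global zone:)

# a stopped VM should report autoboot false, keeping it down across reboots
/usr/sbin/vmadm get c7885cb0-4489-484d-9cb9-3e048c1f0ed5 | /usr/bin/json autoboot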
"cn40":
# back to cn40:
cn40 [0] /usr/sbin/vmadm stop c7885cb0-4489-484d-9cb9-3e048c1f0ed5
Successfully completed stop for c7885cb0-4489-484d-9cb9-3e048c1f0ed5
cn40 [0] /usr/sbin/vmadm list
UUID                                  TYPE  RAM  STATE    ALIAS
7b7f6343-2584-46a4-a077-707281108449  OS    256  running  vm100
c7885cb0-4489-484d-9cb9-3e048c1f0ed5  OS    256  stopped  vmnonet
cn40 [0]
cn40 [0] /usr/sbin/init 6
cn40 [0] Connection to cn40 closed.

Following the reboot of "cn40", I've logged back in via 'ssh' and
verified our network stack in the global zone; everything appears as it
should. Further, I've verified with 'vmadm' that "vm100" was restarted
on boot-up of our compute node, whereas "vmnonet" is still in a stopped
state, just as we left it:
troy@glados [0] ssh -l root cn40
Password:
Last login: Tue Jan 29 03:50:05 2013 from 192.168.56.1
- SmartOS Live Image v0.147+ build: 20130111T010112Z
[root@cn40 ~]# # reset the prompt:
cn40 [0] /usr/sbin/ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=1100943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,ROUTER,IPv4> mtu 1500 index 2
        inet 10.0.7.40 netmask ffffff00 broadcast 10.0.7.255
        ether 8:0:27:2d:59:51
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
cn40 [0] /usr/sbin/dladm show-link
LINK       CLASS   MTU   STATE  BRIDGE    OVER
e1000g0    phys    1500  up     vmwarebr  --
e1000g1    phys    1500  up     vmworld   --
vmwarebr0  bridge  1500  up     --        e1000g0
vmworld0   bridge  1500  up     --        e1000g1
net0       vnic    1500  ?      --        e1000g1
cn40 [0] /usr/sbin/dladm show-bridge
BRIDGE    PROTECT  ADDRESS                PRIORITY  DESROOT
vmwarebr  stp      32768/8:0:27:2d:59:51  32768     32768/8:0:27:2d:59:51
vmworld   stp      32768/8:0:27:91:1f:6e  32768     32768/8:0:27:91:1f:6e
cn40 [0] /usr/sbin/vmadm list
UUID                                  TYPE  RAM  STATE    ALIAS
7b7f6343-2584-46a4-a077-707281108449  OS    256  running  vm100
c7885cb0-4489-484d-9cb9-3e048c1f0ed5  OS    256  stopped  vmnonet
cn40 [0]

For a final act of verification, we can log into "vm100" via 'ssh',
as seen below:
troy@glados [0] ssh -i ~/.ssh/vms-smartos -l root vm100
Last login: Tue Jan 29 04:51:07 2013 from 192.168.56.1
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-|  `-' '  ' `-'
                   /  ;  SmartMachine (nodejs 1.4.0)
                   `-'   http://wiki.joyent.com/jpc2/SmartMachine+Node.JS
[root@vm100 ~]# uptime
 04:57am  up  0:03,  1 user,  load average: 0.09, 0.17, 0.08
[root@vm100 ~]#

So this was my first look at SmartOS, and to be honest, I like what
I see. Granted, while I have a lot of familiarity with Solaris, I'm
still getting accustomed to SmartOS, so some of the routes I've taken
may have been unnecessary or overkill. Anyhow, SmartOS is definitely
worth a look. (Hopefully I won't catch too much flak from the Joyent
guys and gals over my stumblings in this series of write-ups.)
see also:
Intro SmartOS Setup pt 1
Intro SmartOS Setup pt 2
Intro SmartOS Setup pt 3