management workstation host (glados) and its 2 components, the generic
infrastructure host (lns1) and the management client (fenster).
Our details for this are:
HOSTS: glados, lns1, fenster
OSes: CentOS (6.0 = glados, 6.2 = lns1), Windows 2008 Server (fenster)
NETWORK: 10.0.129.0/24 (management network)

GLADOS
glados.lab.none is a physical workstation whose primary purpose
in this setup is hosting both lns1.lab.none and fenster.lab.none as
VirtualBox VMs. Given this, there isn't much configuration necessary
for glados. Obviously VirtualBox needs to be installed, which is as
easy as downloading it from VirtualBox.org and installing its rpm package:
glados [0] /bin/rpm -i VirtualBox-4.1-4.1.8_75467_rhel6-1.x86_64.rpm

Since I don't like to unnecessarily run things as "root", I configured
my account to also be a group member of the "vboxusers" group:
glados [0] /usr/sbin/usermod -G vboxusers troy
glados [0] /usr/bin/id -a troy
uid=500(troy) gid=500(troy) groups=500(troy),501(vboxusers)

With VirtualBox installed and my login ID modified, I can run VirtualBox
under my own account, configure it, and configure the VMs:
/usr/bin/VirtualBox &

Since the purpose here isn't to describe VirtualBox installation and
usage, we'll simply skip to the next configuration items. On glados,
eth1 is physically connected via a crossover cable to the lab vSphere
host and is IP'd on the lab management network:
glados [0] /sbin/ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:1B:21:D5:59:B4
          inet addr:10.0.129.1  Bcast:10.0.129.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13353 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10249 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11973812 (11.4 MiB)  TX bytes:3118939 (2.9 MiB)
          Memory:fea20000-fea40000

glados [0] /bin/cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
NM_CONTROLLED="no"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.129.1
PREFIX=24
NETMASK=255.255.255.0
BROADCAST=10.0.129.255
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
HWADDR=00:1B:21:D5:59:B4

glados [0] /bin/cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=glados.lab.none
NOZEROCONF=yes

glados [0] /bin/cat /etc/resolv.conf
domain lab.none
nameserver 10.0.129.160
search lab.none stor.lab.none vmo.lab.none vms.lab.none

Notably, I did not configure a default gateway. Additionally, for my
testing purposes, I've set up network routes for each configured vSphere
lab network using vrout0 (10.0.129.220) as the gateway (setting a default
gateway could have alleviated the need for the static routes):
glados [0] /bin/cat /etc/sysconfig/static-routes
any net 10.0.130.0 netmask 255.255.255.0 gw 10.0.129.220
any net 10.0.131.0 netmask 255.255.255.0 gw 10.0.129.220
any net 10.0.132.0 netmask 255.255.255.0 gw 10.0.129.220

glados [0] /bin/netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
<snip...>
10.0.129.0      0.0.0.0         255.255.255.0   U         0 0          0 eth1
10.0.130.0      10.0.129.220    255.255.255.0   UG        0 0          0 eth1
10.0.131.0      10.0.129.220    255.255.255.0   UG        0 0          0 eth1
10.0.132.0      10.0.129.220    255.255.255.0   UG        0 0          0 eth1
<snip...>

From here, 2 VMs have been configured in VirtualBox, lns1 and fenster,
using the default VM configurations provided by VirtualBox for "Red Hat
(64bit)" for lns1 and "Windows 2008 (64bit)" for fenster. Additionally,
under the "Network" settings for both VMs, another network interface
(Intel PRO/1000 MT Desktop Adapter) has been added. The added interfaces
are configured for both VMs in VirtualBox as:
- Attached to: Bridged Adapter
- Name: eth1

The above effectively bridges the new VM interfaces across eth1
configured on glados. Aside from installing the two VMs (an exercise
left for the reader), one with CentOS 6.2, the other with Windows 2008,
the configuration on glados ends here.
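For repeatability, the same NIC settings can also be applied from the
command line with VBoxManage instead of the GUI. A minimal sketch, assuming
the VMs are registered in VirtualBox under the names "lns1" and "fenster"
and that the bridged interface is NIC 2; the commands are only echoed here
so they can be reviewed before running them against powered-off VMs:

```shell
# Sketch only: print the VBoxManage commands that mirror the GUI settings
# above (Bridged Adapter on eth1). The VM names "lns1" and "fenster" are
# assumptions; drop the echo to actually apply the settings.
for vm in lns1 fenster; do
    echo VBoxManage modifyvm "$vm" --nic2 bridged --bridgeadapter2 eth1
done
```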
FENSTER
Following the installation of Windows 2008 Server to the related
VirtualBox VM, fenster.lab.none is minimally configured since it is only
used for the vSphere management client. Aside from the following network
configuration for "Local Area Connection 3" (the interface bridged to
eth1 on glados), the only other major modification is the installation
of the vSphere Client. The network configuration on fenster is:
C:\>ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : fenster
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : lab.none
                                       stor.lab.none
                                       vmo.lab.none
                                       vms.lab.none

Ethernet adapter Local Area Connection 3:

   Connection-specific DNS Suffix  . : lab.none
   Description . . . . . . . . . . . : Intel(R) PRO/1000 MT Desktop Adapter #3
   Physical Address. . . . . . . . . : 08-00-27-40-75-30
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 10.0.129.145(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 10.0.129.220
   DNS Servers . . . . . . . . . . . : 10.0.129.160
   NetBIOS over Tcpip. . . . . . . . : Disabled
<snip...>

C:\>netstat -rn
<snip...>
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0     10.0.129.220     10.0.129.145    266
       10.0.129.0    255.255.255.0         On-link      10.0.129.145    266
     10.0.129.145  255.255.255.255         On-link      10.0.129.145    266
<snip...>
===========================================================================
Persistent Routes:
  Network Address          Netmask  Gateway Address  Metric
          0.0.0.0          0.0.0.0     10.0.129.220  Default
===========================================================================
<snip...>

Also, under "Control Panel -> Date and Time ("Internet Time" tab)",
I configured the time server to be 10.0.129.160, the IP address of
lns1.lab.none. For the VMware Client, I simply downloaded it from
VMware.com and copied it over to fenster from glados using WinSCP
(ftp could have been used instead of WinSCP). Alternatively, the
VMware Client could also be downloaded to the Windows host by opening
a web browser and pointing it to the address of either an ESXi host
or vCenter host. Since neither was installed at this point, that route
wasn't yet available. The Client package from VMware.com downloaded as
VMware-viclient-all-5.0.0-455964.exe. Simply run the file and follow
the steps from the installation wizard.
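Incidentally, the same time-server setting can be made from an elevated
command prompt instead of the Date and Time control panel. A sketch of the
equivalent w32tm commands (not used in this setup, shown only as an
alternative):

```
C:\>w32tm /config /manualpeerlist:10.0.129.160 /syncfromflags:manual /update
C:\>w32tm /resync
```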
LNS1
Following the installation of CentOS 6.2 to the related VirtualBox VM,
there's a bit of work to be done for lns1.lab.none. To start, eth2 is the
interface on lns1 that is bridged to eth1 on glados. The configuration
for eth2 is:
lns1 [0] /sbin/ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 08:00:27:BC:1C:4B
          inet addr:10.0.129.160  Bcast:10.0.129.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1926 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1360 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:133220 (130.0 KiB)  TX bytes:117222 (114.4 KiB)

lns1 [0] /bin/cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE="eth2"
NM_CONTROLLED=no
ONBOOT=yes
HWADDR=08:00:27:BC:1C:4B
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.129.160
NETMASK=255.255.255.0
BROADCAST=10.0.129.255
USERCTL=no
IPV6INIT=no

lns1 [0] /bin/cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=lns1.lab.none
GATEWAY=10.0.129.220
NOZEROCONF=yes

lns1 [0] /bin/cat /etc/resolv.conf
domain lab.none
nameserver 10.0.129.160
search lab.none stor.lab.none vmo.lab.none vms.lab.none

Given the above, the routing table is fairly simple:
lns1 [0] /bin/netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.0.129.0      0.0.0.0         255.255.255.0   U         0 0          0 eth2
0.0.0.0         10.0.129.220    0.0.0.0         UG        0 0          0 eth2

Since lns1 serves as a time server for the entire lab environment,
NTPd was installed via the CentOS installation media. The package is
ntp-4.2.4p8-2.el6.centos.x86_64. Given that the lab setup is entirely
private with no external access, the following lines in /etc/ntp.conf
need to be uncommented:
server 127.127.1.0 prefer # local clock
fudge  127.127.1.0 stratum 10

These effectively configure NTPd to assume the local time on the host
(lns1) is always correct. Since there is no external network access
and all hosts and VMs will be configured to keep time by lns1, this
shouldn't be a problem. Additionally, the following lines were added
to ntp.conf for each of our lab networks:
restrict 10.0.129.0 mask 255.255.255.0 nomodify notrap
restrict 10.0.130.0 mask 255.255.255.0 nomodify notrap
restrict 10.0.131.0 mask 255.255.255.0 nomodify notrap
restrict 10.0.132.0 mask 255.255.255.0 nomodify notrap

(Of note, 10.0.130.0/24 and 10.0.131.0/24 aren't exactly necessary
since they are used for the storage and vMotion networks, respectively.
Hosts on these networks are also configured on the management network
(10.0.129.0/24) and would sync time via those interfaces.) The resulting
ntp.conf file is:
lns1 [0] /bin/egrep -v '^#|^$' /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
restrict 10.0.129.0 mask 255.255.255.0 nomodify notrap
restrict 10.0.130.0 mask 255.255.255.0 nomodify notrap
restrict 10.0.131.0 mask 255.255.255.0 nomodify notrap
restrict 10.0.132.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0 prefer # local clock
fudge  127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys

With NTPd configured, we can enable it under runlevel 3 and start it:
lns1 [0] /sbin/runlevel
N 3
lns1 [0] /sbin/chkconfig --level 3 ntpd on
lns1 [0] /sbin/chkconfig --list ntpd
ntpd            0:off   1:off   2:off   3:on    4:off   5:off   6:off
lns1 [0] service ntpd start
Starting ntpd:                                             [  OK  ]
lns1 [0]

Next, a nameserver is needed so I downloaded bind 9.8.1-p1 from isc.org,
scp'd it over to lns1, untar'd it, and ran the configure script. The only
changes to the generic configure / compile time options were to set the
prefix to /app/bind-9.8.1-p1 and to disable SSL support:
lns1 [0] cd /app/src
lns1 [0] /bin/gunzip -c bind-9.8.1-P1.tar.gz | /bin/tar xf -
lns1 [0] cd bind-9.8.1-P1
lns1 [0] ./configure --prefix=/app/bind-9.8.1-p1 --with-openssl=no
<snip...>
lns1 [0] /usr/bin/make && /usr/bin/make install
<snip...>
lns1 [0] /bin/ln -s /app/bind-9.8.1-p1 /app/bind-current

The symlink in the last step above isn't necessary, but using the
generic name 'bind-current' is easier than remembering the bind version if
I decide to reinstall it with a different version. With bind installed,
I copied a generic rc script I had handy to start and stop 'named':
lns1 [0] /bin/cat /etc/init.d/named
#!/bin/sh

HOSTNAME=`/bin/hostname`
NAMED="/app/bind-current/sbin/named"
CONFFILE="/app/bind-config/named/master.conf"
LOGGER="/usr/bin/logger"
PKILL="/usr/bin/pkill"
BASENAME="/bin/basename"
STARTMSG="NAMED was started (rc.d) on $HOSTNAME"
STOPMSG="NAMED was stopped (rc.d) on $HOSTNAME"

case "$1" in
    start)
        if [ -x $NAMED ] && [ -f $CONFFILE ]; then
            echo "Starting named with conffile: " $CONFFILE
            $NAMED -c $CONFFILE
            $LOGGER -p user.alert $STARTMSG
        fi
        ;;
    stop)
        $PKILL named
        $LOGGER -p user.alert $STOPMSG
        ;;
    halt)
        $PKILL -9 named
        $LOGGER -p user.alert $STOPMSG" - STOPPED SIGKILL!"
        ;;
    *)
        echo ""
        echo "Usage: `$BASENAME $0` { start | stop | halt }"
        echo ""
        ;;
esac

We'll need to chmod the above script to be executable and a symlink to
rc3.d will take care of starting 'named' on boot:
lns1 [0] /bin/chmod 755 /etc/init.d/named
lns1 [0] /bin/ln -s /etc/init.d/named /etc/rc3.d/S99named

Before starting 'named', we need to configure it. As seen above, my
configuration is stored separately from the bind installation. (A force
of habit in keeping my configuration separate from my install point.)
The following files exist under my /app/bind-config/named directory:
lns1 [0] /bin/ls -F /app/bind-config/named
master.conf  named.log  named.run  zones/
lns1 [0] /bin/find /app/bind-config/named -print
/app/bind-config/named
/app/bind-config/named/named.run
/app/bind-config/named/zones
/app/bind-config/named/zones/10.0.130.rev
/app/bind-config/named/zones/10.0.131.rev
/app/bind-config/named/zones/10.0.129.rev
/app/bind-config/named/zones/127.0.0.rev
/app/bind-config/named/zones/localhost
/app/bind-config/named/zones/10.0.132.rev
/app/bind-config/named/zones/lab.none
/app/bind-config/named/master.conf
/app/bind-config/named/named.log

The 'named' configuration file, master.conf, contains the following:
lns1 [0] /bin/cat /app/bind-config/named/master.conf
# server-wide options
options {
    directory "/app/bind-config/named";
    allow-transfer { 10.0.129.0/24; };
    allow-recursion { none; };
    version "lab server 0.3.1";
};

# logging directives
logging {
    channel named_log {
        file "named.log" versions 3 size 25m;
        severity info;
        print-severity yes;
        print-time yes;
        print-category yes;
    };
    category default { named_log; };
};

zone "lab.none" {
    type master;
    file "zones/lab.none";
};

zone "129.0.10.in-addr.arpa" {
    type master;
    file "zones/10.0.129.rev";
};

zone "130.0.10.in-addr.arpa" {
    type master;
    file "zones/10.0.130.rev";
};

zone "131.0.10.in-addr.arpa" {
    type master;
    file "zones/10.0.131.rev";
};

zone "132.0.10.in-addr.arpa" {
    type master;
    file "zones/10.0.132.rev";
};

zone "localhost" {
    type master;
    file "zones/localhost";
    allow-update { none; };
};

zone "0.0.127.in-addr.arpa" {
    type master;
    file "zones/127.0.0.rev";
    allow-update { none; };
};

The above configuration only allows zone transfers (allow-transfer) from
our lab management network and disallows all recursion (allow-recursion).
This means we will only respond with DNS information for which we are
authoritative. Our "root" directory is set to "/app/bind-config/named",
which is where the log is written (named.log) and where any additional
configuration is found. To keep things clean, I've set up a "zones"
directory under the "root" directory to contain all zone configuration.
The contents of each zone file are as follows:
lns1 [0] for i in lab.none 10.0.129.rev 10.0.130.rev 10.0.131.rev \
> 10.0.132.rev 127.0.0.rev localhost ; do \
> echo "==> /app/bind-config/named/zones/${i} <==" ; \
> /bin/cat /app/bind-config/named/zones/${i} ; echo ; done
==> /app/bind-config/named/zones/lab.none <==
$TTL 86400      ; Default TTL (1 day)
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021603  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        864000      ; Expire (10 days)
        86400 )     ; Negative TTL (1 day)
   NS   lns1.lab.none.
glados   A  10.0.129.1
fenster  A  10.0.129.145
lns1     A  10.0.129.160
pesx0    A  10.0.129.200
vesx0    A  10.0.129.210
vesx1    A  10.0.129.211
vrout0   A  10.0.129.220
dstor0   A  10.0.129.230
vcent0   A  10.0.129.240
; storage subdomain = stor
vesx0.stor   A  10.0.130.210
vesx1.stor   A  10.0.130.211
vrout0.stor  A  10.0.130.220
dstor0.stor  A  10.0.130.230
; vmotion subdomain = vmo
vesx0.vmo    A  10.0.131.210
vesx1.vmo    A  10.0.131.211
vrout0.vmo   A  10.0.131.220
; vm subdomain = vms
vrout0.vms   A  10.0.132.220
bsd0.vms     A  10.0.132.50
bsd1.vms     A  10.0.132.51
bsd2.vms     A  10.0.132.52
bsd3.vms     A  10.0.132.53
lin0.vms     A  10.0.132.60
lin1.vms     A  10.0.132.61
lin2.vms     A  10.0.132.62
lin3.vms     A  10.0.132.63

==> /app/bind-config/named/zones/10.0.129.rev <==
$TTL 86400      ; Default TTL (1 day)
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021602  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        864000      ; Expire (10 days)
        86400 )     ; Negative TTL (1 day)
   NS   lns1.lab.none.
1    PTR  glados.lab.none.
145  PTR  fenster.lab.none.
160  PTR  lns1.lab.none.
200  PTR  pesx0.lab.none.
210  PTR  vesx0.lab.none.
211  PTR  vesx1.lab.none.
220  PTR  vrout0.lab.none.
230  PTR  dstor0.lab.none.
240  PTR  vcent0.lab.none.

==> /app/bind-config/named/zones/10.0.130.rev <==
$TTL 86400      ; Default TTL (1 day)
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021601  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        864000      ; Expire (10 days)
        86400 )     ; Negative TTL (1 day)
   NS   lns1.lab.none.
210  PTR  vesx0.stor.lab.none.
211  PTR  vesx1.stor.lab.none.
220  PTR  vrout0.stor.lab.none.
230  PTR  dstor0.stor.lab.none.

==> /app/bind-config/named/zones/10.0.131.rev <==
$TTL 86400      ; Default TTL (1 day)
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021601  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        864000      ; Expire (10 days)
        86400 )     ; Negative TTL (1 day)
   NS   lns1.lab.none.
210  PTR  vesx0.vmo.lab.none.
211  PTR  vesx1.vmo.lab.none.
220  PTR  vrout0.vmo.lab.none.

==> /app/bind-config/named/zones/10.0.132.rev <==
$TTL 86400      ; Default TTL (1 day)
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021601  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        864000      ; Expire (10 days)
        86400 )     ; Negative TTL (1 day)
   NS   lns1.lab.none.
220  PTR  vrout0.vms.lab.none.
50   PTR  bsd0.vms.lab.none.
51   PTR  bsd1.vms.lab.none.
52   PTR  bsd2.vms.lab.none.
53   PTR  bsd3.vms.lab.none.
60   PTR  lin0.vms.lab.none.
61   PTR  lin1.vms.lab.none.
62   PTR  lin2.vms.lab.none.
63   PTR  lin3.vms.lab.none.

==> /app/bind-config/named/zones/127.0.0.rev <==
$TTL 86400
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021301  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        8640000     ; Expire (100 days)
        86400 )     ; Default TTL (1 day)
   NS   lns1.lab.none.
1  PTR  localhost.

==> /app/bind-config/named/zones/localhost <==
$TTL 86400
@  SOA  lns1.lab.none. hostmaster.lab.none. (
        2012021301  ; Serial number (YYYYMMDDNN)
        3600        ; Refresh (1 hour)
        300         ; Retry (5 minutes)
        8640000     ; Expire (100 days)
        86400 )     ; Default TTL (1 day)
   NS   lns1.lab.none.
@  A    127.0.0.1

With that, the entire lab setup is accounted for in DNS and we can
now start 'named':
lns1 [0] /etc/init.d/named start
Starting named with conffile:  /app/bind-config/named/master.conf
lns1 [0] /bin/ps -ef | /bin/grep named
root      2963     1  0 00:59 ?        00:00:00 \
    /app/bind-current/sbin/named -c /app/bind-config/named/master.conf

The last thing to do is install the VMware vSphere
CLI and perl SDK packages. For this, I downloaded
VMware-vSphere-CLI-5.0.0-422456.x86_64.tar.gz from VMware.com, scp'd
it over to lns1, untar'd it and ran the install script. Of note, since
I didn't have all of the requisite perl modules installed, I needed to
add them. For my install, they are:
Archive::Zip 1.20 or newer
Class::MethodMaker 2.10 or newer
UUID 0.03 or newer
Data::Dump 1.15 or newer
SOAP::Lite 0.710.08 or newer
XML::SAX 0.16 or newer
XML::NamespaceSupport 1.09 or newer
XML::LibXML::Common 0.13 or newer
XML::LibXML 1.63 or newer

You can either download each and its related dependencies from cpan.org
or, for most of them, you can install them from the CentOS installation
media. I did a little of both. The following were installed from source
from cpan.org:
lns1 [0] pwd
lns1 [0] for i in Archive-Zip-1.30.tar.gz Class-MethodMaker-2.18.tar.gz \
> Data-Dump-1.21.tar.gz Class-Inspector-1.25.tar.gz Task-Weaken-1.04.tar.gz \
> SOAP-Lite-0.714.tar.gz ; do a=`echo "${i}" | /bin/sed -e 's/.tar.gz//g'` ; \
> /bin/gunzip -c ${i} | /bin/tar xf - ; cd ${a} ; \
> /usr/bin/perl Makefile.PL && /usr/bin/make && /usr/bin/make install ; \
> cd .. ; done
<snip...>
lns1 [0]

At this point, I decided to simply install the other needed packages
from the CentOS install media. (The ISO image was presented to the
"lns1" VirtualBox VM as a CD device.):
lns1 [0] /bin/mount -t iso9660 /dev/sr0 /a
mount: block device /dev/sr0 is write-protected, mounting read-only
lns1 [0] /usr/bin/yum --disablerepo=* --enablerepo=c6-media install \
> libxml2.x86_64 libxml2-devel.x86_64 perl-libxml-perl.noarch
<snip...>
lns1 [0] /usr/bin/yum --disablerepo=* --enablerepo=c6-media install uuid.x86_64 \
> uuid-devel.x86_64 uuid-perl.x86_64
<snip...>
lns1 [0] /usr/bin/yum --disablerepo=* --enablerepo=c6-media install \
> libuuid-devel.x86_64
<snip...>
lns1 [0] /bin/umount /a

The last dependency, UUID-0.03.tar.gz, is not on the installation media
and still needed to be installed from source:
lns1 [0] /bin/gunzip -c UUID-0.03.tar.gz | /bin/tar xf -
lns1 [0] cd UUID-0.03
lns1 [0] /usr/bin/perl Makefile.PL && /usr/bin/make && /usr/bin/make install
<snip...>
lns1 [0] cd ..

With the dependencies installed, we can now install the vSphere CLI and
perl SDK packages. I placed the vSphere CLI package under /tmp/src so:
lns1 [0] /bin/gunzip -c VMware-vSphere-CLI-5.0.0-422456.x86_64.tar.gz | /bin/tar xf -
lns1 [0] /bin/ls -F
VMware-vSphere-CLI-5.0.0-422456.x86_64.tar.gz  vmware-vsphere-cli-distrib/
lns1 [0] cd vmware-vsphere-cli-distrib
lns1 [0] ./vmware-install.pl
Creating a new vSphere CLI installer database using the tar4 format.

Installing vSphere CLI 5.0.0 build-422456 for Linux.

You must read and accept the vSphere CLI End User License Agreement to
continue. Press enter to display it.
<snip...>
Do you accept? (yes/no) yes

Thank you.

In which directory do you want to install the executable files?
[/usr/bin] /usr/local/vmware

Please wait while copying vSphere CLI files...
<snip...>
This installer has successfully installed both vSphere CLI and the vSphere
SDK for Perl.

Enjoy,

--the VMware team

lns1 [0]

Of note, when prompted during the vSphere CLI install, I chose to override
the default install directory so that I could easily find vSphere-specific
files that would be installed. After installing the vSphere CLI and perl
SDK, our configuration of lns1 is now complete, thus also completing the
setup of the workstation host (glados) and its VMs. Part 3 will discuss
the setup behind the various infrastructure components configured on
the physical ESXi host.
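(One small aside before moving on: the zone serials above follow a
YYYYMMDDNN convention, which is easy to bump incorrectly by hand when
editing zone files. A throwaway helper, purely illustrative and not part of
the lab setup itself, that computes the next serial:)

```shell
# Compute the next zone serial in YYYYMMDDNN form: on a new day, restart the
# NN counter at 01; otherwise just increment NN. Illustrative sketch only.
next_serial() {
    cur=$1
    today=$(/bin/date +%Y%m%d)
    if [ "${cur%??}" -lt "$today" ]; then
        echo "${today}01"
    else
        echo $((cur + 1))
    fi
}

next_serial 2012021603   # prints today's date followed by 01
```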
see also:
vSphere 5 Lab Setup pt 1: The Overview
vSphere 5 Lab Setup pt 3: Installing the Physical ESXi Host
vSphere 5 Lab Setup pt 4: Network Configuration on the Physical ESXi Host
vSphere 5 Lab Setup pt 5: Infrastructure VM Creation
vSphere 5 Lab Setup pt 6: Infrastructure VM Configurations and Boot Images
vSphere 5 Lab Setup pt 7: First VM Boot