I've been a big fan of the VMWare Server product for a while now. It started when I was running a multi-server environment at home. I always thought it was a good idea to have at least one spare environment to act as a redundant one, since I was constantly experimenting. As my concepts became more complex, the number of machines grew, and at some point I couldn't afford to keep up with the number of machines required to operate both my home management environment and my lab.
Then my friend, Camille, introduced me to VMWare Server, which had just been launched as a free product. I was on a home refresh cycle anyway, so I decided to upgrade my home environment to the then newly launched AMD X2 3800+ CPU running the Windows Server 2003 operating system. I rebuilt my entire environment from the ground up. After six months, I discovered a serious operational flaw in my design: using Windows Server 2003 as the underlying host for the virtual machines (VMs). This was most apparent on Microsoft's Patch Tuesday every month. Secondly, I had a very difficult time with my AMD machines. So I used this opportunity to move to an Intel Quad Core running the 64-bit version of Ubuntu Linux. The migration of the VMs was flawless. From that point on, I was sold on VMWare Server.
VMWare Server released version 2.0 recently. Overall, it is an improvement over the 1.0.x releases of the product. For one, the install seems to be simpler: it bundles both the server and the management user interface (MUI) install on Ubuntu in one script. I could never get the MUI to work consistently with any of my versions of Ubuntu; with 2.0, it worked flawlessly. The other change is that the server console is now web based, a departure from the client-based console used before. This has both pros and cons. It's much easier now to manage and access VMs without needing to install the console client on multiple desktops. On the other hand,
VMWare seems to have changed the way you locate VMs. They've introduced a concept called storage, where all the VMs you create are stored in the root of your storage directory. I typically create mount points for each of the external drives attached to my server and mount them as sub-directories of the default VM folder. There is no way to select a sub-folder in the interface, so I would have to create a new storage location for each drive. What makes this especially tricky is that you can't choose a subdirectory of an existing storage folder as a new storage folder. While that restriction makes sense, it's a bit of a conundrum for me.
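For reference, the mount-point layout I describe above looks roughly like the /etc/fstab sketch below. The device names, filesystem types, and the storage root path are assumptions based on a typical Ubuntu install of VMware Server; adjust them to your own setup.

```
# /etc/fstab fragment (illustrative only -- device names and the VM
# storage root are assumptions, not values from my actual server).
# Each external drive is mounted as a sub-directory under the default
# VM folder. Note that a space in a path must be escaped as \040 in fstab.
/dev/sdb1  /var/lib/vmware/Virtual\040Machines/drive1  ext3  defaults  0  2
/dev/sdc1  /var/lib/vmware/Virtual\040Machines/drive2  ext3  defaults  0  2
```

With a layout like this, the drives appear as ordinary sub-folders of the default storage directory, which is exactly what the 2.0 interface won't let you register as separate storage locations.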
It's hard to compare versions of the MUI, because mine was never stable enough for any thorough testing. However, now that it is working, one feature I really like is that I can sequence which VMs boot up first. This is particularly useful because, in my Windows environment, I prefer that the Active Directory server come up before any other machines. The other thing I like is that I can control the shutdown behaviour of the VMs. In my environment, I am able to set the VMs to suspend to disk rather than shut down when the host shuts down. This is useful because the only time I foresee not shutting down a VM manually is during a power outage, in which case I would like the VMs to go down as quickly as possible.
I like the idea of abstracting the hardware when building out my environments, and VMWare has served me well in that regard. So far, it's been a lot easier for me to recover from outages because I usually have a backup of a VM somewhere. The abstraction of hardware has allowed me to quickly port a machine from one physical box to the next with minimal effort on my part. It's also allowed me to build and test new software easily, painlessly and, more importantly, safely. Recovering from a botched install is often as easy as copying a base VM image and starting again.