Virtual datacentres

I sometimes wonder how many people realise that running a virtual datacentre is a balancing act between staying fit and agile and becoming overweight and cumbersome.

I see it time and time again: people think that the only use for virtualising servers is to consolidate physical boxes. They take a look at their physical box and say it has 2 GB of RAM, a 2.0 GHz dual-core processor and a 128 GB mirrored system drive. So when they virtualise it, what do they do? Obvious, isn't it?

They monitor the system, watch its usage over a few days and work out the peaks and dips in resource consumption. They take some averages, then P2V it and give it only what it needs, plus a little extra for good housekeeping in the virtual environment.

Well, you would be wrong. I see it all the time: they give the box 2 GB of dedicated RAM, create a single 128 GB VHD and assign two virtual processors. Over time the host becomes oversubscribed and runs low on resources. To ease this, a new host is added so more machines can be virtualised, and before you know it the virtual infrastructure is bloating with hosts. Then, as people become familiar with how easy it is to add a new guest, all hell breaks loose and the datacentre falls to its knees.
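
To make the right-sizing approach above concrete, here is a minimal Python sketch. The 95th-percentile cut-off and 20% headroom are my own illustrative assumptions, not vendor guidance:

    def recommend_ram_mb(samples_mb, headroom=0.2):
        """Size a guest from monitored usage, not from the physical spec."""
        samples = sorted(samples_mb)
        # Use roughly the 95th percentile so one-off spikes don't inflate it.
        p95 = samples[int(len(samples) * 0.95) - 1]
        return int(p95 * (1 + headroom))

    # A "2 GB" physical box that never really used more than ~900 MB:
    usage_mb = [512, 640, 700, 880, 910, 760, 830, 690, 720, 650]
    print(recommend_ram_mb(usage_mb))  # ~1056 MB, not the 2048 MB on the label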

To keep the datacentre agile you have to employ a great deal of change control, guest life-cycle control, host monitoring, best practice, and tools for sizing and capacity.

Best practice with VMware, for example, will tell you that most guests do not benefit from more than one vCPU because of how HECs (hardware execution contexts) are scheduled: every vCPU of a multi-vCPU guest has to be co-scheduled onto a physical context, so an extra vCPU often just adds scheduling overhead. In fact, in my time I have only come across a couple of servers where I could prove that an additional vCPU was required. I also tested CPU affinity but found it was not needed.
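
If you want to see how far the multi-vCPU habit has spread, something like the pyVmomi sketch below will list every guest carrying more than one vCPU so each can be justified or trimmed back. The vCenter address and credentials are placeholders:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (hostname and credentials are placeholders).
    si = SmartConnect(host="vcenter.example.com", user="audit", pwd="...")
    content = si.RetrieveContent()

    # Walk every VM in the inventory and report the multi-vCPU ones.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config and vm.config.hardware.numCPU > 1:
            print(vm.name, vm.config.hardware.numCPU, "vCPUs")

    view.Destroy()
    Disconnect(si)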

Change control provides a record of systems and why they were added, and also gives a degree of control over what happens in the environment. With the advent of cloud and more versatile platforms, a lot of change control will be automated using products like System Center Opalis or VMware Orchestrator as end users provision their own machines.
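
Change control does not have to start big, either. Something as simple as the illustrative record below, kept for every guest, answers "why does this exist?" long after everyone has forgotten (all the field names and values are my own example):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class GuestRecord:
        name: str
        requested_by: str
        reason: str        # why the guest exists
        created: date
        review_due: date   # forces a periodic "is this still needed?"

    rec = GuestRecord("app-sql-01", "j.smith", "POC for billing upgrade",
                      date(2011, 3, 1), date(2011, 9, 1))
    print(rec)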

There are a vast number of third-party tools available for capacity and sizing; my experience lies with the VMware offerings such as Capacity Planner. These let you accurately monitor and size physical machines, and help you maintain your virtual hosts.
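
Even without a commercial tool, a crude overcommit check goes a long way towards spotting a bloating host. The thresholds below are illustrative figures I have picked for the sketch, not hard limits:

    def overcommit_report(host_cores, host_ram_gb, guests):
        """Compare what the guests have been promised with what the host has."""
        vcpus = sum(g["vcpus"] for g in guests)
        vram_gb = sum(g["ram_gb"] for g in guests)
        cpu_ratio = vcpus / host_cores
        ram_ratio = vram_gb / host_ram_gb
        print(f"vCPU:core ratio {cpu_ratio:.1f}:1, RAM allocated {ram_ratio:.0%}")
        if cpu_ratio > 4 or ram_ratio > 0.9:  # assumed thresholds
            print("Review the existing guests before adding another host")

    # Twenty carelessly sized P2V guests on one 8-core, 64 GB host:
    guests = [{"vcpus": 2, "ram_gb": 2}] * 20
    overcommit_report(host_cores=8, host_ram_gb=64, guests=guests)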

Life-cycle management is a new one for me. It is essentially a cloud feature that lets datacentres set lifetimes for guests and remove them when their lifetime has expired. As I find out more I will post about it.
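
In the meantime, the idea is easy enough to sketch: tag every guest with an expiry date and reap (or at least flag) the ones past it. The guest names, dates and policy below are all made up for illustration:

    from datetime import date

    guest_expiry = {
        "poc-billing-01": date(2011, 6, 30),
        "test-exchange":  date(2011, 1, 31),
    }

    today = date.today()
    for name, expires in guest_expiry.items():
        if expires < today:
            print(f"{name} expired on {expires} - flag for decommission")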

Working from home

Last year I published a paper internally about working from home. I thought it was a good way to provoke thought within the company, and it also served as a counterpart to my strategy and design for the new remote access platform.

The culture where I work is very much set in the old-school ways of management: if someone is sat at home, will they actually work? If they are at a desk, I can see and monitor them. Advances in technology, green thinking and high property prices should be reason enough for middle managers to embrace home workers, but it seems to me that they cannot transform their methods to new ways of working.

Parking the business case aside, let's talk about the technology. The current platform was a system from the dark ages: two Cisco VPN 3000 Concentrators terminating legacy Cisco VPN clients, and OWA published through the sluggish Secure Desktop. For those not in the know, Secure Desktop basically puts the user into a read-only sandbox.

The concentrators were on their last legs, constantly falling over or running so slowly that users could have walked to the office quicker than connecting over the web! The funny thing was that even after drastic action (a full software rebuild) they still ran like dogs.

With an immediate operational issue on my hands, I set about a short-term strategy first: replace the ageing hardware with new Cisco ASA 5520s. I also compared these against Check Point's Connectra gateways. The ASA was a good fit for the existing Cisco infrastructure and would natively support the legacy clients for a smoother transition, while the new AnyConnect and clientless portals would offer faster and more robust platforms to work from.

Not wanting to stop there, I looked to a longer-term roadmap. With Windows 7 on the horizon I was keen to push the new DirectAccess feature: a rather slick always-on VPN connection established as the OS loads, meaning users are connected from logon (whenever they are on the internet) and all computer and user GPOs are applied. IT also gets a permanent management channel.

Not wanting to let the Cisco investment go to waste, I intended it to be used for network access from non-corporate systems and vendors. It would also terminate any site-to-site IPsec VPN tunnels.

I would like to revisit this topic at some point, as I now think that cloud could actually offer some pretty cool on-demand access to desktop systems.

Testing

Finally, after a few years of thinking about it I am blogging.

Being a techie this is quite clearly the traditional “test” post!

Do you find that you always use accounts called Test, or servers called ServerTest, in a domain called dave.com or even test.com?

Apart from this post, I decided a while back to buck the trend and actually come up with meaningful names and descriptions. For the past few years this has made my testing life so much easier, especially when you're working on a POC for a customer, or when you need a reference point for yourself against a live system.
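
For what it's worth, here is the kind of convention I mean, expressed as a quick check. The pattern (purpose, customer or project, role, index) is entirely my own example:

    import re

    # e.g. poc-acme-sql-01: purpose, customer/project, role, index
    NAME_PATTERN = re.compile(r"^(poc|ref|live)-[a-z0-9]+-[a-z]+-\d{2}$")

    for name in ["test", "server1", "poc-acme-sql-01"]:
        verdict = "ok" if NAME_PATTERN.match(name) else "tells you nothing"
        print(f"{name}: {verdict}")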