Nutanix – My experience on a recent VDI POC

Before I kick off, let’s just clear this point up: I do not work for Nutanix and they have had no part in commissioning this blog post, nor do these views reflect those of my employer.

We have recently undertaken a VDI POC. Given that storage is the fabled killer of VDI, we decided to look at a different option. Enter Nutanix. For those who have never heard of it, think of it like this.

You have one box with all of your storage and compute on board. No SAN fabric is required, as your hypervisor (ESXi) installs directly onto the hardware and has a direct interface with the storage.

Pretty cool, huh? Clearly this is a very simplistic view, but I don’t want to delve too deep here, as I want to discuss our experience with the product.

I want to start with an event that happened about halfway through our POC. The rack the Nutanix sat in was going to be decommissioned. Far from ideal, but we set about moving it. When we had finished, a thought struck me: we had lifted our entire desktop environment in under an hour! Damn… that’s cool, and very powerful too.

Onto our hardware: we had a single block with three nodes on board. First impressions were mixed; these are just Supermicro boxes with multi-core CPUs and 192GB (max) of RAM. Storage was in the form of 300GB SATA drives and a host of SSD storage, all bundled with the rather tasty Fusion-io cards. Network connectivity was 10GbE to a core switch.

In my mind, all of the above was clearly something I could have built myself, but I knew there was some good stuff to come.

Enter the Nutanix software and web interface. This is where the magic happens. The web interface itself is pretty slick and was very easy to navigate. Sadly I didn’t get to set the boxes up as they came preconfigured; personally, I would have really liked to at least shadow the setup. However, a quick nose around the interface and you can very quickly get a feel for some of the configuration items.

The real benefit for me was how simple it was to get performance stats from the system. I am by no means a storage guru and performance troubleshooting certainly takes me a while, but on a Nutanix it becomes simple. Both real-time and historic stats are all there: IOPS, bandwidth and latency can all be viewed at the click of a button per node, per host or per VM. It’s awesome!

We had a total of 12TB on board, all presented to the ESXi hosts as a single NFS datastore. We have only ever had block storage here, with NFS serving nothing more than templates and ISOs. Suffice it to say that the Nutanix has proved to the powers that be that NFS can perform and is by no means an SMB-only protocol.
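
For anyone who has not worked with NFS datastores before, mounting one on an ESXi host is a one-liner. The sketch below is standard ESXi tooling rather than anything Nutanix-specific, and the address and container name are placeholders rather than our actual configuration.

# Mount a container exported over NFS as a datastore (address and names are placeholders)
esxcli storage nfs add --host 192.168.5.2 --share /vdi-container --volume-name NTNX-vdi-container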

The performance was blistering and it really shocked a few of our NetApp admins, especially when they saw it was all coming from a tiny box. In fact, I will go as far as to say that the performance testing we carried out showed a considerable improvement running our desktops on the Nutanix versus our 3200-series NetApp filers running off FC disks over a 4Gb fabric.

My final take-homes from this experience: we could be witnessing the end of the monolithic SAN in the enterprise. We all know they cost a bomb and are expensive to run. There are still some features that Nutanix does not have, but once we have full parity Nutanix really does become compelling.

I also started to think that my server roadmap, which was heading down the Cisco UCS path at the next refresh, could be wrong, and that in actual fact I should look at Nutanix units for my server compute and not just the desktops.

Time will tell, as maybe by the time we get to that refresh we will be on liquid storage or some fancy hologram storage; storage seems to change so quickly!

All I can say is my eyes are now wide open and the storage space is really interesting at the moment, and Nutanix have my vote!


How to shut down a Nutanix unit

Only a quick one as there doesn’t seem to be a great deal of information out there on these boxes.

We had to shut down our POC kit and move it. This was the first time we had looked to shut the box down, and we wanted to get it all done in the right order.

  1. Shut down all your VMs – we did the View desktops first, then our servers.
  2. Shut down the Nutanix CVM VMs
  3. Shut down the ESXi hosts (see the example commands after this list)
  4. Power down the physical nodes
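
If you want to do steps 2 and 3 from the command line rather than the vSphere client, something like the following works on a standard ESXi host. This is a generic ESXi sketch, not Nutanix’s official procedure: the CVM name match, VM ID and reason text are placeholders, so check the Nutanix documentation for the supported CVM shutdown method.

# Find the CVM's VM ID on this host
vim-cmd vmsvc/getallvms | grep -i CVM

# Gracefully shut down the CVM (requires VMware Tools in the guest)
vim-cmd vmsvc/power.shutdown <vmid>

# With all VMs powered off, enter maintenance mode and power the host off
esxcli system maintenanceMode set --enable true
esxcli system shutdown poweroff --reason "POC rack move"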

Power-on was straightforward: we brought up all the network and then powered on the three nodes. The ESXi hosts were set to auto-start but the CVMs were not. We observed that it takes quite a while for the storage to show as connected, even though the CVMs were all up.
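
While waiting for the storage to reconnect, the datastore state can be checked from each host’s shell with a standard ESXi command; nothing Nutanix-specific is assumed here.

# Lists each NFS datastore and whether it is currently accessible
esxcli storage nfs list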

 

Cannot see storage on Nutanix

Running this View POC on Nutanix, I have picked up on a few little issues and thought I would record them.

After a power cut we found that everything came back up OK; however, we could not see the NFS datastores in VMware. They were listed as disconnected and could not be remounted. A refresh and rescan of storage did not help either.

Anyway, it’s quite a simple one: the Nutanix CVM machines had not powered on. A simple power-on of these VMs and we were back in business before you know it.
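
If vCenter itself is not up yet, the CVMs can also be powered on from each ESXi host’s shell. A minimal sketch, assuming the CVM’s name contains “CVM”; the VM ID is a placeholder taken from the first command’s output.

# Find the CVM registered on this host and note its VM ID
vim-cmd vmsvc/getallvms | grep -i CVM

# Power it on using that VM ID
vim-cmd vmsvc/power.on <vmid>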

Everything is simple when you know how!

NetApp LUN Clone & LUN Clone Split

The other day I was asked if I could create a virtual copy of a SQL server for some performance and load testing. Of course I said that wouldn’t be a problem. My only brief was that the original server must remain live and it can only be taken offline on a Sunday evening.

So I sat down and planned an approach. The first part was the server and the applications. Thankfully it was just one SQL server and the application was also on the same box. So just the one P2V was required.

I planned to stop the SQL and application services before the P2V. I also planned to resize the system partition, as it was 8GB!! The P2V would let me do this, and the extra space would allow the benchmark utilities to be installed.

Then I looked at the rest of the attached drives, and they were on the NetApp. This was interesting, as I could not just present them to the test box for obvious reasons, and I did not have a FlexClone license available. That’s when I found out about LUN clone and LUN clone split.

Using these would allow me to create a separate LUN that was a clone of the original. This could then be presented to the virtual test machine.

Here are my findings around these commands.

  • The cloned LUN must reside in the same volume as the LUN you are cloning.
  • The LUN clone split can only split off into the same volume that the LUN clone is in.
  • Data Motion might work to move the volume, after which the duplicate LUNs could be deleted, though I think Data Motion may require the volumes to change aggregate?
And here is the process I followed:

  1. Double the size of your volume so it can hold the additional LUN (the split creates a full copy in the same volume). If you don’t, you will get an error saying you’re out of space. I did a quick test and the clone LUN is not deleted automatically when this happens, so make sure you remove it.
  2. Create a snapshot of the volume containing the LUN. I would suggest using a sensible name and not the default name!
  3. Run the LUN clone create command below. If you see the error “LUN clone create: No such LUN exists in snapshot”, check the path closely. Once the clone has been created, the snapshot is locked and cannot be deleted until you delete the clone or split it off. The LUN clone is effectively a writable snapshot, which is pretty handy too, but I wanted a totally separate environment.
  4. Run the LUN clone split command below. You can monitor the progress of the split with the lun clone split status command.
  5. Add the new LUN to the correct initiator group and present it to the host.
  6. Tidy up and delete the snapshot that you created.

NOTE: You can also run multiple splits at once and this appears to have no impact on the speed of completion. My largest LUN, at 200GB, took a few hours to split.

Command syntax: lun clone create <path to clone LUN> -o noreserve -b <parent LUN path> <snapshot name>

The -o noreserve switch means the clone LUN is created without a space reservation, i.e. it is thin provisioned.

Here is an example of what I ran.

lun clone create /vol/sv02_sys/sv02_sysclone.lun -o noreserve -b /vol/sv02_sys/sv02_sys.lun sv02_sys_SNAP
lun clone split start -d /vol/sv02_sys/sv02_sysclone.lun
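
For completeness, here is a rough sketch of the surrounding steps on a 7-Mode filer, numbered to match the list above. The volume name, size, igroup and LUN ID are placeholders carried over from my example, and exact options can vary between Data ONTAP versions, so treat this as a sketch rather than a definitive runbook.

# Step 1: grow the volume so it can hold the full copy created by the split (placeholder size)
vol size sv02_sys +210g

# Step 2: snapshot the volume containing the source LUN
snap create sv02_sys sv02_sys_SNAP

# Steps 3 and 4: run the clone create and split start commands shown above, then
# monitor the split until it reports complete
lun clone split status /vol/sv02_sys/sv02_sysclone.lun

# Step 5: map the new LUN to the test host's initiator group (placeholder igroup and LUN ID)
lun map /vol/sv02_sys/sv02_sysclone.lun test_igroup 10

# Step 6: once the split has finished, the snapshot is unlocked and can be deleted
snap delete sv02_sys sv02_sys_SNAP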