How do I prepare for Hyper-Converged solutions in my datacenter?

Right now there are many companies offering Hyper-Converged solutions. To name but a few, there are VMware, SimpliVity, VCE, Nutanix and more. In future blogs I will discuss each of these solutions to different degrees, but before we look under the covers we need to make sure we have an idea of what hyperconvergence is and why we would want to use it in the first place.


So let's start with what many datacenters may look like right now. Listed here are some of the products that you will see in a datacenter. People may be familiar with these; however, being an expert across all of them can certainly be a challenge.

  • Storage Arrays
  • Servers/Hosts
  • Switches
  • Backup Appliances
  • Disaster Recovery Solutions

As we know, there are many more items that could be added to this list, and remember you may have storage arrays from several different vendors; the same goes for switches and so on. With this comes the issue that not every person can switch between three different types of storage arrays, go and perform regular maintenance on switches, and then ensure all backups and replication are configured correctly. I could literally talk for hours about how frustrating it can get when you have to keep jumping between different screens and different products at different times.

So what are some of the things I need to consider before I go and get one of these bad boys?

  • Datacenter Consolidation
  • VDI
  • Remote Office Branch Office (ROBO)
  • Data Protection
  • Test/Dev
  • Data Migration
  • Mission Critical Applications
  • Data Analytics
  • Cloud

So let's quickly explain what I mean by some of the above.

Datacenter Consolidation

You may want to consolidate your infrastructure in a single datacenter, or you might even want to consolidate multiple datacenters. Doing so would probably mean a lot of research initially; however, that initial time and research investment is imperative to ensure success. In later blogs I will explain what types of tests and proofs of concept you can perform. But hey, imagine not using up such a huge footprint in your datacenter, saving on floor space, power, cooling and cabling. There are huge savings and efficiencies to avail of with Hyper-Converged solutions.

Remote Office Branch Office (ROBO)

With ROBO you may have one large datacenter which acts as your primary datacenter, plus smaller offices at remote sites. These can usually replicate to each other or perform backups for the purpose of disaster recovery.

Disaster Recovery

Having worked in tech support for many organisations, including EMC, VMware and Dell, one of the things customers certainly need to do is perform backups regularly. From a customer's point of view, when something becomes corrupt and no backups are available… well, that can be a career-defining moment 😦

Data Migration

Whether you are wheeling a new server or a huge storage array into your datacenter, you will always look at it and think to yourself: I really hope this migration goes well, and that my customers are not impacted from an access or performance point of view. Flicking the switch to a new product can also be a terrifying experience.

So above are some of the things you need to consider if you are going to go down the Hyper-Converged route.

What are some of the tests I can do to ensure I choose the correct Hyper-Converged appliance?

The following are some of the tests that you need to perform to ensure you are choosing the right Hyper-Converged appliance to meet your business needs. Most of these tests are relatively easy to perform, however they are very necessary, especially if you come to a situation where you are performing a proof of concept. You don't want to be doing a proof of concept without knowing what your requirements are. Otherwise you could end up buying a VW Golf when you actually need a Ferrari, or even worse, vice versa 🙂

Testing

How do I perform testing on some of the above?

OK, this can differ across applications, OSs, arrays, platforms and so on, so there is no way for me to cover all of them, but here are a few.

IOPS

Let's say you have a Windows environment. A useful tool to use here is perfmon.exe. You can select the different counters that are applicable to you, for example:

  1. \Processor Information\% Processor Time
  2. \Memory\Available MBytes
  3. \LogicalDisk\
       1. Avg. Disk Bytes/Read
       2. Avg. Disk Bytes/Transfer
       3. Avg. Disk Bytes/Write

You can find more information on setting this up correctly here.
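
If you would rather script the collection than click through the perfmon UI, here is a minimal sketch that drives typeperf (the command-line counterpart to perfmon) from Python. The counter paths and sampling settings are illustrative assumptions; adjust them for your own hosts.

    import subprocess

    # Hypothetical sketch: sample the perfmon counters above from the
    # command line using typeperf, which ships with Windows. The counter
    # paths and sampling settings are assumptions; adjust for your hosts.
    counters = [
        r"\Processor Information(_Total)\% Processor Time",
        r"\Memory\Available MBytes",
        r"\LogicalDisk(_Total)\Avg. Disk Bytes/Read",
        r"\LogicalDisk(_Total)\Avg. Disk Bytes/Write",
    ]

    # -si 5 = sample every 5 seconds, -sc 12 = take 12 samples (one minute)
    result = subprocess.run(
        ["typeperf", *counters, "-si", "5", "-sc", "12"],
        capture_output=True, text=True,
    )
    print(result.stdout)  # CSV output you can save and graph later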

On Linux it is different, and you can find information on how to perform the benchmarking here.
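
And since you will often need the same numbers from both Windows and Linux hosts, here is a rough cross-platform sketch using the psutil library to estimate current IOPS. The 10-second window is an assumption; run it while a representative workload is active.

    import time
    import psutil  # pip install psutil; works on both Windows and Linux

    # Rough sketch: estimate current IOPS by sampling the OS disk counters
    # over a short window. This measures whatever the machine is doing
    # right now, so run it during a representative workload.
    INTERVAL = 10  # seconds; an assumption, tune to taste

    before = psutil.disk_io_counters()
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters()

    read_iops = (after.read_count - before.read_count) / INTERVAL
    write_iops = (after.write_count - before.write_count) / INTERVAL
    mb_per_sec = (after.read_bytes - before.read_bytes
                  + after.write_bytes - before.write_bytes) / INTERVAL / 1024**2

    print(f"~{read_iops:.0f} read IOPS, ~{write_iops:.0f} write IOPS, "
          f"~{mb_per_sec:.1f} MB/s total")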

Capacity

Calculating the required capacity can be another headache, and there are many different caveats when it comes to capacity. Are you currently using thin provisioning or eager zeroed thick? You may be totally over-provisioned and not know it. Go and find out the current numbers on your existing storage array so you are aware of everything that is required from a capacity perspective; a rough sizing sketch follows the list below.

  • Raw Disk Capacity
  • Min. Usable Capacity
  • Effective Capacity
  • Max Number of Disk Expansion Shelves
  • Max Flash Capacity per Array
  • Max Flash Capacity with All-Flash Shelf
  • Power Requirement
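
To make those line items concrete, here is the back-of-the-envelope arithmetic I mean. Every number below is an illustrative assumption; substitute the real figures from your own arrays and from the vendor's datasheet.

    # Back-of-the-envelope capacity sizing. All numbers are illustrative
    # assumptions; plug in the real figures from your arrays and quotes.
    raw_tb = 100.0         # raw disk capacity across the appliance
    raid_overhead = 0.25   # fraction lost to RAID/erasure coding and spares
    reduction_ratio = 2.5  # vendor-claimed dedupe/compression ratio - verify!

    usable_tb = raw_tb * (1 - raid_overhead)    # minimum usable capacity
    effective_tb = usable_tb * reduction_ratio  # effective capacity

    print(f"Raw:       {raw_tb:.1f} TB")
    print(f"Usable:    {usable_tb:.1f} TB after RAID/spare overhead")
    print(f"Effective: {effective_tb:.1f} TB if the {reduction_ratio}:1 "
          f"reduction ratio actually holds for your data")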

Business Continuity

Testing for disaster recovery and working out what tests you need to perform can be a huge challenge. You can have all of the Disaster Recovery (DR) and Business Continuity plans in the world, but until you have tested them they are really of no use. These plans can be very simple or they can be really complex, and it is important that these plans and tests are carried out regularly to avoid having stale plans. An example of these tests would be prioritising the order in which your VMs come up on a recovery site. Doing this means you need to decide which VMs are most important to you: your VM with Active Directory may be one of the first VMs that needs to come up, and after that you may need to bring up your Exchange databases, and so on. Plan it all out and then TEST, TEST, TEST.
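
To illustrate that ordering idea, here is a minimal sketch of a tiered recovery runbook. The VM names, priority tiers and the start_vm() helper are all hypothetical; in a real plan this would map to your hypervisor's API, and you would verify each tier is healthy before starting the next.

    from itertools import groupby

    # Minimal sketch of a tiered recovery runbook. VM names, tiers and
    # start_vm() are hypothetical; in practice this maps to your
    # hypervisor's API (e.g. the vSphere SDK).
    recovery_plan = [
        (1, "dc01-active-directory"),
        (2, "exchange-db01"),
        (2, "exchange-db02"),
        (3, "app-server01"),
        (4, "test-dev-box"),
    ]

    def start_vm(name):
        print(f"Starting {name} ...")  # placeholder for the real API call

    for tier, vms in groupby(sorted(recovery_plan), key=lambda p: p[0]):
        print(f"-- Tier {tier} --")
        for _, vm in vms:
            start_vm(vm)
        # In a real plan, verify this tier is healthy before moving on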

Dedupe

Data dedupe is also referred to as single-instance storage or intelligent compression. The idea is to eliminate redundant data, leaving just one instance of the data on the storage; pointers are then used instead. So I might have a 1GB mp4 and there may be 30 instances of this mp4, which means 30GB of storage space.

But with dedupe we only have one instance and the rest reference that one copy, essentially saving ~29GB of space. Depending on the type of dedupe you wish to perform, there are different ways to test it. Most of the backup software providers offer products with global dedupe, including Symantec NetBackup and EMC Avamar, and data deduplication appliances such as IBM's ProtecTIER and Sepaton's DeltaStor also offer global deduplication. To understand more about dedupe, take a look at the tools here.
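
As a toy illustration of the mp4 example above, content-based dedupe essentially boils down to hashing the data and storing one copy per unique hash. The file names and contents below are tiny stand-ins, not a real dedupe engine:

    import hashlib

    # Toy illustration of the 30 x 1GB example above: hash each file's
    # content and store one copy per unique hash, keeping pointers back to
    # the original names. The file contents here are tiny stand-ins.
    files = {f"copy{i}.mp4": b"...the same 1GB of video bytes..."
             for i in range(30)}

    store = {}     # content hash -> the single stored instance
    pointers = {}  # file name -> content hash

    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # the data is only stored once
        pointers[name] = digest

    print(f"{len(files)} files, {len(store)} unique instance(s) stored")
    # -> 30 files, 1 unique instance(s) stored: ~29GB saved at 1GB apiece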

All of this may seem daunting, and I am aware that there are plenty of other things to consider when going down the Hyper-Converged route. However, it must be said that hyperconvergence is here to stay, and it is only a matter of time before many datacenters are using these appliances.

I will follow up soon with more information on specific Hyper-Converged appliances, so come back soon for more info.

Thanks

Francis