EMC’s VNX2 line completely changed how hot spares work in practice. The change is drastic enough to impact not only current-generation EMC devices but next-generation devices as well. It alters both the devices themselves and the required administration practices in ways both positive and negative. The changes apply to all VNX2 devices running MCx-series code, such as the VNX8000, VNX5800, and VNX5200. The MCx-series code is also commonly referred to as “multi-core optimization.”
Perhaps the biggest change that VNX2 brings to hot sparing is that these devices no longer use hot-spare designations at all. Sparing is now handled from unassigned drives on a permanent basis – no equalization takes place during the standard replacement process. Instead, the unassigned drive itself becomes the replacement permanently.
In simpler terms, the user no longer selects a specific drive for hot sparing purposes. The MCx code instead considers every unconfigured drive in a particular array to be an available spare. As long as drives meeting the criteria below are available in the array, sparing will take place.
VNX2 devices use the following criteria to select a hot spare drive to replace a failed drive:
1. Type – The array will look for all available drives that are the same type (SAS, NL-SAS, etc.) as the failed drive.
2. Bus – The array will then select drives on the same bus if available.
3. Size/Capacity – Drives that are of the same capacity or larger will be selected.
4. Enclosure – Drives in the same DAE (disk array enclosure) as the failed drive are selected.
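The selection order above can be sketched in code. This is a hypothetical illustration based solely on the criteria listed in this article – the data model (`Drive`, `select_spare`) and the treatment of bus and enclosure as preferences rather than hard requirements are assumptions, not EMC’s actual MCx implementation:

```python
from dataclasses import dataclass

@dataclass
class Drive:
    serial: str
    drive_type: str   # e.g. "SAS", "NL-SAS"
    bus: int
    enclosure: int
    capacity_gb: int
    configured: bool  # True if already part of a RAID group

def select_spare(failed, pool):
    """Pick a permanent replacement from unconfigured drives in the array."""
    # Step 1 (Type): candidates must match the failed drive's type.
    # Step 3 (Size/Capacity): equal or larger capacity only.
    # Note: RPM and form factor are NOT considered, per the article.
    candidates = [d for d in pool
                  if not d.configured
                  and d.drive_type == failed.drive_type
                  and d.capacity_gb >= failed.capacity_gb]
    if not candidates:
        return None
    # Step 2 (Bus): prefer drives on the same bus, if available.
    same_bus = [d for d in candidates if d.bus == failed.bus]
    if same_bus:
        candidates = same_bus
    # Step 4 (Enclosure): prefer drives in the same DAE as the failed drive.
    same_dae = [d for d in candidates if d.enclosure == failed.enclosure]
    return (same_dae or candidates)[0]
```

In this sketch, a same-bus, same-enclosure drive of matching type and sufficient capacity wins; failing that, the preferences relax while type and capacity remain mandatory.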
VNX2: The Potential Downsides
One of the major potential downsides of these new practices has to do with performance. If the user is no longer selecting drives for hot sparing, the MCx code could potentially use a lower-RPM drive for that purpose. The MCx code as currently designed does not consider characteristics like form factor (2.5” or 3.5”) or rotational speed (7.2K RPM, 10K RPM, 15K RPM, etc.) when selecting a replacement – the only criterion is that the drive is available in the array and unconfigured. For example, if a 7.2K RPM drive is the only unassigned drive available, a failed 15K RPM drive will spare out to it.
As a result, performance is potentially at significant risk, and the burden of fixing these issues falls to the system administrator. Because it may not be immediately obvious where a failed drive has spared to, the administrator will need to keep a close eye on the system to head off performance issues before they occur.
To avoid these performance issues, drives with different characteristics (form factor, capacity, speed, etc.) should ideally be placed on different buses from one another so that they do not spare for each other.
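One way to sanity-check such a layout is to scan the drive inventory for buses that mix speeds or form factors. This is a minimal sketch assuming a simplified inventory of `(bus, rpm, form_factor)` tuples – a hypothetical format for illustration, not an EMC API:

```python
from collections import defaultdict

def mixed_buses(drives):
    """Return the buses whose drives mix RPM or form factor.

    `drives` is a list of (bus, rpm, form_factor) tuples -- an assumed,
    simplified inventory format used only for this sketch.
    """
    kinds_by_bus = defaultdict(set)
    for bus, rpm, form_factor in drives:
        kinds_by_bus[bus].add((rpm, form_factor))
    # A bus with more than one (rpm, form_factor) combination is a
    # candidate for an unwanted cross-spare (e.g. 15K sparing to 7.2K).
    return sorted(bus for bus, kinds in kinds_by_bus.items() if len(kinds) > 1)
```

Running this against an exported inventory would flag buses where a 15K drive could spare out to a slower neighbor.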
RAID Configuration Changes
Another major change that VNX2 brings about is the way RAID group members are identified. VNX Gen 1 devices identify a drive by its physical position – bus, enclosure, and disk (commonly abbreviated B.E.D.). VNX2 devices instead use serial numbers to pinpoint exactly which drives belong to a RAID group.
One of the benefits this change brings is a new feature called “drive mobility.” When a drive is physically moved, it is still recognized as part of its previous RAID group regardless of where you place it, thanks to its serial number.
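The difference between the two identification schemes can be illustrated with a short sketch. The data model here (position tuples and serial-number sets) is an assumption made for clarity, not EMC’s implementation:

```python
# Gen 1 style: RAID group membership keyed by physical position (B.E.D.).
gen1_group = {(0, 2, 5), (0, 2, 6)}      # (bus, enclosure, disk) tuples

# VNX2 style: membership keyed by drive serial number.
vnx2_group = {"SN-1001", "SN-1002"}

def in_group_gen1(bus, enclosure, disk):
    """Position-based lookup: breaks if the drive is moved to a new slot."""
    return (bus, enclosure, disk) in gen1_group

def in_group_vnx2(serial):
    """Serial-based lookup: survives a physical move ("drive mobility")."""
    return serial in vnx2_group

# If drive SN-1001 is moved from slot 0.2.5 to slot 1.4.0:
# - the Gen 1 lookup at the new position no longer finds it, but
# - the VNX2 lookup by serial still recognizes it as a group member.
```

This is why, on VNX2, relocating a drive does not eject it from its RAID group: membership follows the serial number, not the slot.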
Looking to the Future
Essentially, these changes are complicated with regard to OS and code implementation. If you are considering upgrading to a VNX2 model, you will need to take into account the potential performance issues associated with these changes. You will also need significant planning from a certified engineer to ensure that you are not putting your organization’s data at risk. Reach out to us via chat, email, or phone to discuss a potential upgrade to VNX2.