Understanding all the talk about software-defined storage (SDS) can be quite confusing. As an independent reseller of used EMC and NetApp equipment, we hear plenty of misconceptions about what software-defined storage actually is.
A basic understanding of storage array architecture is helpful here. An array is composed of magnetic or solid-state storage devices mounted in trays within a rack and operated by a controller. The controller is usually a commodity PC motherboard running a standard operating system such as Windows or Linux. The system runs RAID software or other software products that add value by delivering services such as thin provisioning, compression, and inline data deduplication.
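To make one of those value-added services concrete, here is a minimal sketch of the idea behind thin provisioning: a volume advertises its full logical capacity up front, but physical blocks are consumed only when data is actually written. The class and method names are hypothetical, purely for illustration, and not taken from any vendor's software.

```python
class ThinVolume:
    """Illustrative thin-provisioned volume: logical size is promised,
    physical capacity is consumed only on write."""

    def __init__(self, logical_size_gb):
        self.logical_size_gb = logical_size_gb
        self._written_blocks = {}  # block index -> data actually stored

    def write(self, block, data):
        # Physical space is allocated lazily, at first write to a block.
        self._written_blocks[block] = data

    def physical_used_gb(self, block_size_gb=1):
        # Only blocks that were written consume real capacity.
        return len(self._written_blocks) * block_size_gb


vol = ThinVolume(logical_size_gb=100)
vol.write(0, b"application data")
vol.write(5, b"more data")
print(vol.logical_size_gb)     # 100 GB promised to the application
print(vol.physical_used_gb())  # 2 GB actually consumed
```

Real arrays implement this at the block layer with far more bookkeeping, but the economic point is the same: the software, not the disks, creates the value.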
The cost of storage is driven by the software, not the hardware. One popular storage array costs its maker about $7,000 in non-specialized hardware; the software bundled with the array adds all the value, allowing it to retail for over $400,000.
In a non-SDS environment, the storage application is a volume created from solid-state devices or hard disks during setup and configuration. Functionality is imparted to the volume through value-added software. Applications using the volume must be reconfigured whenever they move to a different server.
Early approaches required SANs to be broken up, returning storage to server-internal or server-attached configurations. This associated physical storage resources with virtual workloads, and a few minor adaptations allowed data to remain available at exactly the same coordinates regardless of where the application was hosted. The model, however, caused a spike in demand for storage capacity: in virtual server environments, capacity requirements were projected to grow by 300% to over 600%.
Alternatively, the SAN infrastructure can be left in place and virtualized by moving the storage application to a storage hypervisor, or storage virtualization server. This lets the routes to particular volumes move along with virtual machines as they migrate between physical servers, with the rerouting handled by the storage virtualization engine. Services can then be provided in a way that more accurately fits the needs of guest machines or individual workloads.
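The indirection described above can be sketched in a few lines: applications address a stable logical volume ID, while the virtualization layer remaps that ID to whichever physical backend currently serves the data. All names here (the class, methods, and path strings) are invented for illustration, not drawn from any real product.

```python
class StorageVirtualizationEngine:
    """Illustrative storage virtualization layer: maps stable logical
    volume IDs to physical backend paths that can change underneath."""

    def __init__(self):
        self._routes = {}  # logical volume ID -> current physical path

    def provision(self, volume_id, physical_path):
        self._routes[volume_id] = physical_path

    def migrate(self, volume_id, new_physical_path):
        # Rerouting happens here; the application never sees the change.
        self._routes[volume_id] = new_physical_path

    def resolve(self, volume_id):
        # Applications always ask for the same logical coordinates.
        return self._routes[volume_id]


engine = StorageVirtualizationEngine()
engine.provision("vol-app01", "san-a:/pool1/lun7")
print(engine.resolve("vol-app01"))  # san-a:/pool1/lun7

# The VM hosting the application moves; the engine reroutes silently.
engine.migrate("vol-app01", "san-b:/pool3/lun2")
print(engine.resolve("vol-app01"))  # san-b:/pool3/lun2
```

The design point is simply the level of indirection: because the application binds to the logical ID rather than a physical path, migration requires no reconfiguration on the application side.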
Engineers agree that this is the storage application SDS seeks: one that provides applications with consistent storage volumes, adequate capacity, and services in a flexible, appropriate manner. This makes many vendor marketers uneasy. They prefer to downplay storage virtualization technology in favor of SDS offerings that are compatible only with their own hardware kits or server hypervisors.
Applications use the storage application to read and write data. The hard disk drives, which constitute both the resource and the pathway to it, are invisible to applications and end users. If you use server virtualization technology, or are actively running virtualization trials, you already understand the advantages and disadvantages of the storage application and storage virtualization.
Reliant is a breath of fresh air when it comes to optimizing your data storage environment. Our goal is to provide you with the resources you need to make educated decisions and feel empowered. Contact us if you think a free assessment could benefit you. Sales@Reliant-Technology.com.