Buyer Beware: Not All Flash Storage Solutions Are the Same

A famous architect once said that the origin of architecture was defined by the first time “two bricks were put together well.” And the more bricks you have, the more important putting them together well becomes. The same holds true in our data centers. The architecture of our compute, storage, and network devices has always been important, but as the demands on our IT infrastructures grow and we add more “bricks,” the architecture becomes more critical.

Today, supporting mobile, cloud, and big data demands a new approach to storage, one that can handle dynamic workloads and absorb explosive data growth. Many enterprises are turning to flash storage arrays to satisfy these needs. In fact, Gartner predicts that by 2019, 20% of traditional high-end storage arrays will be replaced by solid-state arrays.

Flash storage can be crazy-fast, with the speediest arrays promising millions of IOPS with sub-millisecond latency. But there’s a dirty little secret that some storage vendors don’t want you to know – not all flash solutions are the same.

Some flash arrays don’t provide the common Tier-1 data services you depend on, like synchronous/asynchronous replication or data migration, or only provide those services through external appliances. The fact that a drive is made of silicon instead of magnetic disks does not necessarily mean it will deliver Tier-1 flash performance. It’s all about the architecture – how well the “bricks” are connected.

Achieving the promise of flash performance requires an architecture that’s optimized for flash through the entire I/O path. If you’re considering flash, there are some architectural features you should be looking for.

Massively parallel design

Flash media is fast enough that common OLTP workloads can easily saturate storage controllers, creating bottlenecks that add latency. Multi-node architectures (scaling beyond two nodes) spread the load evenly across more controllers. Mesh-active clustering allows each volume to be active on every controller in the mesh, supporting load balancing and scalability. System-wide striping of data and I/O for each volume delivers accelerated, consistent performance while avoiding single points of contention, as the sketch below illustrates.
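To make system-wide striping concrete, here is a minimal sketch in Python. It is purely illustrative, not any vendor's actual placement algorithm, and the names (Controller, place_volume) are hypothetical: it simply maps a volume's logical chunks round-robin across every controller, so each controller serves a slice of every volume and no single controller becomes a hot spot.

```python
# Illustrative sketch only: round-robin striping of a volume's chunks
# across all controllers, so every controller carries part of every volume.
# Names (Controller, place_volume) are hypothetical, not a vendor API.

from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    chunks: list = field(default_factory=list)  # chunk IDs this controller owns

def place_volume(volume_id: str, num_chunks: int, controllers: list) -> None:
    """Spread a volume's chunks evenly over all controllers (round-robin)."""
    for chunk in range(num_chunks):
        owner = controllers[chunk % len(controllers)]
        owner.chunks.append(f"{volume_id}:{chunk}")

# Four controllers in a mesh-active cluster; two volumes striped across all of them.
mesh = [Controller(f"node{i}") for i in range(4)]
place_volume("vol-A", 8, mesh)
place_volume("vol-B", 8, mesh)

for node in mesh:
    print(node.name, node.chunks)  # each node holds chunks of both volumes
```

Because every volume is spread over every node, I/O to any volume can be serviced by any controller, which is the property a mesh-active design exploits for load balancing.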

Massively parallel architectures provide an added benefit: they tend to perform well even under failure conditions, because the remaining active components pick up the extra load of any failed component. In an eight-controller system, for example, each surviving controller absorbs only about one-seventh of a failed controller's load, keeping performance essentially uninterrupted.

Mixed workload support/CPU offload

Mixing transactional and throughput-intensive workloads can cause resource contention and degrade performance because of the demands that large-block I/O operations place on controller resources. CPU cycles are consumed moving data and performing functions like RAID parity calculations and deduplication instead of delivering Tier-1 data services. Architectures that use dedicated hardware to offload that work from the controllers reduce contention and keep CPU resources free for more complex data services.
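RAID parity itself is simple arithmetic; what matters is where it runs. The hedged sketch below (illustrative Python, not any product's API) computes RAID-5-style parity as a byte-wise XOR across the data blocks of a stripe. In an offload architecture, this kind of bulk work runs on dedicated silicon rather than the controller's general-purpose CPU, which stays free for data services.

```python
# Illustrative sketch: RAID-5-style parity is a byte-wise XOR across the
# data blocks of a stripe. In offload architectures this bulk computation
# runs on dedicated hardware instead of the controller's general-purpose CPU.

def xor_parity(blocks: list) -> bytes:
    """Compute the parity block for one stripe of equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# One stripe of three 4 KiB data blocks; parity lets any single lost block
# be reconstructed by XOR-ing the parity with the surviving blocks.
stripe = [bytes([n]) * 4096 for n in (1, 2, 3)]
parity = xor_parity(stripe)
recovered = xor_parity([stripe[0], stripe[2], parity])  # rebuild the lost block
assert recovered == stripe[1]
```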

Converged flash

Not all workloads need the performance of flash. Some architectures don’t allow you to use low-cost, dense spinning disks in the same arrays that deliver high performance from their solid-state tier. Converged flash arrays can provide the ultimate flexibility in performance and QoS, especially if those arrays share a common architecture and common data services. Storage optimization should allow data to move non-disruptively across different tiers of storage, both within the same array and beyond it, all online.
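As a rough illustration of tier-based storage optimization, the sketch below (hypothetical names and thresholds, not any array's actual policy engine) promotes frequently accessed regions of a volume to the flash tier and demotes cold ones to spinning disk; in a converged array this movement happens online, transparently to hosts.

```python
# Illustrative sketch: a simple access-frequency policy that decides which
# tier each region of a volume should live on. Real arrays do this online
# and transparently; the tier names and threshold here are hypothetical.

FLASH_TIER = "ssd"
DISK_TIER = "nl-sas"
HOT_THRESHOLD = 100  # accesses per sampling window (illustrative value)

def retier(regions: dict) -> None:
    """Promote hot regions to flash and demote cold ones to spinning disk."""
    for name, region in regions.items():
        target = FLASH_TIER if region["accesses"] >= HOT_THRESHOLD else DISK_TIER
        if region["tier"] != target:
            print(f"moving {name}: {region['tier']} -> {target}")
            region["tier"] = target  # the data migration itself is non-disruptive

regions = {
    "vol-A:0": {"tier": DISK_TIER, "accesses": 250},   # hot: promote to flash
    "vol-A:1": {"tier": FLASH_TIER, "accesses": 3},    # cold: demote to disk
}
retier(regions)
```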

And that’s just the tip of the iceberg. We could talk about bandwidth, backup and recovery, and a host of other topics, all of which affect the overall performance and ROI of any flash solution. The bottom line is, not all flash solutions are the same. It’s not just about switching from spinning disks to silicon. Maximizing performance from flash is a matter of architecture.

Ask Axxys about HP 3PAR StoreServ solutions for your business TODAY!