Virtualization to the Rescue

Dave Graham



Why Policy Is the Future of Storage, Part 2

The concept of extending policy BEYOND a localized resource

In my first post on policy, I tried to show what policy’s influence would look like from a top-level standpoint. We saw that policy can start from a localized array standpoint and “fix” performance or hotspot issues via key LUN-migration technologies (as EMC’s FAST promises to do; Compellent already does this) while allowing business processes to continue “unharmed.” To a certain extent, the LUN migrations of yesteryear were an act of policy, if only to show the level of manual involvement required to manage a SAN. What I’d like to look at today is the concept of extending policy BEYOND a localized resource or array and examining its impact in a global storage environment.

Why use Policy?

As I’ve noted numerous times, there is a time and a place to staff up to handle data challenges, but there’s also a time to look at the inherent inefficiencies of the processes and procedures implemented in a data centre. The overarching goal here is to help the business succeed, not fetter it with burdensome management workloads and complex architectural banalities. (yes, folks, I came up with that phrase myself…I AM a word-smith. ;) ) The net result of enacting policy via some level of automation is a decrease in time to market for internal/external products, a reduction in provisioning and SAN/NAS performance-tuning routines, and a “watchdog” that tracks activity (or whatever metrics you’re enacting policy on) and can implement change.

Where Policy is going

Policy is, with notably few exceptions, relegated to on-array management and utility tasks. FAST v1, for example, is siloed to an array. The same goes for Compellent and other competing technologies. While this is good for those tasks and for single-array companies, it doesn’t take into consideration the other dependencies that rely on extant data to complete. Again, if the array is unaware of anything but the data it manages, it will only act along that parallel, not in concert with other systems. We’re definitely seeing an emergence of this awareness, most notably (to me, at least) in the CLARiiON CX4’s “VMware-aware” FLARE 29 operating system. Not only does FLARE 29 give you bottom-up visibility to the VMware ESX host layer, it also walks the host dependency tree and looks at the virtual machines running within the hosts. Further, anytime a LUN is added to a storage group, it automatically rescans against the ESX host’s attached HBAs so the new LUNs are recognized. That level of automation, while perhaps small and insignificant, cuts down the time to provision and report against the physical ESX host quite dramatically. The policy behind this is rather simple: if a LUN is added to a storage group and the hosts are VMware ESX based, then force a bus rescan. Simplistic but powerful. This level of awareness, then, is the next baby step for policy-based array development.
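To make that rescan policy concrete, here’s a minimal sketch of the decision logic. This is purely illustrative: the `Host`, `StorageGroup`, and `rescan_hbas` names are my own inventions, not the FLARE or Navisphere API.

```python
# Illustrative sketch of the FLARE 29 provisioning policy described above.
# All class and method names here are hypothetical, not a real array API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Host:
    name: str
    os_type: str            # e.g. "esx", "windows", "linux"
    rescans: int = 0        # number of HBA bus rescans triggered

    def rescan_hbas(self) -> None:
        # In a real environment this would drive a bus rescan on the
        # host's attached HBAs; here we just record that it happened.
        self.rescans += 1

@dataclass
class StorageGroup:
    hosts: List[Host]
    luns: List[str] = field(default_factory=list)

    def add_lun(self, lun_id: str) -> None:
        self.luns.append(lun_id)
        # Policy: if an attached host is VMware ESX based, force a bus
        # rescan so the new LUN is recognized without manual steps.
        for host in self.hosts:
            if host.os_type == "esx":
                host.rescan_hbas()

esx = Host("esx01", "esx")
sg = StorageGroup(hosts=[esx])
sg.add_lun("LUN_0042")      # ESX host triggers an automatic rescan
```

The point is how little the policy itself is: one condition, one action, attached to a provisioning event.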

Taking it a different direction, EMC’s Atmos product has approached policy with the understanding that nothing is ever done in isolation. In a global economy with global IT infrastructures, it makes sense to understand your data in a global fashion and enact change in the same way. Atmos’ policy engine, then, is tuned both at an installed level (e.g. for our product that sits on-premises) as well as from a distributed level. Let me give you an example of this (please be kind…).

Say your company has 3 different locations across the United States and Europe (San Francisco, Boston, and London). In each of these offices, you’re generating massive sets of unstructured data that need to be protected and available to everyone within your company. Not only that, but due to international policies/law, some of these documents cannot be shared across the entities yet still need the same level of protection and availability. With Atmos, you can set policies that dictate data flow based on any number of criteria (such as document extension, origin, etc.) and set replication levels against these objects to ensure data dissemination. Let’s assume that I created a Word document called “Chris Evans – CV.doc” in London and I needed to ensure that it was accessible throughout the entire company. I could set my Atmos data policies to push asynchronous copies of the CV.doc to Boston and San Francisco on ingest, as well as ensure that a copy was available on Atmos Online. To kick things up another notch, I can enable this policy through both the central administration portal and my document-processing application (provided they’ve integrated the Atmos API). That level of flexibility between systems is key to moving to a private or hybrid cloud model.
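The replication decision in that example can be sketched as a simple function. To be clear, this is NOT the real Atmos policy engine or its API; it’s a hypothetical illustration of the criteria-to-targets mapping the scenario describes.

```python
# Hypothetical sketch of the ingest-time replication policy above.
# The function name and criteria are illustrative, not the Atmos API.

from typing import List

def replica_targets(doc_name: str, origin: str) -> List[str]:
    """Return the sites that should receive async copies on ingest."""
    targets: List[str] = []
    # Policy: Word documents created in London get asynchronous copies
    # pushed to the other offices plus a copy on Atmos Online.
    if doc_name.lower().endswith(".doc") and origin == "London":
        targets = ["Boston", "San Francisco", "Atmos Online"]
    return targets

copies = replica_targets("Chris Evans - CV.doc", "London")
# copies -> ["Boston", "San Francisco", "Atmos Online"]
```

Because the policy is expressed as data-driven rules rather than hand-run migrations, the same logic can be invoked from the administration portal or from an application that has integrated the API.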

Conclusion

Policy-based storage engines are becoming more and more critical to the success of a business, and that’s not to be overlooked. There are definite areas of overlap, perhaps, when looking at data management policies, but there’s nothing to prevent layering global policies on top of localized array policies. That symbiotic relationship would allow for a fine-tuned global data management practice, and that’s nothing to sneeze at. So, where are you going with policy? What roles/benefits do you see for policy in your organization? Let me know!

More Stories By Dave Graham

Dave Graham is a Technical Consultant with EMC Corporation, where he focuses on designing/architecting private cloud solutions for commercial customers.