Next month, I will be heading to Chicago for TechTarget’s Storage Decisions conference. This show does a good job on the editorial side, suggesting timely topics and bringing in independent voices like Howard Marks. I will be giving three presentations: sessions on data reduction and storage virtualization in the main conference track, plus a dinner discussion focused on controlling the growth of data. Registration is free for qualified end users, and I urge you to attend on June 21, 2011.
Reclaim Capacity with Data Reduction for Primary Storage
Depending on which industry study you read, most companies are wasting anywhere from 30% to 50% of their installed disk capacity, which translates into thousands of dollars spent with no effective return on investment. Storage vendors are beginning to provide tools that can help storage managers make the most of the disk they have already installed. Data reduction for primary storage, for example, borrows the data deduplication technology developed for backup, along with classic compression algorithms, to squeeze the air out of nearline and primary data and shrink its footprint. This session will give an overview of data reduction technologies and where they have the greatest impact, survey what the key storage vendors are offering, and examine the consequences of using primary data dedupe alongside dedupe for backups. We’ll also look at the potential for vendor lock-in and consider why we’re reducing data in the first place.
Topics include:
- Introducing data reduction technologies
  - Compression: How it works and where it’s found
  - Deduplication: From single-instancing to variable block
  - Application-specific: Cracking open files
- Overview of data reduction products
- Where to use them
  - The capacity conundrum: Store less and reduce utilization
  - Ideal applications: Justifying the cost of data reduction
  - Side effects: Considering the impact on backup, replication, I/O workload and vendor lock-in
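For a concrete feel for the deduplication item above, here is a minimal sketch of chunk-level deduplication in Python. It is purely illustrative and not how any vendor’s product actually works: it splits data into fixed-size chunks, hashes each with SHA-256, and stores duplicate chunks only once, whereas real products typically use variable-size blocks and far more sophisticated indexing. All names and the chunk size are arbitrary choices for the example.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; commercial dedupe usually uses variable-size blocks


def dedupe(data: bytes):
    """Split data into chunks, keep each unique chunk once, and return a
    'recipe' of chunk hashes that can rebuild the original stream."""
    store = {}   # hash -> chunk bytes (unique chunks only)
    recipe = []  # ordered list of hashes referencing the store
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicates map to the same stored chunk
        recipe.append(digest)
    return store, recipe


def rebuild(store: dict, recipe: list) -> bytes:
    """Reassemble the original data from the recipe of chunk hashes."""
    return b"".join(store[h] for h in recipe)


if __name__ == "__main__":
    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content dedupes well
    store, recipe = dedupe(data)
    assert rebuild(store, recipe) == data
    stored = sum(len(chunk) for chunk in store.values())
    print(f"logical: {len(data)} bytes, physically stored: {stored} bytes")
```

Running it shows 16 KB of logical data held in 8 KB of unique chunks, which is the basic trick behind every dedupe ratio a vendor quotes.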
Storage Virtualization: Who’s Doing It and Why
Storage virtualization has been around for decades and, although research indicates that 70% of companies have already virtualized at least some of their installed block or file storage, most remain unaware that they are using it. Grandiose schemes for comprehensive virtual SANs have given way to more practical host- and array-based virtualization technologies, and server virtualization has created a new opportunity to pool storage. This session will look at the current state of storage virtualization, quantify its benefits, describe which approaches are best for particular environments, and consider how storage virtualization compares to private storage clouds.
Topics include:
- Defining storage virtualization: What it is and where to find it
  - Abstraction of storage resources
  - Tiered storage
  - Flexibility
- Popular approaches to storage virtualization
  - SAN controllers
  - File virtualization
  - Volume managers
- The pool, the hypervisor and the cloud
  - The impact of server virtualization
  - Is this a private cloud?
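To make the “abstraction of storage resources” bullet a bit more concrete, here is a toy Python sketch of the core job a virtualization layer performs: presenting one logical volume while mapping its extents onto whatever physical devices happen to be in the pool. The classes and names are invented for illustration and do not correspond to any real product.

```python
class PhysicalDisk:
    """Stand-in for a backing device: just a name and a set of free extents."""
    def __init__(self, name: str, extents: int):
        self.name = name
        self.free = list(range(extents))


class StoragePool:
    """Abstracts several disks behind a single allocator."""
    def __init__(self, disks):
        self.disks = disks

    def allocate(self, count: int):
        """Hand out extents from whichever disks still have free space."""
        allocation = []
        for disk in self.disks:
            while disk.free and len(allocation) < count:
                allocation.append((disk.name, disk.free.pop(0)))
        if len(allocation) < count:
            raise RuntimeError("pool exhausted")
        return allocation


class VirtualVolume:
    """A volume whose logical extents map to physical extents in the pool."""
    def __init__(self, pool: StoragePool, extents: int):
        # logical extent index -> (physical disk, physical extent)
        self.mapping = pool.allocate(extents)


if __name__ == "__main__":
    pool = StoragePool([PhysicalDisk("array-a", 4), PhysicalDisk("array-b", 4)])
    volume = VirtualVolume(pool, 6)  # transparently spans both arrays
    for logical, physical in enumerate(volume.mapping):
        print(f"logical extent {logical} -> {physical}")
```

Whether that mapping table lives in a SAN controller, a file virtualization appliance, or a host volume manager is exactly the trade-off the session will walk through.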
Cutting Off Data Growth at the Disk
In this special dinner presentation, Stephen Foskett will discuss how to apply key data management technologies to arrest the growth of data. You’ll learn how capacity optimization technologies such as data deduplication and compression can flatten the trajectory of data growth, and how tiering can reduce the cost of storage. Finally, Stephen will explore why the time may have finally come for active archiving and will leave you with practical ways to help your corporation better manage its data.
Note that space is limited for the dinner, which is sponsored by my friends at Dell.
Registration
To register for Storage Decisions Chicago, just go to the TechTarget registration page. Dinner guests will apparently be selected from that same pool of attendees.
Disclosure: TechTarget pays my expenses to attend and present at Storage Decisions, and has for many years. I also get a speaker fee for the dinner session.