There are some who would refer to optimized disk-based target devices as Purpose-Built Backup Appliances (PBBAs), but that term is actually misleading.
When one refers to a “backup server,” the implication is that the server performs backups—and in fact, those servers do perform backups.
When one refers to a “storage controller,” it is in reference to a device that controls storage.
In both examples, the first word is the activity performed by the second word.
Some in the industry have chosen to refer to the disk-based target devices covered in ESG’s recently published “Market Landscape Report on Disk-Based Backup Targets” as purpose-built (i.e. specifically architected) “backup appliances” — but most of those appliances don’t actually do backups; they enable backups. The consideration factors and representative solutions discussed in the report should more correctly be referred to as deduplication appliances or optimized‑retention appliances, because they are appliances that do deduplication and do deliver optimized retention.
To actually be a “purpose built” backup appliance (i.e. do backups), a solution would need to be self‑contained, including not only optimized storage, but also backup server software for scheduling and managing backups, such as EMC Avamar, Axcient, or a Symantec 3600/5230.
In a future Market Landscape Report, it is likely that ESG will separately look at the market trends and the range of real purpose-built backup appliances – but in the meantime, enjoy the MLR on Disk-Based Target Systems, covering Dell, EMC, Exagrid, HP, NetApp, Quantum, and Sepaton.
The nice folks at Windows IT Pro magazine recently published an article that I wrote on Data Protection Manager within Microsoft System Center 2012.
According to Enterprise Strategy Group (ESG) research, the number-one IT spending priority in 2012 was improving data backup and recovery, tied with increased use of server virtualization. Interestingly enough, improving business continuity or disaster recovery (BC/DR) scored in the top 10 as well. There are a few key reasons:
First, the commoditization of virtualization has made many IT processes easier but has made backups more difficult.
Second, data is growing faster than most organizations can manage it, and legacy backup solutions are struggling to keep up.
Other factors include an ever-growing reliance on IT (forcing raised prioritization of BC/DR) and the consumerization of IT (causing new protection scenarios for privately owned endpoint devices).
Add the growing complexities of backing up and recovering Microsoft workloads (e.g., Microsoft SQL Server, SharePoint, Exchange Server, Hyper-V, Windows Server file services), and you can understand why Microsoft started building its own data-protection solution.
Easily one of the most discussed topics with me in 2012 was how virtualization is changing data protection strategies.
Virtualization solves so many problems for IT that it continues to become more mainstream every day. But the more that you virtualize, the more that your legacy backup methods will likely disappoint you. So, here is a video that summarizes the challenges and the trends in virtualization protection, as well as what IT pros should be looking for when considering new virtualization protection solutions.
On January 31st, in Irving, Texas … or a simulcast location near you … you have the opportunity to learn more (a lot more) about Microsoft’s management platform – System Center.
For those of you who don’t know of my past lives, I used to be the product manager for two SysCtr products, Operations Manager and Data Protection Manager. One of my favorite things about the SysCtr world is the community, including the MVPs as well as the passionate user communities around the world. Since its launch last spring, Microsoft has been doing a lot more “solution”-centric readiness events, around use cases that leverage multiple parts of the SysCtr 2012 portfolio – and the capabilities are impressive. But as a long-time IT pro, I still want to dig deep into each technology on its own.
It’s kind of like building a cabinet. It’s great to use all of the tools and parts to build a cabinet – but somebody still needs to be a master with a power saw, or drill, or screwdriver.
Community experts like the ones at Catapult Systems are meeting the need by delivering events like System Center Universe. It was a privilege to speak at SCU2012, and I am just as jazzed to attend, learn from, and hang out with this year’s awesome lineup of MS SysCtr experts, as well as my friends from Microsoft, Veeam and Catapult. There are user groups all over the planet that are dialing in, so it ought to be an awesome day.
And if you happen to be attending in person – tweet me to talk more about all things data protection.
You can’t have an IT “modernization” discussion without bringing up the cloud. And in the realm of data protection, that comes in a few obvious flavors:
Backup as a Service (BaaS) – where your data is backed up either directly to a cloud provider or first to a local appliance and then to that provider. The latter gives you faster restores and other performance-related benefits, but the end result is the same.
Disaster Recovery as a Service (DRaaS) – where entire parts of your infrastructure, usually whole VMs, are replicated to a cloud provider, with the ability for you to bring those VMs online and resume business services from the provider’s infrastructure after a crisis. Some DRaaS solutions even provide BaaS as a side benefit.
Cloud Storage for your On-Premises Backup – where your existing backup solution is working fine, but you’d like another copy of your data outside of the building – and cloud economics are interesting. Great, add cloud-based storage as a target to your on-premises backup server … or back up (BaaS) your backup server to the cloud. Either way is okay.
But instead of talking about data protection AS a service … what about data protection OF a service?
Many of us put our data into SaaS (software as a service) solutions today – e.g. Salesforce. We assume that Salesforce (or any other SaaS solution) has multiple points of presence on the Internet, and that it has resiliency between sites. The assumption is that if a site were to have a crisis, the other site(s) would still be available. For some large SaaS solutions, that may be enough – though it can still be hard to document (or test) when doing a BC/DR audit.
But what about if the SaaS provider goes dark?
Maybe out of business? Perhaps a victim of Denial of Service attacks or broad data corruption (that is then replicated between sites). What is your plan?
Do you back up the data from your SaaS provider?
In what format(s) is the backup?
Is the data readable or importable into a platform that you own?
How would you bring the functionality back online?
Most importantly, have you tested that recovery?
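To make those questions concrete, here is a minimal sketch of the kind of round-trip check they imply: export whatever you pull from a SaaS API into a vendor-neutral format, then prove you can read it back. Everything here is illustrative – `fetch_records()` stands in for whatever export API your provider actually offers, and the JSON Lines layout is just one portable choice.

```python
import io
import json

def fetch_records():
    # Placeholder for a real SaaS API call (e.g., a paged REST query).
    # Hypothetical sample data, not any provider's actual schema.
    return [
        {"id": "001", "name": "Acme Corp", "owner": "jsmith"},
        {"id": "002", "name": "Contoso", "owner": "jsmith"},
    ]

def export_jsonl(records, fp):
    # One JSON object per line: human-readable and importable elsewhere.
    for rec in records:
        fp.write(json.dumps(rec, sort_keys=True) + "\n")

def import_jsonl(fp):
    return [json.loads(line) for line in fp if line.strip()]

# Round-trip check: the restore you never rehearse is the one that fails.
original = fetch_records()
buf = io.StringIO()
export_jsonl(original, buf)
buf.seek(0)
restored = import_jsonl(buf)
assert restored == original, "backup is not importable as-is"
```

The point is less the code than the habit: whatever format your SaaS backup lands in, exercise the import path on a schedule, not just the export.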
This is not a blog post where I offer you answers, but one where I wanted to pose some questions for discussion.
If you’re an IT Pro who backs up and has a validated recovery plan for a SaaS solution, I’d love to hear your comments below (and maybe a phone call next year).
If you are a vendor of technologies that back up SaaS (and we aren’t already talking), ping Lauren to set up some time.
As always, thank you for reading … and Merry Christmas !!