By Stephen Doherty, Consultant IT Project Manager
In my last post (click here to read Part 5), I explained how we set up a Symmetrix Remote Data Facility (SRDF) bridge between our old and new data centers that would allow us to use Storage VMotion to transfer VMs and data to our new private cloud data center. It worked very well: we could move VMs and data quite effectively. However, setting them up and getting them to run an application was more of a challenge. We had to roll back one of the first three applications we tried to migrate; the other two took us a long time to troubleshoot and configure.
The solution to minimize risk and downtime seemed obvious to me. It was just like a technology refresh in the physical world: build a new environment with all new components and test it. Once all the bugs were worked out, you could sync the data and cut over. Why move a VM at all when making another one was just as easy and would provide an opportunity to configure and test it?
Fifth in a series on EMC’s Durham Data Center.
By Steve Doherty — Consultant IT Project Manager
With the migration plan completed (click here to read part 4) for EMC’s Durham Data Center, we began the daunting task of the migration. We weren’t going to use trucks or airplanes to move the gear. We were going to migrate all the applications and data over the wire. The fact that it really hadn’t been done before was a technical challenge that we would just have to overcome.
In late Q4 2010, as we were completing the Durham Data Center infrastructure build (click here to read part 2) our migration team began experimenting.
The first attempt was a straight virtual-to-virtual (V2V) migration over the WAN. We thought, how cool would that be? No downtime, little risk, and we were already well over 50 percent virtualized. It turns out North Carolina and Massachusetts are too far apart: the sites are more than 600 miles from each other, which resulted in about 25 milliseconds of latency. The V2V experiment failed. It took nearly 30 hours to move one virtual machine. V2V migration wouldn't work at that distance, and it also wasn't a viable solution for the hundreds of physical servers we were still running.
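The arithmetic behind that failure is worth a quick sketch. This is my own back-of-the-envelope calculation, not the migration team's test data, and the 300 GB VM size is an illustrative assumption: with 25 ms of round-trip latency, a single TCP stream with a classic 64 KB window can never exceed window ÷ RTT in throughput, which is enough to turn one VM copy into a day-plus transfer.

```python
# Back-of-the-envelope check (illustrative figures, not measured data):
# a TCP stream can have at most one window of data in flight per round trip,
# so throughput is capped at window_size / RTT regardless of link bandwidth.

WINDOW_BYTES = 64 * 1024        # classic TCP window without window scaling
RTT_S = 0.025                   # ~25 ms round trip observed over the WAN
VM_BYTES = 300 * 1024**3        # assume a 300 GB virtual machine (hypothetical)

max_throughput = WINDOW_BYTES / RTT_S            # bytes-per-second ceiling
hours = VM_BYTES / max_throughput / 3600

print(f"Throughput ceiling: {max_throughput / 1024**2:.1f} MB/s")
print(f"Time to move one VM: {hours:.0f} hours")
```

Under those assumptions the ceiling works out to about 2.5 MB/s and roughly 34 hours per VM, which is in the same ballpark as the nearly 30 hours we observed. No amount of extra bandwidth fixes this; only reducing effective latency (or parallelizing streams) does.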
Fourth in a series on EMC’s new Durham Cloud Data Center. Click here to read part three.
By Stephen Doherty, Consultant IT Project Manager
If you are struggling to sort out decades of intertwined databases and mission critical applications to move them to a brand new data center, take heart, you’re not alone. In this blog I’ll discuss our struggles to come up with a migration plan.
As soon as EMC’s Durham Data Center Migration Program to move six petabytes of data and hundreds of applications to our new cloud data center was underway, we initiated the discovery and planning efforts. These work streams ran in parallel to our Architecture Design (Part 1) and our First 90 Days (Part 2) work streams.
I had never migrated a data center before, and I had no idea how complex the effort would be. Discovery? Why would we need to do that? We know what's running where… right?
By Stephen Doherty — Principal IT Project Manager
This is the third part in a series on EMC’s new Durham Cloud Data Center by Stephen. Click here to read part two.
Many organizations these days are facing the substantial task of migrating their traditional data centers to new, cloud-enabled data environments to improve efficiency and provide for growing space needs.
As you strategize to migrate your data center into the cloud, you should be ready to spend as much as 80 percent of your effort sorting out interdependencies between all your applications, databases and servers, which have probably become more and more entangled over time. (Read EMC Durham Cloud Data Center: Migration Planning and Program Management.)
As EMC IT learned in our recent migration of six petabytes of data and hundreds of mission critical applications to our new cloud data center, there is something you should do before you even begin the discovery process—invest in a streamlined configuration management system.
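To make that concrete, here is a minimal sketch (my illustration, not EMC's actual configuration management system) of what "streamlined" means in practice: every server, database, and application is a single record, dependencies are explicit edges, and you can ask the impact question before planning a move. The item names are hypothetical.

```python
# Minimal configuration-record store with dependency queries (illustrative
# sketch only). The key capability for migration planning: given one item,
# find everything that transitively depends on it.

from dataclasses import dataclass, field

@dataclass
class ConfigItem:
    name: str
    kind: str                           # e.g. "app", "db", "server"
    depends_on: set = field(default_factory=set)

class CMDB:
    def __init__(self):
        self.items = {}

    def add(self, name, kind, depends_on=()):
        self.items[name] = ConfigItem(name, kind, set(depends_on))

    def impact(self, name):
        """Everything that (transitively) depends on `name` -- i.e., what
        breaks if you move it without coordinating."""
        hit, stack = set(), [name]
        while stack:
            target = stack.pop()
            for item in self.items.values():
                if target in item.depends_on and item.name not in hit:
                    hit.add(item.name)
                    stack.append(item.name)
        return hit

cmdb = CMDB()
cmdb.add("oracle-grid-1", "db")
cmdb.add("erp", "app", depends_on={"oracle-grid-1"})
cmdb.add("reporting", "app", depends_on={"erp"})
print(cmdb.impact("oracle-grid-1"))     # both apps are impacted
```

The point is not the code but the discipline: if this data is accurate and queryable on day one, discovery becomes verification instead of archaeology.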
By Stephen Doherty, Principal IT Project Manager
This is the second installment of a blog series about the EMC Durham Data Center. Click here to read part one.
In my first blog in this data center blog series, I talked about the challenges of architecting this cloud-optimized, 100 percent virtualized, scalable and sustainable data center. In this second blog, I would like to share some insights from the first 90 days of the project.
EMC IT set an aggressive two-year target to migrate and transform our data center into a private cloud. Every day was critical. To preserve as much schedule as possible for migrating applications, we allotted 90 days from when the facility was completed to stand up our infrastructure at the Cloud Data Center in Durham, NC. We needed to install the network, storage, compute and backup to eventually host more than 350 applications, 2,000 servers and six petabytes of storage. In the old dedicated physical IT world this would have been impossible. In the cloud? We were about to find out.
It was a tall order, but by standardizing on the Vblock architecture and virtualizing applications, our dedicated team of engineers and technicians was able to complete the initial Cloud Data Center build on schedule.
By Stephen Doherty, Principal IT Project Manager
There are big IT projects at every company and in everyone's career. I was fortunate enough to be a part of the largest IT infrastructure project in EMC's history. Simple: open a data center, migrate all of the applications, close a data center.
One of our Massachusetts data centers had served EMC and Data General well for decades. However, we were constrained by power, cooling and space. It was also far too close to our other data center to protect EMC from a regional disaster like Hurricane Sandy. EMC selected Durham, North Carolina to build out a new 20,000-square-foot, state-of-the-art data center.
First mover advantage
We’ve written a lot about our Durham Cloud Data Center in the past. We purchased the Durham site in October 2009 and planned to close the near-capacity Massachusetts data center by December 31, 2012. If the migration took longer, we estimated that it would cost EMC millions of dollars in 2013 to extend the lease and staff, power, cool and insure the facility. Three years, no problem—except the Durham facility was a warehouse, not a data center. The facility remodel wouldn’t be ready until October 2010, giving us eight quarters to migrate more than 2,500 servers and 500 applications and a ninth quarter to decommission the Westborough facility.
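A quick rate check (my own arithmetic, not a figure from the program plan) shows what that schedule implied week to week:

```python
# Eight quarters to move 2,500 servers and 500 applications:
# what does that demand per week? (Rough arithmetic; quarters
# approximated at 13 weeks each.)

SERVERS, APPS = 2500, 500
WEEKS = 8 * 13                                  # eight quarters

servers_per_week = SERVERS / WEEKS
apps_per_week = APPS / WEEKS
print(f"{servers_per_week:.0f} servers and {apps_per_week:.1f} apps per week")
```

Roughly 24 servers and five applications every single week, for two straight years, with a hard lease deadline at the end. That is the pace the rest of this series is really about.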
When it comes to moving a live data center that is supporting a nearly $20-billion corporation, things change daily. So even though the team overseeing EMC Corporation’s transfer of data to a new data center 600 miles away spent 12 months in a discovery process to hone our migration strategy, the process is still a bit of a moving target.
With so many moving pieces, the fact is—as the saying goes—you don’t know what you don’t know with this kind of project.
One of the challenges to keep in mind when migrating to a data center like this is that it’s not just for today. The capital investment is huge, and migrating a data center takes months of planning and many more months to implement.
Nonetheless, we have achieved great success in data migration from Westborough, MA, to our new Durham, N.C., data center in 2011. We are primed to complete the migration process by September of 2012.
Here is the latest update on our migration to the Durham Cloud Data Center. The data center is live and running production applications. The bundling process (determining which applications must migrate together) has been completed. This turned out to be more complicated than we had originally planned because of the high level of virtualization and consolidation (database grids) within our data centers.
Our first attempt at bundling was centered on the applications themselves. Focusing the bundles on specific applications that must move together made perfect sense and, for the most part, it was the way it had always been done. When we finished our first pass, it turned out that we had a handful of really, really large move events. Because we are 75 percent virtualized and have eight database grids (four SQL and four Oracle), we ended up with just about everything connected to everything!
We went back to the drawing board. Because we had all the data in the MAD (Migration Access Database), we were able to re-bundle, focusing on the eight database grids this time. The team worked through all the application interdependencies and multi-tenant environments and came up with 21 distinct migration events scheduled through 2012. As of July 15, 2011, we have completed six events, which represent 15 percent of the overall environments.
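In graph terms, bundling is connected-components analysis: anything linked by a dependency must move in the same event. The sketch below (simplified, invented data, not the MAD itself) shows why app-centric bundling collapsed: shared database grids act as hubs that merge almost every application into one giant component.

```python
# Bundling as connected components over a dependency graph (illustrative).
# Each component = one move event, since everything connected must migrate
# together.

from collections import defaultdict

def bundles(edges):
    """Group nodes into connected components via depth-first traversal."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, out = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        out.append(comp)
    return out

# Apps that share a grid land in the same bundle (hypothetical names):
edges = [("erp", "oracle-grid-1"), ("crm", "oracle-grid-1"),
         ("wiki", "sql-grid-2"), ("hr", "sql-grid-2"),
         ("erp", "crm")]
print(len(bundles(edges)))   # 2 bundles, one per shared grid
```

With eight real grids and hundreds of applications, centering each bundle on a grid is what broke the one-giant-component problem into 21 schedulable events.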
The EMC IT Durham Cloud Data Center survived its first major run-in with Mother Nature. On Saturday, April 16, at least 25 tornados touched down across North Carolina, making it the state's most active tornado season on record. A tornado that traveled 63 miles through Sanford, Holly Springs and Raleigh had top winds of 160 miles per hour and caused major damage. I am happy to report that the Durham Cloud Data Center's redundant infrastructure worked flawlessly and, more importantly, none of our EMC employees were hurt or affected by the storm.
The map shows the tracks of many of the tornados; the Durham Cloud Data Center was in the middle of all the activity.
The data center did experience a power outage, but the new flywheel UPS infrastructure and the generators kicked in within 10 seconds of the power loss and provided uninterrupted power throughout the event.
Designing a multimillion-dollar cloud data center from scratch—the focus of my efforts over the last year—is really a once-in-a-lifetime opportunity.
We were running out of capacity in our enterprise data center in Westborough, MA, and after considering all of the options we decided to build a new, highly efficient virtual data center located in Durham, North Carolina.