In my last post (click here to read Part 6), I explained how we invented a better way to migrate and transform an application, whether across the room or across the country: build a parallel, virtualized environment; pre-configure and pre-test the new environment; and practice the migration. However, nothing is perfect. As we found out, there are still some things you can’t test.
The ESRS 2 migration is probably the pinnacle success story of the entire Durham migration. ESRS 2 connects EMC Customer Service to customers and helps us monitor installed systems, identify problems and connect back to the systems to diagnose and fix problems remotely or through a service request.
The migration team was able to build out an entirely new, virtualized architecture running on Vblock. Performance testing results were outstanding. The new architecture was tested at 4x the current load and ran faster than the pre-migration system. We were able to test and fully document our disaster recovery plans.
In my last post (click here to read Part 5), I explained how we set up a Symmetrix Remote Data Facility (SRDF) bridge between our old and new data centers that would allow us to use Storage VMotion to transfer VMs and data to our new Private Cloud data center. It worked very well. We could move VMs and data pretty effectively. However, setting them up and getting them to run an application was more of a challenge. We had to roll back one of the first three applications that we tried to migrate; the other two took us a long time to troubleshoot and configure.
The solution to minimize risk and downtime seemed obvious to me. It was just like a technology refresh in the physical world. Build a new environment with all new components and test it. Once all the bugs were worked out, you could sync the data and cut over. Why did I need to move a VM when making another one was just as easy and would provide an opportunity to configure and test it?
With the migration plan completed (click here to read Part 4) for EMC’s Durham Data Center, we began the daunting task of the migration. We weren’t going to use trucks or airplanes to move the gear. We were going to migrate all the applications and data over the wire. The fact that it really hadn’t been done before was a technical challenge that we would just have to overcome.
The first attempt was a straight virtual-to-virtual (V2V) migration over the WAN. We thought: how cool would that be? No downtime, little risk, and we were already well over 50 percent virtualized. It turns out that North Carolina and Massachusetts are too far apart: more than 600 miles, which resulted in 25 milliseconds of latency. The V2V experiment failed. It took nearly 30 hours to move one virtual machine. V2V migration wouldn’t work at that distance. It also wasn’t a viable solution for the hundreds of physical servers that we were still running.
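The math behind that failure is worth sketching. A single TCP stream’s throughput is roughly bounded by the window size divided by the round-trip time, so at 25 ms even a fat WAN pipe crawls. Here is a rough, illustrative calculation; the 64 KB window and 250 GB VM size are assumptions for the sketch, not figures from our migration:

```python
# Back-of-envelope single-stream TCP throughput: throughput ≈ window / RTT.
# Assumed values for illustration only: a default 64 KB TCP window and a
# 250 GB virtual machine image; real window sizes and VM sizes vary widely.

def tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Approximate upper bound on single-stream TCP throughput, in bits/sec."""
    return window_bytes * 8 / rtt_seconds

def transfer_hours(size_bytes: float, throughput_bps: float) -> float:
    """Hours needed to move size_bytes at the given throughput."""
    return size_bytes * 8 / throughput_bps / 3600

rtt = 0.025            # 25 ms round trip, Massachusetts to North Carolina
window = 64 * 1024     # 64 KB TCP window (assumed)
vm_size = 250e9        # 250 GB VM image (assumed)

bps = tcp_throughput_bps(window, rtt)
print(f"~{bps / 1e6:.0f} Mbps per stream")           # ~21 Mbps
print(f"~{transfer_hours(vm_size, bps):.0f} hours")  # ~26 hours
```

Under those assumptions a single stream tops out around 21 Mbps, and one VM takes more than a day to move, which is in the same ballpark as the roughly 30 hours we observed and explains why straight V2V over the WAN was a dead end.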
If you are struggling to sort out decades of intertwined databases and mission-critical applications to move them to a brand-new data center, take heart: you’re not alone. In this blog I’ll discuss our struggles to come up with a migration plan.
As soon as EMC’s Durham Data Center Migration Program to move six petabytes of data and hundreds of applications to our new cloud data center was underway, we initiated the discovery and planning efforts. These work streams ran in parallel to our Architecture Design (Part 1) and our First 90 Days (Part 2) work streams.
I had never migrated a data center before, and I had no idea how complex the effort would be. Discovery? Why would we need to do that? We know what’s running where… right?
Many organizations these days are facing the substantial task of migrating their traditional data centers to new, cloud-enabled data environments to improve efficiency and provide for growing space needs.
As EMC IT learned in our recent migration of six petabytes of data and hundreds of mission critical applications to our new cloud data center, there is something you should do before you even begin the discovery process—invest in a streamlined configuration management system.
This is the second installment of a blog series about the EMC Durham Data Center. Click here to read part one.
In my first blog in this data center blog series, I talked about the challenges of architecting this cloud-optimized, 100 percent virtualized, scalable and sustainable data center. In this second blog, I would like to share some insights from the first 90 days of the project.
EMC IT set an aggressive two-year target to migrate and transform our data center into a private cloud. Every day was critical. To preserve as much schedule as possible for migrating applications, we allotted 90 days from when the facility was completed to stand up our infrastructure at our Cloud Data Center in Durham, NC. We needed to install the network, storage, compute and backup to eventually host more than 350 applications, 2,000 servers and six petabytes of storage. In the old dedicated physical IT world this would have been impossible. In the cloud? We were about to find out.
It was a tall order, but by standardizing on the Vblock architecture and virtualizing applications, our dedicated team of engineers and technicians was able to complete the initial Cloud Data Center build on schedule.
There are big IT projects at every company and in everyone’s career. I was fortunate enough to be a part of the largest IT infrastructure project in EMC’s history. Simple: open a data center, migrate all of the applications, close a data center.
One of our Massachusetts data centers had served EMC and Data General well for decades. However, we were constrained by power, cooling and space. It was also far too close to our other data center to protect EMC from a regional disaster like Hurricane Sandy. EMC selected Durham, North Carolina to build out a new 20,000-square-foot, state-of-the-art data center.
First mover advantage
We’ve written a lot about our Durham Cloud Data Center in the past. We purchased the Durham site in October 2009 and planned to close the near-capacity Massachusetts data center by December 31, 2012. If the migration took longer, we estimated that it would cost EMC millions of dollars in 2013 to extend the lease and staff, power, cool and insure the facility. Three years, no problem—except the Durham facility was a warehouse, not a data center. The facility remodel wouldn’t be ready until October 2010, giving us eight quarters to migrate more than 2,500 servers and 500 applications and a ninth quarter to decommission the Westborough facility.