From adapting energy use to maximizing data consolidation, Big Data (BD) analytics has taken the guesswork out of optimizing the modern data center.
More than ever, the modern data center is a living, changing environment, with new technologies coming in, old technologies being cycled out, and evolving energy efficiency strategies to keep it all humming. We have to make sure we have the space and power to install the latest technology, while we still have the old equipment in place.
Until recently, orchestrating this shifting ecosystem was only partially data-driven; the rest was based on gauging changing needs from past experience. At EMC IT—like most IT organizations—we had long tracked metrics on our data center facilities, including space, power, cooling, humidity, and temperature. And we collected storage data—server utilization, virtual machines, growth trends. But we lacked the tools to process this vast amount of data, and we were never able to aggregate this information into one database.
Despite the emergence of IT as a Service and the rise of self-service catalogues, most IT operations—including EMC’s—have remained largely manual when it comes to filling users’ requests for networking, storage and compute, struggling to keep pace with growing demand. Until now, that is.
EMC IT is in the process of rolling out a new set of tools, based on a combined approach to infrastructure and automation, that will reduce the time it takes to fill customers' infrastructure demands from months to days or even hours.
The new production environment uses EMC’s Federation Enterprise Hybrid Cloud (FEHC) management platform on VCE Vblock™ converged and hyper-converged infrastructure to provide the abstraction of hardware through software. Translation: IT clients will no longer have to come to the IT infrastructure team every time they need a new environment or an additional server. They can self-provision these services using a truly automated portal and with a standardized set of components.
Creating a data protection strategy for your organization is a little bit like selecting the right insurance policy for your home. It isn't the flashiest of endeavors and nobody likes paying those insurance premiums, but when a hurricane rips the roof off your house, you're glad that you took the time to do it right.
Structuring your data protection strategy is not exclusively an IT decision. It's primarily a business decision involving a range of stakeholders (not just IT); IT then provides the products, solutions, and processes to execute that strategy based on the value of the data and the objectives of the business.
Data protection is not a one-size-fits-all process, as we in EMC IT have come to learn. The following are best practices and lessons learned that EMC IT uses to create and maintain our data protection strategy.
EMC IT's ongoing quest to meet the business's need for speed and on-demand infrastructure has entered a new chapter as our IT organization implements a software-defined data center using EMC's Federation Enterprise Hybrid Cloud technology. As we continue to build our infrastructure and services in the cloud, there are several lessons we have learned along the way that will hopefully help your organization on your path to the hybrid cloud.
Like most organizations, EMC IT has virtualized and consolidated our infrastructure, achieved significant cost savings, and continued to drive down provisioning time and increase agility. After this, we used a myriad of tools, software, and scripts to deliver some Infrastructure-as-a-Service (IaaS) capabilities. The introduction of the new EMC Federation Enterprise Hybrid Cloud (FEHC) technology is accelerating our progress toward a software-defined data center by leveraging a fully integrated technology stack with virtual networking, storage, and security, in addition to the virtual compute layer we have been accustomed to for years.
Today, everyone is talking about IT in the cloud, but there still has to be a physical infrastructure on the back end on which to run the cloud. Welcome to EMC’s Durham Data Center.
Our 20,000-square-foot, state-of-the-art facility illustrates the most efficient way to implement the hardware your organization needs to run the cloud. It features one of the largest Vblock environments in existence. Its leading-edge green features demonstrate the savings that can be gained with a creative approach to environmental technology. And finally, our Data Center serves to showcase the full array of EMC's products and solutions in the real world as we "drink our own champagne" in virtualizing, automating, and backing up some 12 petabytes of data.
Our virtual tour of the Durham Data Center gives you a high-level understanding of how our data center works and a glimpse of EMC Cloud computing using Vblock architecture. It features purpose-built Vblocks which run our SAP-based enterprise resource planning (ERP) system and Exchange environment, as well as 100 percent tapeless backup environments built on our Data Domain and Avamar technologies. With tens of thousands of VMs in our data center, our sales staff can tap into Durham to demonstrate products and services in a real-life lab setting.
In my last post, (click here to read Part 6) I explained how we invented a better way to migrate and transform an application either across the room or across the country: build a parallel, virtualized environment, pre-configure and pre-test the new environment and practice the migration. However, nothing is perfect. As we found out, there are still some things you can’t test.
The ESRS 2 migration is probably the pinnacle success story of the entire Durham migration. ESRS 2 connects EMC Customer Service to customers and helps us monitor installed systems, identify problems and connect back to the systems to diagnose and fix problems remotely or through a service request.
The migration team was able to build out a new, entirely virtualized architecture running on Vblock. Performance testing results were outstanding. The new architecture was tested at 4x the current load and ran faster than the pre-migration system. We were able to test and fully document our disaster recovery plans.
In my last post, (click here to read Part 5) I explained how we set up a Symmetrix Remote Data Facility (SRDF) bridge between our old and new data centers that would allow us to use Storage VMotion to transfer VMs and data to our new Private Cloud data center. It worked very well. We could move VMs and data pretty effectively. However, setting them up and getting them to run an application was more of a challenge. We had to roll back one of the first three applications that we tried to migrate; the other two took us a long time to troubleshoot and configure.
The solution to minimize risk and downtime seemed obvious to me. It was just like a technology refresh in the physical world. Build a new environment with all new components and test it. Once all the bugs were worked out then you could synch the data and cutover. Why did I need to move a VM when making another one was just as easy and would provide an opportunity to configure and test it?
With the migration plan completed (click here to read Part 4) for EMC's Durham Data Center, we began the daunting task of the migration. We weren't going to use trucks or airplanes to move the gear. We were going to migrate all the applications and data over the wire. The fact that it really hadn't been done before was a technical challenge that we would just have to overcome.
The first attempt was a straight virtual-to-virtual (V2V) migration over the WAN. We thought, how cool would that be? No downtime, little risk, and we were already well over 50 percent virtualized. It turns out that North Carolina and Massachusetts are too far apart: more than 600 miles, which resulted in 25 milliseconds of latency. The V2V experiment failed. It took nearly 30 hours to move one virtual machine. V2V migration wouldn't work at that distance. It also wasn't a viable solution for the hundreds of physical servers that we were still running.
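The math behind that failure is worth spelling out: a single TCP stream's throughput is capped at roughly the window size divided by the round-trip time, so even a fat pipe crawls when latency is high. Here is a back-of-the-envelope sketch of that effect; the 64 KB TCP window and the 250 GB VM size are illustrative assumptions, not figures from our migration.

```python
# Back-of-the-envelope: why 25 ms of WAN latency cripples a single-stream V2V copy.
# A lone TCP stream's throughput is bounded by window_size / round_trip_time.

def max_tcp_throughput_bytes_per_sec(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on one TCP stream's throughput, ignoring loss and overhead."""
    return window_bytes / rtt_seconds

def transfer_hours(payload_bytes: float, throughput_bps: float) -> float:
    """Hours to move a payload at a given throughput."""
    return payload_bytes / throughput_bps / 3600

RTT = 0.025               # 25 ms round trip over the ~600-mile link
WINDOW = 64 * 1024        # classic 64 KB TCP window (no window scaling) -- assumed
VM_SIZE = 250 * 1024**3   # hypothetical 250 GB virtual machine -- assumed

rate = max_tcp_throughput_bytes_per_sec(WINDOW, RTT)
hours = transfer_hours(VM_SIZE, rate)

print(f"Throughput cap: {rate / 1024**2:.1f} MiB/s")          # ~2.5 MiB/s
print(f"Time to move one 250 GB VM: {hours:.0f} hours")       # ~28 hours
```

Under those assumptions the single stream tops out around 2.5 MiB/s, and one VM takes on the order of a day to move, which is consistent with the roughly 30 hours we observed.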
If you are struggling to sort out decades of intertwined databases and mission critical applications to move them to a brand new data center, take heart, you’re not alone. In this blog I’ll discuss our struggles to come up with a migration plan.
As soon as EMC’s Durham Data Center Migration Program to move six petabytes of data and hundreds of applications to our new cloud data center was underway, we initiated the discovery and planning efforts. These work streams ran in parallel to our Architecture Design (Part 1) and our First 90 Days (Part 2) work streams.
I had never migrated a data center before, and I had no idea how complex the effort would be. Discovery? Why would we need to do that? We know what's running where… right?
Many organizations these days are facing the substantial task of migrating their traditional data centers to new, cloud-enabled data environments to improve efficiency and provide for growing space needs.
As EMC IT learned in our recent migration of six petabytes of data and hundreds of mission critical applications to our new cloud data center, there is something you should do before you even begin the discovery process—invest in a streamlined configuration management system.