How do you cool today’s data centers, which run increasingly high-density, high-performance equipment built to manage exploding amounts of enterprise data? It’s a substantial challenge for data center managers. Fortunately, we at Dell IT have found a way to take the heat off those cooling demands.
After many months of careful experimentation, we recently determined that, using a cold aisle containment approach in our Durham, N.C., data center, we can safely maintain our equipment at 78 degrees F. This is six degrees warmer than the original design threshold of 72 degrees F. The increase means we can now leverage free-air cooling—air circulated from outside rather than mechanically cooled air—in our data center 80 percent of the time instead of 60 percent. (Think of it as opening a window in your house rather than running the air conditioner.) This will cut our cooling costs by 25 percent.
If your organization is struggling with how to keep your enterprise data secure in the cloud, you aren’t alone. The fact is, the modern data center poses some fairly new security challenges and there is no rule book on how to meet them. Even in security, we are learning as we go.
EMC IT is innovating and developing new IT solutions that not only meet our internal customers’ growing data and IT demands but also help us drive improved space utilization and energy efficiencies in our modern data centers.
For example, in our regional data center in Cork, Ireland, we used “hot aisle containment” technology to decrease machine energy consumption by 24 percent. In our Hopkinton Data Center, we increased space efficiency and reduced power consumption to extend the facility’s life by five years. And leveraging IT’s own business analytics tools, we were able to apply deeper, predictive analytics to application and device power usage—driving further efficiencies.
Read more about our Efficient Data Centers and how they further EMC’s commitment to sustainability in EMC’s 2015 Sustainability Report.
From adapting energy use to maximizing data consolidation, Big Data analytics has taken the guesswork out of optimizing the modern data center.
More than ever, the modern data center is a living, changing environment, with new technologies coming in, old technologies being cycled out, and evolving energy efficiency strategies to keep it all humming. We have to make sure we have the space and power to install the latest technology, while we still have the old equipment in place.
Until recently, orchestrating this shifting ecosystem was only partially data-driven; the rest was based on gauging changing needs from past experience. At EMC IT—like most IT organizations—we had long tracked metrics on our data center facilities, including space, power, cooling, humidity and temperature. And we collected capacity data—server utilization, virtual machines, growth trends. But we lacked the tools to process this vast amount of data, and we were never able to aggregate this information into one database.
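To make the aggregation idea concrete, here is a minimal sketch of joining a facility-metrics feed with a capacity-metrics feed into one dataset. The rack IDs, column names and values are hypothetical, invented for illustration—they are not EMC IT’s actual schema:

```python
import pandas as pd

# Hypothetical per-rack facility metrics (power draw, inlet temperature)
facilities = pd.DataFrame({
    "rack_id": ["R01", "R02", "R03"],
    "power_kw": [4.2, 6.8, 3.1],
    "inlet_temp_f": [74, 78, 72],
})

# Hypothetical capacity metrics for the same racks (VM counts, CPU utilization)
capacity = pd.DataFrame({
    "rack_id": ["R01", "R02", "R03"],
    "vm_count": [120, 210, 80],
    "cpu_util_pct": [55, 81, 30],
})

# Join the two feeds on rack ID so cross-cutting efficiency questions
# (e.g., watts consumed per VM) can be answered from one place
combined = facilities.merge(capacity, on="rack_id")
combined["watts_per_vm"] = combined["power_kw"] * 1000 / combined["vm_count"]

print(combined[["rack_id", "watts_per_vm"]])
```

Once the feeds live in one table, questions that previously required manual correlation—which racks are hottest per unit of useful work, where consolidation would save the most power—become simple queries.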
Despite the emergence of IT as a Service and the rise of self-service catalogues, most IT operations—including EMC’s—have remained largely manual when it comes to filling users’ requests for networking, storage and compute, struggling to keep pace with growing demand. Until now, that is.
EMC IT is in the process of rolling out a new set of tools, based on a combined approach to infrastructure and automation, that will reduce the time it takes to fill customers’ infrastructure demands from months to days or even hours.
The new production environment uses EMC’s Federation Enterprise Hybrid Cloud (FEHC) management platform on VCE Vblock™ converged and hyper-converged infrastructure to provide the abstraction of hardware through software. Translation: IT clients will no longer have to come to the IT infrastructure team every time they need a new environment or an additional server. They can self-provision these services using a truly automated portal and with a standardized set of components.
Creating a data protection strategy for your organization is a little bit like selecting the right insurance policy for your home. It isn’t the flashiest of endeavors and nobody likes paying those insurance premiums, but when a hurricane rips the roof off your house, you’re glad you took the time to do it right.
Structuring your data protection strategy is not exclusively an IT decision. It’s primarily a business decision involving a range of stakeholders, with IT providing the products, solutions and processes to execute that strategy based on the value of the data and the objectives of the business.
Data protection is not a one-size-fits-all process, as we in EMC IT have come to learn. The following are best practices and lessons learned that EMC IT uses to create and maintain our data protection strategy.
EMC IT’s ongoing quest to meet business’ need for speed and on-demand infrastructure has entered a new chapter as our IT organization implements a software defined data center using EMC’s Federation Enterprise Hybrid Cloud technology. As we continue to build our infrastructure and services in the cloud, there are several lessons we have learned along the way that will hopefully help your organization on your path to the hybrid cloud.
Like most organizations, EMC IT has virtualized and consolidated our infrastructure, achieved significant cost savings, and continued to drive down provisioning time and increase agility. After this, we used a myriad of tools, software, and scripts to deliver some Infrastructure-as-a-Service (IaaS) capabilities. The introduction of new EMC Federation Enterprise Hybrid Cloud (FEHC) technology is accelerating our progress toward a software defined data center by leveraging a fully integrated technology stack with virtual networking, storage and security, in addition to the virtual compute layer we have been accustomed to for years.
Today, everyone is talking about IT in the cloud, but there still has to be a physical infrastructure on the back end on which to run the cloud. Welcome to EMC’s Durham Data Center.
Our 20,000-square-foot, state-of-the-art facility illustrates the most efficient way to implement the hardware your organization needs to run the cloud. It features one of the largest Vblock environments in existence. Its leading-edge green features demonstrate the savings that can be gained with a creative approach to environmental technology. And finally, our Data Center serves to showcase the full array of EMC’s products and solutions in the real world as we “drink our own champagne” in virtualizing, automating, and backing up some 12 petabytes of data.
Our virtual tour of the Durham Data Center gives you a high-level understanding of how our data center works and a glimpse of EMC Cloud computing using Vblock architecture. It features purpose-built Vblocks which run our SAP-based enterprise resource planning (ERP) system and Exchange environment, as well as 100 percent tapeless backup environments built on our Data Domain and Avamar technologies. With tens of thousands of VMs in our data center, our sales staff can tap into Durham to demonstrate products and services in a real-life lab setting.
In my last post (Part 6), I explained how we invented a better way to migrate and transform an application either across the room or across the country: build a parallel, virtualized environment, pre-configure and pre-test the new environment, and practice the migration. However, nothing is perfect. As we found out, there are still some things you can’t test.
The ESRS 2 migration is probably the pinnacle success story of the entire Durham migration. ESRS 2 connects EMC Customer Service to customers and helps us monitor installed systems, identify problems and connect back to the systems to diagnose and fix problems remotely or through a service request.
The migration team was able to build out a new entirely virtualized architecture running on Vblock. Performance testing results were outstanding. The new architecture was tested at 4x the current load and ran faster than the pre-migration system. We were able to test and fully document our disaster recovery plans.
In my last post (Part 5), I explained how we set up a Symmetrix Remote Data Facility (SRDF) bridge between our old and new data centers that would allow us to use Storage VMotion to transfer VMs and data to our new Private Cloud data center. It worked very well. We could move VMs and data pretty effectively. However, setting them up and getting them to run an application was more of a challenge. We had to roll back one of the first three applications that we tried to migrate; the other two took us a long time to troubleshoot and configure.
The solution to minimize risk and downtime seemed obvious to me. It was just like a technology refresh in the physical world: build a new environment with all new components and test it. Once all the bugs were worked out, you could sync the data and cut over. Why did I need to move a VM when making another one was just as easy and would provide an opportunity to configure and test it?
The opinions and interests expressed on Dell EMC employee blogs are the employees' own and do not necessarily represent Dell EMC's positions, strategies or views. Dell EMC makes no representation or warranties about employee blogs or the accuracy or reliability of such blogs. When you access employee blogs, even though they may contain the Dell EMC logo and content regarding Dell EMC products and services, employee blogs are independent of Dell EMC and Dell EMC does not control their content or operation. In addition, a link to a blog does not mean that EMC endorses that blog or has responsibility for its content or use.