As I noted, we were stalled. While we had achieved significant benefits from adopting a standardized and virtualized infrastructure, we had an operating model and a 2,000-person organization with one foot in traditional IT and one foot in ITaaS. With EMC Services' help, we regrouped, assessed our model's maturity level and created a clearer roadmap to move forward. The next step was creating the workstreams to execute against that roadmap: reshaping our organization's processes and the roles of our people in the new ITaaS world.
Big Data is changing the way IT organizations operate and deliver solutions to the business. It is a contemporary approach that helps business users harness and interpret information to drive greater efficiency, productivity, performance and value for the business. As EMC IT embraces the Third Platform, we are breaking new ground with Big Data analytics to better position the organization to deliver more competitive solutions.
EMC CIO Vic Bhagat (@VicBhagat) addressed this topic and more in a recent interview with the Pivotal Blog, tackling the questions, challenges and opportunities facing both EMC IT and global CIOs. Where can IT organizations begin? How can they drive new behaviors? How should they address internal clients?
By Paul Divittorio — Director of Cloud Infrastructure, EMC IT
Flash technology isn’t just for storing your most critical data anymore. Thanks to all-flash storage arrays with super-efficient, in-line deduplication capabilities, flash can now be the most cost-effective choice for your less critical storage needs as well.
This can be illustrated by two use cases we’ve developed for XtremIO, EMC’s all-flash, solid-state clustered storage system. The first is virtual desktop infrastructure (VDI). I know what you’re thinking: why would you want to use the most expensive storage for one of the least expensive applications, virtual desktops? To provide a consistent desktop experience and, as it turns out, to save money.
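The reason flash can win on cost for VDI comes down to simple arithmetic: in-line deduplication collapses near-identical desktop images, so the effective cost per logical gigabyte falls with the dedup ratio. A minimal sketch of that math, using hypothetical prices and ratios, not EMC figures:

```python
def effective_cost_per_gb(raw_cost_per_gb: float, dedup_ratio: float) -> float:
    """Cost per logical GB once in-line deduplication collapses duplicates."""
    return raw_cost_per_gb / dedup_ratio

# Hypothetical numbers: VDI images are highly redundant, so flash arrays
# with in-line dedup often achieve high ratios on desktop workloads.
flash = effective_cost_per_gb(raw_cost_per_gb=10.0, dedup_ratio=8.0)
disk = effective_cost_per_gb(raw_cost_per_gb=2.0, dedup_ratio=1.0)

print(f"all-flash with dedup: ${flash:.2f}/GB")  # $1.25/GB
print(f"spinning disk:        ${disk:.2f}/GB")   # $2.00/GB
```

With a high enough dedup ratio, the "expensive" medium ends up cheaper per desktop than traditional disk.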
By Dave Martin — Vice President and Chief Security Officer
Technologies such as mobile, social networking, analytics and cloud computing are changing the security landscape, and security technologies are rapidly evolving to address that change.
It’s not just the technology that needs to change, however: security teams need to change as well.
EMC has evolved, and must continue to evolve, its security team to effectively combat the threats of today and tomorrow. The core skills essential to this evolution include business engagement and awareness; a consultative approach; the ability to sell, or “market,” security; and creative control design for the mobile- and cloud-enabled world of tomorrow.
By Doug Graham — Sr. Director, Global Security Organization, EMC IT
Data-hacking hound dogs beware. EMC recently got a little help from Elvis in battling cyber criminals.
The “King” was at the center of an integrated marketing campaign our Global Security Operations ran this spring to encourage IT users to avoid clicking on suspicious email links that could lead to phishing attacks on our company’s data.
The several-week advertising effort featured a videotaped parody of the Elvis Presley song “Suspicious Minds,” in which ITers acted out why users shouldn’t click on “Suspicious Links.” It also featured a security awareness contest.
The campaign resulted in more than double the number of users reporting phishing attempts via suspicious emails. It also substantially increased the number of users going to our security awareness site, which we call FirstLine in recognition of the fact that the actions of IT users are the first line of defense against cyber-attacks.
By Oshry Ben-Harush — Data Scientist Manager, EMC IT
One of the challenges hardware (and software) manufacturers face is estimating the future level of support required to maintain their products. Underestimating support requirements leads to major losses on the support contract, while overestimating hurts the product’s competitive edge.
Future support includes replacements, repairs, and remote and on-site support. To that end, manufacturers develop reliability models for everything from hard and flash drives to cars and aircraft. These models take into account different configuration parameters of the final product and its internal components.
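As an illustration of the kind of estimate such models produce, here is a hedged sketch using a Weibull lifetime model, a common choice in hardware reliability work; the fleet size and parameters below are hypothetical and not drawn from any actual support model:

```python
import math

def weibull_failure_prob(t_years: float, shape: float, scale_years: float) -> float:
    """Probability that a unit fails by time t under a Weibull lifetime model:
    F(t) = 1 - exp(-(t / scale) ** shape)."""
    return 1.0 - math.exp(-((t_years / scale_years) ** shape))

def expected_replacements(fleet_size: int, contract_years: float,
                          shape: float, scale_years: float) -> float:
    """Expected replacements over a support contract, assuming one
    replacement per failed unit and no repeat failures."""
    return fleet_size * weibull_failure_prob(contract_years, shape, scale_years)

# Hypothetical fleet: 100,000 drives on a 3-year contract; shape > 1
# models wear-out, and scale is the characteristic life in years.
print(round(expected_replacements(100_000, 3.0, shape=1.5, scale_years=10.0)))
```

Plugging contract length and fleet size into such a model is what lets a manufacturer price the support contract neither too low nor too high.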
In 2007, Google conducted a large-scale analysis for a subset of its drive population. It utilized an environment containing a large number of disk drives, collected different types of data from these drives to a Big Data store (Google’s Bigtable) and conducted an analysis of the different Key Performance Indicators (KPIs) and their correlation with drive mortality:
Manufacturer, Models and Vintage
Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.)
Contrary to expectations, Google’s researchers found that these KPIs are more useful for predicting trends for a large population than for predicting a single drive failure.
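That population-versus-individual distinction is easy to reproduce on synthetic data. The sketch below uses invented failure rates, not Google’s data, to show a S.M.A.R.T.-style signal that separates group failure rates cleanly even though most flagged drives never fail:

```python
import random

random.seed(0)

# Synthetic fleet: drives with reallocated sectors get a higher annual
# failure probability. All rates here are invented for illustration.
drives = []
for _ in range(50_000):
    has_realloc = random.random() < 0.05          # 5% show reallocations
    p_fail = 0.15 if has_realloc else 0.02        # hypothetical annual rates
    drives.append((has_realloc, random.random() < p_fail))

def failure_rate(population, with_realloc):
    group = [failed for flag, failed in population if flag == with_realloc]
    return sum(group) / len(group)

print(f"with reallocations:    {failure_rate(drives, True):.1%}")
print(f"without reallocations: {failure_rate(drives, False):.1%}")
# The group rates differ sharply, which is useful for fleet forecasting;
# yet most flagged drives survive the year, so the same signal is a
# weak predictor for any single drive.
```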
Making the transformation from the old, traditional IT world to the new IT as a service (ITaaS) world is about more than technology. It involves the much more gradual and, at times, more challenging evolution of your operation’s people and processes into a new operational and organizational structure, while maintaining the old processes and structures that need to exist until the transformation is complete.
This split focus of having one foot in the traditional IT world and one in ITaaS can undermine your transformation journey by adding complexity and uncertainty that conspire to prevent true change from taking root in the organization. The reality is, every IT organization on a transformational journey faces this type of challenge and there’s no need to tackle it alone. You may want to take the time to get some help from someone who’s done it before and can provide some “tough love” to get you over the hump.
By Dave Scheffler — EMC Director of Data Center Services
Today, everyone is talking about IT in the cloud, but there still has to be a physical infrastructure on the back end on which to run the cloud. Welcome to EMC’s Durham Data Center.
Our 20,000-square foot, state-of-the-art facility illustrates the most efficient way to implement the hardware your organization needs to run the cloud. It features one of the largest Vblock environments in existence. Its leading-edge green features demonstrate the savings that can be gained with a creative approach to environmental technology. And finally, our Data Center serves to showcase the full array of EMC’s products and solutions in the real world as we “drink our own champagne” in virtualizing, automating, and backing up some 12 petabytes of data.
Our virtual tour of the Durham Data Center gives you a high level understanding of how our data center works and a glimpse of EMC Cloud computing using Vblock architecture. It features purpose-built Vblocks which run our SAP-based, enterprise resource planning (ERP) system and Exchange environment, as well as 100 percent tapeless backup environments built on our Data Domain and Avamar technologies. With tens of thousands of VMs in our data center, our sales staff can tap in to Durham to demonstrate products and services in a real-life lab setting.
Previously, everyone used the same kind of computers on the same corporate network in the same offices. Alas, those days are gone. Today, we aren’t just defending against denial-of-service attacks; we are vigilantly protecting our companies from more organized, persistent threats that seek to infiltrate our environment and exfiltrate our intellectual property. On the flip side, we must mitigate the risks of a more mobile, global and social workforce that expects its IT capabilities at work to mirror the IT experience it has in its personal life.
Dr. Raphael Cohen — Principal Data Scientist, EMC IT
The first question we ask as data scientists when we approach a new project is: what data does the customer have available? While the answer is sometimes a table or file full of nice numbers just waiting to be ingested by a machine learning classifier, most of the time a big chunk of the information is stored in free-text columns or documents.
As a customer-facing organization, we store information describing EMC’s interactions with clients: some of it structured, such as time to close and problem codes, but also free-text fields such as the problem summary or comments from the customer satisfaction survey. These free-text fields can be used to route service requests accurately to the right support team (improving resolution times and customer satisfaction), to identify burning issues in the customer satisfaction survey, or to spot emerging problems.
Similarly, Sales would like to use a potential customer’s web site in order to categorize that company’s needs and identify products sold to similar companies.
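A minimal sketch of routing service requests over free text, using a toy Naive Bayes classifier; the team labels and training summaries below are invented, and a production system would train on real service-request history rather than four examples:

```python
import math
from collections import Counter, defaultdict

def tokenize(text: str):
    return text.lower().split()

class NaiveBayesRouter:
    """Toy multinomial Naive Bayes with add-one smoothing."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # team -> word frequencies
        self.team_counts = Counter()             # team -> training examples

    def train(self, examples):
        for text, team in examples:
            self.team_counts[team] += 1
            self.word_counts[team].update(tokenize(text))

    def route(self, text):
        vocab = len({w for c in self.word_counts.values() for w in c})
        def score(team):
            total = sum(self.word_counts[team].values())
            s = math.log(self.team_counts[team])
            for w in tokenize(text):
                s += math.log((self.word_counts[team][w] + 1) / (total + vocab))
            return s
        return max(self.team_counts, key=score)

router = NaiveBayesRouter()
router.train([
    ("array reports degraded lun after firmware upgrade", "storage"),
    ("disk failure alert on storage array", "storage"),
    ("replication session dropped between sites", "replication"),
    ("remote replication lag exceeds threshold", "replication"),
])
print(router.route("lun offline after upgrade"))  # "storage"
```

The same pattern, with richer features and far more training data, underlies the request-routing and survey-mining use cases above.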
By Neil Thibodeau — Senior Director, EMC IT Business Management
Becoming financially transparent and allowing IT customers to see and control what they invest in IT services is a critical part of transforming your IT operation into an IT-as-a-Service model. But those financial details are only as good as the data they are drawn from. Data Quality Management is foundational to building an ITaaS model, as well as to maintaining credible financial transparency as your IT operation evolves and matures.
EMC IT began focusing on Data Quality Management back in 2011, when we pursued financial transparency as part of our ITaaS transformation. The goal was to transition our IT operation from a traditional, centralized, cost-center-based IT budget—where users had little or no information on the cost and value of what they consumed—to a financially transparent one providing increased detail on users’ IT spend.
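Financial transparency ultimately rests on multiplying clean consumption data by unit rates, which is why data quality matters so much. A minimal showback sketch, with hypothetical services, rates and usage rather than EMC’s actual rate card:

```python
# Unit rates in dollars; both the services and the prices are invented.
rates = {"vm_hours": 0.05, "storage_gb_months": 0.10}

# Metered consumption per business unit (hypothetical figures).
usage = {
    "sales":   {"vm_hours": 12_000, "storage_gb_months": 5_000},
    "finance": {"vm_hours": 3_000,  "storage_gb_months": 20_000},
}

def showback(unit_usage):
    """Itemized cost for one business unit: quantity times unit rate."""
    return {svc: qty * rates[svc] for svc, qty in unit_usage.items()}

for bu, consumed in usage.items():
    bill = showback(consumed)
    print(bu, bill, "total:", round(sum(bill.values()), 2))
```

Note that a single bad consumption record flows straight into the bill, which is the practical argument for making Data Quality Management foundational.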
When it comes to transforming your traditional IT operation, convincing business users to embrace your new cloud architecture can be an uphill battle.
As I noted in my last blog, EMC IT initially provided infrastructure to our business users free of charge and stepped up our guaranteed service levels to convince them to adopt our new infrastructure. Virtualization and multi-tenancy were creating tremendous cost and efficiency benefits.
Nonetheless, we faced an interesting phenomenon: even though our infrastructure was free, some business units were still opting to work around us and spend real money on outside services. It really caused us to pause and ask, “Why is this happening?”
By Dr. Alon Grubshtein, Principal Data Scientist — EMC IT
This is a great time to be a data scientist; we feel a bit like rock stars, with fans always trying to catch some private time with us. While there is no clear definition of what a data scientist is (see the related blog or the diagram of the DS skill set), our take on this role is quite simple:
Work with stakeholders to surface high-impact, business-related questions
Find the means to answer these questions
This blog aggregates our collective experiences as members of EMC’s Corporate IT Data-Science-as-a-Service (DSaaS) team. Our team has been active since 2012, providing Data Science (DS) services to different business units as part of EMC IT’s transformation to an agile and innovative IT-as-a-Service model.
Although we aimed for a technical blog, we thought that the first post should provide a broader context to the DSaaS offering and it will, therefore, be dedicated to the process of innovating and driving data science projects in the corporate environment.
As data growth continues unabated, with users demanding performance across multiple devices, file sharing has become an indispensable part of successful collaboration and management. To support this growth, EMC IT embraced Syncplicity to provide our global workforce of more than 60,000 people with a fast, simple, reliable service to share the content they use every day. Syncplicity has proven to be a game changer and now comes out of the box for every new EMCer.
With speed and efficiency, EMC IT migrated nearly the entire enterprise onto Syncplicity within mere months. How? Watch below as EMC Chief Information Officer Vic Bhagat (@VicBhagat) discusses the rollout of Syncplicity.
Want more information? Watch EMC Chief Security Officer Dave Martin address the security elements behind Syncplicity. You can also read the following content that further details Syncplicity’s place within the EMC service catalog.
By Barbara Latulippe — Sr. Director, Office of Architecture and Innovation
Like most organizations these days, your company has probably realized the crucial role Data Quality plays in driving operational efficiencies and leveraging business and predictive analytics.
Be aware, however, that managing master data and driving data quality is a long-term journey, best approached in phases based on your needs. If your organization is just starting out, it usually takes three to five years to reach a mature, optimized model, so I encourage you to take pragmatic steps and focus on your most important business priorities. It’s not a one-size-fits-all endeavor.
At EMC, we are in the midst of evolving from a managed to an optimized Master Data Management (MDM) and data quality model. Over the past several years, we established and implemented that model for our customer data domain and are in the process of applying that same approach to additional domains, such as contacts and vendors.
Along the way, we have learned some lessons that might help your organization with its data quality journey. These lessons include:
A big mistake companies make as they work to transform their IT operation is thinking that they can achieve that change without making some bold, big bets.
CIOs and IT leaders I talk with often reason, “if we make a bunch of little incremental steps, we’ll get to where we want to get.”
That is absolutely not the case, because when you make small incremental steps, in most cases you’re ruling by consensus; you’re letting the silos dictate the strategy and the direction. And in the end, there are so many vested interests in traditional IT organizations that a CIO or a VP of infrastructure responding to consensus can end up missing the bigger picture.
You’re engineering complexity into the system, as opposed to taking complexity out. The bigger picture is that there is an entirely different model that we need to get to as quickly as we can, and to do that requires some bold moves.
By Stephen Doherty — Consultant IT Project Manager
In my last post (Part 6), I explained how we devised a better way to migrate and transform an application, whether across the room or across the country: build a parallel, virtualized environment; pre-configure and pre-test the new environment; and practice the migration. However, nothing is perfect. As we found out, there are still some things you can’t test.
The ESRS 2 migration is probably the pinnacle success story of the entire Durham migration. ESRS 2 connects EMC Customer Service to customers and helps us monitor installed systems, identify problems and connect back to the systems to diagnose and fix problems remotely or through a service request.
The migration team built out a new, entirely virtualized architecture running on Vblock. Performance testing results were outstanding: the new architecture was tested at four times the current load and ran faster than the pre-migration system. We were also able to test and fully document our disaster recovery plans.
By Vic Bhagat — EMC Chief Information Officer @VicBhagat
With wheels up and the neon lights of Las Vegas behind me, I reflected on two days spent with nearly 80 global CIOs at EMC’s fourth annual CIO Summit. Whether in our panel discussions, collaborative breakout sessions, or during the networking breaks, we tackled a variety of timely topics for CIOs.
Of course, it would be overly ambitious to say we collectively solved all that ails CIOs because we have just scratched the surface. However, faced with pressure to provide our businesses and users with agile, elastic and contemporary IT services, we only saw an opportunity to unlock more value. Here are some takeaways from the Summit conversation:
The reality is if your IT organization is working to transform into an IT-as-a-Service model to meet changing user demands, you didn’t just wander onto that path. Transformation is typically not something you do when everything is good… it’s a response to disruptive influences that make the status quo increasingly untenable.
To understand what’s driving today’s IT transformation groundswell, you need only look at the escalating pressures facing the CIO in a traditional IT operation. On one side, external IT service providers promote standardized offerings with friction-free consumption experiences and pay-by-the-drink pricing, selling directly to the lines of business in competition with corporate IT. On the other side, many CIOs contend with Corporate Finance models that want to treat corporate IT like a regulated monopoly, rationing the supply of IT in order to keep total IT costs in check.
While corporate IT may have once been somewhat of a monopoly within an enterprise, those days are long gone. Increasingly tech savvy business users, empowered by consumerization trends and an explosion in IT services offered from the public cloud, are finding alternatives to corporate IT. They perceive IT as too slow, too expensive, too restrictive and too rooted in traditional thinking.
By Darryl Smith — Chief Database Architect, EMC IT
First off, my apologies for delaying the last part of this four-part blog for so long. I have been building a fully automated application-platform-as-a-service product for EMC IT that lets us deploy entire infrastructure stacks in minutes, all fully wired, protected and monitored, but that topic is for another blog.
In my last post, Best Practices For Virtualizing Your Oracle Database With VMware, the best practices were all about the virtual machine itself. This post will focus on VMware’s virtual storage layer, called a datastore. A datastore is storage mapped to the physical ESX servers onto which a VM’s LUNs, or disks, are provisioned. This is a critical component of any virtual database deployment, as it is where the database files reside. It is also a silent killer of performance, because there are no metrics that will tell you that you have a problem, just unexplained high I/O latencies.
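One crude way to surface those unexplained latencies is to sample per-datastore I/O latency periodically and flag outliers. A hedged sketch with made-up sample data; in practice the numbers would come from ESX performance counters (for example, esxtop exports or the vSphere APIs), and the 20 ms threshold is only an illustrative starting point:

```python
from statistics import mean

# Hypothetical per-datastore latency samples, in milliseconds.
latency_samples_ms = {
    "ds_oracle_data": [4.1, 5.0, 3.8, 4.4],
    "ds_oracle_redo": [28.0, 31.5, 26.2, 30.1],
    "ds_general":     [9.7, 11.2, 8.9, 10.4],
}

THRESHOLD_MS = 20.0  # average latency above this is worth investigating

def high_latency_datastores(samples, threshold=THRESHOLD_MS):
    """Return the datastores whose average sampled latency exceeds the threshold."""
    return sorted(ds for ds, vals in samples.items() if mean(vals) > threshold)

print(high_latency_datastores(latency_samples_ms))  # ['ds_oracle_redo']
```

Watching a trend like this over time turns the “silent killer” into something you can at least alert on before database users feel it.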