
Managing next-generation IT infrastructure

The days of building to order are over. The time is ripe for an industrial revolution. James M. Kaplan, Markus Löffler, and Roger P. Roberts The McKinsey Quarterly, Web exclusive, February 2005

In recent years, companies have worked hard to reduce the cost of the IT infrastructure—the data centers, networks, databases, and software tools that support businesses. These efforts to consolidate, standardize, and streamline assets, technologies, and processes have delivered major savings. Yet even the most effective cost-cutting program eventually hits a wall: the complexity of the infrastructure itself.

The root cause of this complexity is the build-to-order mind-set traditional in most IT organizations. The typical infrastructure may seem to be high tech but actually resembles an old-fashioned automobile: handmade by an expert craftsperson and customized to the specifications of an individual customer. Today an application developer typically specifies the exact server configuration for each application and the infrastructure group fulfills that request. The result: thousands of application silos, each with its own custom-configured hardware, and a jumble of often incompatible assets that greatly limit a company's flexibility and time to market. Since each server may be configured to meet an application's peak demand, which is rarely attained, vast amounts of expensive capacity sit unused across the infrastructure at any given time. Moreover, applications are tightly linked to individual servers and storage devices, so the excess capacity can't be shared.

Now, however, technological advances—combined with new skills and management practices—allow companies to shed this build-to-order approach. A decade into the challenging transition to distributed computing, infrastructure groups are managing client-server and Web-centered architectures with growing authority. Companies are adopting standardized application platforms and development languages. And today's high-performance processors, storage units, and networks ensure that infrastructure elements rarely need hand-tuning to meet the requirements of applications.

In response to these changes, some leading companies are beginning to adopt an entirely new model of infrastructure management—more off-the-shelf than build-to-order. Instead of specifying the hardware and configuration needed for a business application ("my network-attached storage box . . ."), developers specify a service requirement ("high-speed scalability . . ."); the infrastructure group then defines a catalog of standard products and delivers these products in optimal fashion (Exhibit 1). As product orders roll in, a factory manager monitors the infrastructure for capacity-planning and sourcing purposes.


With this model, filling an IT requirement is rather like shopping by catalog. A developer who needs a storage product, for instance, chooses from a portfolio of options, each described by service level (such as speed, capacity, or availability) and priced according to the infrastructure assets consumed (say, $7 a month for a gigabyte of managed storage). The system's transparency helps business users understand how demand drives the consumption and cost of resources.
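A minimal sketch of how such catalog-based chargeback might work follows; the product names, the server and network rates, and the sample order are invented for illustration, and only the $7-per-gigabyte managed-storage figure comes from the example above.

    # Illustrative chargeback for catalog-style infrastructure products.
    # Rates other than the $7/GB storage figure are assumptions.
    CATALOG = {
        "managed_storage_gb": 7.00,   # $ per gigabyte per month (from the text)
        "standard_server": 450.00,    # $ per server instance per month (assumed)
        "network_port": 25.00,        # $ per connected port per month (assumed)
    }

    def monthly_charge(consumption):
        """Price a developer's order: product name -> units consumed this month."""
        return sum(CATALOG[product] * units
                   for product, units in consumption.items())

    # A developer ordering 200 GB of managed storage and two standard servers:
    order = {"managed_storage_gb": 200, "standard_server": 2}
    print(monthly_charge(order))  # 200*7 + 2*450 = 2300.0

The point of the transparency is visible in the arithmetic: each line of the bill traces directly to a unit of demand, so business users can see how their choices drive cost.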

Companies that make the transition gain big business benefits. By reducing complexity, eliminating redundant activity, and boosting the utilization of assets, they can make their infrastructure 20 to 30 percent more productive—on top of the benefit from previous efficiency efforts—thereby providing far greater output and flexibility. Even larger savings can be achieved by using low-cost, commodity assets when possible. Developers no longer must specify an application's technical underpinnings and can therefore focus on work that delivers greater business value; the new model improves times to market for new applications.

Nevertheless, making this transition calls for major organizational changes. Application developers must become adept at forecasting and managing demand so that, in turn, infrastructure groups can manage capacity more tightly. Infrastructure groups must develop new capabilities in product management and pricing as well as introduce new technologies such as grid computing and virtualization.1 As for CIOs, they must put in place a new model of governance to manage the new infrastructure organization.

The road forward

Deutsche Telekom knows firsthand the challenges involved: over 18 months, hoping to balance IT supply and demand, it implemented this new infrastructure-management model at two divisions (see sidebar, "Next-generation infrastructure at Deutsche Telekom"). What the company had before, like most, was a landscape of application silos. Today accurate forecasts of user demand are critical, so newly minted product managers must take a horizontal view, across applications, to assess the total needs of the business and create the right products. They must then work closely with infrastructure teams to align supply—infrastructure assets such as hardware, software, and storage—with demand.

In the past, employees of the infrastructure function were order takers. Now they can be more entrepreneurial, choosing the mix of hardware, software, and technology that optimizes the infrastructure. To keep costs low, they can phase in grids of low-end servers, cheaper storage disks, and other commodity resources. Factory managers now focus on automating and streamlining delivery. Although the divisions didn't radically change their organizational or reporting structures, IT governance now seeks to ensure that product and service levels are consistent across business units in order to minimize costs and to improve the infrastructure's overall performance.

What we've seen at Deutsche Telekom and other companies suggests that creating a next-generation infrastructure involves action on three fronts: segmenting user demand, developing productlike services across business units, and creating shared factories to streamline the delivery of IT.

Segmenting user demand

Large IT organizations support thousands of applications, hundreds of physical sites, and tens of thousands of end users. All three of these elements are critical drivers of infrastructure demand: applications require servers and storage, sites need network connectivity, and users want access to desktops, laptops, PDAs, and so forth. To standardize these segments, an IT organization must first develop a deep understanding of the shape of current demand for infrastructure services and how that demand will most likely evolve. Then it needs to categorize demand into segments (such as uptime, throughput, and scalability) that are meaningful to business users.

When grouped in this way, most applications fall into a relatively small number of clusters. A pharmaceutical manufacturer, for instance, found that most of a business unit's existing and planned applications fell into one of five categories, including sales force applications that need around-the-clock support and off-line availability and enterprise applications that must scale up to thousands of users and handle batch transactions efficiently. In contrast, a typical wholesale bank's application portfolio has more segments, with a wider range of needs. Some applications—such as derivatives, pricing, and risk-management tools—must execute computation-intensive analyses in minutes rather than hours. Funds-transfer applications allow for little or no downtime; program-trading applications must execute transactions in milliseconds or risk compromising trading strategies.

Although simple by comparison, the needs of physical sites and user groups can be categorized in a similar way. One marketing-services company that evaluated its network architecture, for example, segmented its sites into offices with more than 100 seats, those with 25 to 100, and remote branches with fewer than 25. A cable systems operator divided its users into groups such as senior executives, office employees, call-center agents, and field technicians.

Most companies find that defining the specific infrastructure needs of applications, sites, and users is the key challenge of segmenting demand. Major issues include the time and frequency of need, the number of users, the amount of downtime that is acceptable, and the importance of speed, scalability, and mobility.
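One hypothetical way to make these criteria operational is to record each application's needs as structured attributes and assign it to a segment by rule. The segment names and thresholds below are invented for the sketch; only the attribute list follows the text.

    # Illustrative demand segmentation: bucket applications by their stated
    # needs. Segment names and thresholds are assumptions, not from the article.
    from dataclasses import dataclass

    @dataclass
    class AppDemand:
        users: int                   # number of end users
        max_downtime_min_month: int  # acceptable downtime per month, minutes
        latency_sensitive: bool      # must respond in milliseconds?
        needs_offline: bool          # must work disconnected (e.g., sales force)?

    def segment(app: AppDemand) -> str:
        if app.latency_sensitive:
            return "low-latency transactional"
        if app.max_downtime_min_month == 0:
            return "continuous availability"
        if app.needs_offline:
            return "mobile / offline-capable"
        if app.users > 1000:
            return "large-scale enterprise"
        return "standard business application"

    # A funds-transfer application that tolerates no downtime:
    print(segment(AppDemand(users=300, max_downtime_min_month=0,
                            latency_sensitive=False, needs_offline=False)))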

Standardizing products

Once the infrastructure group has assessed current and future demand, it can develop a set of productlike, reusable services for three segments: management and storage products for applications, access products such as desktops and laptops for end users, and network-access products for various sites. For each of these three product lines, the group must then make a series of decisions at both the portfolio and the product level. At the portfolio level, it has to make decisions about the scope, depth, and breadth of product offerings, with an eye toward optimizing resources and minimizing costs. Exceptions must be detailed up front. The group may decide, for example, against offering products to support applications with stringent requirements, such as very-low-latency processing; these applications may be better built to order. Other applications, such as legacy ones, may be better left outside the new model if they're running well and can't easily be ported to new hardware. The group should also decide how to introduce new technologies and when to migrate existing applications that are easier to move.

At the product level, the group must define the features, service levels, and price of each product. For each application support product, to give one example, it will be necessary to specify a programming language, an acceptable level of downtime, and a price for infrastructure usage. That price, in turn, depends on how the group decides to charge for computing, storage, processor, and network usage. The group has to consider whether its pricing model should offer discounts for accurate demand forecasts or drive users to specific products through strategic pricing.
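A product definition in this spirit might bundle features, service levels, and a pricing rule, including the forecast-accuracy discount the text mentions. Everything below, including the 5 percent discount and the accuracy threshold, is an assumed illustration rather than a figure from the article.

    # Illustrative product-level definition: features, service levels, price,
    # with a discount for accurate demand forecasts. All values are assumptions.
    from dataclasses import dataclass

    @dataclass
    class InfraProduct:
        name: str
        language: str            # supported programming language
        max_downtime_pct: float  # acceptable downtime, percent per year
        base_price: float        # $ per unit of usage per month

        def price(self, units: float, forecast_accuracy: float) -> float:
            """Charge for usage; forecasts within 10% of actuals earn 5% off."""
            discount = 0.95 if forecast_accuracy >= 0.90 else 1.00
            return self.base_price * units * discount

    hosting = InfraProduct(
        name="standard application hosting",
        language="Java",
        max_downtime_pct=0.5,
        base_price=7.00,  # e.g., $7 per GB of managed storage per month
    )
    print(hosting.price(units=200, forecast_accuracy=0.93))  # 1330.0

Encoding the discount in the price function shows how strategic pricing can steer behavior: users who forecast well pay less, which in turn lets the factory plan capacity more tightly.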

Looking forward, companies may find that well-defined products and product portfolios are the single most important determinant of the infrastructure function's success. Developers and users may rebel if a portfolio offers too few choices, for instance, but a portfolio with too many won't reap the benefits of scale and reuse. Good initial research into user needs is critical, as it is for any consumer products company.

The supply side: Creating shared factories

The traditional build-to-order model limits the infrastructure function's ability to optimize service delivery. Delivery has three components: operational processes for deploying, running, and supporting applications and technologies; software tools for automating these operational processes; and facilities for housing people and assets.

At most companies, variations in architecture and technology make it impossible to apply repeatable processes across systems. This problem hinders efficiency and automation and restricts the amount of work that can be performed remotely in low-cost locations, thus limiting the scope for additional cost savings.

In the next-generation infrastructure model, however, application developers specify a service need but have no input into the underlying technologies or processes chosen to meet it. The application may, for instance, require high-speed networked storage, but the developer neither knows nor cares which vendor provides the storage media. This concept isn't new—consumers who have call waiting on their home telephone lines don't know whether the local carrier has a Lucent Technologies or Nortel Networks switch at its closest central office. Because the infrastructure function can now choose which software technologies, hardware, and processes to use, it can rethink and redesign its delivery model for optimal efficiency. Using standardized and documented processes, it can start developing an integrated set of software tools to automate its operations.
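The separation the text describes, in which the developer states a service need and the infrastructure group chooses the vendor behind it, can be sketched as an interface that hides the implementation. The class and method names here are hypothetical.

    # Illustrative service abstraction: the developer codes against a storage
    # service level, never a vendor. Class and method names are hypothetical.
    from abc import ABC, abstractmethod

    class NetworkedStorage(ABC):
        """What the developer sees: a service level, not a product brand."""
        @abstractmethod
        def write(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def read(self, key: str) -> bytes: ...

    class VendorAStorage(NetworkedStorage):
        """One possible backing implementation, chosen by the infrastructure
        group; it could be swapped for another vendor with no developer changes."""
        def __init__(self):
            self._blobs = {}
        def write(self, key, data):
            self._blobs[key] = data
        def read(self, key):
            return self._blobs[key]

    def provision_high_speed_storage() -> NetworkedStorage:
        # The factory decides which vendor fulfils the "high-speed" product.
        return VendorAStorage()

    store = provision_high_speed_storage()
    store.write("record-001", b"payload")
    print(store.read("record-001"))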

