WHEN TO MURDER YOUR MAINFRAME
That old computer system is too inflexible to serve many future needs, but its data and customized programs are too valuable to toss. What to do?
By Peter Nulty

(FORTUNE Magazine) – SOON AFTER you switch on a new computer, it starts to sprout cables like branches and peripherals like leaves. Time-lapse photography would show how quickly the typical personal computer can strangle a desk with new growth and burst into malevolent bloom like a rogue vine in a sci-fi flick. The reason, of course, is that hardware -- from the smallest PC to the mightiest mainframe -- is easy to upgrade quickly and productively. Need a disk with more storage? A faster printer? Just plug 'em in. When that doesn't satisfy, you can always move up to a more powerful version of the same technology. A 486-based PC easily supplants a 386; an ES/9000 mainframe from IBM takes over for an ES/3090.

For many companies, the process used to be routine, if expensive. When the ''glass house'' -- the centrally managed department that operates the mainframes -- ran out of capacity, it usually ordered IBM's newest model and either swapped it for the old one or plugged it in alongside. Alas, a day comes when upgrading reaches a limit. Add-ons no longer add much, and bigger processors cost more than they're worth. Then your system has hit a wall. It's time to start all over.

Unfortunately, junking the old computer for the latest miracle of technology is increasingly complicated and risky, especially for corporations. Indeed, the survival of some companies may depend on how well they handle this transition. ''Anyone who bungles the transfer of 'mission critical' data, such as a utility's records of customer power use and billing, will sink,'' warns an executive at Baltimore Gas & Electric.

In the past few years mainframes everywhere have been hitting the wall hard. Small computers connected to one another in local area networks (LANs) will probably replace many of them eventually. But switching to networks from mainframes, sometimes called downsizing, is proving slow and treacherous. The change sometimes shakes the corporate status quo severely, setting off bloody battles for turf and power. What is this wall that can cause such turmoil? Which paths lead around it? How can you make the passage safely?

The wall is the mainframe's inflexibility in the face of accelerating change. For many businesses, information processing is becoming the major factor in competitive advantage. It is key to such new-wave management techniques as just-in-time inventory, total quality management, and process reengineering. In some financial service companies, information is the product, and computers are the assembly line. Insurers, for instance, craft new policies on computers, and banks use the machines to create sophisticated derivative financial instruments like swaps. To compete in a world where speed is paramount, designers who use computers to develop products or services must be able to program and reprogram their machines quickly -- lest the competition beat them to market.

Increasingly, corporate computing will have to zig, zag, and then turn on a dime. Mainframes aren't good at that. Reprogramming mainframes to spit out information in new formats can be frightfully time-consuming and expensive. Take customer service departments in banks. Traditionally, when you called with questions about multiple accounts -- mortgages, checking accounts, trusts, and the like -- a customer representative might reply, ''I'll look up your files and call you back tomorrow.'' The customer rep then had to query several databases, sometimes in different divisions of the bank, to assemble a complete record of your accounts.
Now, in a race to serve customers better, banks are speeding up the process so that when you telephone, a bank representative can immediately summon on a desktop terminal a thorough tally of all your accounts, no matter how many and varied. Teaching the old mainframe that simple new trick can take as much time as retooling a Rust Belt auto plant. Says John Hagel, a principal at McKinsey & Co., the management consultants: ''Typically the service department will ask the glass house to carry out the project, and they will be told by the keepers of the mainframe in the glass house that it will cost $2 million to $3 million and take two years. By the time the changeover is done, some competing bank will have found a better way.''

MANY THINGS make mainframes unwieldy: Their operating systems are proprietary. Their applications software has been elaborately custom-tailored over the years by systems engineers who have added bells upon bells upon whistles at the request of company executives. The customized programming is often so poorly described in the records that engineers who come in years later and need to change it aren't always sure what they are dealing with. ''Upper management is beginning to see the information systems department as an obstacle to nimbleness,'' says David Litwack, president of Powersoft Corp., a company in Burlington, Massachusetts, that makes software for networks.

Networks offer the flexibility mainframes usually lack. They take many forms but generally consist of ''clients,'' a group of PCs or workstations tied together so they can swap data or messages, and ''servers,'' one or more computers that hold databases for the network along with the programs that manage it. These client-server networks respond to users of information, such as customer service representatives, more quickly than mainframes usually do, in part because they are not burdened with the legacy of years of accumulated out-of-date programming. They are also easier to reprogram. Programming projects that take years and up to 100 people to accomplish on a mainframe can often be done in months or even weeks by five to 15 people working with a network, according to Robert Franz, a senior vice president at Booz Allen & Hamilton. Because networks are made up of discrete standardized parts, they are easily upgraded or enlarged by adding new personal computers for additional users. Says Glover Ferguson, director of research for Andersen Consulting in Chicago: ''LANs can be expanded one fanny at a time.''

THE PRESSURE IS ON. Data about the full scope of networking are sketchy, but it appears to be growing rapidly. Forrester Research Inc. of Cambridge, Massachusetts, a consulting company that specializes in the field, queried 50 companies on the FORTUNE 500 lists of the largest industrial and service corporations about how many people in each company were connected by terminals to IBM mainframes. In 1991, only 14% said the number was falling. By 1993, 64% were reporting declines. The reason? ''PCs, LANs, and client servers, no question about it,'' says Janet Hyland, a director of Forrester. Her company estimates that the market for client-server system software will grow from $1.4 billion in 1991 to $20.8 billion in 1996. Wall Street has taken notice. Powersoft is a pure client-server play, because all its software is targeted at that market. When the company went public in February, on the first day of trading the stock opened at $20 a share and closed at $39. Recently it traded around $34.
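For readers who like to see the client-server arrangement described above in miniature, here is a rough sketch in Python. It is only an illustration, not the software any of these companies uses: the customer, the account data, and the port handling are all invented. A ''server'' process holds a small database; a ''client'' routine, standing in for a service rep's desktop PC, asks it for a consolidated record.

    # Illustrative only: a toy client-server exchange. The account data,
    # customer name, and field names are invented for this sketch.
    import json
    import socket
    import threading

    # The "server" side holds the shared database and answers queries.
    ACCOUNTS = {
        "smith": [
            {"type": "checking", "balance": 1200.00},
            {"type": "mortgage", "balance": 87500.00},
            {"type": "trust", "balance": 45000.00},
        ],
    }

    def serve_once(listener):
        """Accept one client connection and return a consolidated record."""
        conn, _ = listener.accept()
        with conn:
            customer = conn.recv(1024).decode().strip().lower()
            conn.sendall(json.dumps(ACCOUNTS.get(customer, [])).encode())

    def query_server(host, port, customer):
        """The 'client' side: a desktop PC asking for all of a customer's accounts."""
        with socket.create_connection((host, port)) as conn:
            conn.sendall(customer.encode())
            return json.loads(conn.recv(4096).decode())

    if __name__ == "__main__":
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("127.0.0.1", 0))   # let the OS pick a free local port
        listener.listen(1)
        port = listener.getsockname()[1]
        threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
        print(query_server("127.0.0.1", port, "Smith"))
        listener.close()

A real bank network would add security, transaction handling, and far larger databases, but the division of labor -- data and its management on the server, requests and display on the client -- is the same.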
If you are feeling the urge to network at your company, remember that the technology is young and evolving rapidly. The costs are not fully understood. Remember too that your aging mainframe may not be completely obsolete -- yet. Consider the following strategies devised by those who have gone before:

-- Go straight to the promised LAN. When Paul Watz, director of information technology for Motorola's computer group, was told to cut the cost of management information systems (MIS) from 3.7% of sales to 1%, he embraced networks with a passion. Over the past two years he has been shifting work from two IBM 4381 mainframes to three local area networks consisting of over 1,000 computers and workstations, all connected to 25 to 30 minicomputer servers. This fall the mainframes will be unplugged and carted off. Watz reports encouraging savings, chiefly in maintaining the hardware and software. His mainframes were each costing roughly $30,000 a month to maintain and program vs. $10,000 for the equivalent computing power in network form. His MIS costs are down to 1.2% of sales.

Last October, Hellene Runtagh became General Electric's first CIO -- chief information officer, head of all data processing and information systems. She says anyone contemplating downsizing should proceed by considering the information on mainframes as assets that may -- or may not -- increase in value when run on fast, flexible networks. (Massive and repetitive computing chores -- such as maintaining the payroll -- may best be left on a mainframe.) If putting a database on a network would improve overall financial performance, do it. If not, let it be.

GE's medical division put maintenance and repair records for its customers' equipment -- such as magnetic resonance imaging machines -- on a wide area network that spans North America, Europe, and Asia. The machines themselves are also on the network so that maintenance engineers at service headquarters in Milwaukee can query the diagnostic electronic sensors of a broken machine in Los Angeles, pull its repair records from computers in Milwaukee and St. Louis, and leave for California within hours taking along the necessary parts. The network does in hours what took days with a mainframe and no network. Runtagh estimates that ten years ago 95% of GE's computing was on mainframes, but by 1995 under 50% will be.

-- Try halfway measures. Many companies don't want to leap into networks directly from their mainframes. Moving vital computing chores such as inventory management too quickly can be disastrous if the new system contains serious bugs. A more cautious alternative is an intermediate strategy often called front-ending, which uses special software programs, in effect, to translate the mainframe's messages into forms usable by PCs. This software usually resides in a minicomputer parked, figuratively speaking, in front of the mainframe. Firms like Booz Allen and McKinsey recommend front-ending strategies to many of their clients with large mainframe systems.

Bernard McGarrigle, manager of emerging technologies at First Bank System in Minneapolis, chose this path when he was given a deadline of only a few months to whip the bank's databases into new shapes. First Bank's service reps needed to see customers' consolidated records, but the data were tricky to assemble: In one database, for example, a customer's name might appear last name first, while in another the first name came first. To the mainframe, there were two different customers. McGarrigle loaded onto the mainframe software made by Early Cloud & Co. in Newport, Rhode Island. Its programs can search mainframe databases for relevant chunks of information and pull the pieces together even if a customer's name is not spelled consistently.

Eventually McGarrigle will convert First Bank to LANs. He chose front-ending as an interim step partly because he needed results fast; the front-end computer was up and running in nine months vs. 18 months or longer for a network. Also, he wanted to test LANs in noncritical areas before committing the bank to them for sensitive applications.
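What that kind of front-end matching involves can be pictured with a short sketch. This is not Early Cloud's code -- just an invented illustration, under the assumption that one database stores names last name first and another first name first, of how a program might reduce both to a single customer key.

    # Illustrative sketch only; the record formats and accounts are invented.
    def normalize_name(raw):
        """Reduce 'Smith, Mary' and 'Mary Smith' to the same key: ('mary', 'smith')."""
        raw = raw.strip().lower()
        if "," in raw:                                   # 'last, first' style
            last, first = (p.strip() for p in raw.split(",", 1))
        else:                                            # 'first last' style
            parts = raw.split()
            first, last = " ".join(parts[:-1]), parts[-1]
        return (first, last)

    def consolidate(*databases):
        """Merge account records from several databases under one customer key."""
        merged = {}
        for db in databases:
            for record in db:
                key = normalize_name(record["name"])
                merged.setdefault(key, []).append(record["account"])
        return merged

    # Two hypothetical divisional databases that store the same customer differently.
    mortgages = [{"name": "Smith, Mary", "account": "mortgage #1041"}]
    checking = [{"name": "Mary Smith", "account": "checking #7730"}]
    print(consolidate(mortgages, checking))
    # {('mary', 'smith'): ['mortgage #1041', 'checking #7730']}

Real matching software must also cope with nicknames, typos, and middle initials, which is why such products exist in the first place.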

-- Tweak the mainframe. When the mainframes at Baltimore Gas & Electric hit the wall, the company's information systems engineers lowered their shoulders, grunted, and shoved the problem a few years into the future. The strategy they used buys BG&E time and saves money while the company moves cautiously toward networking.

Joseph E. Hunter, manager of the information systems department, recalls what happened in 1990: ''Load was increasing 35% annually. Every year we would add a new mainframe at a cost of $5 million to $10 million. Then IBM announced new machines that were going to cost us up to $20 million each, and we were going to need two in 1991. It was new technology, and we don't like to pioneer. So we just said we're not going to buy any more.'' Hunter adds: ''I didn't think we could stem the ocean of load, but we did it.''

His systems people rescheduled computer runs for slow times at night and early in the morning. They devised an archiving program that cleans out and saves data not currently needed -- for example, they shortened customers' active meter records from two years to one. They moved two engineering applications onto workstations. They concocted an incentive program, awarding T-shirts, luncheons, and merit coupons to MIS workers who reduced their need for computer capacity. (The more coupons an employee received, the bigger his or her bonus at the end of the year.) The crunch has been put off until 1995.
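The archiving idea behind that meter-record change is simple enough to sketch. What follows is not BG&E's program -- the field names and file format are assumptions -- but it shows the pattern: sweep records older than a cutoff out of the active set and into cheaper storage.

    # Illustrative sketch of the archiving pattern; record fields are invented.
    import json
    from datetime import datetime, timedelta

    def archive_old_records(active_records, archive_path, keep_days=365):
        """Move records older than keep_days out of the active set and into a file."""
        cutoff = datetime.now() - timedelta(days=keep_days)
        keep, archive = [], []
        for rec in active_records:
            read_date = datetime.strptime(rec["read_date"], "%Y-%m-%d")
            (keep if read_date >= cutoff else archive).append(rec)
        with open(archive_path, "a") as f:   # append archived records as JSON lines
            for rec in archive:
                f.write(json.dumps(rec) + "\n")
        return keep                          # the new, smaller active set

    # One recent reading stays active; one old reading moves to the archive file.
    recent = (datetime.now() - timedelta(days=30)).strftime("%Y-%m-%d")
    stale = (datetime.now() - timedelta(days=700)).strftime("%Y-%m-%d")
    meters = [
        {"meter": "A-100", "read_date": recent, "kwh": 830},
        {"meter": "A-100", "read_date": stale, "kwh": 910},
    ]
    meters = archive_old_records(meters, "meter_archive.jsonl")
    print(len(meters), "record(s) remain active")   # prints: 1 record(s) remain active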

-- Drop back and punt. A controversial strategy to watch: outsourcing all data processing. If it works well for two pioneers, Eastman Kodak (which started doing it in 1989) and Continental Bank of Chicago, it might spread.

Richard Huber, vice chairman of Continental, recounts the bank's decision: ''Our system wasn't bad broke, but we were like a lot of companies. We developed highly centralized operations in the Seventies and early Eighties. Then it was 'Let a thousand flowers bloom,' and suddenly we had a crazy quilt of systems. Some were pretty good, but they couldn't talk to each other. We wanted to liberate ourselves from all those boxes and DOS and Unix.''

Among the headaches Huber is gladdest to be rid of: the shortage of top people to keep up with the ever faster pace of change in information technology. Such folks are getting harder to hold on to because they want to be on the cutting edge of their professions at companies like Microsoft. Continental needed about 200 people in MIS for normal operations and about 1,000 for a big upgrade. ''So we kept about 400 and were always either overstaffed or understaffed,'' Huber says.

In 1992, Continental hired ISSC, a division of IBM, to run the MIS department -- mainframes, networks, and all. Now information processing is a variable cost rather than a fixed one. Says Huber: ''It's elastic. If doing something is justified on our P&L, we go ahead. It's like buying newspaper ads.'' He figures he is saving roughly $10 million a year out of a budget that used to be about $70 million.

Doubters worry about the security of the system, but Huber says that in the past Continental's main security concern was keeping the system from going down, not shutting out intruders. He adds that the bank's information technology people, most of whom joined ISSC, used to talk as if they owned the place. ''Now there's a very subtle difference, but one we like. We are clients now, and they are more solicitous. We wanted that.''

Although each of these companies is cutting a different path around the wall, they agree on some common lessons:

1. Have a clearly defined business objective. Don't adopt networking just because everyone else is doing it. Warns David Volpe, a senior manager at Early Cloud: ''This is a confusing time because the technologies are proliferating. We don't know yet if the operating system standard will be Unix, Windows, or OS/2. When it gets sorted out, a lot of promises won't be kept.''

2. Get everyone on board for any major changes. That's particularly important with downsizing programs, which can appear threatening to what one executive calls the ''mainframe bigots.'' Gary Gagliardi, president of FourGen Software of Seattle, a company that makes applications software for client-server systems, warns that war can break out between the glass house and the networkers. He recalls that when one division after another at a client company started abandoning the corporate mainframe for LANs, up went the cost of the mainframe to divisions still using it. Eventually the glass house staged a counterrevolution that ousted the company's MIS manager and forced even divisions no longer using the mainframe to ante up for its upkeep. Most companies look for ways to bring the keepers of the glass house into the new world. Hunter of BG&E cross-trains mainframe engineers to staff his networks. GE's Runtagh suggests putting resisters on teams that study the best practices in their industry so they will recognize the advantages of change.

3. Don't assume that going to networking will be quick, easy, or cheap. Try it first, if possible, on discrete, noncritical applications. Hunter reports that unexpected expenses often arise: ''These systems are sold on the basis of the low cost of the hardware, but conversion costs are critical.'' For example, BG&E spent $1.6 million for hardware for one network that stores maps and diagrams of its utility grids for the service department. The conversion cost, for software and for scanning the maps into the new system, was $5.8 million. Litwack of Powersoft warns that networks create new hierarchies: ''Now you have to have network administrators where before there were none.'' Those administrators must be computer engineers, but they also need more business savoir-faire than the glass house gang usually had because they work closely with executives who aren't experts. Administrators who combine those talents are in short supply. Two years ago a good one could command $40,000 a year. Today it is $70,000 and rising.

When your mainframe hits the wall, remember that switching computers in mid-data stream can be profitable -- or a whole lot more trouble than you thought.