Legacy system

In computing, a legacy system is an old method, technology, computer system, or application program, "of, relating to, or being a previous or outdated computer system." The term is often pejorative: referring to a system as "legacy" may acknowledge that it paved the way for the standards that followed it, but it frequently implies that the system is out of date or in need of replacement.

Overview

The first use of the term legacy to describe computer systems probably occurred in the 1970s. By the 1980s it was commonly used to refer to existing computer systems to distinguish them from the design and implementation of new systems. Legacy was often heard during a conversion process, for example, when moving data from the legacy system to a new database.

While this term may indicate that some engineers feel a system is out of date, a legacy system may continue to be used for a variety of reasons. It may simply be that the system still provides for the users' needs. In addition, the decision to keep an old system may be influenced by economic factors such as return-on-investment challenges or vendor lock-in, by the inherent challenges of change management, or by a variety of considerations other than functionality. Backward compatibility (such as the ability of newer systems to handle legacy file formats and character encodings) is a goal that software developers often include in their work.

Even if it is no longer used, a legacy system may continue to impact the organization due to its historical role. Historic data may not have been converted into the new system format and may exist within the new system with the use of a customized schema crosswalk, or may exist only in a data warehouse. In either case, the effect on business intelligence and operational reporting can be significant. A legacy system may include procedures or terminology which are no longer relevant in the current context, and may hinder or confuse understanding of the methods or technologies used.

Organizations can have compelling reasons for keeping a legacy system, such as:

  • The system works satisfactorily, and the owner sees no reason to change it.
  • The costs of redesigning or replacing the system are prohibitive because it is large, monolithic, and/or complex.
  • Retraining on a new system would be costly in lost time and money, compared to the anticipated appreciable benefits of replacing it (which may be zero).
  • The system requires near-constant availability, so it cannot be taken out of service, and the cost of designing a new system with a similar availability level is high. Examples include systems to handle customers' accounts in banks, computer reservations systems, air traffic control, energy distribution (power grids), nuclear power plants, military defense installations, and systems such as the TOPS database.
  • The way that the system works is not well understood. Such a situation can occur when the designers of the system have left the organization, and the system has either not been fully documented or documentation has been lost.
  • The user expects that the system can easily be replaced when this becomes necessary.
  • Newer systems perform undesirable (especially for individual or non-institutional users) secondary functions such as a) tracking and reporting of user activity and/or b) automatic updating that creates "back-door" security vulnerabilities and leaves end users dependent on the good faith and honesty of the vendor providing the updates. This problem is especially acute when these secondary functions of a newer system cannot be disabled.
Problems posed by legacy computing

Legacy systems are considered to be potentially problematic by some software engineers for several reasons (for example, see Bisbal et al., 1999).

  • If legacy software runs on only antiquated hardware, the cost of maintaining the system may eventually outweigh the cost of replacing both the software and hardware unless some form of emulation or backward compatibility allows the software to run on new hardware.
  • These systems can be hard to maintain, improve, and expand because there is a general lack of understanding of the system; the staff who were experts on it have retired or forgotten what they knew about it, and staff who entered the field after it became "legacy" never learned about it in the first place. This can be worsened by lack or loss of documentation. Comair airline company fired its CEO in 2004 due to the failure of an antiquated legacy crew scheduling system that ran into a limitation not known to anyone in the company.
  • Legacy systems may have vulnerabilities in older operating systems or applications due to lack of security patches being available or applied. There can also be production configurations that cause security problems. These issues can put the legacy system at risk of being compromised by attackers or knowledgeable insiders.
  • Integration with newer systems may also be difficult because new software may use completely different technologies. Integration across technology is quite common in computing, but integration between newer technologies and substantially older ones is not common. There may simply not be sufficient demand for integration technology to be developed. Some of this "glue" code is occasionally developed by vendors and enthusiasts of particular legacy technologies.
  • Budgetary constraints often lead corporations not to address the need to replace or migrate a legacy system. However, companies often fail to consider the increasing supportability costs (people, software, and hardware, all mentioned above) and the potentially enormous loss of capability or business continuity if the legacy system were to fail. Once these considerations are well understood, the proven ROI of a newer, more secure, updated technology stack is often not as costly as the alternative, and the budget is found.
Improvements on legacy software systems

Where it is impossible to replace legacy systems through the practice of application retirement, it is still possible to enhance (or "re-face") them. Most development effort goes into adding new interfaces to the legacy system. The most prominent technique is to provide a Web-based interface to a terminal-based mainframe application. This may reduce staff productivity due to slower response times and slower mouse-based operator actions, yet it is often seen as an "upgrade" because the interface style is familiar to unskilled users and is easy for them to use. John McCormick discusses such strategies, which involve middleware.
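
A minimal sketch of that technique, assuming a hypothetical legacy host name, port, and screen prompts, and using only the Python standard library (telnetlib, which was removed in Python 3.13, and http.server): a small HTTP facade drives one terminal transaction on behalf of a browser user.

    # Sketch: a web front end for a terminal-based legacy application.
    # Host name, port, and screen prompts are illustrative assumptions.
    import telnetlib
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    LEGACY_HOST = "legacy.example.com"   # hypothetical mainframe host
    LEGACY_PORT = 23

    def query_legacy_system(account_id: str) -> str:
        """Drive one terminal transaction and return the raw screen text."""
        with telnetlib.Telnet(LEGACY_HOST, LEGACY_PORT, timeout=10) as tn:
            tn.read_until(b"ACCOUNT:")           # prompt assumed for illustration
            tn.write(account_id.encode("ascii") + b"\r\n")
            return tn.read_until(b"READY", timeout=10).decode("ascii", "replace")

    class WebFacade(BaseHTTPRequestHandler):
        def do_GET(self):
            account = parse_qs(urlparse(self.path).query).get("account", [""])[0]
            body = query_legacy_system(account).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), WebFacade).serve_forever()

In practice such a facade would render an HTML form and map several screens rather than a single prompt, but the structure is the same: terminal automation on one side, a web request handler on the other.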

Printing improvements are problematic because legacy software systems often add no formatting instructions, or they use protocols that modern PC/Windows printers cannot handle. A print server can be used to intercept the data and translate it into a more modern format. Rich Text Format (RTF) or PostScript documents may be created in the legacy application and then interpreted at a PC before being printed.
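
As a rough illustration of such a print server, the sketch below listens on the JetDirect-style raw port 9100, takes whatever unformatted text the legacy application sends, and wraps it in minimal PostScript; the port, font, and page layout are assumptions for illustration.

    # Sketch of a tiny "print server" that accepts raw legacy output on
    # port 9100 and wraps it in minimal single-page PostScript.
    import socket

    def text_to_postscript(text: str) -> str:
        """Wrap plain text in a single-page PostScript document."""
        def esc(line: str) -> str:
            return line.replace("\\", r"\\").replace("(", r"\(").replace(")", r"\)")
        lines = text.splitlines() or [""]
        body = [f"72 {720 - 12 * i} moveto ({esc(line)}) show"
                for i, line in enumerate(lines)]
        return ("%!PS\n/Courier findfont 10 scalefont setfont\n"
                + "\n".join(body) + "\nshowpage\n")

    def serve(port: int = 9100) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("0.0.0.0", port))
            srv.listen(1)
            while True:
                conn, _ = srv.accept()
                with conn:
                    raw = b""
                    while chunk := conn.recv(4096):
                        raw += chunk
                ps = text_to_postscript(raw.decode("latin-1"))
                # Hand the PostScript to a modern printer or spooler here;
                # writing a file that a spooler such as CUPS picks up is one option.
                with open("job.ps", "w") as out:
                    out.write(ps)

    if __name__ == "__main__":
        serve()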

Biometric security measures are difficult to implement on legacy systems. A workable solution is to place a telnet or HTTP proxy server between users and the mainframe to implement secure access to the legacy application.
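
The sketch below illustrates that proxy pattern under stated assumptions: check_credentials() is a placeholder for whatever biometric or token verification service the organization actually uses, and the host names and ports are invented for the example. The proxy relays traffic to the legacy telnet host only after the check passes.

    # Sketch of a thin authenticating proxy in front of a legacy telnet host.
    # check_credentials() stands in for the real biometric/token check;
    # host names and ports are placeholders.
    import socket
    import threading

    LEGACY_HOST, LEGACY_PORT = "legacy.example.com", 23

    def check_credentials(token: bytes) -> bool:
        # Placeholder: call out to the real authentication service here.
        return token.strip() == b"demo-token"

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        while data := src.recv(4096):
            dst.sendall(data)

    def handle(client: socket.socket) -> None:
        client.sendall(b"Token: ")
        if not check_credentials(client.recv(256)):
            client.close()
            return
        upstream = socket.create_connection((LEGACY_HOST, LEGACY_PORT))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        pipe(upstream, client)

    def main(listen_port: int = 2323) -> None:
        with socket.socket() as srv:
            srv.bind(("0.0.0.0", listen_port))
            srv.listen(5)
            while True:
                conn, _ = srv.accept()
                threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()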

The change being undertaken in some organizations is to switch to automated business process (ABP) software which generates complete systems. These systems can then interface to the organizations' legacy systems and use them as data repositories. This approach can provide a number of significant benefits: the users are insulated from the inefficiencies of their legacy systems, and the changes can be incorporated quickly and easily in the ABP software.

Model-driven reverse and forward engineering approaches can also be used to improve legacy software. Model-driven tools and methodologies can support the migration of legacy software to cloud computing environments and allow for its modernization as Software as a Service, exploiting the advanced business and technical characteristics of clouds.

NASA example

Andreas Hein, of the University of Stuttgart, researched the use of legacy systems in space exploration. According to Hein, legacy systems are attractive for reuse if an organization has the capabilities for verification, validation, testing, and operational history. These capabilities must be integrated into the various software life cycle phases such as development, implementation, usage, or maintenance. For software systems, the capability to use and maintain the system is crucial; otherwise the system will become less and less understandable and maintainable.

According to Hein, verification, validation, testing, and operational history increase the confidence in a system's reliability and quality. However, accumulating this history is often expensive. NASA's now-retired Space Shuttle program used a large amount of 1970s-era technology. Replacement was cost-prohibitive because of the expensive requirement for flight certification. The original hardware had completed the expensive integration and certification requirement for flight, but any new equipment would have had to go through that entire process again. This long and detailed process required extensive tests of the new components in their new configurations before a single unit could be used in the Space Shuttle program. Thus any new system that started the certification process would become a de facto legacy system by the time it was approved for flight.

Additionally, the entire Space Shuttle system, including ground and launch vehicle assets, was designed to work together as a closed system. Since the specifications did not change, all of the certified systems and components performed well in the roles for which they were designed. Even before the Shuttle was scheduled to be retired in 2010, NASA found it advantageous to keep using many pieces of 1970s technology rather than to upgrade those systems and recertify the new components.

Additional uses of the term legacy in computing

The term legacy support is often used in conjunction with legacy systems. It may refer to a feature of modern software: for example, operating systems with "legacy support" can detect and use older hardware. It may also refer to a business function, e.g. a software or hardware vendor that supports, or provides software maintenance for, older products.

A "legacy" product may be a product that is no longer sold, has lost substantial market share, or is a version of a product that is not current. A legacy product may have some advantage over a modern product that makes it appealing for customers to keep it around. A product is only truly "obsolete" if it has an advantage to nobody – if no person making a rational decision would choose to acquire it new.

The term "legacy mode" often refers specifically to backward compatibility. A software product that is capable of performing as though it were a previous version of itself is said to be "running in legacy mode." This kind of feature is common in operating systems and internet browsers, where many applications depend on these underlying components.
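
A deliberately simple sketch of the idea, using a hypothetical library function and flag name: callers that opt into the flag keep receiving the output format of the previous version.

    # Illustrative sketch only: a function that keeps producing its old
    # output format when callers opt into a hypothetical "legacy mode" flag.
    from datetime import date

    def format_report_date(d: date, legacy_mode: bool = False) -> str:
        if legacy_mode:
            return d.strftime("%m/%d/%y")   # format the previous version emitted
        return d.isoformat()                # current ISO 8601 format

    print(format_report_date(date(2024, 7, 1)))                    # 2024-07-01
    print(format_report_date(date(2024, 7, 1), legacy_mode=True))  # 07/01/24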

The computer mainframe era saw many applications running in legacy mode. In the modern business computing environment, n-tier or 3-tier architectures are more difficult to place into legacy mode because they include many components making up a single system.

Virtualization technology is a recent innovation allowing legacy systems to continue to operate on modern hardware by running older operating systems and browsers on a software system that emulates legacy hardware.
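
For instance, a preserved legacy environment can be booted under an emulator such as QEMU; the sketch below assumes QEMU is installed and that legacy-disk.img is a disk image of the old system.

    # Sketch: boot a preserved legacy environment under QEMU from Python.
    import subprocess

    subprocess.run([
        "qemu-system-i386",         # emulate period-appropriate 32-bit PC hardware
        "-m", "256",                # 256 MB of RAM, typical of the era
        "-hda", "legacy-disk.img",  # hypothetical disk image of the legacy system
    ])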

Brownfield architecture

The field of Information Technology has borrowed the term brownfield from the building industry, where undeveloped land (and especially unpolluted land) is described as greenfield and previously developed land – which is often polluted and abandoned – is described as brownfield.

  • A brownfield architecture is a type of software or network architecture that incorporates legacy systems.
  • A brownfield deployment is an upgrade or addition to an existing software or network architecture that retains legacy components.
Alternative view

There is an alternate point of view — growing since the "Dot Com" bubble burst in 1999 — that legacy systems are simply computer systems that are both installed and working. In other words, the term is not pejorative, but the opposite. Bjarne Stroustrup, creator of the C++ language, addressed this issue succinctly:

    "Legacy code" often differs from its suggested alternative by actually working and scaling.

IT analysts estimate that the cost of replacing business logic is about five times that of reuse, and that is not counting the risks involved in wholesale replacement. Ideally, businesses would never have to rewrite most core business logic; debits must equal credits — they always have, and they always will. New software may increase the risk of system failures and security breaches.

The IT industry is responding to these concerns. "Legacy modernization" and "legacy transformation" refer to the act of reusing and refactoring existing core business logic by providing new user interfaces (typically Web interfaces), sometimes through the use of techniques such as screen scraping and service-enabled access (e.g. through web services). These techniques allow organizations to understand their existing code assets (using discovery tools), provide new user and application interfaces to existing code, improve workflow, contain costs, minimize risk, and enjoy classic qualities of service (near 100% uptime, security, scalability, etc.).
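
As a hedged illustration of the screen-scraping and service-enabling ideas, the sketch below parses named fields out of a fixed-layout terminal screen and serves them as JSON. The screen layout, column positions, and field names are assumptions, and a real facade would fetch the live screen from the host rather than use a sample.

    # Sketch of "service-enabling" a screen-scraped legacy transaction as JSON.
    # The screen layout (fixed columns) and field names are illustrative assumptions.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def scrape_screen(screen_text: str) -> dict:
        """Pull named fields out of a fixed-layout terminal screen."""
        lines = screen_text.splitlines()
        return {
            "account": lines[2][10:20].strip(),   # columns assumed for illustration
            "balance": lines[4][10:22].strip(),
        }

    SAMPLE_SCREEN = (
        "ACME CORE BANKING SYSTEM\n"
        "\n"
        "ACCOUNT:  0012345678\n"
        "\n"
        "BALANCE:  1,234.56 USD\n"
    )

    class LegacyService(BaseHTTPRequestHandler):
        def do_GET(self):
            # A real facade would fetch the live screen from the host first.
            payload = json.dumps(scrape_screen(SAMPLE_SCREEN)).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8081), LegacyService).serve_forever()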

The re-examination of attitudes toward legacy systems is also inviting more reflection on what makes legacy systems as durable as they are. Technologists are relearning that sound architecture, practiced up front, helps businesses avoid costly and risky rewrites in the first place. The most common legacy systems tend to be those which embraced well-known IT architectural principles, with careful planning and strict methodology during implementation. Poorly designed systems often don't last, both because they wear out and because their reliability or usability is low enough that no one is inclined to make an effort to extend their term of service when replacement is an option. Thus, many organizations are rediscovering the value of both their legacy systems themselves and those systems' philosophical underpinnings.
