

— 6 —
Client/Server Systems Development—Software

Executive Summary

If the selling price of automobiles had kept pace with the selling price of computer hardware, in 1992 dollars a Geo would sell for $500. If the productivity improvement of telephone operators had kept pace only with the productivity improvement in systems development, 60 percent of the adult U.S. population would need to work as telephone operators to handle today's volume of calls, compared to the volume of the 1920s.

An Index Group survey found that up to 90 percent of information technology (IT) departments' budgets are spent maintaining and enhancing existing systems.1 This maintenance and enhancement continues to be done using old, inefficient, and undisciplined processes and technology. Figure 6.1 documents the change in maintenance effort measured in Fortune 1000 companies from the 1970s until today. As the number of installed systems increases, organizations find more of their efforts being invested in maintenance. Ed Yourdon claims that the worldwide software asset base is in excess of 150 billion lines of code. Most of this code was developed in the 1960s and 1970s with older technologies. Thus, this code is unstructured and undocumented, leading to what the Gartner Group is calling the "Maintenance Crisis." We simply must find more effective ways to maintain systems.


Figure 6.1. Percentage of IS budgets dedicated to maintenance.

Business Process Reengineering (BPR) techniques help organizations achieve competitive advantage through substantive improvements in quality, customer service and costs. BPR must be aligned with technology strategy to be effective. Organizations must use technology to enable the business change defined by the BPR effort. In too many organizations technology is inhibiting change. Many CIOs are finding that their careers are much shortened when they discover that the business strategy identified by their organization cannot be realized because the technical architecture employed lacks the openness to support the change.

Senior executives look for new applications of technology to achieve business benefit. New applications must be built, installed, and made operational to achieve the benefits. Expenses incurred in maintenance and enhancement are not perceived to produce value. Yet most measurements show that 66 percent of the cost of a system is incurred after its initial production release, during the maintenance and enhancement phases. In this period of tight budgets, it is increasingly difficult to explain and justify the massive ongoing investment in maintaining systems that do not meet the current need.

Figure 6.2 illustrates how demand for new systems is increasing as technology costs decline and performance improves. Our challenge is to change the expenditures from ongoing maintenance to new development. Buying off-the-shelf application solutions frequently will meet the need. However, unless the packaged solution perfectly matches the needs of the organization, additional and expensive maintenance will be required to modify the package to make it fit.


Figure 6.2. Systems development demand.

Clearly, the solution is to design and build systems within a systems development environment (SDE). Applications and systems within an SDE are built to be maintained and enhanced. The flexibility to accept enhancements is inherent in the design. A methodology defines the process to complete a function. The use of a systems integration life cycle methodology ensures that the process considers the ramifications of all decisions made from business problem identification through and including maintenance and operation of the resultant systems. The changes implied by BPR and the movement from mainframe-centered development to client/server technology require that you adopt a methodology that considers organizational transformation. Object-oriented technologies (OOTs) can now be used to define the necessary methodology and development environment to dramatically improve our ability to use technology effectively.

With effective use of OO technologies, productivity improvements of 10:1 are being measured. Systems are being built with error rates one-third those of traditionally developed systems. The creation and reuse of objects supports the enterprise on the desk through the reuse of standard technology that serves both the user and the developer. OO technology allows business specialists to work as developers, assembling applications from objects previously constructed by more technical developers.

Factors Driving Demand for Application Software Development

Strategic planning, development, and follow-on support for applications software is a vital, albeit expensive, process that may yield enormous benefits in cost savings, time to market for new products, customer satisfaction, and so on. There are opportunities to influence and compress application development planning time through the use of an existing enterprise-wide architecture strategy or the adoption of a transformational outsourcing strategy. BPR and total quality management (TQM) programs demand software development and enhancements. A competitive market insists that companies demonstrate their value to a skeptical buyer through increasing the value of products and services.

Rising Technology Staff Costs

Coincident with the increasing demand for systems development, enrollment in university-level technology programs is declining, and the pool of available technical talent is shrinking relative to the exploding demand. As a result, technology personnel costs are rising much faster than inflation. In 1994, we see a 22-percent increase in demand for computer technologists. Many organizations find that technology professionals, in whom much organization-specific application and technology knowledge has been invested, change jobs every three to five years. This multiplies the burden of reinvestment and retraining in organizations that are struggling to reduce costs. If organizations are to maximize their return on technology investments, they must develop a continuous learning program to ensure reuse of training programs, standard development procedures, developer tools, and interfaces built for other systems.

Pressure to Build Competitive Advantage by Delivering Systems Faster

There is tremendous pressure on organizations to take advantage of new technology to build competitive advantage. This can be most easily accomplished by bringing innovative service offerings to market sooner than a competitor does. In most cases, new service offerings are required just to keep pace with competitors. The application backlog is horrific. Studies show that 80 to 90 percent of the traditional host-based MIS shop's staff time is devoted to maintaining or enhancing existing—often technically obsolete—applications. Some portion of the relatively small amount of time remaining is available for development of new applications.

For many organizations, implementing systems that not only increase efficiency and effectiveness but also transform fundamental processes to create a competitive advantage is absolutely essential to survival. For many companies, the prospects of global competition and uncertain recessionary times add fuel to the fire to succeed. Companies that cannot find inventive ways to refine their business process and streamline the value chain quickly will fall behind companies that can.

Need to Improve Technology Professionals' Productivity

The Index Group reports that Computer-Aided Software Engineering (CASE) and other technologies that speed software development are cited by 70 percent of the top IT executives surveyed as the most critical technologies to implement. The CASE market is growing at a rate of 30 percent per year, and Index's estimates predict it will be a $5 billion market by 1995, doubling from 1990 figures.

This new breed of software tools helps organizations respond more quickly by cutting the time it takes to create new applications and making them simpler to modify or maintain. Old methods, blindly automating existing manual procedures, can hasten a company's death knell. Companies need new, innovative mission-critical systems to be built quickly, with a highly productive, committed professional staff partnered with end-users during the requirements, design, and construction phases. The client/server development model provides the means to develop horizontal prototypes of an application as it is designed. The user will be encouraged to think carefully about the implications of design elements. The visual presentation through the workstation is much more real than the paper representation of traditional methods.

Yourdon reports that less than 20 percent of development shops in North America have a methodology of any kind, and an even lower percentage actually use it. Input Research reports that internally developed systems are delivered on time and within budget about 1 percent of the time. By comparison, systems outsourced to systems integration professionals who use high-productivity environments are delivered on time and within budget about 66 percent of the time.

The use of a proven, formal methodology significantly increases the likelihood of building systems that satisfy the business need and are completed within their budgets and schedules. Yourdon estimates that 50 percent of errors in a final system and 75 percent of the cost of error removal can be traced back to errors in the analysis phase. CASE tools and development methodologies that define systems requirements iteratively with high and early user involvement have been proven to significantly reduce analysis phase errors.

Need for Platform Migration and Reengineering of Existing Systems

Older and existing applications are being rigorously reevaluated and in some cases terminated when they don't pay off. A 15-percent drop in proprietary technology expenditures was measured in 1993, and this trend will continue as organizations move to open systems and workstation technology. BPR attempts to reduce business process cost and complexity by moving decision-making responsibility to those individuals who first encounter the customer or problem. Organizations are using the client/server model to bring information to the workplace of empowered employees.

The life of an application tends to be 5 to 15 years, whereas the life of a technology is much shorter—usually one to three years. Tremendous advances can be made by reengineering existing applications and preserving the rule base refined over the years while taking advantage of the orders-of-magnitude improvements that can be achieved using new technologies.

Need for a Common Interface Across Platforms

Graphical user interfaces (GUIs) that permit a similar look and feel and front-end applications that integrate disparate applications are on the rise.

A 1991 Information Week survey of 157 IT executives revealed that ease of use through a common user interface across all platforms is twice as important a software purchasing criterion as the next most important factor. This is the single-system image concept.

Of prime importance to the single-system image concept is that every user from every workstation have access to every application for which they have a need and right without regard to or awareness of the technology.

Developers should be equally removed from and unconcerned with these components. Development tools and APIs isolate the platform specifics from the developer. When the single-system image is provided, it is possible to treat the underlying technology platforms as commodities to be acquired on the basis of price and performance, without concern for specific compatibility with the existing application. Hardware, operating systems, database engines, communication protocols—all these must be invisible to the application developer.
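As a minimal C++ sketch of this isolation (the interface and class names here are hypothetical, invented for illustration rather than taken from any vendor's API), application code can be written against an abstract service while platform-specific implementations plug in behind it:

    #include <iostream>
    #include <string>

    // Abstract interface: all the application layer ever sees.
    class RecordStore {
    public:
        virtual ~RecordStore() = default;
        virtual void save(const std::string& key, const std::string& value) = 0;
    };

    // One platform-specific implementation; an Oracle- or DB2-backed
    // version would look identical to the application layer.
    class FileRecordStore : public RecordStore {
    public:
        void save(const std::string& key, const std::string& value) override {
            std::cout << "file store: " << key << " = " << value << "\n";
        }
    };

    // Application code depends only on the interface, so the underlying
    // platform becomes a replaceable commodity.
    void recordOrder(RecordStore& store) {
        store.save("order/1001", "10 widgets");
    }

    int main() {
        FileRecordStore store;   // chosen at configuration time, not in app logic
        recordOrder(store);
    }

Swapping FileRecordStore for another engine requires no change to recordOrder; that substitutability is the insurance policy the single-system image provides.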

Increase in Applications Development by Users

As workstation power grows and dollars-per-MIPS fall, more power is moving into the hands of the end user. The Index Group reports that end users are now doing more than one-third of application development; IT departments are functioning more like a utility. This is the result of IT department staff feeling the squeeze of maintenance projects that prevent programmers from meeting critical backlog demand for new development.

This trend toward application development by end users will create disasters without a consistent, disciplined approach that insulates the developer from the underlying components of the technology. Without such an approach, end-user application developers must also understand the intricacies of languages and interfaces.

Object-oriented technologies embedded in an SDE have regularly been demonstrated to produce new-development productivity gains of 2 to 1 and maintenance productivity improvements of 5 to 1 over traditional methods—for example, process-driven or data-driven design and development. More recently, mature OO SDEs with a strong focus on object reusability have achieved productivity gains of 10 to 1 over traditional techniques.

Production-capable technologies are now available to support the development of client/server applications. The temptation and normal practice is to have technical staff read the trade press and select the best products from each category, assuming that they will combine to provide the necessary development environment. In fact, this almost never works. When products are not selected with a view as to how they will work together, they do not work together.

Thus, the best Online Transaction Processing (OLTP) package may not support your best database. Your security requirements may not be met by any of your tools. Your applications may perform well yet take forever to change. Organizations must architect an environment that takes into account their particular priorities and the suite of products being selected. Only then does the selection of tools provide the opportunity to be successful.

An enterprise-wide architecture strategy must be created to define the business vision and determine a transformation strategy to move from the current situation to the vision. This requires a clear understanding of industry standards, trends, and vendor priorities. By combining the particular business requirements with industry direction, it is possible to develop a clear strategy to use technology to enable the business change. Without this architecture strategy, decisions will be made in a vacuum, with little business input and usually little clear insight into technology direction.

The next and necessary step is to determine how the tools will be used within your organization. This step involves the creation of your SDE. Without the integration of an SDE methodology, organizations will be unable to achieve the benefits of client/server computing. Discipline and standards are essential to create platform-independent systems. With the uncertainty over which technologies will survive as standards, the isolation of applications from their computing platforms is an essential insurance policy.

Client/Server Systems Development Methodology

The purpose of a methodology is to describe a disciplined process through which technology can be applied to achieve the business objectives. A methodology should describe the processes involved through the entire life cycle, from BPR and systems planning through and including maintenance of systems in production. Most major systems integrators and many large in-house MIS groups have their own life cycle management methodology. Andersen Consulting, for example, has its Foundation; BSG has its Blueprint; SHL Systemhouse has its SHL Transform; the list goes on. These companies offer methodologies tuned for the client/server computing environment. However, every methodology has its own strengths, which are important to understand as part of the systems integration vendor selection process.

Figure 6.3 shows the processes in a typical systems integration life cycle. It is necessary to understand and adhere to the flow of information through the life cycle. This flow allows the creation and maintenance of the systems encyclopedia or electronic repository of data definitions, relationships, revision information, and so on. This is the location of the data models of all systems. The methodology includes a strict project management discipline that describes the deliverables expected from each stage of the life cycle. These deliverables ensure that the models are built and maintained. In conjunction with CASE tools, each application is built from the specifications in the model and in turn maintains the model's where-used and how-used relationships.

Table 6.1 details the major activities of each stage of the systems integration life cycle methodology. No activity is complete without the production of a formal deliverable that documents, for user signoff, the understanding gained at that stage. The last deliverable from each stage is the plan for the next stage.


Figure 6.3. Systems integration life cycle.

    Table 6.1. SILC phases and major activities.

Systems Planning
    Initiate systems planning
    Gather data
    Identify current situation
    Describe existing systems
    Define requirements
    Analyze applications and data architectures
    Analyze technology platforms
    Prepare implementation plan

Project Initiation
    Screen request
    Identify relationship to long-range systems plan
    Initiate project
    Prepare plan for next phase

Architecture Definition
    Gather data
    Expand the requirements to the next level of detail
    Conceptualize alternative solutions
    Develop proposed conceptual architecture
    Select specific products and vendors

Analysis
    Gather data
    Develop a logical model of the new application system
    Define general information system requirements
    Prepare external system design

Design
    Perform preliminary design
    Perform detailed design
    Design system test
    Design user aids
    Design conversion system

Development
    Set up the development environment
    Code modules
    Develop user aids
    Conduct system test

Facilities Engineering
    Gather data
    Conduct site survey
    Document facility requirements
    Design data center
    Plan site preparation
    Prepare site
    Plan hardware installation
    Install and test hardware

Implementation
    Develop contingency procedures
    Develop maintenance and release procedures
    Train system users
    Ensure that production environment is ready
    Convert existing data
    Install application system
    Support acceptance test
    Provide warranty support

Post-implementation
    Initiate support and maintenance
    Support services
    Support hardware and communication configuration
    Support software
    Perform other project completion tasks as appropriate

Project Management

Many factors contribute to a project's success. One of the most essential is establishing an effective project control and reporting system. Sound project control practices not only increase the likelihood of achieving planned project goals but also promote a working environment where the morale is high and the concentration is intense. This is particularly critical today when technology is so fluid and the need for isolating the developer from the specific technology is so significant.

The objectives of effective project management are to

  1. Plan the project:

    Define project scope

    Define deliverables

    Enforce methodology

    Identify tasks and estimates

    Establish project organization and staffing

    Document assumptions

    Identify client responsibilities

    Define acceptance criteria

    Define requirements for internal quality assurance review

    Determine project schedules and milestones

    Document costs and payment terms

  2. Manage and control project execution:

    Maintain personal commitment

    Establish regular status reporting

    Monitor project against approved milestones

    Follow established decision and change request procedures

    Log and follow up on problems

  3. Complete the project:

    Establish clear, unambiguous acceptance criteria

    Deliver a high-quality product consistent with approved criteria

    Obtain clear acceptance of the product

New technologies such as client/server place a heavy burden on the architecture definition phase. The lack of experience in building client/server solutions, combined with the new paradigm experienced by the user community, leads to considerable prototyping of applications. These factors will cause rethinking of the architecture. Such a step is reasonable and appropriate with today's technology. The prototyping tools on the client/server platform are powerful enough that prototyping is frequently faster at determining user requirements than traditional modeling techniques.

When an acceptable prototype is built, this information is reverse engineered into the CASE tool's repository. Bachman Information Systems' CASE products are among the more powerful tools available to facilitate this process.

Architecture Definition

The purpose of the architecture definition phase in the methodology is to define the application architecture and select the technology platform for the application. To select the application architecture wisely, you must base the choice on an evaluation of the business priorities. Your organization must consider and weight the following criteria:

  • Cost of operation—How much can the organization afford to pay?

  • Ease of use—Are all system users well-trained, computer-literate, regular users? Are some users occasional, intimidated by computers, short on patience, or accustomed to another easy-to-use system? Will the system be used by the public in situations that don't allow for training or in which mistakes are potentially dangerous?

  • Response time—What is the real speed requirement? Is it less than 3 seconds 100 percent of the time? What is the impact if 5 percent of the time the response lag is up to 7 seconds?

  • Availability—What is the real requirement? Is it 24 hours per day, 7 days per week, or something less? What is the impact of outages? How long can they last before the impact changes?

  • Security—What is the real security requirement? What is the cost or impact of unauthorized access? Is the facility secure? Where else can this information be obtained?

  • Flexibility to change—How frequently might this application change? Is the system driven by marketing priorities, legislative changes, or technology changes?

  • Use of existing technology—What is the existing investment? What are the growth capabilities? What are the maintenance and support issues?

  • System interface—What systems must this application deal with? Are these internal or external? Can the systems being interfaced be modified?

These application architecture issues must be carefully evaluated and weighed from a business perspective. Only after completing this process can managers legitimately review the technical architecture options. They must be able to justify the technology selection in the way it supports the business priorities. Figure 6.4 illustrates the conundrum we face as we move from application architecture to technical architecture. There is always a desire to manage risk and a corresponding desire to use the best technology. A balance must be found between the two extremes of selecting something that fits the budget and is known to work versus the newest, best, and unproven option. Cost is always a consideration.


Figure 6.4. The objectives of an architecture.

Once managers understand the application architecture issues, it becomes appropriate to evaluate the technical architecture options. Notice that staff are not yet selecting products, only architectural features. It is important to avoid selecting a product before the baseline requirements are understood.

The following is a representative set of technical architecture choices:

  • Hardware (including peripherals)—Are there predefined standards for the organization? Are there environmental issues, such as temperature, dirt, and service availability?

  • Distributed versus centralized—Does the organization have a requirement for one type of processing over the other? Are there organizational standards?

  • Network configuration—Does the organization have an existing network? Is there a network available to all the sites? What is the capacity of the existing network? What is the requirement of the new one?

  • Communications protocols—What does the organization use today? Are there standards that must be followed?

  • System software—What is used today? Are there standards in place? What options are available in the locale and on the anticipated hardware and communications platforms?

  • Database software—Is there a standard in the organization? What exists today?

  • Application development tools (for example, CASE)—What tools are in use today? What tools are available for the candidate platforms, database engine, operating system, and communications platforms?

  • Development environment—Does such an environment exist today? What standards are in place for users and developers? What other platform tools are being considered? What are the architectural priorities related to development?

  • Application software (make or buy, package selection, and so on)—Does the organization have a standard? How consistent is this requirement with industry-standard products? If there is a product, what platforms does it run on? Are these consistent with the potential architecture here? How viable is the vendor? What support is available? Is source code available? What are the application architecture requirements related to product acquisition?

  • Human interface—What are the requirements? What is in place today? What are users expecting?

Figure 6.5 illustrates the layering of technical architecture and applications architecture. One should not drive the other. It is unrealistic to assume that the application architects can ignore the technical platform, but they should understand the business priorities and work to see that these are achieved. Interfaces must isolate the technical platform from the application developers. These interfaces offer the assurance that changes can be made in the platform without affecting functioning at the application layer.


Figure 6.5. Components of the technical and applications architectures.

With the technical architecture well defined and the application architecture available for reference, you're prepared to evaluate the product options. The selection of the technology platform is an important step in building the SDE. There will be ongoing temptation and pressure to select only the "best products." However, the classification of "best product in the market," as evaluated in the narrow perspective of its features versus those of other products in a category, is irrelevant for a particular organization. Only by evaluating products in light of the application and technical architecture in concert with all the products to be used together can you select the best product for your organization.

Figure 6.6 details the categories to be used in selecting a technology platform for client/server applications. Architectures and platforms should be organizational decisions, not per-project choices. There is no reason to be constantly reevaluating platform choices. There is tremendous benefit in developing expertise in a well-chosen platform and getting repeated benefit from reusing existing development work.


Figure 6.6. Building the technology platform.

Systems Development Environment

Once your organization has defined its application and technical architectures and selected its tools, the next step is to define how you'll use these tools. Developers do not become effective system builders because they have a good set of tools; they become effective because their development environment defines how to use the tools well.

An SDE comprises hardware, software, interfaces, standards, procedures, and training that are selected and used by an enterprise to optimize its information systems support to strategic planning, management, and operations.

  • An architecture definition should be conducted to select a consistent technology platform.

  • Interfaces that isolate the user and developer from the specifics of the technical platform should be used to support the creation of a single-system image.

  • Standards and standard procedures should be defined and built to provide the applications with a consistent look and feel.

  • Reusable components must be built to gain productivity and support a single-system image.

  • Training programs must ensure that users and developers understand how to work in the environment.

IBM defined its SDE in terms of an application development cycle, represented by a product line it called AD/Cycle, illustrated in Figure 6.7. Another way of looking at the SDE is illustrated in Figures 6.8 and 6.9. The SDE must encompass all phases of the systems development life cycle and must be integrated with the desktop. The desktop provides powerful additional tools for workstation users to become self-sufficient in many aspects of their information-gathering needs.


Figure 6.7. IBM AD/Cycle model.


Figure 6.8. An SDE architecture.


Figure 6.9. An office systems architecture.

The most significant advantages are obtained from an SDE when a conscious effort is made to build reusable components. These are functions that will be used in many applications and will therefore improve productivity. Appendix A's case studies illustrate the benefits of projects built within the structure of an SDE. With the uncertainty surrounding product selection for client/server applications today, the benefits of using an SDE to isolate the developers from the technology are even more significant. These technologies will evolve, and we can build applications that are isolated from many of the changes. The following components should be included in any SDE established by an organization (a sketch of how such components combine follows the list):

  • Built-in navigation—Every process uses the same methods to move among processes. For every process a default next process is identified, and all available processes are identified. This navigation definition is done by a business analyst and not the developer. Every user and every developer then views navigation in the same way.

  • Standardized screen design—Well-defined standards are in place for all function types, and these screens are generated by default based on the business process being defined. Users and developers become familiar with the types of screens used for help, add, change, delete, view, and table management functions.

  • Integrated help—A standardized, context-sensitive help facility should respond to the correct problem within the business process. No programmer development is required. The help text is provided by the end-user and analyst who understand how the system user will view the application. Help text is user maintainable after the system is in production.

  • Integrated table maintenance—Tables are a program design concept that calls for common reference data, such as program error codes, printer control codes, and so on, to be stored in a single set of files or databases. A single table maintenance function is provided for all applications in the organization. Programmers and users merely invoke its services. Thus, all applications share standard tables.

  • Comprehensive security—A single security profile is maintained for each authorized user. Navigation is tied to security; thus, users see only the options they are eligible to use. Every programmer and user sees the same security facilities. Security profiles are maintained by an authorized user and use the table maintenance facilities.

  • Automatic view maintenance—Screens are generated, navigation is prescribed, and skeleton programs are generated based on the security profile and business requirements defined for a process. The developer does not have to write special code to extract data from the database. All access is generated based on the defined business processes and security.

  • Standard skeleton programs—An analyst answers a set of questions to generate a skeleton program for each business process. This feature includes standard functions that the programmer will require.

Every platform includes a set of services that are provided by the tools. This is particularly true in the client/server model, because many of the tools are new and take advantage of object-oriented development concepts. An effective SDE must use these facilities rather than redevelop them for the sake of elegance or ego.

Figure 6.10 illustrates the development environment architecture built for a project using Natural 4GL from Software AG. Software AG has successfully ported its Natural product from a mainframe-only environment to the workstation, where it can be used as part of a client/server architecture.


Figure 6.10. Software AG's Natural architecture.

The ACTS example shown in Appendix A uses this SDE architecture with Easel and Telon. Users and developers can move between these environments with minimal difficulty because there is such a high degree of commonality in the look and feel and in the services provided. Development within the justice application (of which ACTS is a part) included the Software AG products, Easel, and Telon. The same developers were productive throughout because of the common architecture. This occurred despite the fact that portions of the application were traditional mainframe, portions were mixed workstation-to-mainframe programs, and portions were pure client/server.

The advantages of building an SDE and including these types of components are most evident in the following areas:

  • Rapid prototyping—The development environment generates skeleton applications with embedded logic for navigation, database views, security, menus, help, table maintenance, and standard screen builds. This framework enables the analyst or developer to sit with a user and work up a prototype of the application rapidly. In order to get business users to participate actively in the specification process, it is necessary to show them something real. A prototype is more effective for validating the process model than are traditional business modeling techniques. Only through the use of an SDE is such prototyping possible. Workstation technology facilitates this prototyping. The powerful GUI technology and the low cost of direct development at the workstation make this the most productive choice for developing client/server applications.

  • Rapid coding—Incorporating standard, reusable components into every program reduces the number of lines of custom code that must be written. In addition, there is a substantial reduction in design time, because much of the design employs reusable, standard services from the SDE. The prototype becomes the design tool.

  • Consistent application design—As mentioned earlier, much of the design is inherent in the SDE. Thus, by virtue of the prototype, systems have a common look and feel from the user's and the developer's perspectives. This is an essential component of the single-system image.

  • Simplified maintenance—The standard code included with every application ensures that when maintenance is being done the program will look familiar. Because more than 50 percent of most programs will be generated from reusable code, the maintenance programmer will know the modules and will be able to ignore them unless global changes are to be made in these service functions. The complexity of maintenance corresponds to the size of the code and the amount of familiarity the programmer has with the program source. The use of reusable code provides the programmer with a common look and much less new code to learn.

  • Enhanced performance—Because the reusable components are written once and incorporated in many applications, it is easier to justify getting the best developers to build the pieces. The ability to make global changes in reusable components means that when performance problems do arise, they can often be fixed globally with a single change.

Productivity Measures

It is difficult to quantify accurately the productivity gains obtained by using one method versus another, because organizations are not willing to build the same system twice with two different teams of the same skill set. However, a limited number of studies have estimated the expected cost of developing and maintaining systems without a formal SDE and compared it to actual results measured with an SDE. One such analysis studied U.S. competitiveness. The researchers determined that, on average, a Japanese development team produces 170 percent of the debugged lines of code per year that a U.S. development team does. Japanese literature describes the Japanese approach to building systems as very consistent with the SDE approach described here. The necessity for Japanese developers to deal with U.S. software and a Japanese script-language user interface has taught them the value of software layers. This led naturally to the development of reusable software components. Measurements by the researchers of errors in systems developed by Japanese and U.S. development teams showed that the Japanese had only 44 percent of the errors measured in the U.S. code.

Japanese developers work in a disciplined style that emphasizes developing to standards and reuse of common components. Our experience with SDE-based development is showing a 100-percent productivity improvement for lines of debugged source code per work year for new development and a 400-percent productivity increase for maintenance of existing systems. It's easy to understand the new code improvement rate from the facts noted earlier, but it is not as clear why the maintenance improvement is so great.

A significant reason for better productivity appears to be the reduction in testing effort that results from fewer errors. It is difficult to make changes to a production application. The cost and effort involved in changing production code is dramatically greater than changes to a test system. Developers and testers are careful about changes to production products. If you eliminate half the errors, you not only have happier users but also a substantial reduction in effort to correct the problems. The ability to make global changes and the reduction in complexity that comes from the familiar environment also improve maintenance productivity.

CASE

CASE tools are built on an "enterprise model" of the processes to be automated; that is, systems integration and software development. This underlying enterprise model or "metamodel" used by CASE is crucial to the tool's usefulness. Tools based on a poor model suffer from poor integration, are unable to handle specific types of information, require duplicate data entry, cannot support multiple analyst-developer teams, and are not flexible enough to handle evolving new techniques for specifying and building systems solutions. Tools with inadequate models limit their users' development capabilities.

All the leading CASE products operate and are used in a client/server environment. Intel 486-based workstations operating at 50MHz or faster, with 16-24 Mbytes of memory and 250Mbyte hard disks, or UNIX workstations of similar size, are typically required. Thus, combining hardware and software costs, CASE runs up to $20,000 per developer workstation.

Unfortunately, a thorough review of the available CASE products shows that none adequately provides explicit support for development of client/server applications and GUIs. This is true even though many operate as network-based applications and support development of host-based applications. There is considerable momentum to develop products that support the client/server model. The Bachman tools are at the forefront in this area because of their focus on support for business process reengineering. With many client/server applications being ported from a minicomputer or mainframe, the abilities to reuse the existing models and to reverse engineer the databases are extremely powerful and time-saving features.

It seems likely that no single vendor will develop the best integrated tool for the entire system's life cycle. Instead, in the probable scenario, developers mix the best products from several vendors. This scenario is envisioned by IBM in their AD/Cycle product line, by Computer Associates in their CA90 products, and by NCR in their Open Cooperative Computing series of products.

As an example, an organization may select Bachman, which provides the best reengineering and reusability components and the only true enterprise model for building systems solutions for its needs. This model works effectively in the LAN environment and supports object-oriented reuse of specifications. The organization then integrates the Bachman tools with the ParcPlace Parts product for Smalltalk code generation for Windows, UNIX, or OS/2 desktop and server applications, and with Oracle for code generation in the UNIX, OS/2, and Windows NT target environments. The visual development environments of these products provide the screen painting, business logic relationship, and prototyping facilities necessary for effective systems development.

A more revolutionary development is occurring as CASE tools like the Bachman products are being integrated with development tools from other vendors. These development tools, used with an SDE, allow applications to be prototyped and then reengineered back into the CASE tool to create process and data models. With the power of GUI-based development environments to create and demonstrate application look and feel, the prototyping approach to rapid application design (RAD) is the only cost-effective way to build client/server applications today.

Users familiar with the ease of application development on the workstation will not accept paper or visual models of their application. They can only fully visualize the solution model when they can touch and feel it. This is the advantage of prototyping, which provides a "real touch and feel." Except in the earliest stages of solution conceptualization, the tools for prototyping must be created using the same products that are to be used for production development.

Not all products that fall into the CASE category are equally effective. For example, some experts claim that the information engineering products—such as Texas Instruments' product, IEF—attempt to be all things to all people. The criticism is that such products are constrained by their need to generate code efficiently from their models. As a result, they are inflexible in their approach to systems development, have primitive underlying enterprise models, may require a mainframe repository, perform poorly in a team environment, and provide a physical approach to analysis that is constrained by the supported target technologies (CICS/DB2 and, to a lesser extent, Oracle). Critics argue that prototyping with this class of tool requires developers to model an unreasonable amount of detail before they can present the prototype.

Object-Oriented Programming (OOP)

OOP is a disciplined programming style that incorporates three key characteristics: encapsulation, inheritance, and dynamic binding. These characteristics differentiate OOP from traditional structured programming models, in which data has a type and a structure, is distinct from the program code, and is processed sequentially. OOP builds on the concepts of reuse through the development and maintenance of class libraries of objects available for use in building and maintaining applications.

  • Encapsulation joins procedures and data to create an object, so that only the procedures are visible to the user; data is hidden from view. The purpose of encapsulation is to mask the complexity of the data and the internal workings of the object. Only the procedures (methods) are visible to the outside world for use.

  • Inheritance passes attributes to dependent objects, called descendants, or receives attributes from objects, called ancestors, on which the objects depend. For example, the ancestor airplane defines the structures common to all airplanes; the descendant jet inherits all the properties of airplane and adds its own, such as being nonpropeller-driven; the child F14 inherits all the properties of airplane and jet and adds its own properties: speed greater than 1,400 mph and a climb rate greater than 50 feet per second. (This hierarchy is sketched in code after this list.)

  • Dynamic binding is the process whereby linking occurs at program execution time. All objects are defined at runtime, and their functions depend on the application's environment (state) at the time of program execution. For example, in a stock management application, the function called program trading can sell or buy, depending on a large range of economic variables that define the current state. These variables are transparent to the user who invokes the trade process.

  • Class library is a mature, tested library of reusable code that provides application-enabling functions such as help management, error recovery, function key support, navigation logic, and cursor management. The class library concept is inherent to the SDE concept and, in combination with the standards and training fundamentals, accounts for much of the productivity gain and error reduction encountered in projects that use an SDE.
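The airplane hierarchy above maps directly onto C++, one of the object-oriented languages named later in this section. This minimal sketch shows all three characteristics at work (the specific members are invented for illustration):

    #include <iostream>
    #include <memory>
    #include <vector>

    // Encapsulation: data is private; only methods are visible to callers.
    class Airplane {
        double wingspanMeters_;   // hidden state
    public:
        explicit Airplane(double wingspan = 12.0) : wingspanMeters_(wingspan) {}
        virtual ~Airplane() = default;
        double wingspan() const { return wingspanMeters_; }
        virtual const char* describe() const { return "airplane"; }
    };

    // Inheritance: Jet receives everything Airplane defines and adds its own.
    class Jet : public Airplane {
    public:
        const char* describe() const override { return "jet (nonpropeller-driven)"; }
    };

    // F14 inherits from both levels of the hierarchy and adds its own limits.
    class F14 : public Jet {
    public:
        const char* describe() const override {
            return "F14 (over 1,400 mph, climbs over 50 ft/s)";
        }
    };

    int main() {
        std::vector<std::unique_ptr<Airplane>> fleet;
        fleet.push_back(std::make_unique<Jet>());
        fleet.push_back(std::make_unique<F14>());
        for (const auto& plane : fleet)
            std::cout << plane->describe() << "\n";   // dynamic binding
    }

The loop at the end is the dynamic-binding step: the same call resolves to a different describe() depending on the object actually present at execution time.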

Object-oriented programming is most effective when the reusable components can be cut and pasted to create a skeleton application. Into this skeleton the custom business logic for this function is embedded. It is essential that the standard components use dynamic binding so that changes can be made and applied to all applications in the environment. This provides one of the major maintenance productivity advantages.

Certain programming languages are defined to be object-oriented. C++, Objective C, Smalltalk, MacApp, and Actor are examples. With proper discipline within an SDE, it is possible to gain many of the advantages of these languages within the more familiar environments of COBOL and C. Because the state of development experience in the client/server world is immature, it's imperative for organizations to adopt the discipline of OOP to facilitate the reuse of common functions and to take advantage of the flexibility of global changes to common functions.

Objects are easily reused, in part because the interface to them is so plainly defined and in part because of the concept of inheritance. A new object can inherit characteristics of an existing object "type." You don't have to reinvent the wheel; you can just inherit the concept. Inheritance gives a concise and precise description of the world and helps code reusability, because every program is at the level in the "type hierarchy" at which the largest number of objects can share it. The resulting code is easier to maintain, extend, and reuse.

A significant new component of object-oriented development has been added with the capability to use server objects with RPC requests. During 1994, the introduction of CORBA-compliant object stores will dramatically open the client/server paradigm to the "anything anywhere" dimension. Objects will be built and stored on an arbitrary server for use by any client or server anywhere. The earliest implementations of this model are provided by NeXT with its Portable Distributed Objects (PDO) and Sun's Distributed Objects Everywhere (DOE) architecture.

And what about the object-oriented database management system (OODBMS)? It combines the major object-oriented programming concepts of data abstraction, encapsulation, and type hierarchies with the database concepts of storage management, sharing, reliability, consistency, and associative retrieval.

When is an OODBMS needed, and when will an extended relational database management system (DBMS) do? Conventional database management products perform very well for many kinds of applications. They excel at processing large amounts of homogeneous data, such as monthly credit card billings. They are good for high-transaction-rate applications, such as ATM networks. Relational database systems provide good support for ad hoc queries in which the user declares what to retrieve from the database as opposed to how to retrieve it.

As we traverse the 1990s, however, database management systems are being called on to provide a higher level of database management. No longer will databases manage data; they must manage information and be the knowledge centers of the enterprise. To accomplish this, the database must be extended to

  • Provide a higher level of information integration

  • Store and retrieve all types of data: drawings, documents, fax, images, pictures, medical information, voice, and video

Many RDBMS products already handle binary large objects (BLOBs) in a single field of a relation. Many applications use this capability to store and provide SQL-based retrieval of digital laboratory data, images, text, and compound documents. Digital's Application Driven Database Systems (ADDS) have been established to enable its SQL to handle these complex and abstract data types more explicitly and efficiently.

But applications that require database system support are quickly extending beyond such traditional data processing into computer-aided design (CAD) and CASE, sophisticated office automation, and artificial intelligence. These applications have complex data structuring needs, significantly different data accessing patterns, and special performance requirements. Conventional programming methodologies are not necessarily appropriate for these applications and conventional data management systems may not be appropriate for managing their data.

Consider for a moment the factors involved in processing data in applications such as CAD, CASE, or generally in advanced office automation. The design data in a mechanical or electrical CAD database is heterogeneous. It consists of complex relationships among many types of data. The transactions in a CASE system don't lend themselves to transaction-per-second measurement; transactions can take hours or even days. Office automation applications deal with a hierarchical structure of paragraphs, sentences, words, characters, and character attributes along with page position and graphical images. Database access for these applications is typically a directed graph structure rather than the kind of ad hoc query that can be supported in SQL. Each object contains within its description reference to many other objects and elements. These are automatically collected by the object manager to provide the total view. In typical SQL applications, the developer makes explicit requests for related information.

In trying to manipulate such complex data using a relational system, a programmer writes code to map extremely complex in-memory data structures onto lower-level relational structures using awkward and resource-intensive recursive programming techniques. The programmer finds himself or herself doing database management instead of letting the DBMS handle it. Worse, even if the programmer manages to code the translation from in-memory objects to relational tables, performance is unacceptable.
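A toy example makes the mismatch concrete (the structure and the table are invented for illustration): in memory, a design is a pointer-linked graph, but a relational store forces the programmer to flatten it into parent-child rows and rebuild it recursively on every load:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // In memory: a design is a directed graph of parts referencing subparts.
    struct Part {
        std::string name;
        std::vector<std::shared_ptr<Part>> subparts;
    };

    // To store this relationally, the graph must be flattened into
    // (parent, child) rows -- and recursively reassembled on every read.
    void flatten(const Part& p) {
        for (const auto& sub : p.subparts) {
            std::cout << "INSERT INTO assembly(parent, child) VALUES ('"
                      << p.name << "', '" << sub->name << "');\n";
            flatten(*sub);   // recursion the relational model forces on us
        }
    }

    int main() {
        auto wing = std::make_shared<Part>(Part{"wing", {}});
        auto flap = std::make_shared<Part>(Part{"flap", {}});
        wing->subparts.push_back(flap);
        Part plane{"plane", {wing}};
        flatten(plane);
    }

The traversal itself is natural in the object model; it is the translation to and from flat rows that consumes programmer effort and machine resources.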

Thus, relational systems have not been any help for the programmer faced with these complex coding tasks. The object-oriented programming paradigm, on the other hand, has proven extremely useful. The complex data structures CAD and CASE programmers deal with in memory are often defined in terms of C++ or Smalltalk objects.

It would be helpful if the programmer didn't have to worry about managing these objects, moving them from memory to disk, then back again when they're needed later. Some OOP systems provide this object "persistence" just by storing the memory image of objects to disk. But that solution only works for single-user applications. It doesn't deal with the important concerns of multiuser access, integrity, and associative recall.
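A sketch of that naive memory-image persistence, for a deliberately pointer-free record (the names are invented for illustration), shows both its appeal and its limits:

    #include <cstdio>
    #include <cstring>

    // A plain, pointer-free record whose memory image can be dumped directly.
    struct Account {
        char owner[32];
        double balance;
    };

    int main() {
        Account a{};
        std::strcpy(a.owner, "Smith");
        a.balance = 100.0;

        // "Persistence" by writing the raw memory image to disk...
        std::FILE* f = std::fopen("account.dat", "wb");
        if (!f) return 1;
        std::fwrite(&a, sizeof a, 1, f);
        std::fclose(f);

        // ...and reading it back in a later session.
        Account b{};
        f = std::fopen("account.dat", "rb");
        if (!f || std::fread(&b, sizeof b, 1, f) != 1) return 1;
        std::fclose(f);
        std::printf("%s: %.2f\n", b.owner, b.balance);
    }

One process can round-trip its own data this way, but nothing here arbitrates concurrent writers, recovers from a failure mid-write, or answers a query such as "all accounts over $50."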

Persistence means that objects remain available from session to session. Reliability means automatic recovery in case of hardware or software failures. Sharability means that several users can access the data. All of these qualities may require systems that are larger than many currently available. In some cases, of course, programmers aren't dealing with overwhelmingly complex data, yet want to combine the increased productivity of object-oriented programming with the flexibility of an SQL DBMS. Relational technology has been extended to support binary large objects (BLOBs), text, image and compound documents, sound, video, graphics, animation, and abstract data types. As a result, organizations will be able to streamline paper-intensive operations to increase productivity and decrease business costs—assuming they use a database as a repository and manager for this data.

1. Index Group Survey, Fortune 1000, December 1990.
