
Open Applications Architecture:
A User-Oriented Reference Model for
Standardization of the Application Platform

Computer Standards & Interfaces 11 (1990) 15-27

Roger Clarke **

© Xamax Consultancy Pty Ltd, 1990

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/SOS/OAA-1990.html


Abstract

The ISO OSI reference model, which has provided an effective framework for the vendor-independent standardization of data communications, has been developed from the perspective of the communications engineer. Now that standardization is focussing on application development and maintenance, it is essential that the perspective of users be adopted, via their proxies, application software managers. Open Applications Architecture is presented. It is a pragmatically based framework capable of providing a user-oriented reference model for the platform on which application software depends.


1. The User Need

Information Technology's utility to an organisation derives from application software. The primary focus of IT Managers should therefore be on Application Software Management (ASM), a concept which embraces all phases of the application software life-cycle, from planning, conception and design; through acquisition, customisation and construction; and implementation and use; to maintenance, adaptation and enhancement.

The platform on which application software depends has historically occupied a great deal of the time of IT Managers. The stacks of standards which have been specified within the ISO OSI model are decreasing the uncertainty and instability, and thereby releasing more of the IT Manager's time for tasks of greater and more direct benefit to the organisation. This is timely. Senior executives of organisations no longer have patience with technically oriented IT managers, and have increasing expectations of IT as a source of direct and in some cases decisive support for their organisation's strategic and competitive objectives [37,3,31,28,10,24,40]. Another relevant source of change in the role of IT Managers has been the rapid growth in end-user computing. Reflecting these developments, there is increasing discussion in the business literature of the need for an IS architecture (e.g. [24]).

Organisations already have large portfolios of existing, relatively mundane, but operationally critical, applications. The increased emphasis on strategic and competitive applications is bringing into sharp focus the importance to corporations of protecting their investment in application software. Their systems must be capable of swift adaptation, to enable clients to be provided with new services, and innovative moves by competitors to be countered.

One of the vital elements of Information Systems strategy is the ability to transport applications quickly, cheaply and reliably to alternative computing and communications hardware and software. Portability has a number of different facets, including:

Standardization is vitally important to fulfilling the objectives of portability and interoperability. Valuable as standardization has been to date, however, it has not yet reached the point where application software managers are directly served. Not only must the data communications aspects of the IT platform be standardized; a reference model must also emerge which encompasses the other aspects of the platform: operating systems, application software delivery vehicles, configuration management, user access software and user utilities.


2. The Engineering Origins of ISO OSI

The ISO OSI (the International Standards Organisation's reference model for Open Systems Interconnection) has been central to the highly effective standardization movement in the area of data communications. Unlike proprietary architectures, especially IBM's Systems Network Architecture (SNA), ISO OSI's purpose was to provide a framework for a family of worldwide, vendor-independent standards.

ISO OSI adopted a 'bottom-up' perspective, appropriate to the work of communications engineers, by beginning with traffic on a channel, and imposing order on it. The perspective appropriate to the user of IT facilities, on the other hand, is 'top-down', viewing computing and data communications facilities merely as the delivery mechanism: a reliable and secure data communications environment is just as much an environmental given for the application software user and developer as twisted-pair cable is to the communications engineer.

It is arguable that the ISO OSI model is intended to incorporate application software, because the seventh layer began life as the Applications Layer (e.g. [43] p.21). As standardization has proceeded, however, that layer's scope has been constrained, and it has become concerned with Application Services (in particular file transfer, messaging and directories) rather than applications themselves.

The technology which supports application software is too complicated to be dealt with in a single layer. To satisfy the needs of his users, the IT Manager must provide his software professionals, and his more 'computer-literate' end-users, with a collection of powerful tools, integrated within a cohesive environment. Query languages, DBMS, 4GLs, free-text retrieval languages, spreadsheet modellers, security menus, graphic presentation tools, desk-top publishing tools, spelling checkers, mail-box filtering tools and statistical and econometric packages are only some of the portfolio of products which must be available. In short, ISO OSI addresses needs underlying Application Software Management, but not the needs of ASM itself.


3. Standardization of the Application Platform

International standards bodies have recognised the need to move beyond Application Services. One relevant project currently in progress is sponsored by CCITT SG/VII Q19 under the name Distributed Applications Framework (DAF). Due to its parent's constitution, the team will find it very difficult to move away from its communications base, and DAF therefore seems unlikely to address the full needs of management. A similar project is being undertaken by the International Standards Organisation (ISO), whose origins are also in engineering, but whose charter is much broader than communications. Following preliminary work commenced in 1986, ISO has established Working Group JTC1/SC21/WG7 on what it refers to as Open Distributed Processing (ODP). The group held its inaugural meeting in December 1988 in Sydney, Australia, with subsequent meetings in Paris (April 1989), Florence (November 1989) and Seoul (May 1990). During late 1989, steps were taken to coordinate the work of the CCITT DAF and ISO ODP groups.

On the basis of current documents (e.g. [25,26,27,44,11,12]), it appears that ODP is to address the matters which are the subject of this paper. Indeed its scope is a little broader, since it is also intended to address the questions of alternative development methods, and analysis-phase issues such as enterprise-level modelling.

The relationship between ODP and the 7-layer ISO OSI reference model is as yet unclear. Rather than being based directly on the OSI layers model, the preliminary conceptual discussions have concerned five different 'projections', viz. the enterprise projection (at the level of analysis and requirements), the information, processing and engineering projections (which make up the logical and technical design) and the technology projection (which deals with the realisation of that design on the available facilities).

It may be that ODP will be squeezed into Layer 7 alongside the several Application Services for which standards are in the process of being finalised. Alternatively ODP may involve one or more additional layers above ISO OSI Layer 7. There are likely to be some significant political difficulties in the negotiation of ODP's placement, particularly in view of the number of different organisations which have interests to protect. Relevant Working Groups within SC21 include:

Even if the political problems are resolved, it seems very likely that the resulting structure will have been determined from the perspective of data communications, and will be a hindrance to progress from the point of view of the IT and application software managers. For example, there is no apparent place in ISO OSI for application software, for development tools, or even for Operating Systems services other than those related to data communications; and Database Management System (DBMS), Information Resource Dictionary System (IRDS)/Meta-DBMS services, and user and developer access seem to be conceived rather differently from the conventional application software viewpoint.

This paper argues that an extended OSI is unable to satisfy the need that user-organisations have for a perspective entirely different from those of communications and computer engineers, and that it is highly desirable that ISO allow ODP to escape very early in its life from beneath the protective but constraining wings of its OSI parent. Before presenting a framework which it is proposed can meet that need, it is important to abstract from ISO OSI some key concepts which will be of great value in such an exercise.


4. ISO OSI as Meta-Model

The great success of the ISO OSI framework in rationalising the data communications arena suggests that some of the notions which underpin it may provide an appropriate meta-model for OAA. The most critical concept involved is that of 'layers'. This refined form of modularisation uses a hierarchy of nested levels to decrease complexity, quarantine problems and enable parts of the underlying platform to be replaced, with minimal interference with operational performance.
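The layering notion translates directly into software terms. The following sketch (in C, with invented names throughout) shows a layer exposing its services only through a narrow interface, so that the module beneath can be replaced without disturbing the layer above:

    #include <stdio.h>

    /* A hypothetical 'transport' interface: the only view the layer
       above has of the layer below. */
    typedef struct {
        int (*send)(const char *data, int len);
        int (*receive)(char *buf, int maxlen);
    } transport_if;

    /* One interchangeable implementation of that interface. */
    static int loop_send(const char *data, int len)
    {
        (void)data;
        printf("loopback: sending %d bytes\n", len);
        return len;
    }
    static int loop_receive(char *buf, int maxlen)
    {
        (void)buf; (void)maxlen;
        return 0;                        /* nothing queued */
    }
    static transport_if loopback = { loop_send, loop_receive };

    /* The layer above binds to the interface, never to the module
       beneath it, so 'loopback' could be replaced (say, by an X.25
       module) with no change here. */
    int deliver_message(transport_if *t, const char *msg, int len)
    {
        return t->send(msg, len);
    }

    int main(void)
    {
        return deliver_message(&loopback, "hello", 5) == 5 ? 0 : 1;
    }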

Each layer is:

Each component:

According to Tanenbaum ([43], p. 15 citing [47]), the major principles used in determining the ISO OSI layers were that:

The following section proposes a conceptual framework which takes into account the insights gained from the successful ISO OSI data communications standardization movement, but whose purpose is to offer a basis for internationally standardized protocols which serves the perspective of application software managers, and hence users.


5. Open Applications Architecture

A number of conventional distinctions have long existed in the software development world. For example, systems software has been differentiated from application software, and operating systems from teleprocessing monitors. The proposal which follows takes many of these distinctions into account, uses some, and discards others.

The first distinction which appears essential is between the Operating Environment and the Development & Maintenance Environment. By the Operating Environment (OE) is meant the computer and communications hardware and software platform which delivers functionality to the ultimate users of the application. The Development & Maintenance Environment (DME) is the platform which supports the creation of the application, and its testing, debugging, correction, modification and enhancement. The staff who use the DME require appropriate education and training, and will in most cases be software professionals. End-users themselves are, however, performing an increasing proportion of development and maintenance.

A reference model must make explicit the relationship between the two environments. Application testing and installation are transitional between the DME and OE, and the staff responsible for these activities require some degree of access to both. Moreover, for economic reasons, the OE and the DME platforms are often common at least up to some level. There is no reason in principle, however, why the DME and OE should necessarily run under the same operating system or even on the same hardware. Where the product is developed entirely on a foreign machine, it may be necessary to port it into a small host DME for compilation, linkage and testing. It is, however, entirely feasible for the DME and OE to share no common components whatsoever, provided that a full-functionality Programmer Workbench exists, including cross-compilers or their equivalents (not only for processing code, but also for data schemas and I/O specifications), together with an appropriate test environment.
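As an illustration of such foreign-machine development, a Programmer Workbench will typically isolate target dependencies so that one body of source can be compiled natively in the DME for testing and cross-compiled for the OE. The fragment below is a minimal sketch; the TARGET_BUILD macro and the target routine are invented for the purpose:

    /* Illustrative only: TARGET_BUILD and target_console_write() are
       invented.  The same source is compiled with the host's tools for
       testing in the DME, and cross-compiled for the target OE. */
    #ifdef TARGET_BUILD
    extern int target_console_write(const char *s);  /* from the target's run-time */
    #define console_write(s) target_console_write(s)
    #else
    #include <stdio.h>
    #define console_write(s) fputs((s), stdout)      /* host DME substitute */
    #endif

    int main(void)
    {
        console_write("application output\n");
        return 0;
    }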

The layers proposed for the DME and OE have been chosen pragmatically, but with regard to the principles identified in the previous section. It is convenient to group them into hardware, systems software and application software tiers. The application software tiers of the two Environments are quite different, reflecting their very different purposes. The lower layers are similar in function, since they share the purpose of putting the underlying facility to work. Figure 1 shows the two environments and the layers proposed.

The next two sections discuss the various layers of each of the two environments which make up the Open Applications Architecture notion. It is desirable that every Layer be satisfied by one or more standards, which define the functionality that is available, and the interfaces between that Layer and those adjacent to it. No attempt is made in this paper to provide a formal specification.


6. The OAA / OE

The Operating Environment comprises a hardware layer, a systems software tier comprising three layers and an application software tier also of three layers. The application software tier provides the user with the functionality required, and access to it. The systems software tier provides a wide range of services to the application software layers. These include management of the network, such that the higher levels need no knowledge of the physical whereabouts of any of the processors, processes, users, data or storage or I/O devices involved. The hardware tier provides raw storage, processing and communication facilities.

This section presents each of the Layers of the Operating Environment of Open Applications Architecture, commencing at the deepest level. Brief discussion indicates existing products and formal standards which are relevant to each Layer.

6.1 The Networked Hardware Layer

This layer comprises computing and communications equipment. It is arguable that this layer should also be defined to include low-level software which is inherent in the hardware, such as bootstrap routines and Ethernet cards. In this way, a significant amount of the software defined within data communications frameworks such as ISO OSI and SNA (at least up to Layer 3, and probably also Layer 4) can be treated as being at this lowest layer of the platform.

6.2 The Distributed Operating System Layer

There appears to be a considerable degree of unanimity as to the services provided by the operating systems layer. Referring to standard texts such as Peterson and Silberschatz [38] - see also Lister [30] and Calingaert [8] - Marty [33] proposes an internal structure for this layer comprising main-memory management, process management and interprocess communications, low-level input/output communications with peripherals, and the management of file systems.

In operating systems whose antecedents are to be found in the mini-computer era (e.g. Unix, VMS and VM), the management of data communications between the workstation and the server is intrinsic to the operating system. Operating systems whose provenance goes back further in time than that (e.g. MVS) are inherently batch-oriented, and data communications management is performed in conjunction with co-requisite software generically called teleprocessing or transaction processing monitors such as CICS [13].

It is proposed that not only low-level communications, but data communications generally, belongs in this Layer, because to end-users data communications is a deep-nested facility. This has the effect of incorporating ISO OSI Layers 5-7 (and its equivalents in other schemes such as SNA) within the OAA Distributed Operating System Layer. This includes the whole of ISO OSI Layer 7 services, including file transfer (FTAM), messaging (X.400), directories (X.500), and in due course office document standards. These ISO OSI Layers define those requirements of the lower levels of OAA which relate to data communications, but by no means all of the requirements.

There have been attempts in the past to establish standards for the whole of the Operating System Layer. VM was a partially successful attempt by IBM to overcome the rift between its many different Operating Systems for 370-series equipment, by providing a 'host OS' under which others could run as guests. Unix is a family of OS which supports fairly easy porting of software because of its implementation (predominantly) in an externally-specified third-generation language. It has become very widespread (so widespread in fact that there have been at least several serious and seriously competitive attempts to fully standardize it: by AT&T - from whom it originally escaped; by the Open Software Foundation - OSF; and by a European group, COS). The Pick operating system is another 'generic' operating system which has enjoyed considerable success across a wide variety of manufacturers' machines. To complicate the picture, a number of different Unix variants host a number of different Pick variants.

During the late 1980s, there has been a significant trend away from large numbers of incompatible proprietary operating systems towards a small number of semi-standardized operating systems. Major players at the beginning of the 1990s include MS-DOS, Macintosh and gradually OS/2 at the low end; Unix and perhaps gradually OS/2 on workstations; Unix, VMS, and some remaining proprietary products such as PRIMOS and OS/400, in the midrange; and MVS, Unix, and a small number of other proprietary systems at the high end. Most industry watchers are anticipating further concentration during the 1990s.

An alternative approach to standardization within this layer relates to the interface between executable code and the operating system. The POSIX standard appears to be gaining considerable acceptance, which may result in the co-existence of a number of proprietary and industry-standard OS with rather different orientations, strengths and weaknesses.
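As an illustration, an application written strictly to the POSIX system-call interface, as in the minimal sketch below, binds itself only to that standardized boundary, and not to the internals of any one vendor's operating system:

    #include <fcntl.h>     /* open() - POSIX */
    #include <unistd.h>    /* read(), write(), close() - POSIX */

    /* Copies a file using only POSIX calls.  Any operating system
       offering the POSIX interface can run it unchanged, whatever its
       internal structure. */
    int copy_file(const char *from, const char *to)
    {
        char buf[4096];
        ssize_t n;
        int in  = open(from, O_RDONLY);
        int out = open(to, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0)
            return -1;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, n) != n)
                return -1;
        close(in);
        close(out);
        return n < 0 ? -1 : 0;
    }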

6.3 The Distributed Database Management Layer

It is arguable that, just as data communications is treated as being embedded within the Operating System Layer, so too should data management. There are a number of popular products which do this, such as the widely implemented Pick operating system, and the IBM S/38 and AS/400 Operating Systems. In the marketplace, however, the popular DBMS have tended to be separate products associated with, rather than intrinsic to, operating systems. This is in part a result of the success of third-party systems software suppliers, who have successfully competed with the hardware/OS supplier and supplied DBMS into, for example, MVS and VMS sites. It therefore seems pragmatically more sensible to identify a separate Database Management Layer.

It is essential that this Layer ensure that the location of data be transparent to the Layers above it. This has been a theoretically solved problem for some years, awaiting the emergence of operating systems (such as Apollo) with at least primitive support for distributed networks, and developers who can apply the capability. Some products, such as Oracle and Ingres, have now gone so far as to offer seamless data management even within heterogeneous networks (i.e. in which some nodes are running a DBMS different from that run by the host, e.g. mixed DB2, SQL/DS, Ingres and Oracle networks).
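The essence of location transparency can be conveyed in a few lines. In the sketch below (node and table names invented, the catalogue much simplified), the application names data logically, and the Layer resolves the name to whichever node currently holds the data:

    #include <stdio.h>
    #include <string.h>

    /* A much-simplified catalogue mapping logical tables to network
       nodes.  In a real distributed DBMS the Layer maintains this
       itself; the application above never sees it. */
    struct catalogue_entry { const char *table; const char *node; };

    static struct catalogue_entry catalogue[] = {
        { "CUSTOMER", "sydney-db" },
        { "ORDERS",   "melbourne-db" },
    };

    static const char *locate(const char *table)
    {
        for (size_t i = 0; i < sizeof catalogue / sizeof catalogue[0]; i++)
            if (strcmp(catalogue[i].table, table) == 0)
                return catalogue[i].node;
        return NULL;
    }

    /* The application asks for data by logical name only; where the
       data lives, or moves to, is invisible to it. */
    int fetch(const char *table, const char *key)
    {
        const char *node = locate(table);
        if (node == NULL)
            return -1;
        printf("routing query on %s (key %s) to node %s\n", table, key, node);
        return 0;    /* a real Layer would now ship the query to 'node' */
    }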

6.4 The Run-Time Environments Layer

The data and processing specifications which make up application software are in many cases delivered in directly-executable form, such that they may be directly invoked, and require no further processing before providing the intended functionality. Such code is often referred to as 'binaries' or 'machine-code', or (misleadingly) 'object code'. In such cases this layer may contain no more than run-time libraries of pre-written and -compiled or -assembled subprograms or subroutines (e.g. [19]). This technique is used in many products to simplify the developer's task, and provide highly execution-efficient code for critical or frequently-used functions. It is also used to provide specific capabilities which have not yet become embedded in the operating system, as is currently the case with graphics libraries.

It is not uncommon, however, for application software to be delivered in a form which requires further processing. One of the common forms is 'fully-interpretive' code (such as that for most implementations of BASIC and Interactive SQL), which requires considerable translation effort by an 'interpreter'. Another class of product has arisen because software developers have striven for portability of their own products, and have taken to delivering software in the form of 'p-code' or 'pseudo-code'. This is a low-level code, designed to be very similar to the instruction sets of the various targeted machines, and hence to require only a relatively simple and efficient 'run-time interpreter'. Another form of non-directly-executable code is common with 4GLs of the 'application generator' type. This involves delivery of the application software as parameter tables. A 'run-time table processor' uses the parameters to specialise a template, and deliver machine-code for execution.
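The flavour of a run-time interpreter is readily conveyed. The sketch below executes a stream of p-code on a small evaluation stack; the four-instruction code is invented purely for illustration, and real p-codes are of course far richer:

    #include <stdio.h>

    /* An invented miniature p-code: PUSH places a literal on the
       stack, ADD combines the top two entries, PRINT emits the top
       entry, HALT stops. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    void interpret(const int *code)
    {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];        break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[sp-1]);     break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* The delivered application 'binary': compute and print 2 + 3. */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        interpret(program);
        return 0;
    }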

6.5 The Applications Layer

The function of this layer is to provide the functionality which the user requires. When application software is delivered from the Development & Maintenance Environment, it comprises both static components (chiefly data and I/O format descriptions) and dynamic components (particularly programs, but also sub-programs and procedures or scripts). The static components may be delivered embedded within the processing code, but it has been increasingly common for schemas to be delivered separately (see for example [18]). Of course, as object-oriented programming languages reach the market-place, the static and dynamic elements will tend to be merged again.

The Applications Layer does not distinguish between software custom-built for the organisation, packages customised for it, and those installed directly 'off the shelf'. In all cases, a Development & Maintenance Environment exists, and is independent of the Operating Environment. Whether that DME runs on the same machine, another in the same machine-room, another on the same network, or another not even indirectly connected with it, and whether the application software professionals are employed by the same organisation, work under contract to it, or work for an independent company, is a management question, irrelevant to the technical issues which are the concern of OAA.

X/Open is a joint effort by suppliers and major users to define (possibly among other things) a Common Applications Environment (CAE). At present it is endeavouring to do this by nominating a set of national and international standards which it believes satisfy the needs of its constituents. CAE-compliant application software would be guaranteed to run without modification in a wide range of (tightly defined) target hardware and systems software environments. It would therefore provide standardization across several Layers of the OAA Operating Environment. IBM's Systems Application Architecture (SAA) Common Programming Interface (CPI) is a candidate de facto standard for this Layer.

6.6 The Presentation Layer

The purpose of this Layer is to enable the user to dictate the form in which application functionality and data appear.

One cluster of standards is concerned with syntactic translation between different selection languages (such as menu-driven front-ends which translate user requirements into command language), and between different report generators and query languages (e.g. from a front-end graphical query language such as QBE into an underlying and executable language such as SQL).
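A toy rendering of such a translation (in C, with the table and field names invented) shows the principle: a filled-in QBE-style template is rewritten mechanically into the executable query language:

    #include <stdio.h>

    /* A QBE-style 'example' row, filled in by the user: a non-empty
       cell becomes a selection condition.  Names are invented. */
    struct example_row {
        const char *table;
        const char *column;
        const char *condition;     /* e.g. "> 1000", "= 'SMITH'" */
    };

    /* Mechanically rewrites the template into SQL for the layer below. */
    void translate(const struct example_row *r, char *sql, size_t len)
    {
        snprintf(sql, len, "SELECT * FROM %s WHERE %s %s;",
                 r->table, r->column, r->condition);
    }

    int main(void)
    {
        struct example_row r = { "EMPLOYEE", "SALARY", "> 1000" };
        char sql[256];
        translate(&r, sql, sizeof sql);
        puts(sql);   /* SELECT * FROM EMPLOYEE WHERE SALARY > 1000; */
        return 0;
    }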

Another group of standards provides the user with the ability to control data extracted from the application software, and to direct output to different devices (e.g. hard-copy, soft-copy and disk files), and to other users. Important among these standards are those which enable data to be interchanged among applications and utilities, such that the functionality available to the end-user appears as close as possible to 'seamless'. Current de facto standards in this area include ASCII and several major WP formats for free-format data, DIF and SYLK for tabular data, PostScript for page description, and ISO ODA for document publishing. Clearly, the original notion of Office Automation (i.e. as an environment distinct from transaction data processing, management reporting and decision support systems) is no longer tenable.

Products which belong to this Layer include spreadsheet modellers, and statistical analysis, presentation graphics and desktop publishing packages. The Layer also includes tools for ad hoc data access, such as report-generators and the various query languages, including structured (e.g. SQL), template-based (e.g. QBE), natural (e.g. English and Intellect) and free-text (e.g. the query component of Status). The historical differentiation between these two classes of product (in that report-generators have been oriented toward the presentation of printed reports, and therefore provide more power in the layout of many records in columnar format and the provision of report-, section- and page-headers and -footers; whereas query languages have been oriented toward powerful selection conditions to display smaller amounts of data on screen) is steadily disappearing.

It is arguable that the various user tools (sometimes referred to as 'user utility software') defined to be in this Layer should really be part of the Applications Layer, or even part of the Development and Maintenance Environment rather than the OE. However, it seems to be on balance better to define the Applications Layer to comprise only the relatively fixed, pre-programmed components, and treat these 'views' (in the very broadest sense) of the application data and processing as part of this separate Presentation Layer.

6.7 The Access Layer

The functions of the Access Layer are user authentication, and provision to the user of means of choosing the particular function he wants, in a convenient and consistent manner. Needless to say, a single environment should give access to all capabilities available to the user, irrespective of the machines on which these run, whether the applications are custom-built or purchased as packages, and whether they are transaction-processing applications, DSS tools or general-purpose 'Office Automation' products. It appears likely that the mainstream in the early 1990s will be WIMP (Window, Icon, Mouse and Pull-down or Pop-up menus) interfaces and workplace metaphors. It is a fundamental requirement of user organisations, however, that not only must the current mainstream be available, but also future innovations and old-fashioned alternatives.

An additional requirement of this layer is that each individual must be able to customise his environment to suit himself. This involves such simple concepts as the shape, colour and blink-speed of the cursor, and the delay-time and speed of keyboard auto-repeat. It also involves more substantial variables such as the level of help and error-messages (e.g. beginner, trained, expert), and the language in which text should be displayed (e.g. English, German, Thai or Katakana). Such a 'profile file' needs to be accessible by all levels of the OE, and to recognise pre-set defaults.
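Such a profile file can be as simple as keyword-value pairs consulted against pre-set defaults. The sketch below indicates the mechanism; the keywords and the one-pair-per-line file format are invented:

    #include <stdio.h>
    #include <string.h>

    /* Pre-set defaults, overridden by whatever the profile file
       supplies.  Keywords and format are invented for illustration. */
    struct profile {
        char language[16];     /* e.g. "english", "german" */
        char helplevel[16];    /* e.g. "beginner", "expert" */
        int  blinkspeed;       /* cursor blinks per second  */
    };

    void load_profile(const char *path, struct profile *p)
    {
        /* Defaults first, so a missing or partial file still works. */
        strcpy(p->language, "english");
        strcpy(p->helplevel, "beginner");
        p->blinkspeed = 2;

        FILE *f = fopen(path, "r");
        if (f == NULL)
            return;                    /* no profile: defaults stand */
        char key[32], val[32];
        while (fscanf(f, "%31s %31s", key, val) == 2) {
            if (strcmp(key, "language") == 0)
                snprintf(p->language, sizeof p->language, "%s", val);
            else if (strcmp(key, "helplevel") == 0)
                snprintf(p->helplevel, sizeof p->helplevel, "%s", val);
            else if (strcmp(key, "blinkspeed") == 0)
                sscanf(val, "%d", &p->blinkspeed);
        }
        fclose(f);
    }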

In addition to proprietary products (most notably from Xerox and Apple), there have been a number of attempts to establish de facto standards in this area, such as DRI's GEM and more recently Microsoft's Windows in the MS-DOS world, portions of IBM's SAA component Common User Access (CUA) and MIT's X-Windows. Standardization based on X-Windows is proceeding, although the area is currently highly dynamic, with a variety of proprietary variants and extensions on offer. It should be noted that these products and standards generally provide a range of services to the user, and thus straddle the OAA Presentation and Access Layers.

6.8 Conclusions

The Operating Environment comprises all of the components necessary to enable the organisation to take advantage of the available functionality. It is organised into Layers, with the intention of minimising interdependencies and complexity, and enabling tight specification of products at each level, and of the interfaces between them.

Commonly, user organisations acquire hardware and software to satisfy the bottom four OAA/OE levels from specialist suppliers. In some cases it is acquired from the hardware supplier, but this has become less common in recent years, as third party systems software suppliers have addressed particularly the DBMS and Run-Time Environment Layers. There are of course some circumstances in which it can be advantageous for a user organisation to develop or commission a new component or modify an existing one, particularly in the case of run-time libraries. However, unlike the early years of computing, there are today few user organisations which write and maintain their own Operating Systems.

The following section deals with the Environment within which application software is developed and maintained.


7. The OAA / DME

The Development & Maintenance Environment has different objectives to the Operating Environment, but has a similar structure. It comprises a hardware tier, a systems software tier of three layers, and an application software tier with only two layers. The application software tier provides the software professional with the functionality required, and access to it. The systems software tier provides a wide range of services to the software professional and to the application software layers. This includes management of the network, such that the higher levels need no knowledge of the physical whereabouts of any of the processors, processes, users, data or devices involved. The Hardware Layer provides raw storage, processing and communication facilities.

The following sections discuss each of the layers of the Development and Maintenance Environment within Open Applications Architecture.

7.1 The Networked Hardware Layer

This DME layer corresponds directly to the same level of the Operating Environment, and comprises computing and communications equipment, and possibly also embedded systems software. There is no requirement that the DME Networked Hardware Layer for any particular piece of application software be the same as the OE Networked Hardware Layer. However, there must be nothing to prevent organisations from running their DME and OE on the same equipment and network.

7.2 The Distributed Operating System Layer

This DME layer also corresponds to the same level of the Operating Environment. It is in practice not uncommon for the actual components at this level of the DME to be different from those at this level of the OE. A common example is the use of VM/CMS or MVS/TSO by software professionals, in organisations which use MVS/CICS and/or VSE/CICS for operations.

7.3 The Distributed Meta-Database Management Layer

This layer corresponds to the OE Distributed Database Management Layer, but the nature of the functions performed is rather different. The development of software involves the description of requirements, the (often only nominally) implementation-independent specification of data and processing to fulfil those requirements, and the expression of those specifications in forms which can cause complexes of IT artefacts to perform the desired actions. A vast amount of information is generated, and this information requires management. For many years that management was undertaken informally, but its importance has been increasingly recognised, under the guise firstly of Data Dictionaries (e.g. [45,16,2]), and later Information Resource Management (e.g. [35,36,42,41,21,34]) and Software Configuration Management (e.g. [5,7,6,17,1]).

The increasing degree of discipline that has been brought to the management of information about data is now culminating in what is perhaps best termed 'Meta-Data Management' (e.g. [32]). Further discussion of the central role which must be played by this Layer is to be found in [14].
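What an MDBMS manages can be suggested by a single, much-simplified entry type (the fields below are invented): each element of the developing system is itself described as data, with its definition, ownership and version history recorded:

    /* A much-simplified, invented record type for a meta-database
       entry.  The element described may be a data item, a schema, a
       program or an I/O format: the system is itself described as
       data, under configuration management. */
    struct metadata_entry {
        char element_name[32];   /* e.g. "CUSTOMER-BALANCE"            */
        char element_type[16];   /* e.g. "data-item", "program"        */
        char definition[128];    /* textual specification              */
        char owner[32];          /* responsible analyst or team        */
        int  version;            /* revision number for change control */
        char derived_from[32];   /* traceability to a requirement      */
    };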

It is critical to the quality of application software services that development and maintenance be supported by special-purpose MDBMS, as effective in their role as DBMS are in theirs. Currently a great deal of development and maintenance is, in practice, performed on a single machine. However, workstations are emerging as a fully-fledged component of corporate networks, and it is essential that this Layer support transparent distributed networks. An example of the importance of this capability is seen in the inadequacy of early 'upper-CASE' (diagram-drawing) tools, most of which have been designed for standalone PCs, even though several members of the project team need concurrent access to this meta-data.

Standardization processes for Information Resource Dictionary Systems (IRDS), which is a form of such an MDBMS, are proceeding in the context of ISO OSI. However, the degree of acceptance of the ISO IRDS will be unclear for some time, and there seems likely to be competition between a variety of standards. It would therefore be premature to bury the MDBMS concept too deeply in the OAA/OE.

7.4 The Run-Time Environments Layer

Although many of the tools used in development and maintenance exist in directly-executable form, some do not. In particular, one of the key characteristics of prototyping tools is their ability to provide the developer (and the user sitting beside him) with immediate, and therefore usually 'fully-interpretive' and heavily defaulted, execution of each newly developed data definition, I/O format and processing specification.

7.5 The Development Tools Layer

This layer comprises a rich set of alternatives. Second-generation languages require editors, assemblers and testing tools. Third-generation languages require editors, compilers, linkers, library management tools and symbolic debuggers. Fourth-generation products are of two major families: those which are built around higher-level languages (sometimes semi-facetiously referred to as '3.5 GLs') and those which support the specification of data and I/O formats and the generation of amendable and extensible programs (sometimes called 'application generators', e.g. [20]). Both types of fourth-generation product require a considerable kit of additional components.
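The application-generator idea - specifications in, amendable program source out - can be sketched in miniature. The specification format and the generated code below are invented for illustration:

    #include <stdio.h>

    /* An invented miniature 'application generator': from a table of
       field specifications it emits amendable C source for a record
       declaration, in the manner of 4GL generators (cf. [20]). */
    struct field_spec { const char *name; const char *ctype; };

    void generate_record(const char *record, const struct field_spec *f, int n)
    {
        printf("/* generated - amend as required */\n");
        printf("struct %s {\n", record);
        for (int i = 0; i < n; i++)
            printf("    %s %s;\n", f[i].ctype, f[i].name);
        printf("};\n");
    }

    int main(void)
    {
        struct field_spec customer[] = {
            { "account_no", "long" },
            { "balance",    "double" },
        };
        generate_record("customer", customer, 2);
        return 0;
    }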

The alternatives continue to multiply, as new products enable the developer to take even more abstract approaches to specifying the problem-solution, problem or problem-domain. Rule-based systems are having an impact in the market-place, and it has been claimed that, in the not-too-distant future, systems based on neural or connectionist networks will become an exploitable technology.

Although purveyors of fourth-generation products have been slow to appreciate the fact, it is economically desirable for developers to be able to specify different components of application software at any and all of these various levels of abstraction. Mature products in the Development Tools layer should therefore comprise a family of related alternatives.

Since the 1960s, third-generation languages have been subject to early and (thanks originally to the influence of the US Department of Defense) authoritative standardization. However, the many tools which make up a complete development environment have been subject only to accidental standardization (in the sense that the large number of superficially similar editors, text processors and symbolic debuggers acknowledge only a small number of ancestors). Later generation languages continue to be dominated by proprietary products with the single exception of ANSI-standardized SQL. In seeking to impose some order within its own product range, IBM's SAA coined the term Application Enablers for this Layer (although the original 1987 definition of SAA excluded third-party products).

It is arguable that the DME should retain its parallelism with the OE and include a Presentation Layer. However, software professionals appear to have less need of such tools than do end-users, and accordingly the layer has been omitted, and its functionality included in the Development Tools Layer. On the other hand, an argument may be readily mounted that this Layer should be broken down into several, to reflect the insights of the literature relating to Software Development Environments and Integrated Programming Support Environments (e.g. [23,22]).

7.6 The Access Layer

The DME Access Layer has the same purpose as that of the OE, to provide user authentication, and convenient and consistent access to the available functionality. It was for many years assumed that software professionals should have unrestricted access to the operating system. Security considerations overtook that primitive notion, and developers were then constrained to (a very large subset of) the native command language. Some classes of developer are likely to continue to need that degree of power.

As application software technology has matured, however, the skills-level required of its participants has decreased, and many programmers of the 1990s are ill-equipped to work with the full power (and potential danger) of the command language. There has therefore been a trend toward much more structured working environments for software developers, involving menu-driven rather than command language access, and full-screen rather than scrolling prompted dialogue interface. There is even evidence of acceptance by programmers of WIMP (window, icon, mouse and pull-down or pop-up menu) interfaces, particularly in workstation environments.

7.7 Conclusions

The Development & Maintenance Environment comprises all those components necessary to enable the organisation to deliver application software to its client(s). Like the OE, the DME is organised into Layers, in such a way that interdependencies and complexity are minimised, and products at each level, and interfaces between them, can be tightly specified.


8. The Prospects for User-Led Standardization

There are several different strands to the standardization movement in the area of application software. IBM's Systems Network Architecture (SNA) made vital contributions to the progress of IT by establishing a de facto standard in the data communications area, and it is possible that IBM's Systems Applications Architecture (SAA) might make a similar contribution. This might be instead of international standardization movements, in conjunction with them, or in competition with them.

There have been signs of openness beginning to emerge within SAA. In particular, AD/Cycle, the framework for systems life-cycle (SLC) management and computer-aided software engineering (CASE) tools, which was announced in late 1989 for progressive release from late 1990 onwards, recognises the need for exchange between IBM's proprietary software development tool (CSP) and third-party products. The closedness of the 1987 SAA announcements in relation to 'Application Enablers' has thereby been significantly compromised. However, SAA is explicitly oriented toward providing an umbrella whereby applications may achieve some degree of portability and inter-operability across IBM's many existing and highly disparate systems software environments. This seems very likely to preclude it from developing into a comprehensive user-oriented reference model for standards for the application platform.

Meanwhile, OSI and its Working Groups, including WG7 which deals with ODP, appear to be dominated by people with a technical background and orientation. It seems unlikely that the political balance will change sufficiently to enable the perspective of application software managers to be the primary influence in the development of ODP, and it may even be difficult for the user perspective to be incorporated. It is therefore important that alternatives be considered.

OAA was originally conceived as a planning framework for IT Managers and Application Software Managers [15]. The framework appears to have the potential to make a much more substantial contribution to the progress of information technology, by providing a basis for a reference model for the higher layers of the IT platform. There are, however, a number of factors which militate against it. The first of these is the possibility that the proposal may be flawed in important ways. Although it has been developed from long industry experience, and tested informally in a consulting context, it requires more rigorous assessment.

A second area of weakness is that it is pragmatically rather than theoretically derived. As a result it may be insufficiently visionary to accommodate all of the significant developments of the near and middle future. On the other hand, standardization is a cooperative process entailing negotiation across language and cultural barriers; it is therefore slow, and generally lags behind technology rather than leading it.

A further weakness of OAA, also associated with its pragmatic origins, is its explicit recognition of de facto as well as de jure standards. Once again, this may be a necessary feature of a realistic reference model, because at least at present there is a need to reconcile new public standards with pre-existing proprietary standards [46]. There may also be a long-term need for such reconciliation processes, because standardization by its very nature restrains creativity, and there are always likely to be new developments outside the reference model, some of which will meet a need and flourish.

Finally, it may be politically naive to propose an alternative approach at this stage in the standardization process. The institutions, processes and power relationships which determine standards in the IT arena are well-established [9,4], and attempts to change the orientation of such a large ship would be perceived by many people as an attempted hijack.


9. Conclusions

A great deal must be done to test the validity of the claims in this paper, and to confirm that the scope of the proposal is sufficiently broad to fulfil its purpose. One example of a matter in need of careful attention is the question of multi-organisational systems. Implicit in this paper is the calm assumption that OAA is a sufficient strategic planning tool for networks spanning distinct legal entities. It is vital that OAA be assessed in the context of such established and emerging multi-organisational arrangements as travel reservation systems, Electronic Funds Transfer (EFTS), EFT/POS (funds transfer at point of sale, i.e. on the merchant's premises), Electronic Data Interchange (EDI, which refers to the asynchronous transmission of electronic purchase orders, delivery dockets and associated documents between supplier and customer), and Electronic Trading (synchronous communications between trading partners to effect contracts for the sale of goods, particularly commodities).

Effective IT management is dependent on effective Application Software Management. This is in turn only possible if an effective, efficient, flexible and adaptive platform is available. The creation of a platform which will serve the needs of user organisations depends on the emergence of a user-oriented reference model such as OAA.


References

[1] W.A. Babich, Software Configuration Management (Addison-Wesley, Reading, MA, 1986).

[2] BCS, Data dictionary systems working party report, Data Base 9(2) (Fall 1977).

[3] R.I. Benjamin, J.F. Rockart, M.S. Scott Morton and J. Wyman, Information technology: a strategic opportunity, Sloan Management Rev. 25(3) (Spring 1984).

[4] J.L. Berg and H. Schumny, eds., An Analysis of the Information Technology Standardisation Process (Elsevier-North Holland, Amsterdam, 1990).

[5] E.H. Bersoff, V.D. Henderson and S.G. Siegel, Software Configuration Management (Prentice-Hall, Englewood Cliffs, NJ, 1980).

[6] E.H. Bersoff, Elements of software configuration management, IEEE Trans. Software Engineering SE-10(1) (Jan. 1984) 79-87.

[7] J.K. Buckle, Software Configuration Management (Macmillan, New York, 1982).

[8] P. Calingaert, Operating Systems Elements: A User Perspective (Prentice-Hall, Englewood Cliffs, NJ, 1982).

[9] C. Cargill, A Guide to Information Technology Standardization: Theory, Organizations and Process (Digital Press, Bedford, MA, 1989).

[10] J.I. Cash and B.R. Konsynski, IS redraws competitive boundaries, Harvard Business Rev. (Mar/Apr 1985).

[11] E. Chew, Open distributed processing - the next step beyond OSI, Working Paper, Aust. Centre for Unisys Software, Sydney, May 1988.

[12] E. Chew, Beyond OSI - open distributed processing standards for business enterprise, Proc. Conf. New Bus. Applications of Info. Technol., IEAust, Melbourne (26-27 April 1989).

[13] R.A. Clarke, Teleprocessing monitors and program structure, Austral. Comput. J. 14(4) (November 1982).

[14] R.A. Clarke, Application software technology: vital foundation of IR management in the 1990's, Proc. ACC'88, Sydney, Aust. Comp. Soc. (Sept. 1988).

[15] R.A. Clarke, Open applications architecture: a basis for I.S. strategic planning, J. Elec. Electronic Eng. Aust. 9(4) (December 1989) 143-150.

[16] The Data Dictionary/Directory Function, EDP Analyzer (Nov. 1974).

[17] L.E. Edwards, Configuration management: an overview, 33-10-10 (1984) and Configuration management: implementation, 33-10-20 (1985) in I. Auerbach, ed., Systems Development Management (Auerbach, Philadelphia, PA, 1985).

[18] M.L. Gillenson, The duality of database structures and design techniques, Comm. ACM 30(12) (Dec. 1987) 1056-1065.

[19] E. Horowitz and J.B. Munson, An expansive view of reusable software, IEEE Trans. Software Engineering SE-10(5) (1984).

[20] E. Horowitz, A. Kemper and B. Narasimham, A survey of application generators, IEEE Software (Jan. 1985).

[21] F.W. Horton, Information Resources Management (Prentice-Hall, Englewood Cliffs, NJ, 1985).

[22] R.C. Houghton, Software development tools: a profile, IEEE Comput. (May 1983).

[23] W.E. Howden, Contemporary software development environments, Comm. ACM 25(5) (1982).

[24] Implementing a new system architecture, I/S Analyzer 26(10) (Oct. 1988).

[25] ISO, Report on Topic 1 - the problem of distributed processing, ISO/IEC JTC1/SC21 N2507, Int'l Standards Org., New York, Mar. 1988.

[26] ISO, Working document on Topic 4 - system modelling, ISO/IEC JTC1/SC21 N2510, Int'l Standards Org., New York, Mar. 1988.

[27] ISO, Draft basic reference model of Open Distributed Processing, ISO/IEC JTC1/SC21/WG7 N4025, Int'l Standards Org., New York, Dec. 1989.

[28] B. Ives and G.P. Learmonth, The information system as a competitive weapon, Comm. ACM 27(12) (Dec. 1984) 1193-1201.

[29] H.R. Johnston and M.R. Vitale, Creating competitive advantage with interorganizational information systems, MIS Qtly 12(2) (June 1988).

[30] A.M. Lister, Fundamentals of Operating Systems (Macmillan, New York, 1979).

[31] F.W. McFarlan, Information technology changes the way you compete, Harvard Business Rev. (1984) 98-103.

[32] S.T. March and Y. Kim, Information resource management: a metadata perspective, MISRC Working Paper 87-15, Univ. of Minnesota, June 1987.

[33] R. Marty, Software integration, Comput. Standards Interfaces 8 (1988) 9-14.

[34] S. Navathe and L. Kerschberg, Role of data dictionaries in information resource management, Inform. Management 10 (1986) 21-46.

[35] NBS, Data base directions: information resource management - strategies and tools, U.S. National Bureau of Standards, 1980.

[36] NBS, Data base directions: information resource management: making it work, U.S. National Bureau of Standards, 1985.

[37] G.L. Parsons, Information technology: a new competitive weapon, Sloan Management Rev. (Fall 1983).

[38] J.L. Peterson and A. Silberschatz, Operating Systems Concepts (Addison-Wesley, Reading, MA, 1983).

[39] B. Shneiderman, Designing the User Interface (Addison-Wesley, Reading, MA, 1987).

[40] L. Steele, Managing Technology: The Strategic View (McGraw-Hill, New York, 1989).

[41] W.R. Synnott, The building blocks of IRM architecture, Inform. Strategy 1(1) (Spring 1985).

[49"] W.R. Synnott and W.H. Gruber, Information Resource Management (Wiley, New York, 1981, 1984).

[43] A.S. Tanenbaum, Computer Networks (Prentice-Hall, Englewood Cliffs, NJ, 1981).

[44] J.J. van Griethuysen, Press Release - Working Group 7: Open Distributed Processing, Nederland Normalisatie-instituut (NNI), Kalfjeslaan 2, Delft Netherlands, December 1988.

[45] P.P. Uhrowczik, Data dictionary/directories, IBM Syst. J. 12 (1973) 332-350.

[46] D.D. Ward, Using open systems interconnection standards as a basis for comprehensive distributed systems, Comput. Standards Interfaces 9(2) (1989) 105-112.

[47] H. Zimmermann, OSI reference model - the ISO model of architecture for open systems interconnection, IEEE Trans. Commun. COM-28 (April 1980) 425-432.


Author Affiliations

Roger Clarke is Reader in Information Systems in the Department of Commerce at the Australian National University in Canberra. Prior to taking up that appointment he spent 17 years in professional, managerial and consulting positions in the information technology industry, in Sydney, London and Zürich. His research and consulting interests are in application software technology and its management, and in economic, legal and social aspects of information technology.


