Version of 24 February 1991
Published in Database 22, 3 (Summer 1991) 23 - 34
© Xamax Consultancy Pty Ltd, 1988-91
This document is at http://www.rogerclarke.com/SOS/SwareGenns.html
The current environment in which software development is undertaken includes a mix of languages at varying levels of abstraction. The goal of this paper is threefold. First, the range of application software technologies is reviewed from a historical perspective. Second, a model based on levels of abstraction summarizes the key differences between these technologies. Third, a contingency model is proposed to guide the selection of the appropriate level of abstraction.
It is argued that the selection of the appropriate level of abstraction should be based on economic criteria mediated by political factors. The level should be chosen not once for the entire application, but for each component within an application. The search should be commenced at the level of abstraction at which the requirements are stated, and the highest level of abstraction chosen at which the particular component can, in the given environments of development and use, be practically implemented.
To support this contingency approach, an application development product must comprise a family of compatible languages and tools, such that developers have the freedom to choose at which level of abstraction they will express their solutions, problems, domain-models or domain empirical data.
The popular conception of the 'hardware generations' (based on relays, valves, transistors, integrated circuits, and successive levels of larger-scale integration) has proven a valuable means of conveying basic information about processor componentry and hence processor architecture. The history of application software technology has also been encapsulated in a series of 'application software generations', in order to communicate the historical background to programming languages.
This paper commences with a review of the software generations, and subsequently proposes an interpretation of them based on levels of abstraction. The paper then applies this interpretation of the application software generations to the choice of development language. It proposes that the choice should be made using a contingency model based on economic criteria mediated by political factors. Moreover, the choice should be made at the level of individual components within an application, rather than once for all components within it. This is only possible if application development products support development at each of the various levels of abstraction. An application development product must therefore comprise a family of compatible languages and tools, such that developers can select the most appropriate level at which to express each of the components of a system.
The paper is written from the perspective of information systems professionals and managers within commercial and governmental organisations, and accordingly gives relatively little consideration to such matters as the computational uses of computers and parallel processing. It therefore cuts across histories and analyses of programming languages, such as Sammet (1969) and Wegner (1989). The paper concerns itself with application software, and hence only with systems software (such as operating systems, database management systems, etc.) to the extent necessary to address the central question.
The early sections of the paper review the chronological unfolding of technologies, identifying the contributions and inadequacies of successive generations. Subsequent sections reinterpret the generations in terms of levels of abstraction, and present a model whereby the appropriate level of abstraction can be selected.
Several interpretations of the application software generations are possible. The conventional representation of the first three generations of software technology has distinguished machine-language from, in succession, assembler and procedural languages. See Exhibit 1.
Exhibit 1

No.  Name          The Focus of               The Highest Level
                   Human Activity             of Machine-Activity

1    Machine       expression of the          computation
     Language      problem-solution in
                   the machine's terms

2    Assembly &    expression of the          isomorphic
     Macro-        problem-solution in        translation
     Assembly      mnemonic shorthand         (assembly)

3    Algorithmic   expression of the          non-isomorphic
     or            problem-solution in        translation
     Procedural    a natural form             (compilation and
                                              interpretation)
The potential for data management and symbolic manipulation was not foremost in the minds of the first user-programmers. During the first generation, the challenge was to unlock the computer's computational capability. At first it was necessary to express the problem-solution in terms of the machine's instruction-set. Binary expression gave way to octal and hexadecimal codes for brevity, but otherwise there were few concessions to human frailties.
The increasing power of the machine was soon harnessed to undertake not just computational work, but translation tasks as well. Expressing a problem-solution in assembly and macro-assembly languages was easier than in machine-code because the symbols were mnemonic rather than arbitrary. The form of problem-solutions, however, was still constrained by the machine's instruction-set, because such languages are isomorphic with the machine language, mapping respectively one-to-one and one-to-many onto it.
In the third generation, the new languages enabled users (and the emerging breed of specialists called 'programmers') to express their problem-solutions in a form natural to the particular class of problem. Such languages had to be translated into machine instruction-sets which were constrained by electronic component technology and computer design norms. This translation task was difficult because it involved the resolution of a structure-clash (Jackson 1975), by which is meant that the mapping between the two patterns was not straightforward, but involved a reconciliation between two quite differently conceived sets of primitives.
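The distinction can be conveyed by a small sketch (expressed in Python purely for exposition; the 'machine', its opcodes and its mnemonics are invented for the purpose). It shows the same trivial computation at each of the three levels, and the one-to-one character of assembly as against the non-isomorphic translation performed by a compiler:

    # Generation 1: machine language -- numeric opcodes in the machine's
    # own (hypothetical) terms, for the computation r = a + b.
    machine_code = [0x01, 0, 0x01, 1, 0x02, 0x03]

    # Generation 2: assembly -- mnemonic shorthand, isomorphic (one-to-one)
    # with the machine code above.
    assembly = ["LOAD 0", "LOAD 1", "ADD", "STORE"]

    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}

    def assemble(lines):
        """Isomorphic translation: each mnemonic maps onto its opcode."""
        out = []
        for line in lines:
            op, *args = line.split()
            out.append(OPCODES[op])
            out.extend(int(a) for a in args)
        return out

    # Generation 3: the solution expressed in a form natural to the problem;
    # a compiler performs the non-isomorphic translation to the instruction-set.
    def add(a, b):
        return a + b

    assert assemble(assembly) == machine_code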
The genesis of the third application software generation occurred in the early 1950's, with the major theoretical advances associated with Fortran (around 1952 to 1954) and Algol (around 1958 to 1960). The impact on commercial and administrative applications came much later, particularly in the period after compilers conforming with the USASI 1968 COBOL standard began to become available after about 1970. Structured programming, whose theoretical basis was established by Böhm and Jacopini (1966), was popularised by the debates about the harmfulness of the Go To statement (Dijkstra 1968), but became a practical issue only around the mid-1970's.
During the 1970's, third generation languages underpinned considerable progress in the development of application software for data and transaction processing systems and MIS. Productivity was significantly higher than with assembly languages. Commentators suggested that the number of lines of code produced per day was comparable with that of earlier generations, and that development productivity was therefore (loosely) related to the language's 'power' (in the sense of the number of implied machine-instructions per high-level command). The skill requirements of programmers were not as high. Software quality improved, despite the fact that increasingly difficult and large projects were undertaken. Projects were more predictable and manageable, a smaller percentage of failures was experienced, and a degree of flexibility in staff assignment and in product portability was achieved.
Nonetheless, by the late 1970's, shortcomings of procedural languages were giving rise to considerable dissatisfaction, and it was apparent to information systems managers that the third generation was not sufficient. The following section examines the areas of concern.
With third generation technology, even non-trivial programs continued to require significant amounts of both human and computer resources. Moreover, the third generation's successes had resulted in more ambitious projects being conceived, and as projects became larger, additional levels of supervision and management were necessary to coordinate the larger numbers of people at the workface. The increased functional complexity was therefore exacerbated by increased organisational complexity: the addition of layers of supervisors and managers demanded more communication, and resulted in a reduction in the proportion of time that could be spent productively.
A wide variety of difficulties were identified (e.g. McCracken 1978, Ewers & Vessey 1981, Ginzberg 1981). Because third generation technology involved much more than a mere programming language and translator, flexibility was compromised. New staff required an appropriate education to provide the basis for understanding; specific training to enable them to cope with the particular language, systems software, tools, methods and systems life-cycle; and on-the-job experience to enable them to understand local jargon and values. Because educational institutions adapted themselves to the market's needs too slowly, it was difficult to hire the right people. It also took considerable elapsed time and management effort before new employees reached a satisfactory level of technical capability and hence productivity. Moreover it was difficult to keep staff long enough to capitalise on their training.
Far higher quality standards were required of the newer, larger, more complex, more highly integrated, and more organisationally important applications. There were difficulties in reading, understanding and modifying existing code (e.g. Kim & Westin 1988); both technical and user documentation left a great deal to be desired; and a lingual and cultural gulf existed between users and technical staff. Finally, the ever-growing software portfolio demanded that ever more of 'development' staff's time be invested in maintenance.
These features of third generation application software technology led to very slow delivery of new and modified functionality, and user-staff skepticism about the claims of the ever-growing information systems empires. The result of these problems was summarised (and somewhat trivialised) as 'backlogs' (Martin J. 1982).
MIS managers' concern about these issues was reflected in the results of a succession of surveys which showed U.S. managers ranking 'system development' as the first of seven Critical Success Factors (Martin E. 1982); 'software development and quality' as fourth of 19 issues, with 92.5% of respondents including it in their top 10 issues (Dickson et al 1984); and 'software development' as third of 21 issues with an importance score of 3.3 on a scale of 1 to 4 (Hartog & Herbert 1986). Australian managers similarly ranked 'improving the effectiveness of software development' fourth of 36 factors (Watson 1989). It is noteworthy that in a subsequent survey of U.S. MIS managers 'software development' was ranked only twelfth of 20 factors, with only 24% including it in their top 10 (Brancheau & Wetherbe 1987). It is far from clear, however, that this apparent change was due to improvements in application software technology. For example, the results may have reflected increasing dominance of managerial over technical considerations, and the emergence of strategic and competitive applications of IT.
Despite many attempts, it gradually became apparent that improvements within the third generation would not overcome the difficulties. For example, it was demonstrated that there had been relatively little change in productivity during the entire third generation, and that it was not possible to reliably associate superior performance with any particular practice or practices (Lientz & Swanson 1981, Behrens 1983, Jeffery & Lawrence 1985, Jones 1986). The following section discusses the various approaches whereby a fourth generation of software development products emerged.
Commencing in the mid-1970s, a variety of suggestions were put forward as to what the defining characteristic(s) of the fourth generation should be. Exhibit 2 summarises the main contenders.
Exhibit 2

Productivity
End-User Development
Prototyping Capability
Relational Data Modelling
Non-Procedural Programming
Program Generators
Very High Level Languages
Reusable Software
Object-Oriented Programming Languages
Application Generators
The productivity criterion was expressed by a leading industry commentator as follows: "A language should not be called fourth generation unless its users obtain results in one-tenth of the time with COBOL, or less" (Martin J. 1982, p.28). This casual comment was elevated by many marketing interests to the status of a definition. It was very optimistic, since improvements of this order applied only to the construction phase, and were not achievable except for very simple procedures (many of which were already migrating out to standalone 'PCs'), and even then only when compared with a very naive developer (e.g. Rudolph 1983, Harel & McLean 1985, Jeffery 1986). Such an approach was in any case unhelpful, since it could only be used ex post facto, no benchmarks were proposed, and there were no indications as to the source of the productivity gains.
Another claim was that '4GLs' must enable end-users to undertake their own software development. It was argued that telephone operators had been necessary for only 50-80 years, and that, by analogy, programmers would only be necessary as an interim arrangement (McLean 1979, Martin J. 1982). This approach overlooked the fact that the telephone system has always performed essentially the same, relatively simple function. Application software addresses an enormous variety of different functions, and with a continuing trend toward integration both within and between organisations, it continues to become more complex. End-user development could only be a partial solution; the fourth generation needed to support application software professionals as well (Benson 1983, Rockart & Flannery 1983, Guimaraes 1983, Davis 1984, Rivard & Huff 1984, Clarke 1984, and Rivard & Huff 1985).
Prototyping capabilities were claimed by some to be essential to a fourth generation language, and a good deal of attention has been paid to prototyping in the literature (e.g. Bally et al 1977, Mason & Carey 1983, Alavi 1984, Budde et al 1984, Janson & Smith 1985, Kraushaar & Shirland 1985). Some commentators blamed the problems of the third generation on the Software Development Life-Cycle and the bureaucratic structures and processes associated with it. Some of them even proposed prototyping as the basis of an alternative methodology (e.g. Jenkins & Naumann 1982).
Clearly prototyping is desirable for some aspects of some applications, including data definitions and processing logic in DSS, data schemas in MIS in uncertain domains, and I/O formats in a wide variety of applications. However it requires little reflection to recognise that some problems are amenable to disciplined analysis, and that some solutions must be more precise than the vagaries of prototyping could ever achieve. Prototyping's applicability to commercial transaction processing systems is largely confined to the human-computer interface. Furthermore, managerial concerns about the efficacy of prototyping have emerged (e.g. Ince & Hekmatpour 1987). Prototyping is too partial a solution to provide the definitional focus of the fourth generation. It appears more likely to influence methodological evolution than to constitute the desired revolution.
Some claims have been made (although mainly by gurus and marketers) that relational data modelling capability in general (Codd 1970), and SQL in particular, are fundamental to the fourth generation. Vital though those contributions are to contemporary application software technology, there is little evidence to support the idea that flat files, tables or even relations are intuitive, or provide a natural or sufficiently rich modelling language for the real world. Moreover, inverted files and hierarchical and network DBMS can also be satisfactorily used to implement Third Normal Form or Optimal Normal Form data models. Post-relational DBMS are beginning to emerge, to cope with textual, graphical and sound data as well as mere conventional, 'structurable' data. Even if Relational DBMS do temporarily dominate the data management marketplace during the 1990's, they cannot in themselves constitute a useful definition of the fourth generation of application software technology.
The following section examines the remaining alternative defining characteristic of 4GLs, that based on a move away from the construction of procedures.
The most common term used to capture the essence of the Fourth Generation has been 'non-procedural programming'. That this term should have been so freely and unquestioningly used is remarkable, since 'non-procedural' merely defines a complement. Hence an assembly language is literally non-procedural (and so too are Swahili and Creole). Rather than arising out of academic work, the fourth generation has been developed pragmatically, and the 'non-procedural' concept has been subjected to very little critical consideration (see, however, Leavenworth 1977 and Haskell & Harrison 1980).
To clarify the essence of the notion, it is necessary to consider several more-or-less parallel streams of thought. One is the emergence of program-generators. Many developers noticed how much sameness there was in much of the work that they performed. The potential for parameterising sort and merge programs had been realised in the early 1960's. Report programs were addressed soon afterwards by RPG and similar languages (Peterson 1976).
Even on-line enquiry and data entry/update programs manifest themselves in only a small variety of forms, and during the 1970s other classes of program were addressed informally by programmers throughout the world. By the end of that decade, program-generators were successfully offering general solutions to particular classes of common application problems (Roth 1983, Rudolph 1983, Horowitz et al 1985). The more primitive, 'one-pass' generators were of assistance only during the construction phase, whereas the more powerful products enabled maintenance to be undertaken on the original source, and the program re-generated (Clarke 1982). In general, such products were conceived as aids for application software professionals, and enabled labour savings in program specification, and in program-structure coding and testing.
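The flavour of such products can be conveyed by a minimal, hypothetical sketch (in Python; the parameters and the generated form are invented): parameters go in, and the complete source of a columnar report program comes out. A 'two-pass' product would retain the parameters, so that maintenance could be performed on them and the program re-generated:

    def generate_report_program(columns, widths):
        """A 'one-pass' report-program generator: emit the complete
        source of a columnar report over the named fields."""
        fmt = "  ".join("{%s:<%d}" % (c, w) for c, w in zip(columns, widths))
        return "\n".join([
            "# generated report program -- regenerate from parameters, do not edit",
            "import csv",
            "def report(path):",
            "    with open(path, newline='') as f:",
            "        for row in csv.DictReader(f):",
            "            print(" + repr(fmt) + ".format(**row))",
        ])

    # the developer supplies only parameters, not procedure:
    print(generate_report_program(["invoice", "customer", "total"], [10, 20, 8]))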
In another line of development, referred to as Very High Level Languages (VHLL's), it was claimed that productivity could be greatly enhanced by providing more abstract commands. Some products (such as Focus, Mapper, Natural and RAMIS) offered entirely new languages, but their usefulness was stunted by their lack of an underlying theory (as exemplified by the lack of distinction between blocks of procedural and non-procedural instructions, and their syntactic inconsistencies), compounded by their pragmatic, undisciplined growth. In other products, the 'higher-level' was achieved in much the same way as 3GLs and even macro-assemblers had done, by identifying and addressing some of the instances of redundancy in contemporary programming practice. For example, the success of CICS COBOL replacement products like Mantis, GENER/OL and ADS/OL was founded on this approach. Because most of the opportunities for additional, more abstract commands are restricted to particular hardware or systems software contexts, or to particular functional contexts (such as financial computations), the contribution which the VHLL movement could make, while in some cases worthwhile, was limited.
Several families of 'declarative', 'functional' and 'applicative' programming languages exist as research tools (e.g. Hudak 1989). These are distinguished from 'imperative' languages (of which procedural or algorithmic languages are the dominant type) in that they do not depend upon an 'implicit state' (i.e. the current contents of intermediate working variables), but rather on the evaluation of expressions. LISP and APL are well-known (although early and impure) examples of this class. From the viewpoint of commercial application software development, such languages can be perceived as VHLLs.
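The distinction can be illustrated by a small, assumed example (in Python): the imperative formulation depends on a working variable mutated at each step, whereas the functional formulation is simply the evaluation of an expression:

    # Both compute the total of the even numbers in a sequence.

    def total_even_imperative(xs):
        total = 0                 # implicit state: a working variable...
        for x in xs:
            if x % 2 == 0:
                total += x        # ...updated as a side-effect at each step
        return total

    def total_even_functional(xs):
        # no assignment, no mutation: the result is the value of one expression
        return sum(x for x in xs if x % 2 == 0)

    assert total_even_imperative(range(10)) == total_even_functional(range(10)) == 20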
A further, related development was the 'reusable software' movement. During the mid-1970's it was seriously proposed by many people that programming would become a succession of calls to previously written and tested routines. This approach has gained currency in academic programming circles, has become embedded in the 'software development environment', 'programming support environment', and most recently 'computer aided software engineering' (CASE) notions, and has significantly influenced the development of languages (e.g. Howden 1982, Houghton 1983, Horowitz & Munson 1984, Jones 1984 and Lanergan & Grasso 1984). Some operating systems (e.g. Unix, VMS and PRIMOS) encouraged systems software developers to work in this manner, by providing a source-code library of system routines, and convenient invocation mechanisms. However, it has not, or at least not yet, become the norm in commercial programming except to the extent that some '4GLs', usually unbeknownst to the application software developer, generate a large number of calls to pre-compiled or pre-assembled routines.
Pre-packaged application software is arguably a close relation to both the very high level language and reusable software approaches, since it involves identifying high-level functions which are common to a number of organisations. The considerable commercial success enjoyed by packages in the last decade appears likely to continue, at least in respect of support functions which are not critical success factors (typically administrative and routine bookkeeping activities), and in operational support where adherence to industry or government standards is crucial (such as customs agents, dealers' systems interfacing to stock, currency, commodity and other exchanges, and EDI interfacing software).
A further line of development has been data-driven languages, and, more recently, object-oriented programming languages. The Nordic language Simula, which was influential in the 1960s and 1970s, embodied the concept of data acting as a trigger for procedures, which was a different notion from the conventional one of procedures receiving data for processing. Beginning in the early 1970s, the Xerox PARC language Smalltalk took this idea further, by providing for objects, which encapsulate both local procedures and data, and pass messages among themselves, each message triggering appropriate behaviour in the recipient object. The additional properties of generic (or class) objects and instances, polymorphism, and inheritance by subclasses of generic properties, have provided powerful development tools, which, it is claimed, result in more readily and reliably maintainable applications (e.g. Cox 1987). Object-oriented programming languages and database management are predicted to have a significant impact on commercial and administrative applications in the 1990s.
In the context of the application software generations, object-oriented languages enable the software developer to build, and to depend upon, a more sophisticated library of pre-written software. As with earlier forms of reusable software, its applicability depends on disciplined use by teams of application programmers, and its efficacy in the real world of large corporations is only now being tested.
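A minimal sketch (in Python, whose object model suffices for the purpose; the classes are invented) may clarify the notions of encapsulation, inheritance and polymorphism referred to above:

    class Account:                          # a generic (class) object
        def __init__(self, balance=0.0):
            self._balance = balance         # data encapsulated with procedures

        def deposit(self, amount):          # a message the object understands
            self._balance += amount

        def interest(self):
            return 0.0

    class SavingsAccount(Account):          # subclass inherits generic properties
        def interest(self):                 # ...and specialises this behaviour
            return self._balance * 0.05

    accounts = [Account(100.0), SavingsAccount(100.0)]
    # polymorphism: one message, behaviour appropriate to each recipient
    print([a.interest() for a in accounts])   # [0.0, 5.0]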
Another group of Fourth Generation products, popularly called 'application generators', comprise sets of parameterised program models, skeletons or templates, which can be specialised to produce the majority of programs needed for the more straightforward classes of application. In some cases, such as LINC, it is not necessary for the developer to nominate which model or models are to be used, because the product is able to infer the model(s) from the parameters supplied (Rudolph 1983). Many early application generators had such restricted functionality that they were constrained to simple applications, such as those developed by and for individual computer-users and 'work-groups' on single-user PCs and small multi-user micros. The approach has been harnessed, however, for developing much more powerful applications. It is increasingly common for what used to be called the 'mainline', 'program control' or 'program structure' components of programs to be generated from parameters, and for only the more detailed coding (e.g. validation, complex matching and selection conditions, and referential integrity assurance) to be hand-coded in procedural language.
The parameterisation of program structure has been a natural accompaniment to the parameterisation of data schemas (using structured dialogs and pre-formatted screens), I/O formats (via screen- and report-painters), and menus. Given sufficiently clear requirements statements, quite substantial applications can now be developed with very little procedural programming, by staff with relatively modest computing (as distinct from information systems) education and training.
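The following sketch (in Python; the schema format and field-names are invented) suggests how a parameterised program model can be specialised from a declarative schema, leaving only unusual logic to be hand-coded:

    SCHEMA = {   # parameters, as a structured dialog or screen-painter might store them
        "customer": [("name", str, True), ("credit_limit", float, False)],
    }

    def make_validator(table):
        """Specialise the generic 'validate a record' model for one table."""
        fields = SCHEMA[table]
        def validate(record):
            for name, typ, required in fields:
                if name not in record:
                    if required:
                        raise ValueError(f"{table}.{name} is mandatory")
                    continue
                if not isinstance(record[name], typ):
                    raise TypeError(f"{table}.{name} must be {typ.__name__}")
            return record
        return validate

    validate_customer = make_validator("customer")
    validate_customer({"name": "Acme Pty Ltd", "credit_limit": 5000.0})  # passes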
Since the late 1970's, many products have been based on various combinations of program-generation, higher-level languages, re-usable components and parameterised program models. The essence of all of them is that the 'programmer' is to some extent freed from the technicalities of the procedural approach, and may therefore focus on 'what' the problem is, rather than on 'how' it is to be solved. Accordingly, it is concluded that the characteristic which identifies an application software product as Fourth Generation is that it frees software developers from the strictures of a procedural or solution orientation, and thereby enables them to concentrate on a statement of the problem which they wish to have solved. Rather than 'non-procedural', an alternative descriptor such as 'functional' or 'declarative' would be preferable, even if such terms would risk confusion with related but much more precise usages in computer science.
There can be several levels of both procedural and non-procedural solutions. To co-opt an example used by James Martin during seminars in the early 1980's, it is 'non-procedural' to ask a taxi-driver to drive to a particular cinema, but it is also (or even more?) 'non-procedural' to ask to be driven to any cinema showing a particular film. Extending Martin's example, a 'procedural' set of instructions might specify the turns which the driver should make to reach the destination, but it would be also (or even more?) 'procedural' to specify the foot and hand movements necessary to operate the pedals and steering wheel.
Moreover, the 'what' versus 'how' distinction is blurred, because even the earliest 'procedural' languages contained 'non-procedural' components. For example, Fortran contained functions such as SQRT in the 1950s; and the COBOL of the early 1960s had sub-program CALLs, and PERFORMs of blocks within the same compile-unit, which were (albeit primitive) information-hiding mechanisms. These early language features involved explicit delegation by the programmer of the 'how' of a function to a piece of code whose internals were unknown, or, at least at that moment, ignored. Hence the concepts of 'procedural' and 'non-procedural' appear not to be distinct notions, but rather to exist on a continuum. Despite the imprecision of these concepts, however, both participants in and observers of software development practice have, during the emergence of the fourth generation of application software technology, adapted their focus from 'how-oriented' solution-specifications toward 'what-oriented' problem-descriptions.
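The continuum can also be illustrated in code. In the following sketch (in Python, with invented data), both formulations answer Martin's cinema question; the first manages the traversal and the intermediate state explicitly ('how'), while the second merely states the membership condition and delegates the traversal ('what'):

    screenings = [("Odeon", "Casablanca"), ("Rex", "Metropolis"),
                  ("Plaza", "Casablanca")]

    # more 'how': walk the structure and manage the working state explicitly
    def cinemas_showing_v1(film):
        result = []
        i = 0
        while i < len(screenings):
            if screenings[i][1] == film:
                result.append(screenings[i][0])
            i += 1
        return result

    # more 'what': state the condition and delegate the traversal
    def cinemas_showing_v2(film):
        return [cinema for cinema, showing in screenings if showing == film]

    assert cinemas_showing_v1("Casablanca") == cinemas_showing_v2("Casablanca")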
During the period 1975-1990, Fourth Generation products have made considerable progress in addressing some of the weaknesses of previous generations, and have come to be used for a considerable proportion of application software development (e.g. Sumner & Benson, 1988). Despite their promise, however (and in part because of it), '4GLs' have been subjected to widespread criticism. The following section identifies the reasons.
Some products have been very wasteful of computing resources, particularly at execution-time. For occasional use, the penalty may be acceptable, but frequently-used applications can become resource-hogs. Despite warm encouragement by marketers and gurus, most organisations just could not keep expanding machine-resources every time processors became overloaded. As the price/performance ratio of machines continues to decrease, and as software and then hardware architectures are modified to reflect these new demands, the machine-efficiency problem diminishes, but to date it has remained important.
This machine-efficiency issue was understandable, yet seldom properly understood. There are a variety of operating modes available to '4GL' designers, each with its own advantages and disadvantages (see Exhibit 3).
Fully interpreted code provides fast response, but at a high cost in run-time machine resources. Program generators which produce procedural code for submission to a compiler are relatively execution-efficient, but slow and resource-hungry during development and maintenance. Some application generators produce (highly portable) p-code by utilising a run-time interpreter which is less execution-efficient than executable code, but this relatively small sacrifice provides portability and hence cost-sharing across multiple platforms. Other application generators store the parameters in tables and provide quick response to developers, but at the cost of deferring processing until run-time. In all cases, the use of a run-time library can very significantly improve run-time performance.
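Two of these operating modes can be caricatured in a few lines (Python is used for exposition; the 'application' and its parameters are invented). The first generates source once and executes it thereafter; the second holds the parameters in a table which a run-time processor interprets on every call:

    # Mode 1: program generation -- emit source once, paying the generation
    # cost at development time, and execute at full speed thereafter.
    source = "def discount(amount):\n    return amount * 0.90\n"
    namespace = {}
    exec(source, namespace)          # stands in for 'submit to a compiler'
    generated_discount = namespace["discount"]

    # Mode 2: tables plus a run-time table-processor -- nothing is generated;
    # the parameters are interpreted afresh on every call.
    RULE_TABLE = {"discount_rate": 0.10}

    def interpreted_discount(amount):
        return amount * (1.0 - RULE_TABLE["discount_rate"])

    assert generated_discount(200.0) == interpreted_discount(200.0) == 180.0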
An organisation which takes the trouble to understand the nature of its requirements, and the operating modes of products under evaluation, can anticipate the potential impact of each product on its facilities, and take the trade-offs into account when selecting an application software development product. Many organisations have been less than professional in this regard, and some have paid the consequences.
Another area of disappointment has been the narrow focus of most products on the construction phase of software development. It was increasingly apparent from the late 1970s onward that the construction phase was consuming a smaller proportion of project resources, and that the path to both improved life-cycle productivity and product quality lay in more effective support for the analysis and design phases, and in more effective management of the whole. This problem is being actively addressed by the Computer-Aided Software Engineering (CASE) and Integrated Programming Support Environment (IPSE) movements, and by the various approaches to meta-data management, including ISO's Information Resource Dictionary System (IRDS).
Exhibit 3

------------ Strategy ------------   ------------------- Outcome -------------------
Development    Execution             Proto-    --Dev'ment-Time--   --Execution-Time--
                                     typing    Resp'se   Machine   Resp'se   Machine
                                     Speed     Speed     Effic'cy  Speed     Effic'cy

Program        Executable            very      very      very      very      very
Generator      Code                  slow      slow      low       high      high

Program        p-Code &              very      very      very      high      high
Generator      Run-Time              slow      slow      low
               Interpreter

Application    Executable            slow      slow      low       very      very
Generator      Code                                                high      high

Application    p-Code &              slow      slow      low       high      high
Generator      Run-Time
               Interpreter

Application    Tables &              slow      very      very      high      low
Generator      Run-Time              high      high
               Table-Proc'or

Fully          Fully                 very      very      very      low       very
Interpretive   Interpretive          fast      high      low                 low
A further group of criticisms, mainly levelled by software professionals, was that 4GLs embodied constraints which worked against elegant, efficient designs and conflicted with established programming styles. For example, some products lacked multi-user capabilities such as concurrent access control and re-entrancy. In many cases, such 'advanced' features as multi-dimensional tables and in-memory sorts were not available. Recovery and security features tended to be regarded as the exclusive responsibility of the application developer, with vague assurances of some support in future product releases. Many of these technical limitations were specific to particular products or classes of product; others were posturing by data processing staff, who were resisting new ideas in the same (understandable, human, but nonetheless frustrating) way that they had opposed some aspects of structured programming and software engineering. Others were justifiable criticisms of 4GLs generally.
Finally, 4GLs were criticised for their incompatibility with existing data and application software, and with established development tools and methods. The majority of 4GLs were conceived to address the need for new, stand-alone applications. Even where interfaces to existing data and software were provided, they were often inconvenient and resource-intensive (e.g. Holmes 1982). As a result, some of the hard-won productivity gains had to be re-invested in bridging the gaps between new applications and the existing software portfolio.
The more evolutionary 4GLs, such as the program generators, have always provided a considerable degree of scope for developers to integrate 3GL with 4GL approaches, and it is increasingly recognised that application development products must address this need. Some 4GLs, however, were and remain relatively closed products, which require that a particular approach be taken to development and maintenance. Many also require the use of special tools, particularly editors, and exclude the use of standard tools.
Underlying many of these problems is an important and widely overlooked factor: for critical systems, with high transaction volumes, complex data structures and/or complex processing, non-proceduralness by itself is not enough. Applications generally comprise two classes of components: a majority which are satisfactorily addressed by non-procedural approaches, using pre-written general solutions; and a minority whose complexity of data structures or processing, or potential for run-time inefficiency, demands a procedural approach using a powerful, flexible and well-structured programming language. 4GL suppliers quickly found that they needed to provide the developer with access to conventional procedural languages, or at least to their own procedural language, at particular points within the application.
Two important second-order effects of these deficiencies have been noted. One is that information systems professionals have only slowly developed enthusiasm for 4GLs (Doke 1989, Khosrowpour & Lanasa 1989). The second is that the applicability of 4GLs to tasks undertaken by professional staff has been limited, mainly to prototyping and the development of small systems. They are much more commonly applied by end-users, especially for ad hoc data retrieval, which does have the effect of relieving professionals of a proportion of this kind of work (Lehman & Wetherbe 1989, Khosrowpour & Lanasa 1989). This type of application requires an infrastructure of well-managed, standardized data, which is costly and problematic from both technical and organisational perspectives (Goodhue et al 1988).
It is clear that 4GLs represented progress, but they did not make the previous generations of technology redundant. The following section discusses further generations beyond the fourth.
A philosophy of application software technology and management must take into account current and foreseeable future developments. During the 1980s, there was a movement toward a fifth generation of computers and computing. This is an outgrowth of the long-standing Artificial Intelligence movement, and was most successfully projected by a Japanese project (ICOT 1982), and by lobbyists for domestic equivalents, particularly Feigenbaum & McCorduck (1983). In the fifth generation, machine architecture should reflect the structure of the chosen systems programming language, which should in turn reflect the processes of (machine) reasoning. The fifth generation would therefore involve an integration of hardware, systems software and application software, and hence a merger of the generations. It would be, of necessity, entirely revolutionary, discarding or at least subsuming the long-standing von Neumann model, and hence absorbing conventional software development models as sub-sets or special cases.
The outcomes of the various Japanese, U.S. and European Fifth Generation initiatives have generally fallen well short of expectations. Because the future shape of application software technology is greatly affected by any fifth generation hardware and systems software developments, it too remains far from clear.
Active research continues in, and there is some limited commercial use of, logic programming, expert systems, and natural language understanding. These technologies take various approaches to simplifying the programmer's task. The best-developed among them is expert systems, the dominant form of which is based on production-rules. The expert systems approach is idealised as the expression of domain-specialists' knowledge about a particular problem-domain (usually with the aid of an analyst/programmer, referred to in the liturgy as a 'knowledge engineer'), in some formalism (commonly as a set of rules and domain-specific data), suitable for processing by a pre-defined 'inference engine'. In a consulting session, a user provides data describing a particular circumstance or case, and the program provides its decision or advice (in a form such as a diagnosis or a prognosis). It is anticipated that pre-defined knowledge-bases will also embody general-purpose rules and reference data ('world knowledge'), as a surrogate for 'common sense'. Expert systems development may use logic programming, impure logic programming languages such as Prolog (e.g. Minch 1989), or 'expert system shells'. The word 'expert' is misleading, and a term such as 'knowledge-based technology' would be more meaningful.
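The essence of the production-rule approach can be suggested by a minimal sketch (in Python; the rules and facts are invented, and a real inference engine is of course far richer): a knowledge-base of rules, case-specific facts supplied in a consulting session, and an engine which forward-chains to a conclusion:

    RULES = [   # (conditions that must all hold, conclusion to assert)
        ({"overdue_invoices", "large_exposure"}, "credit_risk"),
        ({"credit_risk"}, "refer_to_manager"),
    ]

    def infer(facts):
        """Forward-chain: apply rules until no new conclusions can be drawn."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # a consulting session: case data in, decision/advice out
    print(infer({"overdue_invoices", "large_exposure"}))
    # the result includes the derived facts 'credit_risk' and 'refer_to_manager'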
Difficulties have been encountered in applying expert systems thinking to business problems. For example, the original conceptions of knowledge-elicitation and knowledge-engineering have required adaptation; expert systems modules have had to be implemented within existing applications, rather than as free-standing systems; and careful conception of the environment of use has been needed (e.g. Braden 1989 at 466-7, Prerau 1989 at 97-140 and Harmon & Sawyer 1990 at 72-4). Nonetheless, many organisations have sensed significant potential for addressing hitherto intractable sub-problems, and have invested in software, training and pilot projects, and some organisations have claimed significant gains in effectiveness and efficiency, or valuable support for strategic and competitive objectives.
Knowledge-based technology provides software developers with the means whereby they can focus on a description of an entire problem-domain, rather than on individual problems which arise within that domain, or the procedures and data underlying the solution of those problems. Developers, having captured and tested that description, can make the 'software' available for use without concerning themselves about the particular problems which may be posed by the users, or the particular solutions that the inference engine, based on the contents of the knowledge-base, may proffer to the users.
The early sections of the paper have presented a chronological review of the generations of application software development technology, highlighting each generation's contributions and inadequacies. The remaining sections reinterpret the generations from the perspective of levels of abstraction, and propose a model whereby the level appropriate to a given task can be selected.
It is proposed that the most convenient manner by which the application software generations can be interpreted is as levels of abstraction from the raw computing hardware. From the first to the third generations, the human has needed to define the solution to a problem, with successive generations enabling the expression of the solution in a form increasingly convenient to the developer. The fourth generation enables the human to delegate the solution to the computer, provided that a problem-definition is provided in an appropriate form.
Fifth generation technology enables developers to operate at a yet higher level of abstraction. 'Knowledge-Based Technology' relieves the software developer of the responsibility of formulating an explicit definition of the problem. If it exists at all, that definition is not within the program, nor the knowledge-base of rules and domain-specific data, nor the case-specific data, nor the inference engine: it is implicit within a particular process, which is an ephemeral combination of elements of all of them.
The shape of at least one further generation is emerging from the mists. Connectionist or neural machines, whether implemented in software or using massively parallel hardware architectures, involve a conception of knowledge yet more abstract than knowledge-bases containing production rules. In essence, such a knowledge-base contains empirical data expressed in some common language, but stored in a manner very close to its original form, rather than in a summary form such as rules. The application contains no pre-packaged solutions to pre-defined problems (as third generation technology requires), no explicit problem-definition (as is necessary when using 4GLs), and does not even contain an explicit domain-model (as is the case with knowledge-based technology). With sixth generation application software technology, the human 'software developer' abdicates the responsibility of understanding the domain, and merely pours experience into the machine. Rather than acting as teacher, the person becomes a maintenance operative, keeping the decision factory running.
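The idea can be suggested by the smallest of connectionist sketches (in Python; the learning rule is the classical perceptron procedure, and the example is invented): no rule or domain-model is ever stated, yet after 'experience' has been poured in, the weights embody the AND function:

    # the 'knowledge-base' is empirical examples close to their original form
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w, b = [0.0, 0.0], 0.0
    for _ in range(20):                         # 'pouring experience in'
        for (x1, x2), target in examples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            w[0] += 0.1 * err * x1              # adjust weights, not rules
            w[1] += 0.1 * err * x2
            b += 0.1 * err

    # the learned behaviour reproduces the examples, with no explicit model
    print([(x, 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) for x, _ in examples])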
Exhibit 4 summarises this interpretation of the fourth, fifth and future sixth generation languages as being three further levels of abstraction from the original notion of battling with a bare machine.
Exhibit 4

No.  Name          The Focus of               The Highest Level          Man/Machine
                   Human Activity             of Machine Activity        Relationship

6    Facilitative  collection and             development of an          delegation
                   provision of empirical     implicitly pre-defined     by an operative
                   experience and             domain-model               to a 'black box'
                   description of the
                   situation

5    Descriptive   definition of a            application of an          delegation
                   domain model and           implicitly pre-defined     by a teacher
                   description of the         problem-solution           to a 'black box'
                   situation

4    Declarative   definition of the          application of an          delegation
                   problem to solve           explicitly pre-defined     by a teacher
                                              problem-solution           to an educated
                                                                         clerk

3    Algorithmic   expression of the          non-isomorphic             instruction
     or            problem-solution in        translation                by a teacher
     Procedural    a natural form             (compilation and           to a dumb clerk
                                              interpretation)

2    Assembly &    expression of the          isomorphic                 use by a
     Macro-        problem-solution in        translation                specialist of an
     Assembly      mnemonic shorthand         (assembly)                 extended tool

1    Machine       expression of the          computation                use by a
     Language      problem-solution in                                   specialist of
                   the machine's terms                                   a tool
The fourth generation did not replace or subsume the previous generations. Current formulations of the fifth generation also appear unlikely to do so. The reason that no one generation dominates the others is that the physical machine may be addressed at any of a variety of logical levels. This paper has argued that software development may be undertaken at any of this range of different levels of abstraction.
In any given application, there is no apparent reason why just one level of abstraction should be used; it would seem highly advantageous for the developer to be able to select the level of abstraction not once for the whole application, but once for each component within it. In this manner, components which are straightforward and 'state-of-the-art' (such as batch update programs, on-line record-displays, and menus) may be described and generated, rather than laboriously programmed; while genuinely novel, 'leading-edge' components can be expressed in algorithmic languages; and components which for functional or efficiency reasons require low-level programming can be implemented in that manner.
The question arises as to what criteria the developer should apply in selecting the level of abstraction which is most appropriate to the task. Because the development of software is a resource-intensive exercise, economics appears to provide the appropriate framework for analysing the choice. A broad interpretation of economics is necessary, however, to encompass not only current resource costs, but also effectiveness during the long term of the application's life, and the risks inherent in its use, such as the contingent costs of system errors and failures.
In general, the higher the level of abstraction at which a component is implemented, the smaller the size of the source-code, and the lower its development, maintenance and operational costs. There are many circumstances, however, in which the higher levels of abstraction are not practicable, at least within the context in which the application is being developed, or is to be used. In some cases the constraints will be in the functionality of the software development environment, while in others the operational inefficiency of the implementation would be unacceptable. In addition, user requirements may be expressed in a fairly low-level manner (e.g. in the form of an invoice-discount formula or a character-conversion table), and in these circumstances it would be uneconomical to re-express detailed requirements in more abstract form.
It is therefore proposed that the level of abstraction appropriate to any given component within an application be chosen in the following manner: commencing at the level of abstraction at which the requirements are stated, choose the highest level at which the component can, in the given environments of development and use, be practically implemented. The notion of a 'practical implementation' is intended to encompass the possibility of development, the productivity of development, and the resulting product's operational efficiency, quality, readability and maintainability.
Hence:

    Commence the search at the level of abstraction at which the requirements are stated, and choose the highest level of abstraction at which the particular component can, in the given environments of development and use, be practically implemented.
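The rule lends itself to a direct, if simplistic, expression in code. In the following sketch (in Python; the enumeration of levels and the predicate 'practical' are stand-ins for the judgements discussed above), the search commences at the requirements level and returns the highest practical level:

    LEVELS = ["machine", "assembly", "procedural", "declarative",
              "descriptive", "facilitative"]          # lowest to highest

    def choose_level(component, requirements_level, practical):
        """Search upward from the level at which the requirements are stated,
        and return the highest level at which implementation is practical."""
        start = LEVELS.index(requirements_level)
        candidates = [lvl for lvl in LEVELS[start:] if practical(component, lvl)]
        if not candidates:
            raise ValueError("no practical level at or above the requirements")
        return candidates[-1]    # the highest practical level

    # e.g. a routine enquiry, stated procedurally, practical up to 'declarative':
    print(choose_level("enquiry", "procedural",
                       lambda c, lvl: lvl in ("procedural", "declarative")))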
In practice, 'pure' economic considerations will inevitably be mediated by the political factors which make a manager's job more than a merely mechanical application of economic principles. For example, the different perspectives of different participants are likely to give rise to different priorities.
Of course, no one of these perspectives is 'right'. All of them need to be recognised, and weighted according to organisational objectives and constraints. The choice of the level of abstraction for any given component should therefore be made using a contingency model based on economic criteria, mediated by political factors.
Such a contingency approach to the selection of the appropriate tool has direct implications for suppliers of application software development products. 4GLs are being used not to replace 3GLs but to complement them (e.g. Lehman & Wetherbe 1989). Software development products must comprise a family of compatible languages and tools, such that components written at different levels may co-exist within the same menus and the same job-streams, may invoke the same sub-modules, and may access the same data via the same data definitions. To use contemporary examples, expert systems shells must operate within the context of a broader software development environment, and application generators must be able to incorporate 'own-code' in any available procedural language, and invoke subsidiary modules written in those languages.
Application software managers must seek such capabilities from suppliers of software development products, and must incorporate the expectation of such facilities into their own strategic plans.
The concept of generations of programming languages and application software technology has been popular, but has lacked concrete and useful definitions. This paper has provided an interpretation of the application software generations, based on the concept of levels of abstraction from the bare hardware. It has been argued that these levels, rather than being competitive, are complementary.
This interpretation of the application software generations has then been applied to develop a contingency model, whereby the level of abstraction appropriate to each component within an application can be determined. The model has implications for application software developers, suppliers of application software development products, and MIS Managers.
The comments of an anonymous referee were instrumental in the clarification of the paper's thesis, and improvements in its presentation.
1. Alavi M. 'An Assessment of the Prototyping Approach to Information Systems Development' Comm. ACM 27, 6 (June 1984), 556-563.
2. Bally L., Brittan J. & Wagner K.H. 'A Prototype Approach to Information System Design and Development' Info & Management 1,1 (Nov, 1977) 21-26.
3. Behrens C.A. 'Measuring the Productivity of Computer Systems Development Activities with Function Points' IEEE Trans. S'ware Eng. SE-9, No.6 (1983) 648-652.
4. Benson D.H. 'A Field Study of End User Computing: Findings and Issues', MIS Quarterly 7, 4 (December 1983) 35-45.
5. Böhm C. & Jacopini G. 'Flow Diagrams, Turing Machines and Languages with only Two Formation Rules' Comm ACM (May 1966) 366-71.
6. Braden B., Kanter J. & Kopcso D. 'Developing an Expert Systems Strategy' MIS Qtly 13,4 (Dec 1989).
7. Brancheau J.C. & Wetherbe J.C. 'Key Issues in Information Systems Management' MIS Qtly 11,1 (Mar 1987) 23-45.
8. Budde R., Kuhlenkamp K., Mathiassen L. & Züllighoven H. (Eds.) 'Approaches to Prototyping' Springer-Verlag 1984.
9. Clarke R.A. 'A Background to Program-Generators for Commercial Applications', Austral. Comp. J. 14,2 (May 1982).
10. ______ 'The Management of Fourth Generation Software Development' Proc. Joint Int'l Symp. Info. Sys., ACS/IFIP, Sydney, April 1984.
11. Codd E.F. 'A Relational Model of Data for Large Shared Data Banks' Comm ACM 13,6 (June 1970) 377-387.
12. Cox B.J. 'Object-Oriented Programming' Addison-Wesley, Reading Mass., 1987.
13. Davis G.B. 'Caution: User Developed Systems Can Be Dangerous to Your Organisation' Proc. JISIS Sydney April 1984.
14. Dickson G.W., Leitheiser R.L., Wetherbe J.C. & Nechis M. 'Key Information Systems Issues for the 1980s' MIS Qtly 8,3 (Sep 1984).
15. Dijkstra E. 'Go To Statement Considered Harmful' Comm ACM 11,3 (March, 1968).
16. Doke E.R. 'Application Development Productivity Tools: A Survey of Importance and Popularity' J. Inf. Sys. Mgt 6,1 (Winter 1989) 37-43.
17. Ewers J. & Vessey I. 'The Systems Development Dilemma - A Programming Perspective' MIS Qtly 5,2 (Jun 1981) 33-45.
18. Feigenbaum E.A. & McCorduck P. 'The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World' Michael Joseph 1983.
19. Ginzberg M.J. 'Key Recurrent Issues in the MIS Implementation Process' MIS Qtly 5,2 (Jun 1981) 47-59.
20. Goodhue D.L., Quillard J.A. & Rockart J.F. 'Managing the Data Resource: A Contingency Perspective' MIS Qtly 12,3 (September 1988) 373-392.
21. Guimaraes T. 'Managing Application Program Maintenance Expenditures' Comm. ACM 26, 10 (1983).
22. Harel E.C. & McLean E.R. 'The Effects of Using a Nonprocedural Computer Language on Programmer Productivity' MIS Quarterly 9,2 (June 1985).
23. Harmon P. & Sawyer B. 'Creating Expert Systems for Business and Industry' Wiley 1990.
24. Hartog C. & Herbert M. '1985 Opinion Survey of MIS Managers: Key Issues' MIS Qtly 10,4 (Dec 1986).
25. Haskell R. & Harrison P.G. 'System Conventions for Non-Procedural Languages' Comp J 23, 2 (May 1980).
26. Holmes F.W. 'Myths for Our Time' J. Sys. Mgt (Jul 1982).
27. Horowitz E. & Munson J.B. 'An Expansive View of Reusable Software' IEEE Trans. on S'ware Eng Vol SE-10, 5 (1984).
28. Horowitz E., Kemper A. & Narasimham B. 'A Survey of Application Generators' IEEE Software (Jan 1985).
29. Houghton R.C. 'Software Development Tools: A Profile' IEEE Computer (May 1983).
30. Howden W.E. 'Contemporary Software Development Environments' Comm ACM 25, 5 (1982).
31. Hudak P. 'Conception, Evolution and Application of Functional Programming Languages' in Wegner (1989), pp.359-411.
32. ICOT 'Outline of Research and Development Plans for Fifth Generation Computer Systems' Tokyo, Inst. for New Generation Computer Technology May, 1982.
33. Ince D.C. & Hekmatpour S. 'Software Prototyping: Progress and Prospects' Info. & Software Technology 29,1 (Jan/Feb, 1987).
34. Jackson M. 'Principles of Program Design' Academic Press 1975.
35. Janson M.A. & Smith L.D. 'Prototyping for Systems Development: A Critical Appraisal' MIS Quarterly 9, 4 (Dec, 1985) 305-316.
36. Jeffery D.R. & Lawrence M.J. 'Managing Programming Productivity' J. of Systems and Software 5,1 (1985).
37. Jeffery D.R. 'A Comparison of Programming Productivity: The Influence of Program Type and Language' Proc. ACC'86 A.C.S. (Sep, 1986).
38. Jenkins A.M. & Naumann J.D. 'Prototyping: The New Paradigm for Systems Development' MIS Quarterly 6,3 (September 1982) 29-44.
39. Jones T.C. 'Reusability in Programming: A Survey of the State of the Art' IEEE Trans on S'ware Eng. Vol SE-10, 5 (1984).
40. ______ 'Programming Productivity' Prentice-Hall 1986.
41. Khosrowpour M. & Lanasa J.M. 'MIS Professionals' Attitudes Toward and Perceptions of 4GLs' J. Inf. Sys. Mgt 6,4 (Fall 1989) 51-57.
42. Kim C. & Westin S. 'Software Maintainability: Perceptions of EDP Professionals' MIS Qtly 12,2 (Jun 1988).
43. Kraushaar J.M. & Shirland L.E. 'A Prototyping Method for Applications Development by End Users and Information Systems Specialists' MIS Quarterly 9,3 (Sep, 1985) 189-197.
44. Lanergan R.G. & Grasso C.A. 'Software Engineering with Reusable Designs and Code' IEEE Trans. on S'ware Eng. Vol SE-10,5 (1984).
45. Leavenworth B.M. 'Non-Procedural Data Processing' Comp.J. 20, 1 (1977).
46. Lehman J.A. & Wetherbe J.C. 'A Survey of 4GL Users and Applications' J. Inf. Sys. Mgt 6,3 (Summer 1989) 44-52.
47. Lientz B.P. & Swanson E.B. 'Problems in Application Software Maintenance' Comm ACM 24,11 (November 1981) 763-9.
48. McLean E.R. 'End Users As Application Developers', MIS Quarterly 3,4 (December 1979) 37-46.
49. McCracken D.D. 'The Changing Face of Applications Programming' Datamation 24,12 (Nov 15, 1978) 25-30.
50. Martin E.W. 'Critical Success Factors of Chief MIS/DP Executives' MIS Qtly 6,2 (Jun 1982) 1-19.
51. Martin J. 'Application Development Without Programmers' Prentice-Hall 1982.
52. Mason R.E.A. & Carey T.T. 'Prototyping Interactive Information Systems' Comm. ACM 26,5 (May 1983) 347-354.
53. Minch R.P. 'Logic Programming as a Paradigm for Financial Modeling' MIS Qtly 13,1 (March 1989) 65-84.
54. Peterson N.D. 'COBOL Generation of Source Programs and Reports' Software - Practice and Experience 6,11 (Jan - March 1976).
55. Prerau D.S. 'Developing and Managing Expert Systems' Addison-Wesley 1990.
56. Rivard S. & Huff S.L. 'User Developed Applications: Evaluation of Success from the DP Department Perspective' MIS Quarterly 8,1 (March 1984) 39-50.
57. ______ 'An Empirical Study of Users as Application Developers' Information & Management 8,2 (February 1985).
58. Rockart J.F. & Flannery L.S. 'The Management of End User Computing' Comm. ACM 26,10 (October 1983) 776-784.
59. Roth R.L. 'Program Generators and Their Effect on Programmer Productivity' Proc. NCC, Houston (June 1983) AFIPS.
60. Rudolph E. 'Productivity in Computer Application Development' Uni. of Christchurch Working Paper No 9 (March 1983).
61. Sammet J. 'Programming Languages: History and Fundamentals' Prentice-Hall, Englewood Cliffs NJ, 1969.
62. Sumner M. & Benson R. 'The Impact of Fourth Generation Languages on Systems Development' Info. & Mgt 14,2 (February, 1988).
63. Watson R.T. 'Key Issues in Information Systems Management: An Australian Perspective - 1988' Austral. Comp. J. 21,2 (Aug 1989) 118-129.
64. Wegner P. (Ed.) 'Special Issue on Programming Language Paradigms' Comp. Surv. 21,3 (September 1989).