Version of 26 February 1997
© Xamax Consultancy Pty Ltd, 1997
Contributed Paper to European Conference in Information Systems (ECIS'97), Cork, Ireland, 19-21 June 1997
This paper is at http://www.rogerclarke.com/SOS/ECIS97.html
This paper assesses the application of Cost/Benefit Analysis (CBA) to computer matching projects undertaken by government agencies in the U.S.A. and Australia. On the basis of field research, it was established that CBA was either avoided, or manipulated to comply with predetermined courses of action. In politicised contexts, CBA has been used for justificatory rather than evaluation purposes.
Rather than abandoning CBA, information systems researchers need to develop a theoretical framework that distinguishes justificatory from evaluative applications, reflects the complexities of qualitative costs and benefits in large-scale information technology projects, and assists in the surfacing and reconciliation of conflict among stakeholder interests.
The evaluation of prospective information systems projects is a perennial problem for those who must make decisions about the commitment of resources. Decision-makers evaluate proposals in order to decide whether to proceed with the development or acquisition of a system, or to select one alternative in preference to another. Cost/Benefit Analysis (CBA) is an important technique that is used to assist in these decisions. CBA involves the comparison of the costs of a project against the benefits it generates. A project will proceed if the net benefits are positive or, in the case of alternatives, where it has the greatest surplus of benefits over costs.
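The decision rules just described can be sketched in a few lines. The following is a minimal illustration only; the function names and data structure are ours, not part of any standard CBA toolkit, and it assumes costs and benefits have already been reduced to comparable dollar terms:

```python
# Minimal sketch of the CBA decision rules described above.
# Assumes all costs and benefits are already in comparable dollar terms.

def net_benefit(benefits, costs):
    """Net benefit of a single project."""
    return sum(benefits) - sum(costs)

def should_proceed(benefits, costs):
    """A project proceeds if its net benefits are positive."""
    return net_benefit(benefits, costs) > 0

def preferred_alternative(alternatives):
    """Among alternatives, select the one with the greatest
    surplus of benefits over costs."""
    return max(alternatives,
               key=lambda a: net_benefit(a["benefits"], a["costs"]))

if __name__ == "__main__":
    a = {"name": "A", "benefits": [120.0], "costs": [100.0]}
    b = {"name": "B", "benefits": [150.0], "costs": [110.0]}
    print(should_proceed(a["benefits"], a["costs"]))   # True: net +20
    print(preferred_alternative([a, b])["name"])       # "B": net +40 > +20
```

As the remainder of this paper argues, the difficulty lies not in this arithmetic but in identifying and measuring what goes into it.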
Application of CBA to projects involving significant investment in information technology has, however, been strongly criticised: "cost/benefit analysis of information systems is still in virgin territory, without theoretical and practical foundation. The results of these analyses are highly sensitive to initial assumptions and might depend on the method chosen and the technicalities of the calculations performed" [Ahituv et al. 1994, p 474]. The interpretation of what is meant by CBA is a point of contention. Some authors apply the term in a narrow sense, as the evaluation of the economic feasibility of a project, as exposed by a financial evaluation. This paper adopts a much more expansive view of CBA, encompassing all costs associated with and all benefits arising from a project, whether or not they have direct and measurable financial effects, and whether or not they are capable of being reduced to financial terms.
This paper's purpose is to assess the extent to which CBA is used in the evaluation of a particular type of information systems project: computer matching programs undertaken by government. It commences with a review of the process of CBA, and a discussion of some key issues associated with that process. It then presents the results of field research into the use of CBA in computer matching programs, including a deep case study. Finally, implications are drawn for information systems practice and research.
Cost/Benefit Analysis (CBA) is a technique, well-grounded in microeconomic and management accounting theory, for assessing the net value of a project [Dasgupta & Pearce 1972, Mishan 1977, Sassone & Schaffer 1978, Thompson 1980, Gramlich 1981, Sugden & Williams 1985]. It has also been described in government publications [APSB 1981, DoF 1991, 1992] and in documents designed expressly for use in relation to information technology applications [Senn 1987 pp. 654-663, Hochstrasser 1990, Farbey et al 1992, Willcocks 1992, DoF 1993, Martin et al. 1994 pp. 333-336, Kendall & Kendall 1995].
CBA is a widely varying technique, ranging from the simple comparison of directly and readily measurable financial factors to multifaceted techniques that incorporate tangible and intangible factors within a framework of system type or purpose [e.g. Hochstrasser 1990, Ward 1990, Parker et al 1988]. Whilst the simple approach is still generally favoured in practice [Farbey et al 1992], "doctrines which claim that simple cost/benefit analysis suffices are increasingly being recognised as inadequate and restrictive in evaluating sophisticated IT installations" [Hochstrasser 1990].
CBA consists of three basic steps:
- the identification of the costs and benefits associated with the project;
- the measurement of those costs and benefits; and
- the computation of the project's net benefits.
The following sections provide an analysis of each of these stages and identify the key issues arising in each.
The first stage of a cost/benefit analysis involves the identification of the costs and benefits which are associated with the project. In principle, CBA demands that the perspective of all stakeholders be reflected in the analysis of a proposal. This demand, however, places the analyst in an invidious position, because it asks the impossible - the appreciation and encapsulation of many ideas, some of which are quite foreign to the analyst's frame of reference, and uncertainty as to whether and how much each particular interest is really affected. It is difficult to determine the boundaries of the effects of the project.
The nature of costs and benefits differs fundamentally: costs are both inputs to, and arise from, a project; whereas benefits can only arise from a project. Any negative benefits that result from a project are regarded as costs.
Hochstrasser divides costs into two classes, direct and non-direct. Similarly, Willcocks [1994, p 19] defines costs as being hard or hidden. Direct or hard costs are "those agreed by everyone to be directly attributable to the project, and which are easily captured by accounting procedures .... Hidden costs are more difficult to identify and quantify, but still need to be attributed to the investment for measurement purposes" [Willcocks 1994]. Hidden or indirect costs include human and organisational costs.
A benefit can be seen as an advantage or good that accrues from a project. Parker et al. link benefits to the intention of the project.
The identification of the costs and benefits of a project is difficult at best. Hochstrasser notes that the use of simple, financially driven CBA makes it especially difficult to pinpoint the exact correspondence between a particular benefit and a particular IT installation.
Once the costs and benefits associated with a project have been identified, some measurement of them must be made. This brings forth a whole set of challenges. Willcocks identifies these difficulties as the major constraints on IS investment, in both the private and public sectors.
The key issue relating to the measurement of costs and benefits is tangibility. Tangible or hard measures are those that can be readily expressed in financial terms. These include, in decreasing `desirability':
In the case of factors that are less tangible (soft) and which are difficult or meaningless to express in financial terms, it is nonetheless desirable that reasonable efforts be made to quantify their impact. This gives rise to further forms of measurements, which include, in decreasing `desirability':
The difficulties that arise in measuring the less tangible costs and benefits have often led to their exclusion from the analysis. It has, however, been recognised that the most significant costs or benefits of a project may well lie in these factors [e.g. Parker et al. 1988]. This has also been recognised in the public sector: "inevitably, some costs and benefits resist the assignment of dollar values. These ... are separately presented to the decision-maker ... with as much descriptive information as possible ... for assessment in conjunction with the quantified estimate of the net social benefits of the activity" [DoF 1991, pp. 1,9]. Numerous measurement methods have been proposed for less tangible costs and benefits, including arbitrarily interpolating dollar-values [Litecky 1981], the use of Bayesian analysis to estimate values [Couger 1987], and strategic and intangible benefit profiling [Coleman & Jamieson 1994]. Increasingly, attention appears to be focused on techniques that incorporate intangible benefits within CBA.
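One simple way of keeping intangible factors visible, rather than excluding them or valuing them at zero, is a weighted scoring profile presented alongside the financial evaluation. The sketch below is our own illustration, not a reconstruction of any of the cited methods; the factor names, weights and ratings are invented:

```python
# Hypothetical weighted-scoring profile for intangible costs and
# benefits, kept alongside (not inside) the financial evaluation.
# Factors, weights and ratings below are invented examples.

def weighted_score(factors):
    """factors: list of (name, weight, rating), with ratings on a
    -5..+5 scale. Returns the weight-averaged rating; a positive
    sign indicates net intangible benefit, negative net cost."""
    total_weight = sum(w for _, w, _ in factors)
    return sum(w * r for _, w, r in factors) / total_weight

profile = [
    ("staff morale",      0.2, -2),   # intangible cost
    ("service quality",   0.5, +3),   # intangible benefit
    ("privacy intrusion", 0.3, -4),   # intangible cost
]
print(round(weighted_score(profile), 2))   # -0.1
```

A profile of this kind does not make intangibles commensurable with dollars, but it forces each factor, and the judgement applied to it, into the open for the decision-maker.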
Other important issues concerning the measurement of costs and benefits include the treatment of opportunity costs, i.e. "the benefits foregone from not having done something else with the resources" [GAO 1986b, p.27], and the treatment of 'joint' costs and benefits, where attribution of costs and benefits across multiple projects is difficult.
Appropriate and meaningful measurement of the costs and benefits attributable to a project is vital to the effective allocation of scarce IT resources. The measurement process is difficult and will probably become more so as systems become more interconnected, sophisticated and embedded in the infrastructure of organisations.
The net value of a project is used to determine whether the project proceeds. Numerous techniques exist whereby the net impact of the financial aspects of a project can be calculated. Many of these techniques are traditional capital expenditure models that can take into account the extended period over which the project delivers benefits, the time-value of money, and inflation. Payback Period, Net Present Value and Return on Investment (ROI) are examples, of which ROI has been found to be popular [Farbey et al 1992, Coleman & Jamieson 1994].
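The three appraisal techniques just mentioned can be illustrated briefly. The cash-flow figures in this sketch are invented for the example, and an 8 per cent discount rate is assumed:

```python
# Illustrative implementations of three traditional capital
# expenditure models. The cash flows used below are invented.

def payback_period(initial_outlay, annual_net_inflows):
    """Years until cumulative inflows recover the outlay
    (None if never recovered within the horizon)."""
    cumulative = 0.0
    for year, inflow in enumerate(annual_net_inflows, start=1):
        cumulative += inflow
        if cumulative >= initial_outlay:
            return year
    return None

def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is year 0 (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def roi(total_benefits, total_costs):
    """Simple Return on Investment, as a fraction of cost."""
    return (total_benefits - total_costs) / total_costs

flows = [-100.0, 40.0, 40.0, 40.0]        # outlay, then three years' inflows
print(payback_period(100.0, flows[1:]))   # 3
print(round(npv(0.08, flows), 1))         # 3.1
print(round(roi(120.0, 100.0), 2))        # 0.2
```

Note how the same project looks quite different under each model: it pays back only in its final year, its NPV at 8 per cent is marginal, yet its undiscounted ROI is a comfortable 20 per cent.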
Willcocks found that these techniques are problematic when used to analyse IT investments, as they largely exclude risk, exclude costs and benefits that are difficult to quantify, involve difficulties in setting appropriate rates (discount, hurdle etc.), and encourage low-risk investment.
With traditional investment appraisal techniques being seen as inadequate, authors have sought to develop methods that provide a more meaningful determination of the net benefits of IT investment. These methods generally provide more comprehensive frameworks that seek to incorporate qualitative data (such as contributions to strategic goals) along with the handling of risk. Examples include Keen, Parker et al. [1988], Coleman & Jamieson, and Hochstrasser.
A number of organisational issues arise when undertaking CBA. Firstly, any cost/benefit analysis must be adequately resourced. The issues discussed above highlight the difficulties faced when performing CBA, and overcoming those difficulties could result in a lengthy and costly exercise. Hence CBA itself must be subject to some form of cost-benefit analysis. The objective of a CBA exercise is not an unassailable report, but a reasoned judgement as to whether the activity is worthwhile to the sponsor and other stakeholders. Resources and elapsed time must be provided that support a sufficient depth of analysis and refinement. Research indicates, however, that a sufficient depth of analysis is generally not being undertaken [e.g. Willcocks & Lester 1994, Farbey et al. 1992]. It is suggested that it may not be the cost of undertaking CBA, but rather the difficulty of the analysis, that is deterring organisations.
Secondly, the imprecise nature of CBA makes it vulnerable to manipulation to serve vested interests. The potential effects of information systems on organisational structure, and on the power of individuals and groups, are well documented [e.g. Markus 1983]. In principle, CBA implicitly assumes a politically neutral stance. As previously discussed, however, accommodating the perspectives of all stakeholders is extremely difficult, and there is scope for sponsors or powerful stakeholders to manipulate the analysis. The selection of the particular outcome may be determined by the analyst's own judgement or biases, or may result from the influence of an executive or senior manager. The difficulties that arise in all phases of Cost/Benefit Analysis could have the effect of providing a smoke-screen that obscures the manipulation.
The research reported on in the next section provides insights into the conduct of CBA within governmental organisations.
Computer matching is the comparison of sets of machine-readable records containing personal data about many people, in order to detect cases of interest. Computer matching became technologically and economically feasible in the mid-1970s, as a result of developments in information technology. The technique has been progressively developed since then, and is now widely used, particularly in government administration [Clarke 1994b]. The United States of America and Australia are the two leading nations in the use of computer matching [Clarke 1995a]. Empirical research in this area has been hampered by a general policy of non-disclosure by government agencies about their practices. In 1988, however, unrelated federal legislation passed in both countries legitimised agency practices, and the agencies became more open to external scrutiny [Clarke 1994a].
This section summarises a study of these practices in both countries. The research includes a case study of a computer matching program commenced in 1989 by an Australian government agency.
The application of CBA in the context of government differs in a number of respects from that in the private sector. Firstly, it is not performed from the point of view of any particular individual, group or organisation, and hence it considers a wide range of gains and losses, regardless of to whom they accrue. Secondly, its aim is to ensure efficiency in the allocation of resources to society's aims.
Computer matching is an interesting context in which to assess the application of CBA, as such programs can offer the potential of high returns for moderate effort, but the costs and benefits frequently involve significant non-quantifiable elements, such as the deterrent effects of programs. Other important characteristics of computer matching programs are as follows:
The first major matching program was undertaken in the U.S.A. in 1976-77 [Clarke 1994b]. The Government assigned oversight responsibility to the Office of Management and Budget (OMB). OMB's original 'Guidelines to Federal Agencies on Conducting Automated Matching Programs' [OMB 1979] required that CBA be undertaken prior to a program being commenced. Lobbying by agencies and pressure from the President's Commission for Integrity and Efficiency led to a shorter set of Guidelines being issued. These lessened the need for cost/benefit justification [OMB 1982]. Subsequently, one key agency is known to have promulgated comprehensive internal guidelines [SSA 1990a].
A number of programs have been hailed as dramatic successes, but generally by the agencies that conducted them. Very limited supporting data has been provided to allow substantiation of these claims, and reviews by independent parties have been rare. Anecdotal evidence, however, questions the efficacy of many programs [Reichman & Marx 1985, OTA 1985, Early 1986, Laudon 1986 p.331-3, OTA 1986, p.42-3, GAO 1987, pp.3, 12, 83, Flaherty 1989, p.354]. Less negative reports are to be found in Kusserow and Greenberg & Wolf [1984, 1985, 1986].
The (since-disbanded) congressional Office of Technology Assessment concluded that few programs had been subjected to prior cost/benefit assessment and, "as yet, no firm evidence is available to determine the costs and benefits of computer matching and to document claims made by OMB, the inspectors general and others that computer matching is cost-effective" [OTA 1986, pp.37, 50-53]. It is important to note that the primary role of OMB is to stimulate efficiency in government. Its orientation has been towards the facilitation of matching programs rather than their evaluation.
The General Accounting Office (GAO) has undertaken many investigations into matching and privacy in specific contexts [Clarke 1995a]. Reports emanating from the Human Resources Division have tended to be strongly supportive of computer matching programs, whilst reports emanating from the Program Evaluation and Methodology Division have adopted a more circumspect attitude, and have sought a balance between administrative efficiency and other interests [GAO 1986, Clarke 1995a].
The Privacy Act (1974) and the OMB Guidelines, which lack enforcement mechanisms [Flaherty 1989, pp.349-350, 357], appear to have provided little restraint on programs between 1974 and 1988. The Computer Matching and Privacy Protection Act 1988 introduced a number of fair information practice provisions to programs. The Act, however, is not applicable to many categories of programs (for example, approximately half of the programs conducted by the Social Security Administration are exempt - SSA 1990). Internal data integrity boards can also waive the need to undertake CBA. An independent assessment found that, in 40% of cases, agencies failed to perform a meaningful CBA, and that, of those performed, the quality was low [GAO 1993].
Computer matching has been intensively used in Australia since the late 1970s. Information concerning the justification of computer matching programs has only become publicly available since the late 1980s [Clarke 1994b]. The Privacy Act of 1988 created a permanent Privacy Commissioner whose responsibilities include to "research into and monitor developments in ... data-matching" (s.27(1)(c)).
In 1990, the Privacy Commissioner discussed matching programs with five agencies and concluded: "whether cost/benefit analyses have formed part of the original decision to commence a matching program has been difficult to ascertain in many circumstances ... A definitive cost for distinct programs is often difficult to obtain" [PCA 1990, p.16]. Between 1990 and 1992, the Privacy Commissioner developed a set of voluntary guidelines relating to data-matching. Those Guidelines have been largely ignored and, in any case, the requirements regarding prior cost-benefit analysis are diffuse [PCA 1992, pp.9, 16 and 20]. It is apparent that CBA has not been used in an effective manner to evaluate the worth of computer matching programs in Australia and the Privacy Commissioner appears to have as yet had no impact in this regard.
The Australian Government's 'Parallel Data Matching Program' (PDMP) involves data from five major 'source agencies' being provided to the Data Matching Agency (DMA) within the Department of Social Security (DSS). The program is well documented, as, in recognition of its privacy-invasiveness, it was the subject of special legislation which established a set of regulatory requirements. The research on which this section is based is a detailed, longitudinal analysis of this case [Clarke 1994a, 1995a].
The aim of the PDMP is to identify discrepancies in the patterns of benefit payments and declared income of benefit recipients. The five agencies provide data to DMA, which checks its validity, extracts the Tax File Numbers (TFNs) of persons in receipt of government benefits, and forwards this data to the Australian Taxation Office (ATO). ATO extracts the identity and income details associated with each TFN and returns this data to DMA. DMA then compares this data to that from the five source agencies and two assistance agencies. The source agencies receive information regarding any discrepancies found.
The data for this case study was gathered from a number of sources including the initial 4-page proposal document provided to Federal Cabinet in 1989-90, published annual reports [DSS 1991, 1992, 1993], meetings with the Department, and a report by the Australian National Audit Office [ANAO 1993].
The inadequacy of the initial proposal was revealed in the 1992 Annual and 1993 Audit reports, which disclosed that the program was faring far less well than had been projected. In 1991-92, the gross savings from the anticipated 70,000 cancellations and 40,000 downward variations were not the estimated $290 million, but $17 million, from only 3,682 cancellations and 5,613 variations [ANAO 1993, p.xi, p.17]. A variety of reasons were nominated for this large shortfall [DSS 1992, pp.43-45, 51, 73-75; Clarke 1993, p.4-5]. The Audit Report limited its criticism to "savings from the program have not been as significant as initially predicted" [ANAO 1993, p.viii]. Despite its very poor initial performance, the program was allowed to continue.
The 1992 Report highlighted some serious flaws in the application of CBA principles including:
The omission of staff costs was despite the report stating: "the real cost has been in the time and effort of staff administering the program" and "the reporting requirements are stringent and a lot of time and effort is needed to comply with them" [DSS 1992, p.13 & p.12].
Concerns were expressed by the Privacy Commissioner (private communication, 29 July 1993) and the Australian National Audit Office, which criticised the quality of the CBA undertaken, and pointed out that the Act "requires the tabling of a comprehensive report in both Houses of Parliament ... Sufficiently comprehensive cost/benefit information had not been included in either Report ..." [ANAO 1993, p.4], suggesting that the legislative requirements had not been fulfilled.
The third Annual Report, in 1993, concluded that, in 1992/93, the PDMP had achieved benefits of $70 million against costs of $13 million, for a net benefit of $56 million. In subsequent years, the Department projected almost double the benefits, with unchanged costs, and imputed additional benefits from voluntary compliance ranging from $40 to $90 million per annum. The net benefits from 1993/94 onwards were accordingly shown as $110-115 million per annum [DSS 1993, pp.79-91, 93-97, 115-129]. On the basis of their own analyses, the other agencies involved in the scheme had not reached break-even point, and in at least some cases appeared not to anticipate any benefits at all.
That report also outlined the manner in which the Department had applied CBA. This served to highlight the significant errors made, including:
The errors in undertaking the CBA cast doubt on the report. Table 1 shows the progressive revision of benefits projections by the agency, and estimates made by the first-named author, based on the above criticisms.
Table 1: Benefits projections and estimates ($ millions)

                       90-91   91-92   92-93   93-94   94-95   95-96
Projections
 - Oct 1990             65.0    >290  >300.0  >300.0  >300.0  >300.0
 - Oct 1992                -       -   107.3   114.1    81.2    99.1
 - Oct 1993                -       -       -    35.8    39.8    40.2
'Actual' Outcomes        0.0    15.0    30.8
Author's Estimates       0.0    11.2    26.9    27.2    29.7    30.0
Table 2 shows the net benefits claimed by DSS, together with adjusted figures after making allowance for exaggeration of benefits and more reasonable levels of opportunity cost.
Table 2: Net benefits as estimated by DSS and as adjusted ($ millions)

                       90-91   91-92   92-93   93-94   94-95   95-96
DSS's estimates
 - Benefits                -    15.0    30.8    35.8    39.8    40.2
 - Costs                 8.0    13.3    13.7    14.7    13.8    13.3
 - Net benefits         -8.0     1.7    17.1    21.1    26.0    26.9
Author's estimates
 - Benefits                -    11.2    26.9    27.2    29.7    30.0
 - Costs                 8.0    25.7    25.3    23.8    22.1    22.1
 - Net benefits         -8.0   -14.5     1.6     3.4     7.6     7.9
Discount factor          1.0   0.926   0.857   0.794   0.735   0.681
Net value               -8.0   -13.4     1.4     2.7     5.6     5.4
Cumulative N.V.         -8.0   -21.4   -20.1   -17.3   -11.7    -6.3
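The discounting arithmetic in the lower rows of Table 2 can be checked directly: the published discount factors correspond to an 8 per cent rate, and the net value and cumulative rows follow from the authors' net benefit estimates. The following sketch recomputes them (the 8 per cent rate is inferred from the factors, not stated in the table):

```python
# Recomputes the discounted and cumulative net value rows of Table 2
# from the authors' net benefit estimates ($m). The 8% discount rate
# is inferred from the published discount factors. The recomputed
# values closely reproduce the published rows, to within rounding.

RATE = 0.08
net_benefits = [-8.0, -14.5, 1.6, 3.4, 7.6, 7.9]   # 1990-91 .. 1995-96

cumulative = 0.0
for t, nb in enumerate(net_benefits):
    factor = 1 / (1 + RATE) ** t
    net_value = nb * factor
    cumulative += net_value
    print(f"year {t}: factor {factor:.3f}  "
          f"net value {net_value:6.1f}  cumulative {cumulative:6.1f}")
```

The recomputation confirms the table's central point: on the adjusted figures, the cumulative discounted net value of the program remains negative throughout the period shown.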
This case study demonstrates how the original benefits estimates, which inveigled the Government and the Parliament into approving the program, were enormously overstated. Moreover, the agency has continued to abuse CBA in order to provide the appearance of a positive outcome, with the desired result that the Parliament has on two occasions authorised continuation of the program.
Appropriately applied, CBA would be expected to prevent programs commencing unless they were financially viable, or the net qualitative benefits were judged to outweigh the net financial costs. Moreover, it would be expected that schemes would be abandoned which were implemented for political, ideological, party-platform or strategic reasons (i.e. despite inadequate justification), and which failed to fulfil their anticipated potential. The field research reported on in this paper provides evidence of schemes being implemented, and sustained, with scant attention to economic factors, let alone to broader social considerations.
The earlier discussion in this paper highlighted a number of key issues associated with the application of CBA, including problems with the identification of cost and benefits, the measurement of costs and benefits, and the computation of net benefits. All of these problems are evident in the application of CBA to computer matching programs.
The over-estimation of benefits has involved both a failure to understand the nature of the benefits and poor measurement. The assumption appears to have been routinely made that all losses from error and fraud would be fully avoidable if computer matching were implemented. Reasons for shortfalls from that ideal have included poor quality data giving rise to spurious 'hits', poor quality data that would not secure a conviction or support an administrative determination, cases that involved amounts too small to make pursuit worthwhile, cases not pursued for humanitarian or other policy reasons, and people that were dead or untraceable. Another consistent error has been the failure to correctly apportion benefits among the various programs that give rise to them, and hence the double-counting of benefits.
The identification of costs involved with projects has been poor, with instances occurring of the omission of the costs of investigation, prosecution and collection, and the costs of other agencies and contractors such as solicitors. The measurement of costs has also suffered from a number of important weaknesses such as the poor attribution of costs amongst agencies and inadequate treatment of opportunity costs. As a result, costs have been consistently understated, making the programs appear of greater net value than they should. The failure to properly apply discounting to future cash flows has further inflated the already overstated net values.
There is no doubt that computer matching can be instrumental in the detection of a proportion of the errors, abuse and fraud in a variety of government programs. The application of CBA has been so poor that considerable doubt exists as to whether net financial benefits are arising from data matching programs, let alone whether the net financial benefits are adequate to justify the considerable qualitative costs incurred by some of the stakeholders.
Well-performed CBA can be expensive and time-consuming, and evidence indicates that the process may not be well understood. It is, however, a popular technique that has been described in government publications [e.g. DoF 1991, 1992, 1993], and which is required to be undertaken on government projects.
It is difficult to see how such an important technique could be so poorly applied in the absence of wilful behaviour by government executives. Government programs are commonly launched as strategic measures, in which context CBA is at best a justificatory exercise and at worst an unnecessary hindrance. Powerful agencies have abused CBA, and manipulated regulatory agencies and Parliaments to relieve themselves of the onerous duty of performing CBA.
The empirical research reported on in the previous section does not represent a comprehensive view of the application of CBA to information systems, and hence great caution is needed in generalising from the outcomes. Nonetheless, the results, seen in the context of the discipline as a whole, suggest some implications for both researchers and practitioners of information systems.
There are several ways in which organisations can exploit cost/benefit analysis. In some circumstances, the technique delivers 'hard', financial information to support decision-making, and establishes a basis for accountability by the project manager for the use of resources, and the delivery of benefits [Keen 1991].
Such a mechanistic approach is inappropriate, however, where the estimates underlying the financial evaluation are perceived to be of poor quality, where the non-quantifiable factors are seen to be dominant, where the environmental factors are changing so quickly that the pre-implementation analysis requires constant revision, or where the project manager has only indirect control over key strategic variables. On the other hand, even in such more complex situations, CBA can bring discipline to the process of decision-making, and enable all parties to better understand the project's impacts, and the assumptions underlying estimates.
Non-quantifiable factors must not be valued at zero (which is the effect of ignoring them). Intangible costs and intangible benefits should both be dealt with, and dealt with in similar ways. The most straightforward way to cope with them is to regard the financial evaluation as only one portion of the CBA. Adjacent to the financial evaluation should be a risk analysis, and a concise, textual report, which identifies and briefly discusses each of the qualitative costs and benefits.
The short outlines provided in information systems texts are insufficient to assist organisations to apply CBA effectively. Guidelines are needed to enable systems professionals to undertake systematic analytical processes, and deliver products of value to executives and managers, comprising financial evaluation, textual analysis of qualitative factors, and sensitivity and risk analyses. Such Guidelines should embody checklists of costs and benefits which tend to arise in common kinds of IT projects.
In addition, with increasing tendencies towards outsourcing, in both government and the private sector, it is essential that consultants be capable not only in financial evaluation, but also in CBA more generally, and appreciative of the differences between the corporate and government contexts.
Dissatisfaction with CBA has seen it fall from favour in the information systems literature. This study has reinforced some of the negative attitudes; and hence it could be concluded that the technique should remain a backwater, mentioned in text-books but seldom the subject of serious research.
The authors suggest an alternative, however. What may be needed is a comprehensive theoretical framework for CBA, which extends beyond the limited economic basis on which financial evaluation is founded, and incorporates individual and organisational behaviour, and more careful stakeholder analysis, reflecting both intra-corporate and inter-organisational politics.
Such a theoretical framework would necessarily embody the differences between public sector and corporate contexts. In the case of corporations, for example, all stakeholders have a common interest in the company's survival and hence profitability; whereas in the public sector, it is often necessary to recognise many more stakeholders, and there is greater divergence among their interests.
There may also be differences in the incentives for analysts and managers in the two contexts; for example: "One [finding] is that top level (political appointee) public managers are less inclined to develop new information technologies than middle level (career) public managers. In contrast, MIS Success in the private sector is closely tied with support of upper level executives" [Bozeman & Bretschneider, 1986, pp.475-487, cited in Caudle et al. 1991].
The purpose of Cost/Benefit Analysis is the evaluation of project proposals. Some scepticism has been expressed in the information systems literature that the technique is so easily manipulated that it is of very limited practical use as an evaluation tool. The results of this study appear to be entirely consistent with that sceptical viewpoint.
It is contended, however, that to regard CBA as a discredited technique would be an over-reaction. It is important that managers and professionals alike recognise when a political imperative exists, and the technique is being employed essentially for justificatory purposes. In these circumstances, it would be futile to seek to impose formal standards on the undertaking. It would be morally appropriate to refer to the process and the resulting document as a 'cost/benefit justification'.
There remain many project proposals, however, which organisations wish to assess, and whose political context is not heavily dominated by a pre-decision by a powerful participant. In these circumstances, it is entirely appropriate to undertake a formal evaluation process. Where the scale of investment is limited, or the primary impacts can be readily expressed in financial terms, financial evaluation is satisfactory.
The scale and the impacts of many contemporary information technology investments are considerable. CBA is the appropriate mechanism for expressing in financial terms the quantifiable costs and benefits, and juxtaposing against them the non-quantifiable factors. The process is not a decision-making system, but rather a decision support mechanism that enables expression of the key considerations in a compressed format, the surfacing of the various stakeholders' perspectives and values, and the investigation of compromises.
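The juxtaposition described above can be sketched in a few lines of code. All figures, the discount rate, and the stakeholder labels below are purely illustrative assumptions, not data from any actual matching programme: the quantifiable flows are discounted to a present value, while the non-quantifiable factors are recorded alongside the result rather than being forced into dollar terms.

```python
# Minimal sketch of CBA as a decision support mechanism (hypothetical figures):
# quantifiable costs and benefits are discounted and netted, while factors
# that resist quantification are surfaced for each stakeholder, not priced.

def present_value(cash_flows, discount_rate):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(amount / (1 + discount_rate) ** year
               for year, amount in enumerate(cash_flows))

# Hypothetical project figures, in dollars per year
costs = [500_000, 120_000, 120_000]    # development outlay, then operating costs
benefits = [0, 300_000, 350_000]       # recoveries begin only in year 1

net_benefit = present_value(benefits, 0.07) - present_value(costs, 0.07)

# Non-quantifiable considerations are listed per stakeholder, not converted
qualitative_factors = {
    "agency": ["deterrence effect (unmeasured)"],
    "data subjects": ["privacy intrusion", "burden of disputing false matches"],
}

print(round(net_benefit, 2), qualitative_factors)
```

The point of the sketch is that the financial net is only one input to the decision: the dictionary of qualitative factors travels with the number, so that stakeholders' perspectives remain visible when compromises are investigated.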
It is accordingly concluded that Cost/Benefit Analysis is a vital element in the armoury of the information systems manager and professional, but that its limitations and applicability must be appreciated.
Ahituv N., Neumann S. & Riley H.N. (1994). Principles of Information Systems Management, 4th Ed. Business and Education Technologies, Dubuque USA.
ANAO (1993). Audit Report No. 7 of 1993-94: Efficiency Audit: Department of Social Security - Data Matching. Australian National Audit Office, Canberra, October.
APSB (1981). A Guide to Cost-Effectiveness Analysis of ADP Systems. Australian Public Service Board, Aust. Govt. Public Service, Canberra.
Bozeman B. & Bretschneider S. (1986). Public Management Information Systems. Public Administration Review 46.
Caudle S., Gorr W. & Newcomer K. (1991). Key Information Systems Management Issues for the Public Sector. MIS Qtly (June).
Clarke R.A. (1992). The Resistible Rise of the Australian National Personal Data System. Software Law Journal 5(1).
Clarke R.A. (1994a). Matches Played Under Rafferty's Rules: The Parallel Data Matching Program Is Not Only Privacy-Invasive But Economically Unjustifiable As Well. Policy 10(1).
Clarke R.A. (1994b). Dataveillance by Governments: The Technique of Computer Matching. Information Technology & People 7(2).
Clarke R.A. (1995a). Computer Matching by Government Agencies: The Failure of Cost/Benefit Analysis as a Control Mechanism. Information Infrastructure & Policy 4(1), pp 29-65.
Clarke R.A. (1995b). A Normative Regulatory Framework for Computer Matching. Journal of Computer and Information Law. XIII(2).
Coleman T. & Jamieson M. (1994). Chapter 10: Beyond Return on Investment, in Information Management: The evaluation of information systems investments. Leslie Willcocks (Ed), Chapman & Hall, London.
Couger J.D. (1987). Techniques for Estimating System Benefits. Chapter 21 in Galliers R.D. (Ed.) Information Analysis: Selected Readings. Addison-Wesley, Sydney.
Dasgupta A.K. & Pearce D. (1972). Cost Benefit Analysis: Theory and Practice. Macmillan, London.
DoF (1991). Handbook of Cost-Benefit Analysis. Dept of Finance, Canberra.
DoF (1992). Introduction to Cost-Benefit Analysis for Program Managers. Dept of Finance, Canberra.
DoF (1993). Value For Your IT Dollar: Guidelines for Cost-Benefit Analysis of Information Technology Proposals. Dept of Finance, Canberra.
DSS (1991). Data Matching Program (Assistance and Tax): Report on Progress. Dept of Social Security, Canberra.
DSS (1992). Data Matching Program (Assistance and Tax): Report on Progress - October 1992. Dept of Social Security, Canberra.
DSS (1993). Data Matching Program: Report on Progress - October 1993. Dept of Social Security, Canberra.
Early P. (1986). Big Brother Makes a Date. San Francisco Examiner, 12 Oct.
Farbey B., Land F. & Targett D. (1992). Evaluating investments in IT. Journal of Information Technology 7, pp 102-122.
Flaherty D.H. (1989). Protecting Privacy in Surveillance Societies. University of North Carolina Press, Chapel Hill.
Fraud (1987). Review of Systems for Dealing with Fraud on the Commonwealth. Australian Government Public Service.
GAO (1986). Computer Matching: Assessing Its Costs and Benefits. General Accounting Office, GAO/PEMD-87-2, 102 pp.
GAO (1987). Welfare Eligibility: Deficit Reduction Act Income Verification Issues. General Accounting Office, GAO/HRD-87-79FS, 93 pp.
GAO (1993). Computer Matching: Quality of Decisions and Supporting Analyses Little Affected by 1988 Act. GAO/PEMD-94-2.
Gramlich E.M. (1981). Benefit-Cost Analysis of Government Programs. Prentice-Hall.
Greenberg D.H. & Wolf D.A. (1984). An Evaluation of Food Stamp and AFDC/Wage Matching Techniques: Final Report. SRI International for the U.S. Dept. of Agriculture, Food and Nutrition Service.
Greenberg D.H. & Wolf D.A. (1985). Is Wage Matching Worth All the Trouble? Public Welfare, pp 13-20.
Greenberg D.H. & Wolf D.A. (with Pfiester J.) (1986). Using Computers to Combat Welfare Fraud: The Operation and Effectiveness of Wage Matching. Greenwood Press Inc.
Hochstrasser B. (1990). Evaluating IT investments - matching techniques to projects. Journal of Information Technology 5 pp 215-221.
Hochstrasser B. (1994). Chapter 8: Justifying IT investment, in Information Management: The evaluation of information systems investments. Leslie Willcocks (Ed), Chapman & Hall, London.
Keen P.G.W. (1981). Value Analysis: Justifying Decision Support Systems. MIS Qtly (March).
Keen P.G.W. (1991). Chapter 6: Managing the economics of information capital, in Shaping the Future: Business Design Through Information Technology. Business School Press, Boston.
Kendall K. & Kendall J. (1995). Systems Analysis and Design, 3rd Ed. Prentice Hall, Englewood Cliffs, New Jersey.
Kusserow R.P. (1984). The Government Needs Computer Matching to Root out Waste and Fraud. Communications of the ACM 27(6), pp 542-545.
Laudon K.C. (1986). Dossier Society: Value Choices in the Design of National Information Systems. Columbia U.P.
Litecky C.R. (1981). Intangibles in cost/benefit analysis. Journal of Systems Management 32(Feb).
Markus L. (1983). Power, Politics and MIS Implementation. Communications of the ACM 26(6) pp 430-444.
Martin E.W., DeHayes D.W., Hoffer J.A. & Perkins W.C. (1994). Managing Information Technology What Managers Need to Know, 2nd Ed., Macmillan, New York.
Mishan E.J. (1977). Cost-Benefit Analysis, 2nd Ed. Allen & Unwin, London.
OMB (1979). Guidelines to Agencies on Conducting Automated Matching Programs. Office of Management and Budget, Washington DC.
OMB (1982). Computer Matching Guidelines. Office of Management and Budget, Washington DC.
OTA (1985). Federal Government Information Technology: Electronic Surveillance and Civil Liberties. OTA-CIT-293, U.S. Govt Printing Office, Washington DC.
OTA (1986). Federal Government Information Technology: Electronic Record Systems and Individual Privacy. OTA-CIT-296, U.S. Govt Printing Office, Washington DC.
Parker M. & Benson R. (1987). Information Economics: An Introduction. Datamation 33.
Parker M., Benson R. & Trainor H. (1988). Information Economics. Prentice Hall.
PCA (1990). Data Matching in Commonwealth Administration: Discussion Paper and Draft Guidelines. Privacy Commissioner, Human Rights & Equal Opportunities Commission, Sydney, 56 pp.
PCA (1992). Data-Matching in Commonwealth Administration: Report to the Attorney-General. Privacy Commissioner, Human Rights Australia, Sydney.
Reichman N. & Marx G.T. (1985). Generating Organisational Disputes: The Impact of Computerization. Proceedings of Law & Society Association Conference, San Diego, June 6-9 1985.
Sassone P.G. & Schaffer W.A. (1978). Cost-Benefit Analysis: A Handbook. Academic Press.
Senn J.A. (1987). Information Systems in Management, 3rd ed. Wadsworth Publishing Co, Belmont.
SSA (1990). Guide for Cost/Benefit Analysis of SSA Computer Matches, Office of the Chief Financial Officer, Office of Program and Integrity Reviews, Social Security Administration, March 1990.
Sugden R. & Williams A. (1985). The Principles of Practical Cost-Benefit Analysis. Oxford U.P.
Thompson M. (1980). Benefit-Cost Analysis for Program Evaluation. Sage.
Ward J. (1990). A portfolio approach to evaluating information systems investments and setting priorities. Journal of Information Technology 5, pp 222-231.
Willcocks, L. (1992). Evaluating Information Technology investment: research findings and reappraisal. Journal of Information Systems 2 pp 243-268.
Willcocks, L. (1994). Chapter 1: Introduction: of capital importance. in Information Management: The evaluation of information systems investments. Leslie Willcocks (Ed), Chapman & Hall, London.
Willcocks L. & Lester S. (1994). Chapter 3: Evaluating the feasibility of information systems investments: recent UK evidence and new approaches, in Information Management: The evaluation of information systems investments. Leslie Willcocks (Ed), Chapman & Hall, London.
Yetton P. (1994). Converting Information Technology to Performance Improvement 1. Australian Journal of Public Administration 53(3).