Roger Clarke
Principal, Xamax Consultancy Pty Ltd, Canberra
Visiting Fellow, Department of Computer Science, Australian National University
Version of 25 November 1994
© Xamax Consultancy Pty Ltd, 1994
This is chapter 6 of a multi-part Monograph whose contents-page is at http://www.rogerclarke.com/II/NetNation.html
This document is at http://www.rogerclarke.com/II/NetN6.html
The purpose of this Chapter is to investigate the manner in which the Information Infrastructure should be financed. Payment for the services which will be available over the infrastructure is considered first, followed by funding for the infrastructure itself.
In Chapter 2, a distinction was drawn between the services which provide direct benefits to individuals and organisations, and the infrastructure necessary to support those services. The services may be further analysed into:
Charging for the infrastructure is dealt with in a later section. In order to support an analysis of costs and charging alternatives for the services available over the infrastructure, three different levels need to be distinguished:
Charges can be associated with any and all of these levels, e.g. by charging for the establishment of a connection to the network or registration with the particular service, for the time that the person spends connected to the network or a particular service on it, or for the data traffic the person directly or indirectly generates.
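By way of illustration only, the following minimal sketch (in Python, with function names, rates and figures that are purely hypothetical) shows how a periodic charge might be composed from these three levels:

```python
# Hypothetical illustration only: composing a periodic bill from the three
# charging levels discussed above. All rate names and values are invented.

def periodic_bill(connection_fee: float,
                  connect_hours: float, rate_per_hour: float,
                  megabytes_sent: float, rate_per_megabyte: float) -> float:
    """Return a total charge built from connection, connect-time and traffic."""
    access_charge = connection_fee                        # flat fee for the period
    time_charge = connect_hours * rate_per_hour           # time spent connected
    traffic_charge = megabytes_sent * rate_per_megabyte   # data traffic generated
    return access_charge + time_charge + traffic_charge

# Example: $20 flat, 30 hours at $1/hour, 500 MB at 2 cents/MB
print(periodic_bill(20.0, 30.0, 1.0, 500.0, 0.02))        # -> 60.0
```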
This pattern is quite different from that of the manufacturing company assumed in cost accounting text-books. This is not to say that inter-networking is alone in this regard: it has some similarities with electricity, water and telephone services (although their infrastructures support only a single service); and with libraries (although, until very recently, library services were not essentially telecommunications-based - see, however, Steele & Barry 1991). A further important theme which recurs in the following analysis is that the marginal costs of uses are very small, but the fixed costs are significant. Hence an arbitrary cost allocation exercise is inherent in any charging scheme.
Given an appropriate infrastructure, the fixed costs involved in providing each particular service are relatively modest. Depending on the context, they may include the host machine, storage devices, network connection, systems software including database management system, accounting, collection and management capabilities, and management, maintenance and marketing. The variable costs associated with each particular customer and customer-access are very low.
Where fees are to be charged, the question of revenue collectibility arises. This depends on several elements. The first is the existence of a legal framework within which investment will give rise to some form of protected enclave, or monopoly space, within which revenue can be charged at a considerably higher rate than marginal cost. The primary conventional approach to providing protection to investors in innovation involves the various forms of intellectual property (patent, copyright, designs, trademarks, and sui generis, or specific-purpose, constructs such as chip designs). Difficulties have been noted with applying the concept of ownership to software (e.g. Clarke 1984, Clarke 1989) and to data (e.g. Wigan 1992). Other forms of encouraging investment exist, including research and development grants, export development grants and targeted subsidies such as bounties and taxation concessions.
The second pre-condition for exploiting investment is that the services must be able to be denied to potential customers unless they pay the price. Difficulties are being encountered by software suppliers, and by data providers of all kinds, in collecting the monies that should be due to them under intellectual property law. Both data and software are readily replicable. The various stratagems to restrict replication to those people who have been issued with a special hardware device (e.g. a master diskette or a 'dongle'), special software, or special data (e.g. a login id and password) have commonly proven insufficiently effective, or have restricted legitimate use and been unpopular among consumers. Although suppliers, particularly of software, are pursuing high-profile miscreants through the courts in an attempt to lift the level of morality among their customers, there has to be doubt about the medium-term effectiveness of such approaches.
One measure being increasingly adopted by electronic publishers is to embed a unique 'signature' code within every copy of a document or software product which is distributed. This is intended to enable the tracing of unauthorised copies back to their source, and hence to support court action, and provide a significant deterrent against abuse by purchasers of the conditions of the licence. Such signatures must not affect the integrity of the data or software. To be effective, they cannot simply be inserted 'in clear', or even in a consistent position; they need to be undetectable, in order to guard against active attempts to circumvent the protection.
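The Chapter does not prescribe a particular embedding technique; the following toy sketch (in Python) merely illustrates the principle of deriving a purchaser-specific signature and embedding it unobtrusively in a plain-text carrier. The carrier format, the trailing-space encoding and the function names are all assumptions made for the example; a production scheme would need to be far more robust against detection and removal.

```python
# A toy illustration of per-copy 'signature' embedding, assuming (purely for
# the example) that the carrier is plain text, that it has at least as many
# lines as the signature has bits, and that trailing spaces are invisible to
# readers. Real schemes would need to be far more robust.
import hashlib

def purchaser_signature(purchaser_id: str, secret: str, bits: int = 32) -> str:
    """Derive a purchaser-specific bit string from a keyed hash."""
    digest = hashlib.sha256((secret + purchaser_id).encode()).hexdigest()
    return bin(int(digest, 16))[2:].zfill(256)[:bits]

def embed(text: str, signature: str) -> str:
    """Encode one signature bit per line: '1' = trailing space, '0' = none."""
    lines = text.splitlines()
    marked = [line.rstrip() + (" " if bit == "1" else "")
              for line, bit in zip(lines, signature)]
    return "\n".join(marked + lines[len(signature):])

def extract(text: str, bits: int = 32) -> str:
    """Recover the embedded bits from a suspect copy, for tracing to source."""
    return "".join("1" if line.endswith(" ") else "0"
                   for line in text.splitlines()[:bits])
```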
Unless some such scheme proves effective, the new publishing context greatly increases the scope for leakage of ideas, and hence the ability to manage intellectual property is at risk of diminishing to vanishing point. An organisation cannot realistically expect to recover its investment in data by locking it up and denying access. Yet the only point at which control can be exercised is the point of access to the first copy of the data, because as the number of copies increases, the traceability of renegade, second-generation and later copies decreases. The recent trend in data-gathering agencies towards 'user-pays' may prove unenforceable (Wigan 1992). This raises the serious question as to what economic incentives will survive, or can be newly devised, to ensure that raw data continues to be captured and stored, and that acceptable quality levels are sustained.
Alternative approaches to collecting revenue focus less on legal incentives and disincentives, and more on positive, service-related factors. Suppliers can contrive to offer a stream of future improvements which will not be available to a one-time, illegitimate copier, nor even to a one-time purchaser who fails to retain contact with the supplier. Admittedly a dedicated cheat can arrange for repetitive copying of successive versions, but that costs money and effort, and involves a lag. Versions of software products can be released in quick succession, offering new or enhanced features, but with upward compatibility from the previous ones (i.e. with no deleterious effects when converting from one version to another).
In addition, value-added and support services can be bundled into the price and denied to non-subscribers; for example, search capabilities may be made available; training can be offered at significantly discounted prices; libraries of useful examples can be made available; and on-line or telephone support can be provided for loyal, i.e. registered and repetitively-paying, customers.
There is a fairly high level of preparedness among commercial organisations to pay for appropriate levels of quality, reliability, support, progressive enhancement and the right to sue someone for sub-standard products and services. The same cannot be said as confidently for individuals, not only in the sense of private consumers, but also of workers in education and research, and of students. For example, many research staff, particularly in Universities, have little or nothing in the way of discretionary budgets, and some are opportunistic rather than consistent in the research topics they address, and hence less willing to invest in software, data and services. Small numbers of very large institutions tend to employ large numbers of researchers, rather than each research group being a separately incorporated body. Moreover, these large institutions act with some degree of coordination. And there has been a long tradition of teaching-and-research institutions receiving favourable consideration from suppliers of information technology. The result is that buyers of research-related software, data and services have significant power, and use it to ensure that low prices are paid to suppliers, sometimes even below the incremental cost.
There are other difficulties. Conventional economics is applicable to circumstances in which resources are scarce, and those scarce resources are controlled by someone. Difficulties arise where there are qualifications to that notion, as in the distribution of 'public goods' such as air, water and sunshine which are not readily amenable to control. Information economics relaxes the assumption of scarcity of resources, but other traditional assumptions remain (v. Hayek 1945, Machlup & Mansfield 1983). One such assumption is that it is feasible to infer a utility function which sufficiently accurately describes the consumers' preferences in the use of a resource. With any complex technology, that is a dubious proposition, and economists are forced to depend on aggregate profiles, without probing for insights as to how that profile is made up from large numbers of decisions.
With sophisticated information technology, the aggregate profiles are of little use, because the perceptions of the resources differ so much between individuals that any assumption about additivity is unjustifiable, and hence aggregation is meaningless. In effect, there are very large numbers of markets for very many different kinds of resources, rather than a small number of markets for moderately homogeneous resources.
The argument in this section has so far addressed only data access. It is important to extend the analysis to communications services, because, at least initially, the primary uses of generally available wide area networking have been in person-to-person communications and in personal access to processing services. In addition, considerable growth can be anticipated in person-with-group communications (Antonelli 1992).
Personal communications by physical post are labour-intensive, expensive and slow. Faster service for urgent matters costs a premium for courier services or telephone calls. Electronic analogues of physical mail and of telephone calls to a 'voice-mail' 'mail-box' therefore save an appreciable amount of direct cost, but it is necessary to make a not insignificant investment in internal and external network connections, processor-time, disk space, and the creation or acquisition and then maintenance of specialised software. The variable costs are close to zero. One reason is that the telecommunications, processor and disk-space costs present as stepped fixed-cost functions, i.e. a need for faster lines, a faster processor and more disk-space arises only with a general increase in activity. Moreover, many of these are already in place (e.g. the internal network connections) and others are needed for other purposes anyway (e.g. the external network connections) or are available gratis (e.g. the specialised software); hence the actual investment is relatively small, and largely invisible.
The question of the collectibility of revenue was considered above in relation to database access, and found to be problematical. Electronic messaging services cost money when they are provided to government agencies, corporations and private individuals by specialist service-provider companies, or by corporations' own internal cost-centres and profit-centres. Researchers have to date ignored such complications. To most, and perhaps all, employees of research institutions, such services are gratis at the personal, project, Departmental and Faculty or Research Unit level. The costs are generally being absorbed at the highest level of the organisation, through the information technology services unit. This may change over time, and some form of charge-out may be undertaken, as occurs in some institutions with, for example, telephone and even electricity. The variable costs are, however, almost zero, and it will be necessary to allocate the fixed costs across users according to arbitrarily chosen criteria.
In the meantime, third party providers appear unlikely to be able to sell electronic messaging services to researchers based on cost-competitiveness, and so will have to offer substantially superior features. The quickly emergent and potentially vast market among corporations, government agencies and the public for email and related services will be segmented, with some seeking simple and cheap offerings, and others demanding considerable value-added, and prepared to pay a premium price.
Far more critical than the cost and revenue considerations, however, are the service factors. Physical mail removes the element of spontaneity from inter-personal communications, because each despatch of an idea or information involves simple but, in total, significant and seemingly irrelevant labour (finding letterhead, printing, finding the address details, preparing an address-label, enveloping, and a trip to the administrative office), and substantial delays through a logistic chain within and beyond the organisation.
Telephone communications are excellent for short interactions (such as checking possible dates for a meeting), and can be a reasonable means for leaving messages; but they are only effective for serious discussion if both parties are available and prepared. Rather than party-calls, many-to-many conversations are frequently reduced to a succession of two-person calls. The notion of sending a copy of a conversation to third parties seems not yet to exist. A great deal of valuable time and focus is wasted because of interruptions by relatively trivial calls. Yet a significant proportion of the population is now at least competent with a keyboard. A typed message can be re-read, revised and re-structured before transmission, unlike a hand-written or spoken message; and it can be sent to multiple people. It can be transmitted and received into an electronic mailbox for subsequent inspection, avoiding insistently ringing bells and unnecessary interruptions to conversation, thought and work.
Beyond these basic features, the Internet enables the discovery of, and communication with, people who would not otherwise be within an individual's sphere of mutual influence. Although it supports far less rich personal and intellectual relationships than is possible with physical contact, the Internet enables many people to create and sustain intellectually profitable relationships in a cost-effective manner, which their local finance managers do not need to constrain. It enables speedy development of ideas and documents, in ways that are simply not practicable through other messaging mechanisms. For people who are remote from one another but have an established relationship, it provides a cost-effective means of maintaining the relationship between visits to one another's sites and meetings at other locations. Through these new electronic support mechanisms, new ways of working are emerging, and a quantum shift in 'intellectual productivity' is in prospect, to make up for the much-lamented lack of white-collar productivity improvement during the last few decades.
From the viewpoint of corporate information systems theory, the investment in networks and networking software is strategic, rather than a mere short-term economic decision. It is driven by competitive necessity, because if the organisation does not invest, it will fall behind its peers. The tools of conventional economic analysis are of limited assistance in understanding and coping with these changes in communications patterns. Analysis of the policy issues needs to draw heavily on the insights of political economy (e.g. Smith 1980), business and competitive strategy (e.g. Porter 1980, 1985), innovation diffusion (e.g. Rogers 1983) and technology assessment (e.g. Berleur & Drumm 1991).
It will be feasible for some of the impacts of the networked nation to be held at bay for a time. For example, the Commonwealth government's publishing arm, the Australian Government Publishing Service, has reportedly stated that it will refuse to publish documents in hard-copy form if they are also being published on the Internet. In the short term, while the accessibility of the Internet is less than universal, this may act as a significant disincentive to official electronic publication, because many agencies recognise a responsibility to make materials publicly available on an equitable basis. Similarly, conventional publishers may be successful in a rearguard action to slow the onrush of electronic publishing, and attract authors to use the publishers' distribution channels to bring their data and documents to market. The difficulties of ensuring that revenue is gained from disk-based and electronic publications seem likely to lead to imaginative schemes to offer timeliness and currency of data collections, and value-added and support services.
It is important that an appropriate business model be adopted for electronic network services. A rich range exists of alternative ways of organising and financing the undertaking. They vary in terms of who bears the risks, who pays and who benefits; and how much and what kinds of governmental intervention and regulation are involved. At one extreme lies a fully government-funded approach, and at the other an uncontrolled free market. At the heart of the issue is the appropriate balance between competition and collaboration. Consideration must be given to where the boundaries between infrastructure and market forces should be defined.
It is suggested firstly that the conditions are appropriate for services to be generally left to the dynamics of competitive markets, but that action will be necessary to ensure the availability of some kinds of services, and of services for some kinds of organisations and individuals. Among the sectors of the economy which are likely to plead special cases are education, research and health. There is scope for application of the conventional social welfare arguments relating to people with disabilities, the aged and the unemployed. The purpose of this paper extends to arguing the need for such special cases to be recognised, but not to pursuing specific claims.
In the research arena, there are many circumstances in which data collection and maintenance are already undertaken on a non-commercial basis; for example many directories are maintained by individuals or research institutions as a convenience for all members of a discipline or applications area. A commercial sponsor (typically a book publisher, software publisher or other IT supplier) generally needs to be found to support the publication and dissemination of such directories, but with electronic services the cost of those steps may also fall sufficiently low that such projects can proceed without recourse to an external sponsor.
Many forms of research data, however, cannot be so readily collected and maintained. The community of research institutions has to seriously consider defining some forms of research data and related software as being a shared resource, and not part of the commercial realm. One instance of this is data collected under a national research grant, which should arguably be available to the public generally, and the research community in particular. The suggestion is more contentious in the case of data collected under projects which are jointly funded by government, one or more commercial sponsors and/or one or more research institutions: there is a partly moral and partly economic argument that the investment should result in some proprietary interest in the data as a means of recovering the costs of entrepreneurial activity. Research institutions may make bilateral, bartering arrangements to provide access to one another's collections; or act multilaterally by forming collectives.
A particularly thorny issue is the question of government data collections, especially where the data source is obligated under law to provide the data. A national policy decision is needed as to whether the national statistical bureau and other data-collecting agencies should charge for data on a user-pays basis, or should be financed entirely from the public purse as part of the national infrastructure, or should charge for the costs of publication only. The third alternative is used in the U.S.A., and appears to be the most effective mechanism for ensuring the availability of important data.
An important inference from the argument in this paper is that the nation needs to recognise the breadth and the importance of the infrastructure under-pinning electronic network services. As in transport systems (such as rail freight and road haulage, and seaport and airport facilities), it is necessary to conceive of some elements as national system infrastructure, serving the needs of the community, and conducted on a collaborative rather than a competitive basis.
The services which large-scale inter-networking is capable of delivering are dependent on an underlying infrastructure, which was discussed in Chapter 2 and depicted in Exhibit 2. To assist in assessing who should pay the costs of establishing and maintaining the information infrastructure, it is necessary to distinguish:
The whole point of the information infrastructure is to assist the flow of data, and thereby overcome scarcity of information. Data and software, once located, are replicable for costs which are infinitesimal in comparison with the costs of production; and the copies are indistinguishable from the original. The relevance of conventional economics to a networked environment is heavily qualified, because the primary resource, data, is not scarce.
As a result, there are difficulties in applying straightforward cost accounting principles. The variable costs of infrastructure (such as network management and support, energy and accommodation) represent a relatively small proportion of the total costs, because the expensive cost-items are transmission and switching capacity. These are not purchased on a piecemeal and progressive basis, but in large lumps. As a result, there are very high fixed costs in each period (the leasing or depreciation costs for the lines and equipment). The variable costs of an additional message are so small as to be effectively incapable of measurement. The cost profile is therefore almost entirely stepped-fixed in nature.
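The stepped-fixed cost profile can be illustrated with a simple sketch (in Python, with figures invented for the purpose): capacity is acquired in large lumps, so the cost of carrying one more message is zero until a step boundary forces the acquisition of another lump.

```python
# Illustrative sketch of a stepped fixed-cost profile: capacity is bought in
# large lumps, so the period cost depends on how many 'lumps' are installed,
# not on the traffic carried. All figures are invented for the example.
import math

LINK_CAPACITY_GB = 1000.0       # capacity added per installed link, per period
LINK_COST_PER_PERIOD = 50_000   # lease/depreciation cost per link, per period

def period_cost(traffic_gb: float) -> float:
    """Total infrastructure cost for a period, given the traffic carried."""
    links_needed = max(1, math.ceil(traffic_gb / LINK_CAPACITY_GB))
    return links_needed * LINK_COST_PER_PERIOD

def marginal_cost(traffic_gb: float, extra_gb: float = 0.001) -> float:
    """Cost of one more message: zero until a step boundary is crossed."""
    return period_cost(traffic_gb + extra_gb) - period_cost(traffic_gb)

print(period_cost(900))          # 50000
print(marginal_cost(900))        # 0      -- an extra message costs nothing
print(marginal_cost(999.9999))   # 50000  -- until a new lump must be bought
```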
Another consideration is that transmission bandwidth cannot be treated as a resource in the same way as can iron ore and air. The bandwidth is available at all times, and is not consumed by use. It is merely briefly occupied by each message (or, more correctly, each packet). Hence the conventional analysis based on scarcity can be used, but only if it is repeated every few milliseconds, rather than every 1-2 years as in the case of expanding secondary manufacturing capacity, or 5-10 years as in the case of a new mine.
It is important to note that the saturation of node or arc capacity does not have direct financial effects. It costs no-one any money; rather, the traffic at the time suffers a delay in transmission, which may have an effect on the service's effectiveness or perceived quality. Much of the traffic on the information infrastructure is unaffected by small delays, but some must reach its destination within a tolerance interval, to prevent, for example, moving images appearing to lurch from frame to frame.
Almost all of the costs are incurred at the level of the infrastructure, and almost none at the levels of the delivery of the messages, or the uses to which they are put. Yet all of the benefits derive from the messages and their use, not from the infrastructure. The marginal financial cost of an additional transmission is effectively zero, in all circumstances, even if one or more of the arcs and nodes involved is heavily used and queues are growing. There may be non-financial costs, of course, in particular in the forms of delay, disruption and user frustration, and even the occasional node-failure due to a queue exceeding memory capacity.
An important corollary is that, as the network approaches saturation, there is simply no inherent cost-based equilibrating mechanism. There may be a reduction in demand as a result of users becoming impatient and deferring their transmissions until later. However, given that a large proportion of users are buffered from the net by processes running in their workstations and servers, this will only be significant if 'patience' is programmed in (as it is in token-based Local Area Networks).
When congestion occurs, because of the absence of an intrinsic equilibrating mechanism, the net becomes subject to 'the tragedy of the commons': the efficiency and effectiveness of a shared patch of land (or of LAN) is in everyone's collective interest, but in no one's individual interest to protect; i.e. there is no strong mechanism motivating responsible behaviour, only shared wisdom, vision or morality. Those are hard enough to sustain in geographically localised communities surrounding common grazing or arable land; they are likely to prove much more problematical in geographically dispersed electronic communities.
If a supplier, or a collection of suppliers, has any degree of market power, then it is entirely feasible to charge people for the use of network resources, as though they were depletable goods. Charging bases which may be used include:
and rates do not need to be fixed, but can be variable based on, for example:
It may also be possible to base some or all charges on key characteristics of the services which are provided over the infrastructure; for example, some services depend on prompt transmission of single streams of data (such as audio in telephone conversations) and others on prompt and synchronised transmission of multiple streams of data (such as audio and video in video-telephone, video-conferencing, MUDs and distributed real-time visualisation services).
Generally, the simpler services are less disturbed by delays. Email, bulletin boards and file transfer are impervious to sub-second delays, and can absorb much longer delays if necessary. Many nominally on-line services can also tolerate moderate forms of delay; for example, when users are logged-in to remote servers, transmission delays of the same order of magnitude as the processing time to deal with the most recent command are generally entirely acceptable; and a 'network chat' facility (i.e. the display on each participant's screen of text as it is typed in by all other participants) is not seriously affected by uneven transmission patterns, and may even be helped by some degree of spasmodic activity.
The question arises as to whether and how many of the users who suffer delays due to congestion would be prepared to pay to overcome it. In the case of asynchronous services such as email and even ftp, it seems fairly unlikely that many users (or their financial masters) would be prepared to pay very much at all for sustained low-delay service, especially if the charge or surcharge is only applied during relatively brief periods of congestion: the delay in transmission of a message across the globe could blow out from the practical minimum of 0.2 seconds to 2 seconds, 10 seconds or even 2 minutes, but few email users would even notice the delay, let alone be concerned about it.
For many transmissions, e.g. between Australia and the East Coast of North America (which have virtually no overlap between their working days), 2-hour delays and even 8-hour delays, are often of no consequence, and a fee would be tenable where priority delivery was required. There is therefore a great deal of scope for the information infrastructure to use surcharges for urgent delivery during peak periods, as a means of rationing resources on those occasions when they are indeed in short supply.
There have been various proposals to construct a mechanism for charging for some or all of the packets transmitted over the Internet backbone. The MMV 'smart market' proposal seeks to retro-fit into the Internet's design a form of equilibrating mechanism (MacKie-Mason & Varian, 1993a, 1993b). Under this scheme, one factor affecting the price would be the priority the sender selects (with email typically being assigned the lowest priority and messages requiring prompt delivery and synchronicity the highest). The other factor would be the degree of congestion on the network. Each priority-level would carry with it a maximum price which the sender is committed to pay if the congestion on the network warrants it. In this way, a 'smart market' would exist, in which the bulk of the funding of the network would be collected from organisations which demanded urgent delivery of significant volumes of data; and there would be an incentive for users (or at least for the organisations footing the bill) to smooth the traffic flow by using services which were potentially expensive at off-peak times.
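The following sketch (in Python, with bids and capacity invented for the example) illustrates the clearing logic commonly associated with such a smart market: packets are admitted in descending order of bid, and every admitted packet pays the price of the highest bid that was excluded, rather than its own bid.

```python
# A sketch of the 'smart market' idea: each packet carries a bid (the maximum
# its sender is committed to pay), the switch admits the highest bids up to
# capacity, and every admitted packet pays the market-clearing price (the
# highest excluded bid), not its own bid. Figures are invented.

def smart_market(bids: list[float], capacity: int) -> tuple[list[int], float]:
    """Return the indices of admitted packets and the clearing price they pay."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    admitted = sorted(order[:capacity])
    if len(bids) <= capacity:
        return admitted, 0.0                 # no congestion: transmission is free
    clearing_price = bids[order[capacity]]   # highest bid that missed out
    return admitted, clearing_price

# Example: 6 packets bidding for 3 slots in the current interval.
bids = [0.0, 0.5, 0.1, 0.9, 0.0, 0.3]        # email might bid 0, urgent traffic more
print(smart_market(bids, capacity=3))        # -> ([1, 3, 5], 0.1)
```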
There is a well-established economic argument that the user of resources should pay for them. The rationale is that, in general, the resource will, as a result, only be used where its use provides sufficient payback, and hence the resource will not be wasted. This is appropriate in the case of limited resources, but not necessarily in the case of resources which, within bounds, may be plentiful, and whose heavier usage may have little or no cost and may well confer benefit.
Moreover, it is only effective in respect of scarce resources if the parts of organisations that are actually wasting them feel the brunt and adjust their behaviour. In many organisations, for example, the pain associated with electricity, floor-space, mail and even telephone expenses is not transferred to the people who gave rise to them. Electronic network infrastructure costs are currently being absorbed by universities at the highest budget level (in effect, as a cost of being in the education-and-research business). In corporations and government agencies as well, they might also be absorbed at a similarly high level, blunting their potential equilibrating effect.
Where there is any portion of a fee which reflects the volume of usage, there is a need for the allocation of fixed costs across that usage. There are many possible bases that could be used, including direct measures, such as:
and indirect ones, such as:
All allocation bases are arbitrary, and therefore always arguably unfair to one party or another. The most obvious rationale is traffic-volume in peak periods and across bottlenecks, because those are areas of quite apparent resource limitation. Such an approach requires considerable data-collection, however, in order to be able to perform the necessary accounting. Depending on the approach taken, the data required to support cost allocation might include:
Some of these may be readily available, but some would be likely to require substantial design, programming and storage (Ray 1993). The data-gathering activity is expensive in terms of delays to messages. It also generates data which can in some circumstances be sensitive, because it represents a surveillance trail of net activities by identified organisations and individuals.
Moreover, the more detailed the level of cost allocation or pricing, the greater the amount of data which must be gathered and analysed, and the greater the effort needed to communicate debts to customers, and to collect the debts. This has potentially significant implications for traffic volumes on the network, for the complexity of the software running on switches, and for the staffing of support organisations.
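As a simple illustration of one such (arbitrary) allocation basis, the following sketch (in Python, with organisation names and traffic volumes invented for the purpose) apportions a period's fixed infrastructure cost in proportion to each organisation's measured traffic during peak periods:

```python
# Hypothetical sketch of one arbitrary allocation basis discussed above:
# sharing a period's fixed infrastructure cost across organisations in
# proportion to their measured peak-period traffic. Names and volumes invented.

def allocate_fixed_cost(fixed_cost: float,
                        peak_traffic_gb: dict[str, float]) -> dict[str, float]:
    """Apportion a fixed cost in proportion to each user's peak-period traffic."""
    total = sum(peak_traffic_gb.values())
    if total == 0:
        # No peak traffic measured: fall back to an equal split (equally arbitrary).
        share = fixed_cost / len(peak_traffic_gb)
        return {org: share for org in peak_traffic_gb}
    return {org: fixed_cost * gb / total for org, gb in peak_traffic_gb.items()}

print(allocate_fixed_cost(100_000, {"uni-a": 40.0, "agency-b": 50.0, "firm-c": 10.0}))
# -> {'uni-a': 40000.0, 'agency-b': 50000.0, 'firm-c': 10000.0}
```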
A further difficulty with the application of traffic-based pricing is that it may result in organisations incurring large and largely unexpected bills. It would appear unfair to the organisations that generate such traffic that they should be required to fund the entire infrastructure; and hence a considerable degree of opposition could be anticipated from organisations that probably have a reasonable amount of power.
In commercial and governmental activities, the mission and tasks are moderately well-structured and plannable, and the full battery of economic and cost accounting tools is available to assist in ensuring efficiency of operation. These ideas can still be applied in a qualified manner to product development and industrial research and development. But the further one proceeds along the spectrum towards 'pure research', the less structured and less plannable the activities become, to the point where fiscal prudence has to be heavily compromised by freedom and serendipity. At least in the context of the Internet, the breakthroughs typically come from renegades and misfits, not from be-suited corporate executives or the restrained professionals they employ. In the research arena, and to a lesser extent the education field as well, it appears to be essential that a charging scheme be devised which, within the boundaries of fiscal prudence, has the effect of encouraging, not discouraging, experimentation and inventiveness.
A further question which must be addressed is whether basic services (by which is meant those which are tolerant of modest delays and require no synchronicity) should be treated as being universally accessible. This approach mirrors the notion of 'universal service', long used in the telephone industry to refer to that which must be made available to the private subscriber at an accessible price. It has, of course, been a moving target, with new services and service features (e.g. STD, ISD) gradually becoming mainstream. As recently as mid-1994, initiatives were launched in Australia to address the needs of the 1 million Australians who perceive the threshold costs of installing a telephone in their place of residence to be too high.
A number of further difficulties arise in identifying who 'the user' is who should pay for services and/or a share of infrastructure costs. At the level of an email message, should it be the sender or the receiver or both? The physical postal services have functioned on the basis of sender-pays, but for some electronic services this may be unreasonable. In the case of a message sent to a mailing-list for re-transmission to hundreds of receivers, there are not just two but three parties who might share the costs in various ways. In the case of file-transfers and database accesses, should the charges be levied on the originator of the document, the owner of the repository (which is the actual sender of the bulk-message), or the organisation or person fetching a copy of it?
To make a rational choice, consideration must be given to the effect of each possible charging scheme on the behaviour of all parties involved. Do we wish to tax people who send rubbish messages to mailing-lists? Most of us who are on mailing lists would probably say 'yes', but we would all have varying approaches to defining 'rubbish messages'. Do we wish to charge people who put documents in internationally accessible databases? We might prefer not to create a blanket payment scheme (because that would create an incentive for junk-documents), but we might, as a community, prefer not to create a disincentive. There have been long-standing proposals to create an electronic environment in which people are paid for documents which others access.
An alternative to the physical-post charging scheme is that used in voice telephony, which charges the initiator of a call, subject to some circumstances in which the cost can be diverted, at the recipient's choice, to their account. The public switched telephone network, however, is a connection-based scheme, i.e. a series of arcs are committed for the duration of the session, and the unused capacity represented by lulls in the conversation is wasted. The Internet, and to the extent that the Internet model is adopted, the global information infrastructure, operates on a packet-switching basis, whereby the channel capacity is available to the next user who explicitly or implicitly causes a packet to be despatched.
Superficially attractive though the 'user pays' principle may be, there are considerable difficulties in applying it to payment for network-based services and the underlying infrastructure.
The argument pursued in this paper is that the vast majority of costs, particularly of the central and regional infrastructure, are fixed, and hence, within the capacity of the channels, equipment and software, vary hardly at all with the volume of traffic.
There are three broad approaches which could be adopted to the funding of the investment in the information infrastructure:
Infrastructure could be funded at a global level by a sponsor, conventionally by a government. At the level of the central infrastructure (the primary backbones and switches), this is entirely feasible, because the costs involved may be relatively small (perhaps of the order of some tens of millions of dollars during the first 2-4 years). This was the original ARPANet/Internet model, with the U.S. Department of Defense originally playing the primary funding role, and the National Science Foundation (NSF) subsequently picking up the baton.
A variety of possible funding agencies exist. In respect of the Australian Academic and Research Network (AARNet), which has been the initial area of growth, significant contributions have been made by:
The majority of the funding, however, has been from universities and research institutions (including the CSIRO), in the form of levies proportional to the capacity of the gateways installed between their internal network and AARNet.
In relation to the accessibility of network-based education services, it is clear that the major responsibility rests with:
and, indeed, a project relating to distance learning has been in train for some time.
In relation to the health care sector, a responsibility rests with:
and an initiative called the Health Communications Network has been under development in this area for several years. It is, however, envisaged that the Commonwealth would make only relatively small grants towards its implementation.
In relation to commerce and industry, a strategic perspective could be adopted by:
It is also feasible that the Department of Defence (DoD) could see fit to contribute, on the grounds that it has substantial interactions with industry, and that these could be much more effective, efficient and adaptable if they were migrated onto an appropriately secure network.
In relation to usage by Commonwealth government agencies, the agencies primarily responsible are:
Agencies of State Governments should not be excluded from such considerations. Another possibility is that some specific off-setting savings may be able to be identified within particular portfolios, enabling transfer of budget towards the information infrastructure; for example, the reduction in duplicate acquisitions by libraries under the Distributed National Collection initiative should free substantial sums of money.
The possibility should not be overlooked of a corporation funding significant elements of the infrastructure. Potential justifications include the promotional benefits, and the market-share and profits it judges that it can achieve in markets which are dependent on the infrastructure being in place. This possibility was explicitly addressed by an Expert Group in Research Data Networks (ASTEC 1994), which considered it entirely feasible that, under appropriate conditions, a consortium of companies may be prepared to fund a network, at least for research purposes.
Within the global-funding approach, two further sub-alternatives exist. The sponsor(s) could absorb the costs without seeking to recover them directly. If the sponsor is government, this would represent public investment from taxpayers' funds (or as public servants are wont to call it, 'consolidated revenue'), and would reflect a strategic decision that the public wanted and/or needed the infrastructure.
Alternatively, the sponsor could seek to recover the funds in some fairly direct manner, from organisations or individuals who connect to or use the infrastructure. The possibilities include:
These levies and fees might be charged in a non-discriminatory manner, or some groups could be singled out for no fees, lower fees, higher fees, or all the fees; or they could be charged on a sliding scale, with either increasing or decreasing rates.
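A sliding-scale levy might, for example, be based on the capacity of an organisation's gateway (as with the AARNet levies mentioned earlier), with successive capacity bands attracting different marginal rates. The following sketch (in Python, with bands and rates invented for the purpose, and here decreasing as capacity rises) illustrates the computation:

```python
# Sketch of a sliding-scale levy, assuming (hypothetically) that the levy is
# based on an organisation's gateway capacity and that successive capacity
# bands attract decreasing marginal rates. All bands and rates are invented.

BANDS = [  # (upper bound of band in Mbit/s, rate per Mbit/s within the band)
    (2.0, 1000.0),
    (10.0, 600.0),
    (float("inf"), 300.0),
]

def sliding_scale_levy(capacity_mbps: float) -> float:
    """Charge each successive capacity band at its own marginal rate."""
    levy, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if capacity_mbps <= lower:
            break
        levy += (min(capacity_mbps, upper) - lower) * rate
        lower = upper
    return levy

print(sliding_scale_levy(1.0))   # 1000.0
print(sliding_scale_levy(8.0))   # 5600.0  (2 Mbit/s at 1000 + 6 Mbit/s at 600)
```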
Compared with the costs of central infrastructure, much greater difficulties arise at the level of regional infrastructure. Estimates of the cost of laying high-capacity fibre-optic cable to every front-door in Australia, for example, have ranged up to $40 billion. These are probably quite unrealistic, because the capacity of reasonable-quality copper cable, which is already widely-installed, continues to increase, and fibre-optic cable may only be installed in industrial and commercial areas, plus those suburban areas judged by providers to be capable of returning, from 'info-tainment', a profit on investment. The primary services which would remain difficult to deliver to most households are those which involve high-quality video. Such estimates do, however, provide an indication of the scale of the regional infrastructure challenge.
Finally, the infrastructure could be funded entirely from levies or usage charges. The levies might be imposed on large organisations (e.g. all government agencies and/or corporations above a nominated size), or take the form of a specific surcharge on some form of payments to government (i.e. taxes of various kinds), or of a deduction from some form of payments by governments. Usage charges would have a stultifying effect on demand, and any significant reliance on them would therefore conflict with the notion of the information infrastructure as a source of national strategic advantage.
As was argued earlier in this paper, to the extent that infrastructure funding is to be recovered, it is appropriate that a significant proportion of the levies or charges be applied to users of those services which generate significant volumes of synchronised traffic at peak periods, and which therefore determine the maximum capacity which needs to be installed and maintained.
This section's purpose has been to establish a rational basis for discussion of the economic management of the information infrastructure. Key elements of the discussion are:
This is chapter 6 of a multi-part Monograph. The remainder is at http://www.rogerclarke.com/II/NetN9.html.