Principal, Xamax Consultancy Pty Ltd, Canberra
Visiting Fellow, Department of Computer Science, Australian National University
Version of 28 November 1993
Presented at Networkshop'93, Melbourne
© Xamax Consultancy Pty Ltd, 1993
This document is at http://www.rogerclarke.com/II/AARNetEcs.html
The Australian Academic and Research Network (AARNet) is a victim of its own success. It has been financed on a goodwill-and-understanding basis during its first three years, but its growth has quickly driven it to a scale at which it can no longer be treated as a precocious grandchild: the research community is being forced to recognise it as important shared infrastructure, and to manage and finance it accordingly.
The alternative models for managing and financing AARNet are being discussed, avidly, and in many different contexts. AARNet is too important for these discussions to remain ad hoc. This paper endeavours to provide a framework within which they can take place, by applying the basic notions of micro-economics and cost accounting. Unfortunately, the nature of service provision over inter-connected networks is such that some sophistication is required.
It is naive to expect AARNet to continue to be funded essentially by an annual levy on the major research institutions. There is a strong economic and national-strategic case for the infrastructure to be substantially centrally funded. There is also a strong economic argument for usage charging to be imposed, but only in relation to those services which generate the greatest traffic at peak periods, and which therefore determine the maximum capacity which needs to be maintained.
The Australian Academic and Research Network (AARNet) is a means for inter-connecting the networks of the organisations making up the Australian research community. It uses the TCP/IP (Transmission Control Protocol / Internet Protocol) suite of standards, although it also supports smaller volumes of DECnet, Appletalk and X.25 transmissions. It functions as the Australian segment of the worldwide 'Internet', which is the set of networks inter-connected using TCP/IP.
TCP/IP networks use a connectionless, packet-switching mechanism. This means that there is no committed connection between the two end-points of a transmission, and that data does not travel in a single, unbroken stream from sender to receiver. Instead, the data is broken into packets (of about 200-2,000 bytes) and sent from node to node along the network.
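The packetisation step can be sketched in a few lines. This is an illustrative model only, not AARNet code; the 1,000-byte packet size is an assumed value within the 200-2,000 byte range mentioned above:

```python
# Illustrative sketch: splitting a message into fixed-size packets,
# as a connectionless network does. Each packet then travels
# independently from node to node.
PACKET_SIZE = 1000  # bytes; hypothetical value for illustration

def packetise(message: bytes, size: int = PACKET_SIZE) -> list[bytes]:
    """Break a message into packets; reassembly at the receiver
    simply concatenates them in order."""
    return [message[i:i + size] for i in range(0, len(message), size)]

msg = b"x" * 2500
packets = packetise(msg)
print(len(packets))              # 3 packets: 1000 + 1000 + 500 bytes
print(b"".join(packets) == msg)  # True: reassembly recovers the message
```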
Packet-switched networks are distinctly different from connection-based networks such as the Public Switched Telephone Network (PSTN) and on-line sessions between workstations and servers. These dedicate a series of arcs to make up a connection, and commit them for the duration of a session or call, rather than just for the time it takes to send a single message. The packet-switching approach denies priority to messages within pre-arranged conversations, and as a result real-time transmissions such as voice and video are likely to sound and appear jerky to the receiver. In return, packet-switching makes much more efficient use of the arcs and nodes in the network, provides much greater total throughput, and is accordingly much cheaper for a given volume of traffic.
Each node on the network (roughly speaking, that means the gateway processor on each organisation's internal network) co-operates with the remainder by acting as a switch or exchange, and passing data along towards its target. In order to do this, it has to maintain a set of routing tables which determine where, depending on its ultimate objective, each packet has to be sent next.
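The per-packet routing decision described above amounts to a table lookup at each node. The following toy sketch (with invented node and network names, and a flat table rather than the longest-prefix matching real routers perform) conveys the idea:

```python
# A toy next-hop routing table: for each destination network the node
# knows about, the table names the neighbouring node to forward to.
# All names here are hypothetical.
routing_table = {
    "anu.edu.au":     "canberra-gateway",
    "unimelb.edu.au": "melbourne-gateway",
}
DEFAULT_ROUTE = "international-link"  # where unknown destinations go

def next_hop(destination: str) -> str:
    """Decide where a packet bound for `destination` is sent next."""
    return routing_table.get(destination, DEFAULT_ROUTE)

print(next_hop("anu.edu.au"))   # canberra-gateway
print(next_hop("mit.edu"))      # international-link
```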
The Internet emerged during the 1980s from the Defence Advanced Research Projects Agency's ARPANet. Since AARNet was implemented in 1990, it has grown at a dramatic rate, initially in terms of the number of corporate networks it connected, and continuously in terms of the volume of data it carries. It is essentially a non-hierarchic network, i.e. there is no central controlling point (although there is a central service point for management and trouble-shooting). Its functioning depends on co-operation among many processors and hence also among the organisations which own them.
To the end-user, AARNet is almost transparent. It provides an extended address-space of machines and users, which comprises the entire worldwide Internet. It has high resilience to downtime on individual nodes and arcs, and very high reliability and efficiency. The delivery time for a message is well under a second anywhere within Australia, and usually under a second anywhere in the world. This is rather faster than physical mail, and even than fax (Kahin 1992, Huston 1993). The speed and convenience of data transmission results in greater immediacy in human communications. This has unleashed an explosion of new ideas in a wide array of disciplines, to the extent that Clarke (1993) argued that AARNet, and the Internet generally, are bringing about a revolution in research practice.
To date, these services have seemed, from the perspective of the individual researcher, to be gratis. This is because the costs have been almost entirely paid for from the central budgets of research institutions, commonly through the computer services centre.
The costs of operating AARNet are becoming increasingly visible. This is partly because the number of individuals, and the volume of data transmission that each is generating, are growing steeply. It is also because some uses inherently demand much higher bandwidth. In addition, the scale of investment needed is passing the threshold of bearability by Universities' central budgets. As a result, there is a quickly decreasing willingness on the part of research establishments to pay the annual levies by which the Australian inter-network was established, and by which to date it has been primarily sustained. It is urgent that a way or ways be found in which the ongoing investment and running costs can be funded.
The purposes of this paper are:
The paper commences by outlining the elements of micro-economics and cost accounting, and then applying them to the patterns of costs arising in the operation of AARNet. The paper then examines several alternative models for the financing of AARNet, and discusses the impact of uses beyond the central one of research. The analysis is necessarily tentative, because the field of inter-networking economics is emergent and highly dynamic. Some preliminary conclusions are drawn.
Conventional capitalist economics is concerned with the allocation of scarce resources. It is predicated on the belief that scarce resources are most efficiently allocated in circumstances in which the price is set in a marketplace of informed purchasers. This general principle is based on a number of assumptions, and a variety of imperfections mar its application.
Some organisations choose to set a fixed price for their goods and services. Some calculate their prices on a cost-plus basis, i.e. by adding some margin to their average costs; whereas others set a price according to what they estimate the market will bear. In many kinds of markets, however, sellers are not in a position to set fixed prices, and can only indicate the level of an offer that they would be happy to accept. The actual price depends on a bidding or bargaining process, and is a point at which both seller and buyer are sufficiently satisfied.
Consumers may be impervious to the prices charged by suppliers (at least within a broad price range - witness the prices cheerfully paid for drinks at sporting and entertainment events). If much the same amount is traded irrespective of the price consumers have to pay, demand is said to be inelastic. In most cases, however, the particular costing and pricing policies chosen will have a noticeable influence on consumer behaviour and hence on total turnover. Most producers are therefore not able to simply decide on production capacity and production level, and set prices to reflect those decisions; it is necessary to assess the likely impact of the prices on demand.
Where organisations offer goods or services which are under-supplied, the margin of the selling price above the seller's average cost can be considerable. In many circumstances this advantage is likely to be quickly neutralised as other suppliers offer comparable products for a lower price. Under certain conditions, however, super-profits may be able to be gained on a long-term basis. This is particularly the case where a monopoly exists, and it is not practicable for any other organisation to offer equivalent goods or services.
It is a tenet of conventional economics that for scarce resources to be efficiently allocated, the people who make the allocation decisions must enjoy the benefits and suffer the costs. The user-pays principle is intended to ensure that this occurs. The justification is not moral; it is that, in this way, rational decision-makers will only use goods and services which have a marginal benefit to them, and hence wasteful goods and services will wither and die, and resources will be used efficiently. In a perfect market, the price balances supply and demand: in cybernetic terms, markets are a mechanism for achieving dynamic equilibrium or homeostasis.
The approach taken to pricing reflects the motivation of the seller (or, more precisely, of the price-setter). The business motivation is profit (qualified by such related factors as status, market share and position-taking, and subject to the time-horizon in which profit-making is desired); hence the purpose of pricing is to produce as high a margin as practicable between revenue and costs. In a community-based, not-for-profit, or service-motivated environment, on the other hand, the primary purpose is efficiency in the use of resources, and equitable distribution of costs (which may include discounts below cost for disadvantaged customers, free goods for the volunteers, and super-prices for the better-off among the community).
Organisations which provide multiple goods and services may choose not to ensure that the revenue from each product covers the costs of producing it; for example, loss-leaders attract customers into shops; and the introductory period for new products is paid for from established products. Where an organisation chooses to under-charge on some of its basket or portfolio of products and over-charge on others, the practice is referred to as a cross-subsidy.
Conventional economics is applicable to resources which are scarce, which are owned or at least controlled by a legal entity, and whose essential characteristics are capable of being expressed in financial terms. Many kinds of resources do not satisfy those conditions; for example, air, water and sunshine may not be scarce, and are generally not controlled by anyone. The conventional solution by economists to the problem of non-financial costs (sometimes referred to as 'social' costs) is to dub them 'externalities', and then contrive ways of having them reflected in the market price.
Another important weakness in conventional economics is the implicit assumption that an aggregate utility function can be inferred, i.e. that the trade-offs which many different consumers make between price and acquisition of a good or service can be expressed in a single diagram or formula. In practice, goods and services are perceived quite differently by their potential buyers; and each consumer's preferences change continually, depending, for example, on how long they've been shopping, and how persistent the salesman is.
A class of exceptions of especial importance to this discussion is those resources addressed by 'information economics'. This sub-discipline arose because some kinds of resources (such as data and software) can be replicated effectively and very cheaply; so the notion of scarcity changes, and the mechanisms for overcoming it change too. Control of such resources, and hence price-setting, also function differently.
Conventional economics regards the market as not only the mechanism whereby resource allocation efficiency is achieved, but also the means whereby individuals are motivated to develop and offer new goods and services. The pure concept of markets has proven to be applicable to only very few real-world situations. The many imperfections result in the need for various forms of intervention by organisations and individuals who have market power, or legal or de facto authority.
Micro-economics' most directly useful expression is in the discipline of cost accounting. Prices may be set by the marketplace, and thereby be out of the organisation's control. Costs, on the other hand, can be managed, provided they are understood and monitored. This section briefly introduces the key terms. Readers with experience in the area will prefer to skip to the next section.
The first distinction which cost accounting makes is between one-time costs and recurrent costs. One-time costs occur at the commencement of a project, whereas recurrent ones arise at intervals. The intervals may be of any length, such as per-message, per day, per month or per annum. These ideas are different from the notions of investment in an asset and payment of expenses: many one-time costs (such as the purchase of a workstation for the AARNet network controller) give rise to an asset which is depleted only relatively slowly and which is therefore treated by financial accounting as an investment to be amortised or depreciated over time; but some one-time costs (such as the current fortnight's salary for the network controller) give rise to only very short-term benefits and are regarded as expenses. Investment and expense are concepts of use in financial accounting, and have been harnessed as a basis for calculating taxable income. But they are of little benefit to management decision-making, which instead focuses on cash flows and the timings of cash flows.
The next distinction is between fixed costs and variable costs. Fixed costs have to be expended in order to make a facility available and cannot be directly associated with any particular use of the facility; whereas variable costs can be directly associated with a particular good or service produced using the facility. In practice, this distinction is inadequate, as many costs are fixed only within a particular range of production. The idea of stepped-fixed costs reflects the fact that additional expense is needed to enable a facility to, for example, expand its widget-manufacturing capacity from 10,000 to 100,000 units, or the peak transmission throughput of an arc from 700Kb/sec to 1.4Mb/sec.
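A stepped-fixed cost profile can be made concrete with a small sketch. The capacity bands and dollar figures below are invented for illustration; the point is that cost is flat within a band and jumps when peak demand crosses into the next one:

```python
# A sketch of a stepped-fixed cost profile: the annual lease cost is
# flat within a capacity band, and an extra unit of peak demand that
# crosses a band boundary forces the whole next step to be purchased.
CAPACITY_STEPS = [   # (capacity ceiling in Kb/sec, annual lease cost $)
    (700,   100_000),
    (1_400, 180_000),
    (2_800, 320_000),
]

def annual_cost(peak_demand_kbps: float) -> int:
    """Return the annual cost of the smallest capacity step that can
    carry the given peak demand."""
    for ceiling, cost in CAPACITY_STEPS:
        if peak_demand_kbps <= ceiling:
            return cost
    raise ValueError("demand exceeds the largest available capacity step")

print(annual_cost(500))  # 100000: comfortably within the first band
print(annual_cost(701))  # 180000: one extra Kb/sec forces the next step
```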
Whatever decisions it makes about the pricing of its goods or services, a supplier must generate revenue large enough to cover all of its costs. There are various ways in which this revenue can be derived. A benefactor may exist, which will provide funds as needed. Alternatively, a small number of major users may cover all costs, such that minor users get the goods or services gratis. Most commonly, however, the costs are charged out to users on some more or less 'fair' basis.
Unfortunately there is seldom just one basis which is obviously fair, and some degree of arbitrariness is inherent in any such mechanism. A common approach is to measure the goods and services used by each customer, and then charge customers the proportion of the total costs corresponding to their share of the goods or services. This implicitly allocates the fixed costs on the basis of the final destination of the goods or services. The allocation process is complicated by differences among the goods or services provided by a single supplier, which have to be expressed in some common term (e.g. an oil refinery produces not only high-grade petrol but also lower quality fractions and heavy oil byproducts at the bottom of the column).
Generally, once an allocation basis has been selected, some customers will regard it as unfair, e.g. because they are further away from the source of supply and receive a lower quality of good or service (e.g. stale vegetables). Commonly also, some customers will have sufficient power to resist the formula and demand discounts. This is not necessarily naked aggression. They may generate such a large volume of demand that they make it possible for the factory to invest in equipment which makes the variable costs relatively very small, and reduces the average cost of each unit of production much lower than it would otherwise have been.
This gives rise to the notion of variable costing as distinct from full costing. Full costing assumes that all of the costs are to be shared equally among the customers, and divides the total cost by the total production. Variable costing reflects the fact that the facility would normally be used to produce a particular volume of goods and services, and that the normal users have already covered the fixed costs; hence additional, abnormal purchases can be priced based on their incremental cost rather than their average cost. This assumes that the basis on which the goods or services are sold is not an entirely stable and predictable market, but that instead a spot market exists for them. Such situations are highly dynamic, difficult to manage and seem inherently unfair to the customers who have taken the bulk of the organisation's production. On the other hand, it results in highly efficient use of scarce resources.
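The full-costing versus variable-costing distinction can be put in numbers. All figures below are invented for illustration:

```python
# Full costing vs variable costing, per the distinction above.
# All dollar figures and volumes are invented for illustration.
fixed_costs = 1_000_000        # e.g. annual line and equipment leases
variable_cost_per_unit = 0.01  # near-zero marginal cost per unit
normal_volume = 10_000_000     # units the regular customers take

# Full costing: every unit carries an equal share of the fixed costs.
full_cost_price = fixed_costs / normal_volume + variable_cost_per_unit
# roughly $0.11 per unit

# Variable costing: an abnormal, incremental sale is priced at its
# marginal cost only, because the normal users have already covered
# the fixed costs.
variable_cost_price = variable_cost_per_unit
# $0.01 per unit: far cheaper, but felt as unfair by regular customers
```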
Naturally the collection and analysis of cost data is itself costly, and the benefits of the activity must be balanced against those costs. In some circumstances permanent gathering of detailed cost data is justifiable, while in others (such as copyright payments by Universities), periodic sampling is undertaken.
A further consideration is working capital. Some organisations are fortunate enough to be able to collect their revenue before they pay their expenses, but in the more common circumstances of late receipts and early payments, the organisation must have access to a pool of financial resources to tide it over the intervening period. In addition to maintaining working capital, an organisation must have funds available for the maintenance of its productive capacity, and to meet any growth in demand for its goods and services. Quickly growing organisations need access to considerable amounts of investment funds.
There are, broadly speaking, three sources of capital: profits which have been made in the past and have been retained by the organisation rather than paid out to investors; capital or equity investment (on which dividends will have to be paid from future profits); and borrowings (on which interest will need to be paid, and which will need to be progressively re-paid). In the case of borrowings, the interest and repayment of capital can only be achieved if they are added to the cost calculation, and revenue is sufficient to cover them as well. In the case of dividends and capital repayment, it is conventional to exclude them from the cost calculation, seek to generate revenue in excess of total costs, and pay the dividends from this residual or profit.
This is not the place to enter into a description of AARNet; see, however, Kehoe (1992), Krol (1992), LaQuey (1992) and Huston (1993). In order to support the intended economic analysis, four different levels need to be distinguished:
In the following paragraphs, it will be argued that this pattern is quite different from the manufacturing company assumed in cost accounting text-books. This is not to say that inter-networking is alone in this regard: it has some similarities with electricity, water and telephone services (although their infrastructures support only a single service); and with libraries (although, until very recently, library services were not essentially telecommunications-based - see, however, Steele & Barry 1991).
One way in which the conventional model ill-fits the AARNet environment is that there is a distinct shortage of rational decision-makers, among both users and their managers. This is, admittedly, a feature which is shared with most other marketplaces. More unusually, networks provide services based on data and/or software. The whole point of the Internet is to assist the flow of data, and thereby overcome scarcity of information. Data and software, once located, are replicable for costs which are infinitesimal in comparison with the costs of production; and the copies are indistinguishable from the original. The relevance of conventional economics to AARNet, predicated as it is on scarcity of resources, is heavily qualified.
There are also difficulties in applying straightforward cost accounting principles. An inspection of the elements listed in Exhibit 1 suggests that the interfaces and the rights are inherent in the cooperative nature of the Internet. The network management and support arrangements represent a quite small proportion of the total costs, and are in any case relatively fixed during periods of 2-3 years. The bulk of AARNet's costs, and hence of the discussions about AARNet's economics, have to do with transmission capacity and switching capacity.
In general, transmission and switching capacity are not purchased on a piecemeal and progressive basis, but in large lumps. As a result, there are very high fixed costs in each period (the lease costs on the lines and equipment). Meanwhile, the variable costs of an additional message are so small as to be effectively incapable of measurement. Particularly in view of the growth rate of AARNet traffic, the cost profile can be considered as almost entirely stepped-fixed in nature.
Another consideration is that transmission bandwidth cannot be treated as a resource in the same way as can iron ore and air. The bandwidth is available at all times, and is not consumed by use. It is merely briefly occupied by each message (or, more correctly, each packet): the conventional analysis based on scarcity can be used, but only if it is repeated every few milliseconds, rather than every 1-2 years as in the case of expanding secondary manufacturing capacity, or 5-10 years as in the case of a new mine. It is important to note that saturation of node or arc capacity does not have direct financial effects: it costs no-one any money; but rather the traffic at the time suffers a delay in transmission.
Almost all of the costs are incurred at the level of the inter-network infrastructure, and almost none at the levels of the services, the uses or the messages. Yet all of the benefits derive from these three layers, not from the infrastructure itself. The marginal financial cost of an additional transmission is effectively zero, in all circumstances, even if one or more of the arcs and nodes involved is heavily used and queues are growing. There may be non-financial costs, of course, in particular in the forms of delay, disruption and user frustration, and even the occasional node-failure due to a queue exceeding memory capacity. Hence, as the network approaches saturation, there is simply no cost-based equilibrating mechanism. There may be a reduction in demand as a result of users becoming impatient and deferring their transmissions until later. However, given that a large proportion of users are buffered from the Internet by processes running in their workstations and servers, this will only be significant if 'patience' is programmed in (as it is in token-based Local Area Networks).
When congestion occurs, because of the absence of an intrinsic equilibrating mechanism, AARNet becomes subject to what economists call 'the tragedy of the commons': the efficiency and effectiveness of a shared patch of land, or of a LAN, is in everyone's interest, but in no one's individual interest; i.e. there is no strong mechanism motivating responsible behaviour, only shared wisdom, vision or morality. Those are hard enough to sustain in small physical communities surrounding common grazing or arable land; they are likely to prove much more problematical in electronic communities.
In addition to transmission bandwidth, there are other areas in which common assets exist and must somehow be protected. One is the Internet address space (i.e. the 32-bit identifiers used by software, or the 4-12-digit numeric identifiers used by Internet-savvy scientists, rather than the alphanumeric descriptors used by normal people). This address-space is rapidly becoming exhausted, and that perception is leading some network administrators to hoard addresses, exacerbating the problem. Unfortunately there is no economic disincentive to counter the temptation to hoard (Geoff Huston, personal communication, 1 November 1993). There is a related factor involving the routing tables carried by each node: the creation of a new address costs the creator nothing, yet the cumulative effect is explosion in the size of the routing tables in thousands of processors.
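The finiteness of the 32-bit address space, and the relationship between the numeric identifiers used by software and the dotted-quad form used by network administrators, can be shown in a few lines (a sketch only; real allocation is far more structured):

```python
# The 32-bit Internet address space: finite, and hence exhaustible.
TOTAL_ADDRESSES = 2 ** 32   # 4,294,967,296 possible addresses in total

def to_dotted_quad(addr: int) -> str:
    """Convert a 32-bit integer address to the numeric dotted-quad
    form (four fields of up to three digits each)."""
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(TOTAL_ADDRESSES)               # 4294967296
print(to_dotted_quad(0x01020304))    # 1.2.3.4
```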
If a supplier, or a collection of suppliers, has any degree of market power, then it is entirely feasible to charge people for their use as though it were a depletable good. Charging bases which may be effective in achieving the organisation's objectives include:
and rates do not need to be fixed, but can be variable based on, for example:
Different services place different values on the consistency of transmission speed and the synchronicity of parallel data streams. Some services depend on prompt transmission of single streams of data (such as audio in telephone conversations) and others on prompt and synchronised transmission of multiple streams of data (such as audio and video in video-telephone, video-conferencing, MUDs and distributed real-time visualisation services).
Generally, the simpler services are less disturbed by delays. Email, bulletin boards and file transfer are impervious to sub-second delays, and can absorb much longer delays if necessary. Many nominally on-line services can also tolerate moderate forms of delay; for example, when users are logged in to remote servers, transmission delays of the same order of magnitude as the processing time to deal with the most recent command are generally entirely acceptable; and 'network chat' facilities (i.e. mutual, immediate and progressive text-display) are not seriously affected by bursty transmission.
The question arises as to whether and how many of the users who suffer delays due to congestion would be prepared to pay to overcome it. In the case of asynchronous services such as email and even ftp, it seems fairly unlikely that many users (or their financial masters) would be prepared to pay very much at all for sustained low-delay service, especially if the charge or surcharge is only applied during relatively brief periods of congestion: the delay in transmission of a message from an AARNet-connected location to points on the other side of the globe could blow out from 2 seconds to 10 seconds or even 2 minutes, but few email users would even know, let alone be concerned about the delay.
There have been various proposals to construct a mechanism for charging some or all of the packets transmitted over the Internet backbone. The MMV 'smart market' proposal seeks to retro-fit into the Internet's design a form of equilibrating mechanism (McKie-Mason & Varian, 1993a, 1993b). Under this scheme, one factor affecting the price would be the priority the sender selects (with email typically being assigned the lowest priority and messages requiring prompt delivery and synchronicity the highest). The other factor would be the degree of congestion on the network. Each priority-level would carry with it a maximum price which the sender is committed to pay if the congestion on the network warrants it. In this way, a 'smart market' would exist, in which the bulk of the funding of the network would be collected from organisations which demanded urgent delivery of significant volumes of data; and there would be an incentive for users (or at least for the organisations footing the bill) to smooth the traffic flow by using services which were potentially expensive at off-peak times.
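The clearing rule of the smart market can be sketched under simplifying assumptions: a single congested link with a fixed packet capacity per interval, and invented bid values. Each packet carries a maximum bid; when capacity is scarce, the highest bidders get through, and all admitted packets pay the highest rejected bid:

```python
# A sketch of the MacKie-Mason/Varian 'smart market' clearing rule.
# Each bid is the maximum price a sender is committed to pay; the
# capacity figure and bid values are invented for illustration.
def smart_market(bids: list[float], capacity: int):
    """Admit the `capacity` highest-bidding packets; all admitted
    packets pay the highest *rejected* bid (zero if uncongested)."""
    ranked = sorted(bids, reverse=True)
    admitted, rejected = ranked[:capacity], ranked[capacity:]
    clearing_price = rejected[0] if rejected else 0.0
    return admitted, clearing_price

admitted, price = smart_market([0.0, 0.5, 0.9, 0.2, 0.7], capacity=3)
print(admitted)  # [0.9, 0.7, 0.5]: the three highest-priority packets
print(price)     # 0.2: every admitted packet pays the highest rejected bid
```

Note the incentive effect: low-priority traffic (e.g. email, bid 0.0) is deferred when the link is congested, but travels free when it is not, which is exactly the off-peak smoothing the proposal aims at.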
One approach to applying micro-economics to AARNet is to ask who should write the cheques to pay for it. Even this is a many-headed question; for example, at the level of an email message, should it be the sender, or the receiver, or both? In the case of a message sent to a mailing-list for re-transmission to hundreds of receivers, there are not just two but three parties who might share the costs in various ways. In the case of FTP and other database accesses, should the originator of the document, the owner of the repository, or the organisation or person fetching a copy of it, be regarded as liable for any marginal and/or the allocated stepped-fixed infrastructure costs?
To make a rational choice, consideration must be given to the effect of each possible charging scheme on the behaviour of all parties involved. Do we wish to tax people who send rubbish messages to mailing-lists? Most of us who are on mailing lists would probably say 'yes', but we would all have varying approaches to defining 'rubbish messages'. Do we wish to charge people who put documents in internationally accessible databases? We might prefer not to pay them for it (because that would create an incentive for junk-documents), but we might, as a community, prefer not to create a disincentive.
Those questions are at the low level of messages or services, whereas, as has been argued in this paper, the costs which have to be covered arise at the infrastructure level. The following paragraphs identify several different ways in which the costs of inter-networking infrastructure can be covered, including brief indications of the various alternatives' advantages and disadvantages.
Under this alternative, AARNet would be perceived as a free good by Universities, because some supra-ordinate government agency would provide it. Possible funding agencies include:
This was the original ARPANet/Internet model, with the U.S. DoD originally playing the primary funding role, and the National Science Foundation (NSF) subsequently picking up the baton. The economic rationale for this alternative would be along the lines that AARNet offers enormous potential benefits, but market forces alone will not be sufficient to bring it to fruition, because the beneficiaries are small, highly dispersed and unable to pool their budgets.
This approach is attractive, because God appears in the form of the fairy godmother or rich uncle. But it carries with it the risk that God may turn out to be a demanding landlord, or to have put some awkward fine print in the will. Researchers would not take kindly to having overt controls and covert monitoring of their use of the net.
This assumes that the Australian Vice-Chancellors' Committee (AVCC) is something more than a club of University Vice-Chancellors, or, more kindly, an industry association of University Chief Executive Officers. That is all that it is. It has no budget other than that which it can extract from its constituent universities in the form of a levy. It has been, and can only be, a vehicle.
This has been the dominant mechanism used to date to fund AARNet, and in the absence of some significant change, it is the default alternative for the future. It embodies two problems: as the cost increases, the enthusiasm of universities to fund AARNet decreases. And there is the now-familiar allocation problem: on what basis is the cost to be split:
Unlike the two previous alternatives, this alternative creates a form of equilibrating mechanism, because there is a motivation for each organisation to impose some degree of control over use of AARNet, or at least of those services which result in greater costs to the organisation. That in turn depends on the kinds of allocation bases which are used, at any point in time, to calculate each organisation's fee.
This option creates further challenges in relation to allocation (how are the costs to be allocated to sub-units within each organisation?), and to measurement (how much is to be spent on measuring usage in order to support the allocation calculation?). Unsurprisingly, it does not appear to have been adopted to date by any organisation connected to AARNet.
The repressive effect of budgetary control is now much closer to each individual user, because the Head of the local unit is the one whose budget is affected. To some extent this effect is dissipated, because many Heads are fairly casual in their management style. With the progressive replacement of the collegiate approach to University governance by the management model (VC as CEO, Dean as Vice-President, Head as Departmental Manager), the pressure on individuals to conform their usage to the cost-model imposed on them is increasing (how many researchers have already been subjected to restrictions on telephone and fax usage?).
This quite distinct approach sees AARNet becoming dominant in the market for at least some kinds of services, and charging companies and government agencies fees sufficient to cover the complete infrastructure costs (including the additional capacity necessary to support the additional, non-research, paying clients).
If the services are attractive enough (and such facilities as gopher, World-Wide-Web and X-Mosaic might well satisfy that requirement), it may prove feasible to charge fees approaching those of commercial value-added networks, but sustain a lower cost-structure, thereby maintaining gratis or extremely cheap services for the research community. This would require a substantial increase in the commercial sophistication of the AARNet organisation, contracts with The Internet Society and with many service suppliers, and the resilience to withstand competitive and political counter-attacks from value-added networks.
A variant of this alternative is to deflect participating organisations' fax, and perhaps also telephone, budgets into the AARNet pool. Fax, in particular, would be a very easy service to provide over AARNet, because it is not urgent to the same degree as services requiring synchronicity, and can be usefully run on a store-and-forward basis. This could be confidently predicted, however, to attract virulent and probably very effective opposition from the country's largest employer, Telecom.
The Internet community developed with the intention of achieving effective inter-connection, and measurement and pricing were virtually ignored. Times have changed, volumes have increased, and financial managers in research institutions want a rational basis for management and budgeting. Hence, irrespective of which approach is selected, there is a need for a measurement mechanism to support cost allocation.
Depending on the approach taken, the data required to support cost allocation might include:
Some of these may be readily available, but some would be likely to require substantial design, programming and storage (Ray 1993).
Moreover, the more detailed the level of cost allocation or pricing, the greater the amount of data which must be gathered and analysed, and the greater the effort needed to communicate debts to customers, and to collect the debts. This has potentially significant implications for traffic volumes on the network, for the complexity of the software running on Internet switches, and for the staffing of the AARNet organisation.
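The relationship between measurement data and cost allocation can be made concrete with a small sketch. All organisation names, traffic figures, field layouts and the size of the cost pool below are invented for illustration; no AARNet accounting records existed in this form.

```python
# Sketch: allocating a shared network cost pool from usage records,
# in proportion to each organisation's measured traffic.
# All names and figures are hypothetical illustrations.

from collections import defaultdict

def allocate_costs(flow_records, total_cost):
    """Split total_cost across organisations in proportion to bytes carried."""
    usage = defaultdict(int)
    for org, bytes_carried in flow_records:
        usage[org] += bytes_carried          # aggregate per-flow measurements
    total_bytes = sum(usage.values())
    return {org: total_cost * b / total_bytes for org, b in usage.items()}

# A hypothetical quarter's flow records: (organisation, bytes carried)
records = [("ANU", 6_000_000), ("CSIRO", 3_000_000),
           ("ANU", 2_000_000), ("Melbourne", 1_000_000)]
shares = allocate_costs(records, total_cost=120_000.0)  # levy pool, in $
print(shares)  # ANU bears 8/12 of the pool, CSIRO 3/12, Melbourne 1/12
```

Even this toy version hints at the measurement burden the text describes: every flow must be attributed to an organisation, retained, and summed before any allocation can be computed.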
There seems little doubt that the participating organisations would concur that the primary purpose of AARNet is support for research activities in academically oriented institutions. This clearly does not mean universities alone, but includes government organisations such as CSIRO and DSTO and commercial ones such as TRL. It is clear that government and private sector participants in Cooperative Research Centres, and research regulatory bodies such as the ARC and the Chief Scientist's Office must also be included.
In formulating charging and control mechanisms for AARNet, it is essential not to lose sight of the realities of the research process and the practicalities of creativity. In commercial and governmental activities, the mission and tasks are moderately well-structured and plannable, and the full battery of economic and cost accounting tools is appropriate to ensure efficiency of operation. These ideas can still be applied in a qualified manner to industrial research and development (IR&D), and directed research and development programmes. But the further one proceeds along the spectrum toward 'pure research', the less structured and less plannable activities become, to the point where fiscal prudence has to be heavily compromised by freedom and serendipity. At least in the context of the Internet, the breakthroughs come from renegades and misfits, not from be-suited corporate executives or the restrained professionals they employ.
Within the boundaries of fiscal prudence, it appears to be essential that a charging scheme be devised which has the effect of encouraging, not discouraging, experimentation and inventiveness. In order to do that, the scheme must not penalise constructive communications, in the various senses of synchronous and asynchronous interactions, and of targeted and broadcast messages.
From its very early days, the Internet community has recognised the necessity of permitting connection and use by more than just recognised research organisations. For one thing, there are many active but lone researchers employed within organisations whose primary purposes are profit-making and government service, policy or regulation. Moreover, the strong push to achieve a more direct flow of the outcomes of research into the areas of business and government which can put it to use dictates that there be seamless interfaces among research and non-research organisations.
The most closely related area of use to research would appear to be education, and indeed a modest proportion of AARNet usage may already be by undergraduates, particularly for email, and for access to news groups. This form of use is unlikely to be a major factor in congestion or expansion plans, and in any case can be readily rationalised as being training for neophyte researchers.
The greater risk is that education-related use of services involving high-bandwidth and urgent transmission will clog the networks, and spoil the level of service for all users. The primary services which raise this spectre appear to be distance education and video-conferencing. There appear to be three alternative solutions to this particular problem:
It is worth noting that multiple backbones were implemented long ago in the (original) U.S. segment of the Internet, and that the U.S. supercomputer centres are in the process of being connected by their own high-speed network, separating the high-volume and apparently very urgent communications of the high-science community from the more banal transmissions of the rest of us.
Another risk arises in relation to use of AARNet for the transmission of commercial and governmental messages unrelated to research; for example, on one important Internet mailing list (edi_l@uccvma.bitnet), there has been active discussion during the fourth quarter of 1993 regarding the use of Internet email and MIME to support electronic data interchange (EDI - structured documents such as purchase orders).
Much of this kind of traffic requires levels of reliability and security greater than that offered by the Internet. Moreover, the loose architecture of the Internet militates against commercial use, as does the fact that a great many of the services on the Internet are themselves research objects, and therefore unstable and often financially unsupported. Hence commercial value-added networks (VANs) seem likely to be able to attract the heavy majority of such traffic, despite their higher costs. Nonetheless, there is the possibility that AARNet may be seen by the VANs as a threat that needs to be countered.
Faced with such a wide range of considerations, it is difficult to adopt strong positions at this stage. Nonetheless, some tentative conclusions can be sketched.
It is naive to expect AARNet to continue to be funded essentially by annual levy on the major research institutions. On the other hand, creativity and structuredness are antithetical, and the imposition of narrow micro-economic and cost accounting models on research generally, and on AARNet in particular, risks cooking the golden goose. It is in the interests not only of members of the AARNet community, but also of the nation as a whole, for the costs to be absorbed at a high level in the hierarchy.
Basic services (by which is meant those which are tolerant of modest delays and require no synchronicity) should continue to be gratis to the ultimate user. Discussions of charging should focus on those services which generate significant volumes of synchronised traffic at peak periods, and which therefore determine the maximum capacity which needs to be maintained.
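The principle that charges should attach only to the capacity-determining services can be sketched as follows. Service names, hourly traffic profiles and the capacity cost are invented assumptions; a delay-tolerant 'basic' service is exempted from the charge, as the text proposes.

```python
# Sketch: split the cost of peak capacity across the services that
# cause the peak, exempting delay-tolerant basic services.
# All service names and traffic figures are hypothetical.

def peak_based_charges(profiles, capacity_cost, basic=()):
    """Find the busiest interval, then divide capacity_cost among the
    non-basic services in proportion to their traffic in that interval."""
    n = len(next(iter(profiles.values())))
    totals = [sum(p[h] for p in profiles.values()) for h in range(n)]
    peak = totals.index(max(totals))                 # capacity-setting hour
    chargeable = {s: p[peak] for s, p in profiles.items()
                  if s not in basic and p[peak] > 0}
    pool = sum(chargeable.values())
    return {s: capacity_cost * v / pool for s, v in chargeable.items()}

profiles = {
    "email":      [10, 10, 10, 10],   # delay-tolerant, flat load
    "video-conf": [ 0, 60,  0,  0],   # synchronous, sharply peaked
    "file-xfer":  [20, 30, 20, 20],
}
charges = peak_based_charges(profiles, capacity_cost=100.0, basic={"email"})
print(charges)  # video-conf bears 60/90 of the cost, file-xfer 30/90
```

The bursty synchronous service attracts the bulk of the charge because it sets the peak, while the flat, delay-tolerant service remains gratis, which is exactly the incentive structure argued for above.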
At least during the coming turbulent decade, it would be most unsatisfactory to rely on commercial value-added network providers for inter-networking. Unless the cost of switching continues to plummet, it may be necessary to choose between establishing and maintaining multiple AARNet backbones, and precluding some classes of traffic. It may also prove important to continually refine the 'research-purposes' constraint on the classes of traffic carried.
Although the five models discussed in the section 'Who Pays?' were treated as though they were distinct, there is scope for a mixed payment model. A tenable configuration might have the following components:
This paper's purpose has been to establish a rational basis for discussion of the economic management of the Australian Academic and Research Network. It is critical that this valuable resource, now emerging from its establishment period, mature into a foundation of the Australian information society. That is only possible if the financing of AARNet is placed on a sound and equitable basis which is sensitive to the special needs of the research community.
In developing this paper in snatched hours during the one-week period immediately before the conference, I have benefited greatly from the thoughts and prompt reactions of a dozen people. Like a lot of other things, this paper would not have existed without AARNet, and the electronic community that AARNet supports.
Clarke R. (1993) 'Electronic Support for Research Practice: The Inadequacy of Economic Analysis in a Time of Revolutionary Change' in Mulvaney J. & Steel C. (Eds.) 'Changes in Scholarly Communications Patterns' Aust. Academy of the Humanities, Canberra, 1993, pp.84-102. Revised version forthcoming in The Information Society 10,1 (March 1994)
Faulhaber G.R. (1992) 'Pricing Internet: The Efficient Subsidy' In Kahin (1992)
Huston G. (1993) 'Trends in Communications Technologies: AARNet and the Internet' in Mulvaney J. & Steel C. (Eds.) 'Changes in Scholarly Communications Patterns' Aust. Academy of the Humanities, Canberra, 1993, pp.71-83
Ishida H. & Landweber L.H. (Eds.) (1993) 'Internetworking' Commun. ACM 36,8 (August 1993) 28-77
INET'92 (1992) Proc. Conf. INET'92, Kobe, Japan, August 1992, The Internet Society
IAWG (1992) 'Internet Accounting: Usage Reporting Architecture' Internet Accounting Working Group, July 9, 1992
Kahin B. (Ed.) (1992) 'Building Information Infrastructure' McGraw-Hill Primis, New York, 1992
Kehoe B.P. (1992) 'Zen and the Art of the Internet' Prentice-Hall, 1992
Krol E. (1992) 'The Whole Internet' O'Reilly, Sebastopol CA, 1992
LaQuey T. (1992) 'The Internet Companion: A Beginner's Guide to Global Networking' Addison-Wesley, 1992
MacKie-Mason J.K. & Varian H.R. (1993a) 'Some Economics of the Internet' Technical Report, Uni. of Michigan, version of June 1993, available from the authors at jmm@umich.edu, halv@umich.edu
MacKie-Mason J.K. & Varian H.R. (1993b) 'Pricing the Internet' Technical Report, Uni. of Michigan, version of June 1993, available from the authors at jmm@umich.edu, halv@umich.edu
Ray D. (1993) 'Network Usage Monitoring' Proc. Networkshop'93, 30 November - 2 December 1993, Melbourne, AARNet
Steele C. & Barry T. (1991) 'Libraries and the New Technologies' in Clarke R.A. & Cameron J. (Eds.) 'Managing the Organisational Implications of Information Technology' Elsevier / North Holland, Amsterdam, 1991