Roger Clarke's Web-Site
© Xamax Consultancy Pty Ltd, 1998-2004
Available under an AEShareNet licence or a Creative Commons licence.
Version of 29 January 2004
This document is at http://www.rogerclarke.com/II/OzI04.html
This paper is a significantly extended version of 'A Brief History of the Internet in Australia' (1998-2001), at http://www.rogerclarke.com/II/OzIHist.html
This is one of very few attempts at the topic. A variant was published as 'The Emergence of the Internet in Australia: From Researcher's Tool to Public Infrastructure' Chapter 3 of 'Virtual Nation: The Internet in Australia' Goggin G. (ed.), UNSW Press 2004
An authorised Ukrainian translation is also available.
The Internet emerged in the U.S. engineering research community between 1969 and 1983, an outgrowth of the marriage between computing and communications technologies. Australian computing researchers had less advanced but cost-effective mechanisms in place at the time, and adopted the Internet protocols only when they had reached a level of maturity. Rapid progress was made from 1989 onwards.
By 1993-94, the U.S. Internet backbones were in transition from an academic infrastructure to a more conventional business model. Australian use by individuals, business and government grew almost as fast as it did in the fastest-adopting parts of the world, the U.S.A. and Scandinavia. As a result, a new business model was implemented in Australia in 1994-95.
The rapid maturation since then has placed Australia and Australians in a strong position to exploit the information infrastructure that the Internet represents, and to participate aggressively in the inevitably rapid change of the next 5-10 years. Unfortunately, the country's future is being undermined by the Government's failure to re-structure the telecommunications sector. Instead of a hierarchy of service layers with effective competition in the upper layers, the Internet industry is still dominated by a single, massive, vertically-integrated corporation.
Australian historians have yet to turn their attention to the Internet; engineers care little for recording their activities for posterity; and there is as yet no powerful organisation that wants a court history. As a result, there is remarkably little documentation of the first decade of the Internet in Australia. This paper draws on available resources in order to provide an outline of that history that is sufficiently detailed to support strategy and policy discussions.
Commentators on cyberspace behaviour and regulation are at dire risk of making unfounded assumptions about the Internet, because of the many myths embedded in the metaphors that have been used to explain the Internet. In order to provide an antidote against ill-informed discussion, this paper includes background information on the technology and its governance institutions and processes.
The paper commences with some relevant aspects of the history of computing, and of communications. It then reviews the emergence of the Internet in the U.S.A. from 1969 onwards. The early history of academic use of the Internet in Australia is traced, prior to the first watershed in June 1989, and the second in May 1994. This is followed by an overview of the history of the open, public Internet in Australia, and assessments of the infrastructure, the industry structure and governance at the beginning of 2004, and likely near-future directions.
This section provides a brief review of the very early days of computers, and of the changes that resulted in convergence between computing and message-transmission.
Computers were invented around 1940. They were initially intended to perform computational tasks such as ballistics, but their numerical processing capabilities were soon applied to more general logical manipulations, and were augmented with data storage. They have been applied to the processing of administrative data since 1952.
Initially, the few computers that existed were entirely independent of one another. Data was captured into machine-readable form and fed into the machine. Processed data was output in both machine-readable form (for re-use in subsequent operations), and in human-readable form. The early computers were the size of a room, and required very substantial investment; so computing initially tended to support centralisation, hierarchy and authority.
Component technologies developed quickly, however, through several generations of capacity-growth and size-reduction. The progressive miniaturisation of processors, combined with mass production efficiencies, resulted in small devices becoming available which had increasingly significant processing capabilities. The 1970s and 1980s saw large-scale and then very-large-scale integration (LSI and VLSI), which enabled the production first of micro-computers and then of entire computers on a small chip.
Much of the history of computing has been written by Americans. See, for example, CMHC (2000-), Lee (1998), Ceruzzi (1998), and Annals of the History of Computing (IEEE 1979-). More balanced histories reflect the fact that, for all of the U.S.A.'s dominance from about 1950, the early developments occurred on both sides of the Atlantic (e.g. Cardwell 1994, pp. 419-423, 467-484 and Mowery 2003). For the history of computing in Australia, see Bennett et al. (1994).
Computers have multiple components, which communicate with one another by means of wires and cables. These carry data extraordinarily quickly over very short distances. Originally, people had to be in the same location as computers in order to use them. Then input-output (I/O) devices were placed in rooms adjacent to the room in which the expensive computer was located. By the mid-1960s, means were found to enable access by people in locations remote from the machine.
Communications between devices long predate the invention of computers. Many different transmission media have been utilised. The first few technologies relied on low-voltage electric current: the telegraph (from the 1840s), the telephone (from the 1870s) and telex (from the 1920s until the 1980s). Later, electromagnetic radiation was applied, particularly in the form of radio (from the 1910s) and television (from the 1920s). Light transmission along glass fibres has been harnessed since the 1970s.
In the 1960s, when computer manufacturers turned their attention to communications, the most widely available form of device-to-device communications was the telephone network, commonly referred to as the Public Switched Telephone Network (PSTN). This had required ingenuity in order to carry the sound of the human voice, which is a finely variable longitudinal wave in a fluid medium. Telecommunications engineers devised an analogue to sound that could be carried over cable in the form of a low-voltage signal, and that could be reconstituted as sound in the receiving device.
Computers were intrinsically digital, and hence the form of the data to be transmitted needed to be converted from digital to analogue form, and then back again at the other end. The device invented to do this was called a modem (from modulator-demodulator). Between about 1975 and 1995, the transmission capacity supported by modems grew 100-fold, from hundreds of bits per second (bps) to 56Kbps, which is the maximum achievable with that technology over conventional twisted-pair copper wiring.
The initial applications involved a powerful computer at the hub, connected by telephone lines to 'dumb terminals' at the extremities. These were originally tele-typewriters (TTY), designed as terminals for the telex network; but cathode ray tubes were soon combined with keyboards to produce what were originally called 'glass teletypes' and later visual display units (VDUs). Because of the shape that results when such networks are diagrammed, this topology is generally referred to as a 'star network'.
While an organisation was using cable within its own property, it had considerable freedom. But once it sought to connect to remote devices, it moved into the realm of telecommunications (i.e. message transmission over distance). This was a well-established industry, made up of very large and very powerful corporations. In addition, the industry utilises shared infrastructure, and is consequently subject to supervision by a powerful regulatory body. The fledgling computer industry was a very small fish in a very large pond.
Most countries used what was referred to as the PTT model (which has various interpretations, but generally stands for post, telegraph and telephone). A PTT is a government agency that provides infrastructure and services, and regulates itself and its users. In the U.S.A., on the other hand, the services have always been run by private sector organisations, referred to generically as 'telcos' (for telecommunications corporations).
From 1901 until the mid-1990s, Australia used the PTT model. Until 1975, the agency was the Post-Master General's Department (PMG). Then the telecommunications segment was moved into a separate and differently constituted agency called the Australian Telecommunications Commission, trading as Telecom Australia. From 1946, a smaller agency was responsible for international links, the Overseas Telecommunications Commission (OTC), but it was folded into Telecom Australia in the early 1990s.
A switch from the PTT to the telco model was implemented progressively between 1989 and 1995, by converting Telecom into a corporation, Telstra; and by opening the market to competition progressively between 1991 and 1997. A body, called since 1997 the Australian Communications Authority (ACA), was established to perform the regulatory function. The Australian Government has sold 50% of its shares in Telstra, in two tranches in 1997 and 1999 (Telstra 2003). For the history of telecommunications in Australia, see Moyal (1984), Budde (2003b) and Caslon (2003a). The effectiveness of the removal of the Telecom/Telstra monopoly remains in some doubt: several competitors have collapsed, and most remaining competitors have achieved only limited market-share and/or are financially ailing.
From the mid-1970s onwards, micro-computers became increasingly available for individual use in the form of personal computers (PCs). Through the 1980s, connections were achieved among the proliferating PCs, using modems and voice-grade telephone lines connected through the PSTN. The purposes were primarily for transmitting messages between humans, posting messages to bulletin boards for later download by humans, file-exchange and software exchange. These linkages were used primarily by technology-enthusiasts, and to some extent by social activists.
During the course of a mere decade, local links were upgraded to more sophisticated technologies and topologies. Relatively small numbers of computers in close proximity to one another were linked using local area networks (LANs). During the period since 1985, Ethernet has been the dominant LAN technology, offering nominal transmission capacity of initially 10Mbps (which was already far greater than telecommunications via modems and the PSTN), then 100Mbps and later 1Gbps.
Connections among greater numbers of computers over longer distances were supported by wide-area networks (WANs). Large corporations ran private networks, leasing the necessary facilities from Telecom. From the 1970s to the early 1990s, a number of companies offered so-called 'value-added network' services, and hence came to be known as VANs. These provided services to corporations, transferring their messages over long distances, both as unstructured human-to-human (email) and as structured machine-to-machine documents (EDI). VANs leased cable from Telecom, and transmitted data using proprietary protocols or variants of emergent standards. They were by nature 'closed', and attempts at achieving inter-connection and 'inter-operability' never succeeded, partly for technical reasons, and particularly because the VANs thought they had too much to lose.
Smaller and less well-funded organisations had to make do with dial-up arrangements and piggy-backing on sponsors' networks. For example, during the 1970s, computer scientists in Australian universities clubbed together to achieve a degree of inter-operation. This culminated in the Australian Computer Science network (ACSnet).
Networks specifically designed for data emerged in the early 1970s, in the form of the Common User Data Network (CUDN), and became accessible (but expensive) from the mid-1980s in the form of the Digital Data Service (DDS), the Integrated Services Digital Network (ISDN, which offered, and still offers, highly-priced 64Kbps and 128Kbps services), and Telecom's X.25 packet-switched service.
In summary, until about 1990, telecommunications in Australia continued to be dominated by the transmission of voice. Data transmission used connections designed for voice, which had the capability to carry data grafted onto them. This document is in part the story of how the Internet in Australia emerged despite the slowness of the public telecommunications commission, Telecom, then corporation, Telstra, to adapt to the demands for, and to nurture the opportunities presented by, data transmission. Between 1965 and 1990, however, the scene was being set for a major shift in the patterns of telecommunications.
This section reviews the history of the Internet, including its key technical features, its design principles and governance arrangements, its use, and some of its implications. This information is critical to a proper appreciation of the specifically Australian material that follows; but many readers of this journal will be able to skim the early sub-sections.
In the late 1960s, researchers in the U.S. gained funding from that country's Defense Advanced Research Projects Agency (DARPA) to develop a computer network concept. In September 1969, the first pair of nodes was installed at the University of California, Los Angeles campus (UCLA). The first external link was to Stanford Research Institute (SRI), several hundred kilometres north. The network was dubbed ARPANET. During the 1970s, there were developments in the architecture and the technology, and progressive growth in both the number of computers connected to ARPANET and in traffic.
The two crucial protocols that were the foundation for the subsequent explosion were implemented network-wide in 1983. These were the Transmission Control Protocol (TCP) and the Internet Protocol (IP), and the network came to be referred to as the Internet. In 1985, the numerical IP-address was supplemented by domain-names, to provide more human-friendly ways of referring to and remembering network location-identifiers.
Through the 1980s, the Internet became well-established infrastructure, and unleashed rapid growth. A number of networks had emerged that linked U.S. universities using various proprietary protocols (such as IBM's SNA and Digital's DECnet) and international standards (e.g. X.25). During the second half of the 1980s, the decision was taken to migrate key networks across to the Internet protocol suite. As this plan was implemented, the number of hosts connected to the Internet grew from 1,000 in 1984, to 10,000 in 1987, 60,000 in 1988, and 100,000 in 1989. Not only did the Internet grow substantially in size, but the user-base also became much more diverse, although still restricted to universities and other research establishments. By 1990, the Internet protocol suite dominated all other wide-area network protocols.
Authoritative references on the origins and early years of the Internet include Hafner & Lyon (1996), Abbate (1999) and Leiner et al. (2000). A useful timeline is provided by Zakon (1993-). For a history intended to be readily accessible to non-specialists, see Griffiths (2002). Other sources are indexed by the Internet Society (ISOC).
The Internet is an infrastructure, in the sense in which that term is used to refer to the electricity grid, water reticulation pipework, and the networks of track, macadam and re-fuelling facilities that support rail and road transport. Rather than energy, water, cargo or passengers, the payload carried by the information infrastructure is messages.
The term 'Internet' has come to be used in a variety of ways. Many authors are careless in their usage of the term, and considerable confusion can arise. Firstly, from the perspective of the people who use it, the Internet is a vague, mostly unseen, collection of resources that enable communications between one's own device and devices elsewhere. Exhibit 3.2 provides a graphical depiction of that interpretation of the term 'Internet'.
From a technical perspective, the term Internet refers to a particular collection of computer networks which are inter-connected by means of a particular set of protocols usefully called 'the Internet Protocol Suite', but which is frequently referred to using the names of the two central protocols, 'TCP/IP'.
The term 'internet' (with a lower-case 'i') refers to any set of networks interconnected using the Internet Protocol Suite. Many networks exist within companies, and indeed within people's homes, which are internets, and which may or may not have a connection with any other network. The Internet (with an upper-case 'I'), or sometimes 'the open, public Internet', is used to refer to the largest set of networks interconnected using the Internet Protocol Suite.
Additional terms that are in common use are Intranet, which is correctly used to refer to a set of networks that are internal to a single organisation, and that are interconnected using the Internet Protocol Suite (although it is sometimes used more loosely, to refer to an organisation's internal networks, irrespective of the protocols used). An Extranet is a set of networks within a group of partnered organisations, that are interconnected using the Internet Protocol Suite.
A network comprises nodes (computers) and arcs (means whereby messages can be transmitted between the nodes). A network suffers from fragility if individual nodes are dependent on only a very few arcs or a very few other nodes. Networks are more reliable if they involve a large amount of redundancy, that is to say that they comprise many computers performing similar functions, connected by many different paths. The Internet features multiple connections among many nodes. Hence, when (not if) individual elements fail, the Internet's multiply-connected topology has the characteristics of robustness (the ability to continue to function despite adverse events), and resilience (the ability to be recovered quickly and cleanly after failure). The Internet also has the characteristic of scalability, that is to say that it supports the addition of nodes and arcs without interruptions, and thereby can expand rapidly without the serious growing pains that many other topologies and technologies suffer.
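The benefit of redundancy described above can be illustrated with a small sketch (the topologies here are invented for illustration, not a map of any actual network): in a star network, the failure of the hub partitions every node from every other, whereas a multiply-connected mesh of the same nodes keeps functioning.

```python
from collections import deque

def reachable(adjacency, start, failed):
    """Return the set of working nodes reachable from 'start',
    ignoring any nodes in the 'failed' set (breadth-first search)."""
    if start in failed:
        return set()
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, ()):
            if neighbour not in seen and neighbour not in failed:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# A star network: every outlying node depends on the single hub.
star = {'hub': ['a', 'b', 'c'], 'a': ['hub'], 'b': ['hub'], 'c': ['hub']}

# A mesh among the same four nodes, with redundant paths.
mesh = {'hub': ['a', 'b'], 'a': ['hub', 'b', 'c'],
        'b': ['hub', 'a', 'c'], 'c': ['a', 'b']}

# Fail the hub in each topology and see which nodes 'c' can still reach.
print(reachable(star, 'c', failed={'hub'}))  # {'c'}: the star is partitioned
print(reachable(mesh, 'c', failed={'hub'}))  # {'a', 'b', 'c'}: the mesh survives
```

The mesh survives the loss of its busiest node precisely because no single node or arc is a point of total dependence, which is the property the paragraph above calls robustness.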
The conception of the Internet protocols took place during the 1960s and 1970s, at the height of the Cold War era. Military strategists were concerned about the potentially devastating impact of neutron bomb explosions on electronic componentry, and consequently placed great stress on robustness and resilience (or, to use terms of that period, 'survivability' and 'fail-soft'). These characteristics were not formal requirements of the Internet, and the frequently-repeated claims that 'the Internet was designed to withstand a neutron bomb' are not accurate. On the other hand, those design characteristics were in the designers' minds at the time.
The networks that had been designed to support voice-conversations provided a dedicated, switched path to the caller and the callee for the duration of the call, and then released all of the segments for use by other callers. Data networks were designed to apply a very different principle. Messages were divided into relatively small blocks of data, commonly referred to as packets. Packets despatched by many senders were then interleaved, enabling efficient use of a single infrastructure by many people at the same time. This is referred to as a packet-switched network, in comparison with the telephony PSTN, which is a circuit-switched network. The functioning of a packet-switched network can be explained using the metaphor of a postal system (Clarke 1998).
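The packetisation principle can be sketched as follows (the packet format here is invented for illustration; real IP datagrams carry far richer headers): a message is divided into numbered blocks which may travel by different paths and arrive out of order, and the sequence numbers allow the receiver to restore the original.

```python
import random

def to_packets(message, size):
    """Divide a message into fixed-size blocks, each tagged with a
    sequence number so the receiver can restore the original order."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort by sequence number and concatenate the payloads."""
    return ''.join(payload for _, payload in sorted(packets))

packets = to_packets('The quick brown fox jumps over the lazy dog', 8)
# Simulate packets taking different paths and arriving out of order.
random.shuffle(packets)
print(reassemble(packets))  # 'The quick brown fox jumps over the lazy dog'
```

Interleaving the numbered packets of many senders on the same links is what lets a single infrastructure serve many conversations at once, in contrast to the dedicated end-to-end circuit of the PSTN.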
For devices to communicate successfully over a packet-switched network, it is necessary for them to work to the same rules. A set of rules of this kind is called a protocol. Rather than a single protocol, the workings of packet-switched networks, including the Internet, were conceived as a hierarchy of layers. This has the advantage that different solutions can be substituted for one another at each layer. For example, the underlying transmission medium can be twisted-pair copper cable (which exists in vast quantities because that was the dominant form of wiring for voice services for a century), co-axial cable (which is used for cable-TV and for Ethernet), fibre-optic cable, or a wireless medium using some part of the electromagnetic spectrum. This layering provides enormous flexibility, which has underpinned the rapid changes that have occurred in Internet services.
The deepest layers enable sending devices to divide large messages into smaller packets, and generate signals on the transmission medium that represent the content of the packets; and enable receiving devices to interpret those signals in order to retrieve the contents, and to re-assemble the original message. The mid-layer protocols provide a means of getting the messages to the right place, and the upper-layer protocols use the contents of the messages in order to deliver services. Exhibit 3.3 provides an overview of the layers as they are currently perceived.
| Layer | Function | Data Unit | Example Protocols |
| Application | Delivery of data to an application | Message | HTTP (the Web), SMTP (email despatch) |
| Transport | Delivery of data to a node | Segment | TCP, UDP |
| Network | Data addressing and transmission | Datagram | IP |
| Link | Network access | Packet | Ethernet, PPP |
| Physical | Handling of signals on a medium | Signal | CSMA/CD, ADSL |
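The layering shown in Exhibit 3.3 can be sketched as successive encapsulation (the bracketed header strings here are entirely illustrative, not real protocol formats): each layer wraps the unit handed down from the layer above with its own header, and the receiving device strips the headers off again in reverse order.

```python
# Illustrative only: the header strings are invented, not real protocol formats.
LAYERS = ['Application', 'Transport', 'Network', 'Link']

def encapsulate(data):
    """Wrap the data in one mock header per layer, so that the
    Application header is innermost and the Link header outermost,
    as the unit appears 'on the wire'."""
    for layer in LAYERS:
        data = f'[{layer}]{data}'
    return data

def decapsulate(frame):
    """Strip the headers in reverse order, Link first, recovering
    the original application data."""
    for layer in reversed(LAYERS):
        header = f'[{layer}]'
        if not frame.startswith(header):
            raise ValueError(f'expected {header} header')
        frame = frame[len(header):]
    return frame

frame = encapsulate('GET /index.html')
print(frame)               # [Link][Network][Transport][Application]GET /index.html
print(decapsulate(frame))  # GET /index.html
```

Because each layer sees only its own header and treats everything inside as opaque payload, any single layer's implementation can be substituted (for instance, copper for fibre at the physical layer) without disturbing the others.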
For a device to be able to use the Internet, it needs access to software that implements the particular protocols relevant to the kind of access required, and to the particular transmission medium that connects it to other devices.
Messages pass across the Internet as a result of co-operation among many devices. Those devices may be under the control of many different organisations and individuals, who may be in many different physical locations, and may be subject to many different jurisdictions. The path that any particular message follows between the sender and recipient is decided in 'real time', under program control, without direct human intervention, and may vary considerably, depending on such factors as device and channel outages, and traffic congestion. Depending on its size, a message may be spread across many packets, and the packets that make up a message do not necessarily follow the same paths to their destination.
The detailed topology of the Internet at any particular time is in principle knowable. In practice, however, it is not, because of its size, its complex and dynamic nature, and the highly dispersed manner in which coordination is achieved. Control would be facilitated if key functions were more centralised. But centralisation produces bottlenecks and single-points-of-failure; and that would be detrimental to the Internet's important characteristics of robustness, resilience and scalability.
Further details of Internet technology are provided in Clarke et al. (1998), and in texts such as Black (2000), Hall (2000) and Gralla (2002).
The application protocol layer utilises the transmission medium and the lower and middle protocol layers as an infrastructure, in order to deliver services. Some services are provided by computers for other computers, some by computers but for people, and some by people and for people. Key services that are available over the underlying infrastructure include e-mail and the World Wide Web (which together dominate Internet traffic volumes), file transfer and news (also referred to as 'netnews' and by its original name 'Usenet news'). There are, however, several score other services, some of which have great significance to particular kinds of users, or as enablers of better-known services.
During the early years, the services that were available were primarily remote login to distant machines (using rlogin and telnet from 1972), email (from 1972), and file transfer protocol (ftp, from 1973). In 1973, email represented 75% of all ARPANET traffic. By 1975, mailing lists were supported, and by 1979-82 emoticons such as :-) were becoming established. By 1980, MUDs and bulletin boards existed. The email service in use in 2004 was standardised as early as 1982. Synchronous multi-person conversations were supported from 1988 by Internet Relay Chat. This was also significant because the innovation was developed in Finland, whereas a very large proportion of the technology had been, and continues to be, developed within the U.S.A.
By 1990, over 100,000 hosts were connected, and innovation in application-layer protocols, and hence in services, accelerated. Between 1990 and 1994, a succession of content-provision, content-discovery and content-access services were released, as existing news and bulletin-board arrangements were reticulated over the Internet, and then enhanced protocols were developed, including archie (an indexing tool for ftp sites developed in Canada), the various 'gopher' systems (generic menu-driven systems for accessing files, supported by the veronica discovery tool), and Brewster Kahle's WAIS content search engines. Between 1991 and 1994, the World Wide Web emerged, from an Englishman and a Frenchman working in Switzerland; and in due course the Web swamped all of the other content-publishing services. By 1995, it was already carrying the largest traffic-volume of any application-layer protocol.
Exhibit 3.4, which is a revised version of an exhibit in Clarke (1994c), provides a classification scheme for the services available over the Internet.
This section provides an outline of the manner in which the Internet's architecture, protocols and operations are sustained. It commences by outlining some important design principles, which are rather different from those that guide most large undertakings. Descriptions are then provided of the institutions and processes involved in maintaining and developing the Internet's architecture, and in its operations. A final sub-section notes the current tensions in the area.
It has been crucial to the success of Internet technology that it depends on few centralised functions, and that the focus is on coordination among many, rather than on control by a few. There was and is no requirements statement. There was and is no master design specification. There are several hundred specifications for particular features; but most of these were written after demonstration software had already been implemented. The Internet shows remarkable tolerance for prototypes and experiments.
Some principles can be discerned that have guided, and continue to guide, the development of the Internet's architecture:
The governance of the Internet has demonstrated much of the same constructive looseness that characterises its design. During the early years, the institutions were merely informal groups, and the processes were merely understandings among engineers intent on making something work. Although there have been employed staff since the mid-1990s, the vast majority of the work is undertaken by some hundreds of individuals on a part-time, voluntary, unpaid basis. For those who travel to meetings, the costs are covered in most cases by their employers.
The first formal organisational element was the Internet Configuration Control Board (ICCB), established by ARPA in 1979. This became the Internet Activities Board in 1983 and later the Internet Architecture Board (IAB), which continues to operate as "the coordinating committee for Internet design, engineering and management" (IETF 1990). It is unincorporated, and operates merely as one committee among many.
The designing and refinement of protocol specifications is undertaken by Working Groups. At the end of 2003, there were 132 active Groups, with a further 325 Groups no longer operational. Proposals and working documents are published as 'Internet Drafts'. Some 2,500 were current as at the end of 2003. Unfortunately for would-be historians, they are not numbered, and not archived, making it very difficult for outsiders to re-construct the history of critical ideas.
Completed specifications are published as RFCs. This term derives from the expression 'Request For Comment', but 'RFC' is applied ambiguously. At the end of 2003, the series comprised 3,666 documents, including 38 informational documents and 77 best practices descriptions, many hundreds of obsolete specifications, many hundreds of specifications for protocols and features that have been or are now little-used, and several hundred specifications that define the Internet, including a small proportion of formally adopted standards.
The Working Groups are coordinated by the Internet Engineering Task Force (IETF). This "is a loosely self-organized group of people who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. The IETF is unusual in that it exists as a collection of happenings, but is not a corporation and has no board of directors, no members, and no dues" (IETF 2001). A committee called the Internet Engineering Steering Group (IESG) acts as a review and approval body for new specifications.
Although all of the key bodies described above are unincorporated, they have had an "organizational home" in the form of the Internet Society (ISOC) since 1992. This is a "professional membership organization of Internet experts ... with more than 150 organization and 16,000 individual members in over 180 countries. ... [It] comments on policies and practices and oversees a number of other boards and task forces dealing with network policy issues". It drew the various committees under its umbrella by issuing them with 'charters' to perform their functions.
There are several other organisations that play roles in particular areas. The deepest-nested layers, dealing with transmission media, are the domain of an international professional association, the Institute of Electrical and Electronics Engineers (IEEE), and of the International Telecommunication Union (ITU), an international authority whose membership encompasses telecommunication policy-makers and regulators, network operators, equipment manufacturers, hardware and software developers, regional standards-making organizations and financing institutions. Meanwhile, the many protocols associated with the Web are the province of an industry association, the World Wide Web Consortium (W3C). ISOC provides a catalogue of Internet standards organisations.
Although in many cases the committees and Working Groups are dominated by U.S. citizens and others resident in the U.S.A., many non-Americans are very active in Internet governance processes, and that has been the case since at least the mid-1990s. A number of Australians are active and influential participants. In particular, Geoff Huston spent some years as Secretary of ISOC and is currently Executive Director of the IAB; Paul Twomey is President of ICANN; and Paul Wilson is Director-General of APNIC. In addition, many Australian engineers contribute to IETF and W3C Working Groups.
The bodies that are responsible for governance of the architecture also play key roles in relation to its ongoing operations, particularly the IAB and IETF. A further important organisation is the Internet Assigned Numbers Authority (IANA).
The first critical function is the allocation of IP-addresses, the numerical identifiers of Internet locations. Until the early 1990s, IANA performed that function. Starting in 1992, the role was progressively migrated to a small number of regional registries, although IANA still manages the pool of unallocated addresses.
In the U.S.A., the Government originally funded the registry functions through an organisation called InterNIC; but since 1997 they have been performed by a membership-based organisation, ARIN (American Registry for Internet Numbers), which also covers Canada and sub-Saharan Africa. The other registries are also membership-based organisations: RIPE NCC (Réseaux IP Européens Network Coordination Centre), which covers not only Europe but also the Middle East, northern Africa and parts of Asia; and APNIC (Asia-Pacific Network Information Centre), which covers most of Asia, plus Oceania. In late 2002, support for Latin America and the Caribbean was passed from ARIN to LACNIC (Latin America and Caribbean Internet Addresses Registry). An authoritative but very readable paper on the history and current arrangements in relation to IP-address management is Karrenberg et al. (2001).
A second important role is the management of the wide variety of parameter-values that are needed to support Internet protocols. Many years ago, the Information Sciences Institute (ISI) of the University of Southern California contracted with DARPA to perform these functions. It assigned the work to IANA, which, like so many other organisations, is not incorporated, and was for many years essentially one person, Jon Postel.
A third important function is the establishment and management of domain-names. A domain-name is an alphanumeric identifier for Internet locations, which is easier for people to use than the underlying numerical IP-address. The scheme was devised around the time that the ARPANET spawned the Internet. Increasingly, domain-names also provide separation of the name of a service from its network-location.
From 1983 onwards, IANA played a central role in relation to domain-names. It assessed applications to manage the country code top level domains (ccTLDs, such as .au, of which there are over 200), and was responsible for evaluating proposed additions to the established generic top level domains (gTLDs, such as .com and .org, of which there are currently 14). The management thereafter is hierarchical, based on the authorities delegated by IANA for each of the ccTLDs and gTLDs. The IANA register shows, for example, that the Registrar for .au is AuDA, and that for .org is Public Interest Registry (PIR) (a U.S. not-for-profit corporation established by ISOC).
In recent years the management of domain-names has been a highly visible symbol of attempts to commercialise the Internet. There is currently a movement to shift the responsibility from IANA to a new organisation, the Generic Names Supporting Organization (GNSO). This is discussed in the following sub-section.
Translating a domain-name into the corresponding IP-address is called 'resolving' the domain-name. This is performed by a highly distributed system involving tens of thousands of servers throughout the world, called the Domain Name System (DNS). Throughout the development of the Internet, IANA has played a central role in the management of the DNS, including management of the root-servers. (There are 13 root-servers, 10 in the U.S.A., and 1 in each of Sweden, the U.K. and Japan.)
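Resolution proceeds hierarchically: starting from the root-servers, a resolver works down through successively delegated zones (.au, then .edu.au, and so on). As an illustrative sketch (the host name is hypothetical), the chain of zones consulted for a given name can be computed like this:

```python
def delegation_chain(domain: str) -> list[str]:
    """Return the DNS delegation path from just below the root down to
    the full name. Each step corresponds to a zone that may be delegated
    to a different set of name-servers."""
    labels = domain.rstrip(".").split(".")
    # Build suffixes from the rightmost label (the TLD) leftwards.
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

print(delegation_chain("www.anu.edu.au"))
# ['au', 'edu.au', 'anu.edu.au', 'www.anu.edu.au']
```

In practice a resolver caches the name-server details for each zone in the chain, which is what allows tens of thousands of servers to share the load without any single authority answering every query.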
A more detailed but very accessible overview of cyberspace governance is in Caslon (2003b and 2003c).
For three decades, the community of engineers that constitutes the institutions, and whose efforts constitute the processes, of Internet governance has steadfastly and fairly successfully resisted the imposition of legal and administrative strictures.
But as the Internet has matured into the world's primary information infrastructure, there has been increasing discomfort among bureaucracies about the 'constructive looseness' of Internet governance institutions and processes. They are especially concerned that these bodies operate to a considerable degree beyond the reach of national governments and even of international bodies such as the U.N. and the ITU. Hence, since the late 1990s, there have been increasingly strenuous efforts by governments to impose bureaucratic order on Internet governance.
The early running has been made by the U.S. government. Not entirely unreasonably, it considers that it has a substantial interest in the Internet, and, through a number of contracts for services, the legal right to impose some requirements on at least some of the Internet's institutions: "When the Internet was small, the DNS was run by a combination of volunteers, the National Science Foundation (NSF), and U.S. government civilian and military contractors and grant recipients. As the paymaster for these contractors, the U.S. government became the de facto ruler of the DNS" (ICANNWatch 2001).
The U.S. government encouraged the emergence of a "not-for-profit corporation formed by private sector Internet stakeholders to administer policy for the Internet name and address system". The organisation that was formed was the Internet Corporation for Assigned Names and Numbers (ICANN). This has enabled measures to be imposed that would have been infeasible if the functions had been performed by a government agency.
ICANN has three segments:
It is clear that these are intended to function as peak bodies, taking over from the pre-existing bodies, and forcing them to become participants in much more broadly-based fora. The hope was that ICANN would be able to encourage a degree of order without stunting the growth that has been achieved through a remarkably distributed (almost, dare one breathe the word, communitarian) undertaking.
Unfortunately, the organisation's constitution and behaviour have been contentious from the very beginning, and the situation remains vexed. A wide array of senior and well-respected members of the Internet community have accused ICANN of lack of representativeness, lack of openness (even denying information to its own Directors), lack of accountability, and abuse of power. A summary and references are at Clarke (2002).
Meanwhile, the rest of the world is concerned that Internet governance not be unduly subject to control by the U.S. government. The wording of the communiqué following the recent World Summit was diplomatic, but significant: "The international management of the Internet should be multilateral, transparent and democratic, with the full involvement of governments, the private sector, civil society and international organizations. ... International Internet governance issues should be addressed in a coordinated manner. We ask the Secretary-General of the United Nations to set up a working group on Internet governance, in an open and inclusive process that ensures a mechanism for the full and active participation of governments, the private sector and civil society from both developing and developed countries, involving relevant intergovernmental and international organizations and forums, to investigate and make proposals for action, as appropriate, on the governance of Internet by 2005" (WSIS 2003, at 48, 50). The wide-ranging disharmony between the U.S.A. and the rest of the world extends to cyberspace matters as well.
It remains to be seen whether the powerful interests that are benefiting from ICANN's policies will prevail, whether the organisation will be forced to establish less authoritarian policies and practices, or whether ICANN will be replaced by a more acceptable body, such as an enhanced IAB or a committee beholden to international organs.
The Internet is just an infrastructure, and the protocols and services are just tools. In order to understand their impacts and implications, it is necessary to appreciate what people have done with them.
The original conception had been that the ARPANET would connect computers. "By the second year of operation, however, an odd fact became clear. ARPANET's users had warped the computer-sharing network into a dedicated, high-speed, federally subsidized electronic post-office. The main traffic on ARPANET was not long-distance computing. Instead, it was news and personal messages. Researchers were using ARPANET to collaborate on projects, to trade notes on work, and eventually, to downright gossip and schmooze" (Sterling 1993).
The emphasis on human communications has continued through the second and third decades. Moreover, people participate in a shared hallucination that there is a virtual place or space within which they are interacting. The term most commonly used for this is 'cyberspace', coined by sci-fi author William Gibson in 1982. Gibson's virtual world was full of foreboding; but despite the dark overtones (or in ignorance of them), people cheerfully play in cyberspace. Its uses put varying emphases on connectivity and content. A general perception has been that content (typified by use of the Web) would overtake connectivity (typified by e-mail); but some feel that a balance between the two will always be evident (Odlyzko 2001).
Associated with cyberspace behaviour is an ethos that developed during the pioneering era. The surge of newcomers appears to have to some degree subdued the old ethos; but in part they have adopted it, with the result that it is still very much in evidence. Exhibit 3.6 suggests the expectations that still appear to be commonly held among a significant proportion of Internet users:
In every case, the popular perceptions of cyberspace are partly misguided; but those perceptions are an important part of the shared hallucination, and influence people's attitudes and behaviour.
On the other hand, some significant aspects of human behaviour carry over from the physical to the virtual world. The Internet attracts 'low-life', variously:
Most discussions of ethics in the context of cyberspace are abstract and unhelpful. For an instrumentalist approach to cyberspace ethos, see Clarke (1999c).
There are many ways in which the implications of the Internet can be analysed. One approach to the issues it raises is in Clarke (1999a). A broader perspective is needed, such as that offered in Exhibit 3.7.
|Processor Technology||Grosch's Law – Bigger is more efficient||VLSI / micros – More is more efficient||Commoditisation – Chips with everything|
|Organisational Form||Hierarchies||Managed Networks||Self-managing Market/Networks|
|Software and Content||Closed||Confusion and Tension||Open|
|Political Form||Authoritarianism and Intolerance||Confusion and Tension||Democracy and Frustrated Intolerance|
The implications of information technologies have changed significantly as computing and telecommunications have matured. From the invention of computing in about 1940, until about 1980, Grosch's Law held. This asserted that the processing power of computers grew as the square of their cost. In other words, bigger was more efficient. This tendency towards centralised systems was supported by 'star' topologies for networks, with a master-slave relationship between a powerful machine at the 'hub' and a flotilla of 'dumb terminals' at the peripheries. The natural organisational form to utilise such infrastructure was hierarchical. Software was closed and proprietary. The political form that was served by the information technology of the era was authoritarian and intolerant of difference.
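Grosch's Law can be expressed as power ∝ cost²: doubling the outlay quadruples the processing power, so the cost per unit of power halves as machines get bigger. A back-of-envelope sketch (the constant of proportionality is arbitrary, chosen only for illustration):

```python
def grosch_power(cost: float, k: float = 1.0) -> float:
    """Processing power under Grosch's Law: power = k * cost**2."""
    return k * cost ** 2

small = grosch_power(1.0)   # power bought for 1 unit of expenditure
big = grosch_power(2.0)     # power bought for 2 units of expenditure

print(big / small)                        # doubling cost quadruples power: 4.0
print((2.0 / big) / (1.0 / small))        # unit cost of power halves: 0.5
```

This is why, under the old economics, it paid to concentrate expenditure in one large central machine rather than to spread it across many small ones.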
Grosch's Law was rescinded in about 1969, although the impact was felt in the marketplace only gradually (hence my suggestion of 1980 as the indicative year in which the old era ended). Very Large Scale Integrated (VLSI) circuitry spawned new machine architectures, and a new economics. A large number of micro-computers delivers not just greater flexibility than a few larger machines, but also more power at lower cost. Networks quickly evolved into the multiply-connected, decentralised form that they currently have. Master-slave relationships gave way to so-called client-server arrangements, in which intelligent remote workstations request services from dispersed devices scattered around an office, a campus, and the world. Organisations that take advantage of this technology exhibit not a centralised form, but a networked form. Software and politics were both thrown into confusion, from which they are only now beginning to emerge.
The final column offers a speculative interpretation of the next phase, which is returned to in the final section of the paper.
This section has demonstrated that the Internet is like no technology that preceded it. Discussion of strategic and policy aspects of the Internet cannot be sensibly undertaken without a sufficient grasp of the technology, infrastructure and governance of the Internet, and of the cyberspace behaviour of humans and their agents.
This section provides historical background on how the Australian segment of the open, public Internet came about. It commences by noting the international context in which it arose, and then traces the history in Australia from 1975 onwards.
Earlier sections of this paper provided an outline of the establishment of ARPANET in 1969 and of the Internet in 1983. The first international connection to the ARPANET appears to have been in 1973, when University College London connected via Norway. This link appears to have operated in that manner until 1982. Then the first connections were created between pairs of nodes in countries other than the U.S.A., increasing the connectivity, the international flavour, and slowly also the robustness and resilience. By the mid-1980s, research communities in several countries had established full TCP/IP connections with Internet nodes in the U.S.A., such that the Internet was now international, if only in a small way.
The Australian computer science community did not establish fixed connections at that stage, for several reasons. The following sub-sections provide background as to why this was the case, and when the situation changed.
Computer science specialisations and then departments appeared in Australian universities during the 1970s. About the mid-1970s, during the ARPANET's early years, a few Australian researchers made spasmodic connections to it via the international dial-up service offered by the then Overseas Telecommunications Commission (OTC).
Meanwhile, within Australia, computer science departments were stringing links together. The Universities of Melbourne and Wollongong exchanged files between two Unix-based computers using a dial-up line. From the mid-1970s onwards, Robert Elz at the University of Melbourne, and Bob Kummerfeld and Piers Lauder at the University of Sydney, ran the very successful Australian Computer Science network (ACSnet). ACSnet's echoes are still reverberating in the form of the Internet domain .oz.au. George Michaelson explained it to me this way: "The ACSnet protocol implemented a dialup modem-based network with store-and-forward as well as live-transfer properties. It was connected into the pre-internet global mail community via gateways to UUCP and other protocols" (private communication, 1999).
George also made the following remark: "Another reason [for Australian computer scientists gaining access to the Internet about 5 years later than they should have] was that asynchronous store-and-forward networking (UUCP, ACSNet) was so successful at delivering the core services (ftp, mail, news) that the inability to perform telnet was a minor issue. The cost:benefit equation for most people wasn't there. ... This is not to BLAME ACSnet for impeding Internet: it made [the Internet] redundant until the cost:benefit issues changed". On the other hand, whereas almost all Australian computer scientists in universities and CAEs (and some employed in industry) had ftp and email from the late 1970s, non-CS 'early adopter' Australian researchers had to wait until 1990.
In the early 1980s, a permanent Australian email connection to the U.S. ARPAnet was established. This involved various contributions by (now Prof.) Bob Kummerfeld and Piers Lauder at the University of Sydney, and Prof. Peter Poole and Robert Elz at the University of Melbourne. In the mid-1980s, Geoff Huston at ANU contributed an email gateway from the ACSnet mail delivery system into the DEC VAX/VMS systems that had come to dominate University computer installations.
There were a number of attempts to set up a broader university network through the second half of the 1980s. In March 1986, soon after the domain-name system was deployed, IANA delegated the .au ccTLD to Robert Elz, at Melbourne University.
In early 1988, the Australian Vice-Chancellors' Committee (AVCC) took the decision to implement a national network. By late 1988, the Technical Committee had a clear concept of the viability of constructing such a network. In March 1989, Geoff Huston was transferred from the ANU's payroll to the AVCC's to work as the Technical Manager for what was called the Australian Academic & Research Network (AARNet). He prepared a financial, technical and business plan, and approval to proceed was given in about May 1989.
In May/June 1989, a NASA / University of Hawaii program came to fruition with a 56Kbps satellite circuit funded by Australian organisations at the Australian end and by the University of Hawaii and NASA at the U.S. end. Geoff Huston advised me that the connection comprised "an Intelsat spacecraft, using a hemispherical transponder to link an earth station located at Hawaii with OTC's earth station located at Oxford Falls in Sydney. The land segments were used to complete the circuit between facilities at the University of Hawaii with those at the University of Melbourne" (personal communication, 1998).
He continued: "This 56K circuit was subsequently upgraded to 128K. The 256K upgrade was not cost effective on the hemitransponder, so when we undertook this upgrade the U.S. termination of the service was shifted to San Jose on the West Coast, and the circuit was terminated at NASA's Moffett Field location". The connection was effected on 23 June 1989 in Robert Elz's laboratory at the Uni. of Melbourne.
From the outset, AARNet was a purely IP network. Despite temptations to support multiple protocols, the decision was made not to support voice, fax, IBM's SNA, or the various international standard (ISO OSI) protocols, in particular X.25. Sinclair (1999b) quoted Geoff Huston as saying "It really was 'speed was of the essence'. We were reducing it down ... and trying not to get diverted into agendas that are just open-ended".
The international link was progressively upgraded, and connections established to a succession of Australian universities and the CSIRO, with most connected by May 1990. For 1990, the funding included $800,000 from the Australian Research Council (ARC), plus contributions from participants.
A 1993 AVCC document describes AARNet as "a private data network that provides dedicated telecommunications services in support of its members' research, academic, service and operational activities. The network links all Australian universities, CSIRO Divisions, and a large number of other organisations under the Affiliate membership program" (AVCC 1993b, p.1).
Driven by the combination of supply and demand, both for communications and increasingly for content, the Internet quickly became attractive to people outside the research and teaching arena. Sinclair (1999b) quoted Geoff Huston as saying that commercial researchers were the first users outside the academic community: "The Bureau of Meteorology, BHP research labs, Telstra Research labs were very early adopters".
The Internet was infrastructure that facilitated communications and the publication of content; but the discovery of that content was not necessarily simple. Fortunately, as Sinclair also reported Geoff Huston as saying, librarians were the first non-scientists to appreciate the possibilities of the Internet. As electronic library and Internet services guru Tony Barry put it to me, "there were three classes of people who made the net successful. The technicians ..., the content providers ... and the enthusiasts/promoters. [These included] the library community as they were into networks way back in the mid 80's both via [the Australian Bibliographic Network] ABN but also accessing Dialog and Orbit in the US via MIDAS (OTC). They did a lot to promote the internet from 1992 onwards. ... The Campus-Wide Information Systems (CWIS) movement, driven by university librarians, provided a model for intranets and information sharing, which started with gopher in late 1992" (personal communication, 1998).
Sinclair also quoted Robert Elz as saying that "there were always social applications on the Internet - in the early '90s, [Roy and HG's] radio call of the Melbourne Cup was 'broadcast' and, very early on, there were text descriptions of Test cricket matches in Australia".
Then the 'killer app' arrived. The World Wide Web, courtesy of Tim Berners-Lee at CERN near Geneva, had its first, limited impact in 1991, with the early Viola and Cello browsers. By the end of 1993, the CERN server and NCSA's Mosaic browser were available and installed in Australian universities, offering the same interface on Macs, PCs and Unix workstations. The expertise that librarians had developed transferred easily to the Web a couple of years later. The explosive growth of the Web began in 1994, and with it the next round of explosive growth of the Internet.
The previous section documented the beginnings of the Internet in Australia. This section tracks the development from its origins in the research sector to a general information infrastructure for the Australian society and economy as a whole.
Until the early 1990s, the Acceptable Use Policy (AUP), which defined what traffic could be transmitted on the Internet's U.S. backbone, precluded general use of the Internet by the public, or for commercial purposes. AARNet applied similar policies to those in the U.S.A., stating that "Use of AARNet for commercial purposes, and use of AARNet for purposes unrelated to the broad areas of relevance to the academic and research community is not considered acceptable use of AARNet services" (AVCC 1993a). The Australian segment of the Internet accordingly operated as a restricted-access service for universities and the CSIRO. It was to be some years before users outside those communities could officially gain access, and before people did so in large numbers.
In the U.S.A., the pilot connection of commercial services to the Internet began in 1989, although in a tightly controlled manner. "Commercial Internet access was first offered by PSI and AlterNet beginning in early 1990" (Kahin 1995, p.6). In September 1993, the Clinton Administration launched its National Information Infrastructure (NII) initiative. Among other things, this ensured that the Internet migrated away from its original business model, which had relied on central funding from research grant schemes and allocations from the operational budgets of universities and other research organisations.
In order to stimulate the movement to commercially funded Internet segments, usage policies needed to be liberal, so that the providers could attract paying customers of any kind. People perceived new opportunities to communicate with one another, and business enterprises perceived new opportunities for sales and profits. Many more access-points were added, bandwidth was increased, and new services emerged. At the beginning of 1994, there were 3 corporations providing backbone services in the U.S.A. By mid-1995 there were a dozen, and the longstanding backbone for the U.S. portion of the Internet was de-commissioned.
In Australia, the use of AARNet within the targeted research communities increased very substantially between 1989 and 1995. Charging to institutions changed from a levy to a volume-based tariff in about 1992. Usage within campuses was widely dispersed, highly variable and rapidly changing. In the absence of any single obvious manner in which the costs could be allocated, most universities absorbed the cost rather than charging it on to Faculties or Departments. Analyses of the issues are in Clarke (1993, 1994a, 1994b, 1994c) and Clarke & Worthington (1994).
Preparations for a fuller future were now under way. In September 1993, Geoff Huston applied to IANA for a large block of IP-addresses "on behalf of the Australian network community". The ultimate goal was a national IP-address registry, both for regional autonomy and for efficiency (because allocations from the U.S. were taking weeks). It was to be "a totally independent entity, which operates within the broad structure of a not-for-profit service operation, and applies a single community policy in an open and fair manner" (Lance 1998). The address space was large enough for over 4 million individual host addresses, an enormous amount at a time when large allocations were becoming increasingly rare due to the potential exhaustion of the IP-address space. That organisation was AUNIC, named to reflect the network information centre (NIC) tradition.
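For a sense of scale: in the 32-bit IPv4 address space, a block of just over 4 million host addresses corresponds to a /10 prefix, since 2^(32-10) = 4,194,304. The prefix used below is hypothetical, chosen purely for illustration rather than being the block actually allocated:

```python
import ipaddress

# A /10 prefix fixes the first 10 bits of the 32-bit address,
# leaving 22 bits to distinguish individual hosts.
# The prefix itself is hypothetical, for illustration only.
block = ipaddress.ip_network("203.0.0.0/10")

print(block.num_addresses)   # 2**22 = 4194304, i.e. "over 4 million"
```

At the time, a block of that size represented roughly one thousandth of the entire IPv4 address space, which is why such allocations were becoming hard to obtain.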
The formal change in usage policy was implemented in mid-1994, when AARNet established a 'Value Added Reseller' program. This accommodated the growing demand for Internet connection from categories of users outside the research context. It also reflected the growth in the financial demands on university budgets, and on government research and education budgets, and the consequential need to acquire funding from additional sources. Sinclair (1999a) quoted either Geoff Huston or Hugh Irvine as saying that "The first commercial customers were software developers, electronics firms and computing consultants who wanted links to US computer companies. These were people who had money and knew what they were doing. Students leaving university were another group of 'early adopters'".
The term 'Internet Service Provider (ISP)' is widely used, but seldom carefully defined. It logically encompasses all organisations that provide services of any kind over the Internet. Hence a web-site hosting service, and a mailbox provider, are both ISPs. A category of service that is particularly critical is the provision of connectivity to the Internet. The term 'Internet Access Provider' (IAP) is usefully descriptive of an organisation that provides connectivity.
Some limited commercial IAP services were available even before AARNet was established. Western Australia's DIALix claims to have been offering services commercially in Perth as early as 1989. DIALix's connection to the net only became a full IP connection around 1992, but its Principals claim to have made Internet email and Usenet access commercially available by means of periodic STD calls (at 9600 bps), and its customers included some remote from Perth who themselves used dial-up STD connections to Perth.
I am assured by Ian Peter, Pegasus Networks' foundation CEO, that it offered public dialup access to the Internet in Australia, commencing in June 1989 with local access, and moving to nationwide access from 14 September 1989. It operated initially from Byron Bay, and later from Brisbane. It used UUCP and TCP/IP connections to exchange mail and newsgroups with the Internet, initially via direct dialup to USA, and later via ACSNet.
Soon after its establishment period in 1989-1990, AARNet recognised some service providers as warranting a formal association with it. These formal AARNet Affiliates appear to have included Pegasus, registered in 1991, Corinthian Engineering, from 1992, and connect.com, also from 1992.
A further early provider was the not-for-profit Australian Public Access Network Association (APANA). Founded in 1992 by Mark Gregson, APANA ran many, widely dispersed, small, gratis hosts for bulletin board systems and newsgroups, but developed into a provider of low-cost, non-commercial access to the Internet for its members.
Snapshots of IAPs already active in March and September 1994 are available in Saleeba (1994a, 1994b). That was timely, because the patterns were about to change. AARNet introduced a 'Value Added Reseller' program, charging resellers under a volume (per-MByte) tariff. ('VAR' was a term popular in the commercial marketplace at the time.) The first ISP in this formal sense was connect.com.au, in May 1994.
The VAR program registered the following organisations in this order: connect.com, OZ-EMAIL, Commercial Software Training Pty Ltd, Australian Internet Systems Pty Ltd (SchoolsNET), Australia on Line, Camtech Limited, iiNet Technologies, Halcyon Communications, HiLink Communications, and Magnadata.
The Global Info Links Project was launched in Ipswich with 100 local subscribers in December 1994, as the first community scheme. It lasted until at least 1996, and was an early pathfinder for regional schemes.
By late 1994, use by the non-AARNet user base had increased to about 20% of total traffic. An alternative business model was necessary. In July 1995, AVCC transferred its commercial customers, associated assets, and the management of interstate and international links to Telstra. Telstra thereby acquired the whole of the infrastructure that at that stage constituted 'the Internet in Australia', spawning what was subsequently to become Telstra BigPond.
This was variously regarded as the salvation of the Internet in Australia, a commercially realistic negotiation, a necessary transition, a give-away by the AVCC, a sell-out by the AVCC, and/or a naked grab by Telstra for commercial control of the Internet in Australia (Fist 1997). The passions it aroused reflected the importance that many people attached to Internet services. The AVCC's own (very brief) court-history describes it as follows: "[The transfer] stimulated further growth of the commercial and private use of the Internet in Australia. The intellectual property and expertise transfer to industry resulted in development of the Internet in Australia that would not have otherwise occurred at such a rapid rate" (AVCC 2002).
The lack of clarity about the administration of the core functions involved in running the Internet resulted in ill feeling from time to time. Telstra was perceived by many observers to have pillaged a national resource. On the other hand, it donated equipment and connectivity to the AUNIC registry, which administered the Australian IP-address space from 1993 until 2001.
During the period 1995-97, a competitive market began to develop, with both Telstra and Optus providing backbone services. The international linkage represented a serious bottleneck, but gradually Telstra started releasing additional capacity at something closer to the rate at which demand was growing. In addition, several of the larger ISPs established direct linkages overseas. During the second half of the 1990s, the market developed into a more complex, multi-layered network of wholesale-retail network services.
Commercial use of the Internet continued to grow dramatically, and Internet Access Providers (IAPs) and ISPs proliferated, reaching about 600 independent companies (many of them very small) by the late 1990s. A list of Australian IAPs and ISPs was maintained from 1995 to 2002 at Davies (2002).
Returns on investment in IAPs were generally poor. This resulted from the high cost of their raw material, because of an insufficiently competitive wholesale market, compounded by continual change, and in some cases by inadequate management and inadequate access to capital. The pundits predicted the imminent demise of the 'one-man-and-his-dog' micro-operations, and the takeover of small and medium ISPs by large ISPs, generally as and when the owners ran out of money to invest. Some concentration did indeed occur, but new players have emerged as well, and the number of ISPs has continued to be in the hundreds. At the end of 2003, the Telecommunications Industry Ombudsman's membership list included in excess of 1,000 company and business names.
By the end of 1997, there were estimated to be 1.6 million Internet users in Australia, of whom 1 million were relatively frequent users. These included 500,000 users in educational institutions, 650,000 dial-up consumers, academic and SOHO (Small Office Home Office) users, and 450,000 Business & Government users. The heavy majority were using dial-up access, with modems capable of up to 33.6Kbps, with 56Kbps modems newly available. Only tiny numbers were using 64/128 Kbps ISDN. Broadband services such as cable and ADSL were not yet available.
Survey data suggested that 88% of users used email, while 65% said they used the Internet for 'entertainment' and 53% for 'business/research'. Internet users who cited online shopping as their primary Internet activity appeared to number only 5,000, although 200,000 appeared to have at least tried it. This suggested that the conversion rate of prospects was very low, and that therefore some serious impediments existed. All of the 1997 data is from DIST (1998).
By late 1998, the claim was made that 1.27 million households were online, "a jump of almost 50 per cent in 12 months" (DCITA 1998). By November 2000, more than 50% of Australian adults were online and nearly 40% of households had Internet access. There were 696 ISPs, and 3.92 million user-accounts with those ISPs, 87% of which were home-based. Progress with consumer Internet commerce remained slow, with still only 11% of users saying that they actively used the Internet to purchase goods or services (NOIE 2001).
By 2000, the Internet in Australia was already sufficiently mature that people had become nostalgic about it. The Telstra/AFR Web Awards inducted pioneers into the 'Australian Internet Hall of Fame' a mere 7-15 years after their key work was performed. The inductees were Geoff Huston (1996), Robert Elz (1997), Bob Kummerfeld and Piers Lauder (1998), Hugh Irvine and Chris Chaundy (1999), and Paul Wilson (2000). Reflecting the ephemeral nature of a significant proportion of Web content, the 'Hall of Fame' web-site no longer exists.
Most aspects of Internet governance are international or supra-national. There are four elements that are in some way national: infrastructure within Australia, regulatory arrangements, IP-address management and domain-names.
The network infrastructure within Australia is entirely owned by private sector organisations, although the Australian government still retains a 50% shareholding in the heavily dominant corporation, Telstra. (The fervent desire of the government since 1996 to sell more of the shares has to date been stymied by the Senate, reflecting public nervousness about crucial infrastructure passing entirely out of public ownership and control).
Internet operations in Australia are subject to a variety of Australian laws and regulatory bodies. The most significant of these are the Telecommunications Act 1997 and the Australian Communications Authority (ACA). A complaints process is operated by the Telecommunications Industry Ombudsman. A number of additional laws have incidental or express impacts on the Australian Internet, including aspects of the Crimes Act. The Office of the Federal Privacy Commissioner (OFPC) does what little it can to bring Australia's grossly deficient privacy laws to bear on Internet matters.
Between 1997 and 2004, the National Office for the Information Economy (NOIE) stimulated and coordinated the Australian government's policies on eBusiness and eGovernment matters. The government's policy stances during that period have, however, reflected considerable ambivalence about the Internet, and have lacked a guiding philosophy. It has attempted to impose tighter censorship requirements than those that apply to other media, and to outlaw online gambling services, assigning responsibilities in these areas to the Australian Broadcasting Authority (ABA). It has bolstered the flagging fortunes of corporations whose revenue streams depend on copyright in digitisable works. More positively, in 2003 it implemented a world-leading and well-constructed, if somewhat flawed, attempt to impose controls on spam.
IP-addresses were allocated to Australian organisations between 1993 and 2001 by AUNIC, which was administered by Geoff Huston and supported by Telstra. The Asia-Pacific Network Information Centre (APNIC) started as a pilot project in 1993. By the mid-1990s it had evolved into the IP-address registry for 60 economies in the Asia-Pacific region. It re-located from Tokyo to Brisbane in 1998. In mid-2001, the Australian database was transferred to APNIC.
The final area of governance in which local activities are important is domain-names, specifically those within the ccTLD, .au. From 1986 until 2001, the authority for the .au domain was with Robert Elz. For a considerable period of time, all second-level domains (2LDs) were entirely administered on a voluntary basis. Elz delegated to Geoff Huston, who administered edu.au, gov.au and info.au; Hugh Irvine, through connect.com, administered net.au; and Michael Malone, through Connect West, administered asn.au. Elz himself administered com.au, org.au, id.au and oz.au.
From the outset in 1994, Australia has had about the fifth to tenth largest number of active domains of any country. The heavy majority of these were in the com.au domain. By 1996, the dot.com madness was accelerating, the volume of activity was very high, policies were unclear and not consistently applied, and intellectual property law was about to impose itself on domain-name administration in a manner completely foreign to the ethos of Internet engineers. It quickly became clear that the registration process, particularly for com.au, needed to be migrated to some more formalised and better resourced arrangement. With effect from October 1996, Robert Elz awarded a five-year licence to administer com.au to Melbourne IT Ltd. This was a commercial offshoot of Melbourne University, which considered that it had subsidised the cost of Elz's labours on behalf of the Australian Internet community for many years.
Melbourne IT seized the opportunity to establish a substantial business, and promptly started charging for domain name registrations. This caused considerable ill-feeling among some parts of the Australian Internet industry and community, especially as no other licences were issued, and Melbourne IT therefore had a 5-year monopoly to exploit. During the second half of the 1990s, the context was changing rapidly "from cooperative engineering utility to the frenzied focus of world-wide governments; from boring administrivia to commercial power-grab" (Lance 1998).
Much more than just the com.au 2LD needed to be addressed. Attempts were made to establish an organisation called the Australian Domain Name Administration (ADNA), but these were unsuccessful. In November 1999, however, Robert Elz delegated the authority for .com.au to a newly-formed not-for-profit company, .au Domain Administration (auDA).
auDA attracted broad (although not complete) acceptance within the Internet industry and community. It also gained governmental support. The Commonwealth has power under s.474 of the Telecommunications Act 1997 to "determine that ... a specified person or association is the declared manager of electronic addressing in relation to a specified kind of listed carriage service", which may (or may not) enable it to anoint a registrar of the .au domain.
Having formulated its key policy statements, auDA approached ICANN in June 2001, seeking the transfer of the delegation for .au from Robert Elz to auDA. There were considerable misgivings among some of the participants. Not least among these were concerns expressed by Robert Elz to IANA that auDA was not sufficiently representative of the wide range of constituencies. Nonetheless, auDA gained the support of IANA. Redelegation of a ccTLD requires consensus between old and new registrars, except "where there is misconduct, or violation of the policies set forth in this document and RFC 1591, or persistent, recurring problems with the proper operation of a domain" (IETF 1994). ICANN ignored that requirement, and asserted that it had the authority to re-assign the responsibility for .au, without any consensus having been established, and without so much as a policy document to support its actions (Froomkin 2001).
The transfer was effected on 25 October 2001. Between mid-2001 and mid-2002, auDA progressively gained re-delegations for the other four 'open' domains from their longstanding (and long-suffering) volunteer registrars. Since then, auDA has had the authority to administer the whole .au name-space. It performs some functions itself, including the letting of contracts for the operation of databases, the setting of policies in relation to the accreditation of registrars, the delegation of authority over 2LDs, and the requirements of an applicant for a domain-name within each of the 'open' 2LDs. For a comprehensive review of developments to mid-2002, see Johnston (2002).
On 1 July 2002, a new set of arrangements put in place by auDA came into effect. auDA has appointed AusRegistry to operate the databases for the five 'open' 2LDs: com.au, net.au, org.au, asn.au and id.au. These databases contain the mapping information from names to numbers (the DNS registry, used by computers) and information about the registrant of each domain-name (the 'whois' registry, used by people). About 20 accredited retail registrars 'sell domain-names', creating new records and amending existing ones, subject to the policies set by auDA.
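The relationship between the two registry databases can be sketched as follows. This is a minimal illustration of the concept only, using hypothetical sample records and in-memory dictionaries; a registry operator such as AusRegistry holds this data in authoritative DNS and whois services, not in application code.

```python
# DNS registry: maps domain-names to IP addresses (used by computers).
# All records here are hypothetical, using documentation-reserved addresses.
dns_registry = {
    "example.com.au": "203.0.113.10",
    "example.org.au": "203.0.113.20",
}

# 'whois' registry: maps domain-names to registrant details (used by people).
whois_registry = {
    "example.com.au": {
        "registrant": "Example Pty Ltd",
        "registrar": "Hypothetical Accredited Registrar",
        "2ld": "com.au",
    },
}

def resolve(name):
    """Look up the IP address for a domain-name, as the DNS registry does."""
    return dns_registry.get(name)

def whois(name):
    """Look up the registrant record for a domain-name, as the whois registry does."""
    return whois_registry.get(name)

print(resolve("example.com.au"))
print(whois("example.com.au")["registrant"])
```

The separation matters in practice: the DNS registry is consulted automatically on every connection attempt, while the whois registry is consulted by people (and by lawyers) investigating who is behind a name.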
Of the 'closed' 2LDs, two are particularly important. Firstly, federal, state and local government bodies use the gov.au domain. This too was run by Geoff Huston, in this case from 1993 until 1998; the National Office for the Information Economy (NOIE) now acts as registrar, delegating to State and Territory agencies for sub-domains such as nsw.gov.au. Secondly, educational institutions registered at federal or state level use the edu.au domain. Geoff Huston performed the registrar's role from 1993, and passed it to auDA in February 2001. auDA finally completed negotiations and passed policy control to a sectoral coordinating committee (AICTEC) and the registrar function to a Ministerial company, education.au Limited, in June 2003.
That was the final step in the migration of the administration of the .au name-space from 'cooperative engineering utility' to an incorporated organisation with a constitution that endeavours to reflect the enormous breadth of the constituency. The journey had taken a little over 17 years.
This section documents the position in 2003, both internationally, but particularly within Australia.
In mid-2003, there were estimated to be 180 million registered nodes on the Internet, 40 million web-sites, and between 600 and 700 million users.
The extent to which people have become Internet users varies enormously between countries. Exhibit 6.1-1 shows the percentages for the regions, and for the leading countries in each region. There was a small middle-field of developed countries whose adoption curve was somewhat lower (e.g. Germany, France, Italy and Spain), but the majority of countries of the world lagged a long way behind the dozen or so leaders.
[Exhibit 6.1-1: Internet users by region and leading country, tabulating user count (millions), population (millions) and users (%). The data rows did not survive conversion; the remaining row labels included the Middle East / West Asia, Hong Kong, South Korea, Latin America, The Netherlands and the United Kingdom.]
As Exhibit 6.1-2 shows, between 1997 and 2002 the populations of the leading nations rapidly became users, such that those countries are already approaching saturation-point. (Allowing for the young, the very old, and the incapacitated, it appears unlikely that more than 70% of the population of any country will be users). The extent of usage (e.g. as measured by hours connected per month, and traffic generated) varies widely, and in all countries scope exists for growth in these aspects.
The data-sources from which the exhibits above were compiled were NUA (2002) and PRB (2003). See also Benschop (2003) and Pew (2003).
Australia now has a substantial Internet infrastructure in place, operated by a sophisticated marketplace of telcos, wholesale IAPs, retail IAPs, and ISPs. The providers and the users both include vast corporations, small businesses, governments, associations, community bodies and individuals.
Unfortunately, the market segments that provide transmission media, and wholesale and retail connectivity, are still far from competitive. Serious concerns continue to be expressed about the concentration of control over the large majority of the physical infrastructure by Telstra, and about Telstra's strong vertical integration. Its business units include the nation's largest telco, its largest wholesale IAP, and its largest retail IAP (BigPond). It is increasingly moving into ISP services as well, particularly in the controlled and highly lucrative video-content segment. This dominance seriously warps pricing and investment decisions.
The Government is intent on swelling its coffers by selling more of Telstra. Hence, in order to avoid lowering its share-price, it refuses to contemplate the dismemberment of Telstra into three or four independent organisations operating in transmission media, wholesale, retail and content. The Australian Communications Authority has sat on its hands. The Australian Competition and Consumer Commission undertook a study during 2003, but this also appears not to have had any outcomes. The Government initiated a House of Representatives Inquiry in late 2002, but its intention was merely to prevent such an Inquiry being conducted by the Senate. The Opposition came to an accommodation with the Government in early 2003, and the Inquiry was aborted, even though 68 submissions had been received. See Budde (2003a and 2003b) and Fist (2003). Politics dominates economic and social needs, and there appears to be little prospect of Australian telecommunications infrastructure and services maturing into a fully competitive market anytime soon.
One result of the lack of competition has been over-loaded and fragile overseas connections, which plagued the Australian Internet during the mid-to-late 1990s. Fortunately, that bottleneck has been overcome, with multiple connections now in use, and controlled by several different parties in addition to Telstra. A great deal of traffic continues to be across the Pacific, partly because the U.S.A. hosts such a large proportion of the world's web-sites, but also because traffic to Europe goes across the Pacific, not via Asia.
Broadband penetration, because of high pricing and incomplete availability, continues to be low, with 86% of connections still by modem (but including a few ISDN users). Cable is available only in affluent suburbs in Sydney and Melbourne, plus a proportion of Canberra, and ADSL is only feasible within 3-4 km of a proportion of telephone exchanges. SDSL is only now becoming available. Satellite is even more expensive than the other broadband alternatives. See also Sale (2001).
Moreover, users in many areas where broadband is unavailable or excessively expensive get far less than 56Kbps from their dial-up connections. The Government has been successful in its endeavours to avoid survey information about achieved dial-up speeds becoming publicly available. As late as June 2003, in its response to the Regional Telecommunications (Estens) Inquiry, it made clear that it still regards 19.2Kbps as being acceptable as a target minimum transmission speed for regional and rural Australia, and even for the less fortunate urban areas.
At the beginning of 2004, the Internet features increasing format-bloat, undisciplined web-site designers, vast spam volumes, the increasing inclusion of wasteful images in email traffic, and sound and video formats set to become mainstream. In this context, low-speed dial-up connections are increasingly impractical for serious or otherwise impatient users; but for many Australian people and some businesses there continues to be no other realistic option. It is therefore no surprise that each Australian user is responsible for generating far less traffic than each American user: in the third quarter of 2002, "Australia has ... an order of magnitude less traffic per person than in North America" (Odlyzko 2003, drawing on ABS 2003).
The extent to which Australian organisations and individuals have adopted Internet services is well-documented. See, for example, NOIE (2003a and 2003b). In the first quarter of 2003, it was estimated that 75% of Australians 16 years and over had access to the Internet. Ahead of Australia were only Sweden (90%) and the U.S.A. (86%). Nearby was The Netherlands (74%), and close behind were Hong Kong (69%), the U.K. (69%), Germany (61%), Italy (59%), France (55%). (That list appears incomplete, however, because other leaders such as Canada, Norway, Denmark and South Korea are missing).
Online time per user per month averaged 10 hours over 18 sessions. The dominant services used were email and the Web. But synchronous chat-services and instant messaging were by then directly competing with email (although also competing with SMS over mobile phones).
The NOIE research suggests that about 30% of users were accessing government sites during 2002-03. It also shows that consumer Internet commerce continued to grow much more slowly than other Internet metrics, with only 18% of respondents saying that they had purchased anything online during the previous 12 months - compared with Canada (34%), the U.S.A. (32%), Sweden (26%), the U.K. (23%) and Japan (19%). The standout success is Internet banking. Despite the government's efforts, there may also be considerable uptake of on-line gambling and pornography. The only other segments with significant activity appear to be software purchase among more technical users, and to a limited extent books, CDs and PCs. The slow progress suggests that a great deal of Australian consumers' Internet activity is socially rather than economically motivated, and/or that the business propositions put by online merchants are unattractive to consumers.
Large and medium-sized businesses have mostly adopted Internet services. Of small businesses, however, only 80% were Internet-connected, and only 34% had their own web-site; and of very small businesses, only 65% were Internet-connected, and only 15% had their own web-site. The 'B2B' segment was somewhat healthier than 'B2C': of businesses with an Internet connection, 35% said that they purchased online.
The explosion in Internet usage appeared to be exponential for a time, as a result of more people becoming users, and those who were connected using it more. The use of the Internet, like telephones, fax-machines and mobile phones, is subject to what economists refer to as 'the network effect': as the number of users goes up, the value to each user increases very rapidly because there are more users with whom interactions can be conducted (Shapiro & Varian 1999). In the laggard developed economies and in developing countries, growth will continue for some time in both user-count and usage.
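One common formalisation of the network effect (Metcalfe's law, which is not explicitly invoked by Shapiro & Varian in the passage above but captures the same intuition) is that a network's value grows with the number of possible pairwise connections, n(n-1)/2, so value rises roughly with the square of the user-count while per-user cost rises only linearly. A minimal sketch:

```python
def potential_connections(n):
    """Number of distinct pairs of users who could interact on an n-user network."""
    return n * (n - 1) // 2

# Doubling the user-count roughly quadruples the number of possible
# interactions, which is why each new user raises the value to all users.
for users in (10, 20, 40):
    print(users, potential_connections(users))
```

This quadratic growth in connectable pairs is one explanation for why adoption, once past a critical mass, appeared exponential for a time.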
But, like all adoption curves, the user-count eventually has to be recognised for what it is: a logistic curve whose steepness flexes back to the horizontal. In the leading countries, maximum penetration has already been all-but achieved, and hence the growth in user-count has flattened out. Usage (e.g. as measured by connect-time per user per month) is already fairly high, and hence ongoing growth in traffic may be dependent on users substituting Internet usage for alternative work and play behaviours. Traffic volumes are likely to continue increasing, however, as a result of bloated file-formats, and of new services in such areas as the transmission of image, sound and video.
Telstra's continued dominance across the infrastructure, wholesale and retail tiers is seriously constraining competition, maintaining artificially high prices, and holding back progress. As a result, the outlook for Australia and Australians is less rosy than it is in other leading nations. During the coming 5 years, Australia is doomed to fall from its hitherto strong position in Internet infrastructure and Internet usage. The impact will be particularly severe in ill-serviced regional areas.
Tele-conferencing, in the sense of voice-conversations, was supported as early as 1973, by the Network Voice Protocol (NVP); but sound did not generate a significant volume of traffic until compression algorithms specifically designed to reduce the size of music files, popularly referred to as 'MP3', had significant impacts in the late 1990s. In 2004, the market for recorded music is a major battleground between consumer interests and 'old-world' publishing conglomerates.
AARNet continues to be a trail-blazer for the Australian Internet. When the AVCC's contract with Telstra concluded in mid-1997, it went out to tender for a national private network, linking its eight regional hubs by high-capacity dedicated bandwidth, with the capability of carrying voice and video traffic as well as data. Optus (subsequently C&W Optus) was selected. One of the initiatives AARNet has undertaken is the implementation of a set of standards that is generically referred to as 'voice over IP' (VoIP). Commencing in 1999, this has been installed within and between universities and the CSIRO. Many corporations and government agencies now look set to implement VoIP as well.
This constitutes an inversion of telecommunications. Since the 1970s, there have been transitions from voice telephony, via voice-infrastructure subverted to carry digital data, to digital data networks, and on to digital data networks carrying voice. The national telco, which once claimed to deal in long-term investments, failed until the early-to-mid 1990s to detect an impending revolution, and change had to be driven by a movement within the academic computer science community. The century-long history of analogue networks is likely to come to an end within the next one or two decades.
AARNet has also continued to lead in the further development of Internet infrastructure. In 2002, it was the prime mover in the implementation of GrangeNet, which provides a high capacity research network among the major cities (AVCC 2002). It has also been actively promoting grid computing, a means of achieving greater cooperation among Internet-linked computers. It may also be developing a stronger focus on regional and rural connectivity. As a result of the Australian Research and Education Network (AREN) initiative, it may seek an extension of its licence into other segments of education.
A pair of factors look set to bring about considerable change in the Internet during the next few years. The first of these is the proliferation of device-types that enable access. Refrigerators are of some interest, although primarily as a test of the feasibility, desirability and risks of marrying robotics with computing and communications while the infrastructure is so inherently insecure. See Clarke (1993b).
Much more significant is the development of diverse 'handheld' and 'computer-wear' devices, embodying various combinations of what have hitherto been independent telephone, address-book, diary, music-player, game-player, still-camera and video-camera devices. One of the many impacts of these devices will be that individuals will utilise the Internet via more than just PCs in the workplace, home, airline lounge and Internet café, and will increasingly access Internet services via multiple devices at once.
Closely associated with the growing diversity of small devices is the rapid increase in wireless access. This has occurred at the levels of wide area networks (mobile cell-phone, satellite), local area networks (using the various 'WiFi' standards), and personal area wireless networks (in particular, Bluetooth). Distinguishing the key characteristics of 'wireless' and 'mobile' requires considerable care (Clarke 2003b). It appears very likely that effective use of new transmission media and support for new, small, mobile, wireless devices will require changes to Internet protocols. It will probably also require the insertion of additional layers of protocols firstly to deal with session management, and secondly to enable data to be presented differently, depending on the device's characteristics.
Amidst the many positive aspects, a number of spectres haunt the Internet. Section 3.7 suggested that the next two decades would be substantially different from the last few, and would be characterised by tensions between liberty and authoritarianism. There are ongoing endeavours by governments and corporations to overcome the pseudonymity that is the native state of the Internet, and the anonymity that can be achieved with moderate effort. See Clarke (1999). Alliances of business and government are trying to require people to use identities issued by authorities, which can be authenticated, and which can be used as a basis for profile construction, location and tracking. Means are being devised to facilitate surveillance, including various proposals for the unique identification of devices, and for digital rights management schemes to provide unique identification and traceability of copies of software and documents.
There is a considerable degree of tension between, on the one hand, the well-established expectations of open content and the growing movement towards open source software, and, on the other, the demands of copyright owners that intellectual property law be rapidly migrated towards a much stronger notion, more like the property law applying to chattels. It remains to be seen whether the music industry in Australia will alienate its customers as effectively as the U.S. industry has done, by initiating actions against consumers.
Publishers are being supported by parliaments through substantially increased intellectual property protections. Coupled with proprietary formats and mechanisms, and with personal identification schemes, publishers hope to sustain their monopoly control over content. The resolution of the tension between these closed, proprietary approaches and open content will be a critical test of freedoms versus authoritarianism in the twenty-first century information society and economy. See Clarke (1999 and 2001).
Meanwhile, threats exist to the information infrastructure. The Internet may be balkanised, especially if consumers continue to resist paying as much money as providers seek from them. Already 'tunnelling' techniques are in use, which close down a proportion of the transmission capacity of the open network for 'virtual private networks'. Depending on financial returns and security aspects, there could also be a movement back towards dedicated 'leased lines'. There is also a real risk of existing protocols being adapted to enable much greater levels of governmental and corporate control, and of new protocols being introduced that provide such facilities, and that can replace the existing, fundamentally democratic net.
The early phases of the research-only Internet (1969-1990) and of the open, public Internet (1993-2003), have brought interesting times. The current, tense phase promises to be even more interesting, as powerful institutions seek to mould the Internet to a form more attuned to the needs of profit-making and social control.
This section contains all cited references with the exception of unpublished papers by the author, which can be found in the following section.
Abbate J. (1999) 'Inventing the Internet' MIT Press, 1999
ABS (2003) 'Internet Activity: September 2002' Australian Bureau of Statistics, Report 8153.0, 7 March 2003
ASTC (1994) 'The Global Connection: Future needs for research data networks in Australia: Draft Findings', Australian Science and Technology Council, April 1994, at http://www.cit.nepean.uws.edu.au/docs/aarnet/ASTC/
AVCC (1993a) 'AARNet Acceptable Use Policy' Australian Vice-Chancellors' Committee Inc., 1993, at http://www.ucc.gu.uwa.edu.au/infobase/policies/aarnetau.ucc
AVCC (1993b) 'Engineering a Connection to AARNet' Australian Vice-Chancellors' Committee Inc., 1993, at http://nemesis.csse.monash.edu.au/~azaslavs/local_links/aarnet_aarn-doc.pdf
AVCC (2002) 'AARNet History' Australian Vice-Chancellors' Committee Inc., 2002, at http://www.aarnet.edu.au/about/history.html
Benschop A. (2003) 'Internet Use(rs): Demography and Geography of the Internet', September 2003, at http://www2.fmg.uva.nl/sociosite/websoc/demography.html
Bennett J.M., Broomham R., Murton P.M., Pearcey T., Rutledge R.W. (eds.) (1994) 'Computing in Australia - The development of a profession' Hale & Iremonger in association with the Australian Computer Society Inc, 1994
Black U. (2000) 'Internet Architecture: An Introduction to IP Protocols' Prentice Hall, 2000
Budde P. (2003a) 'Submission to the House of Representatives Inquiry into the Structure of Telstra', January 2003, at http://www.aph.gov.au/house/committee/cita/telstra/subs/sub05.pdf
Budde P. (2003b) 'Australia - Telecommunications and Broadcasting - History' Paul Budde Communication Pty Ltd, 2003, from http://www.budde.com.au/TOC/TOC196.html
Cardwell D. (1994) 'The Fontana History of Technology' Fontana, 1994
Caslon (2003a) 'History of Australian Telecommunications', Caslon Analytics, 2003, at http://www.caslon.com.au/austelecomsprofile1.htm
Caslon (2003b) 'Cyberspace Governance Guide', Caslon Analytics, 2003, at http://www.caslon.com.au/governanceguide.htm
Caslon (2003c) 'Domains and the DNS', Caslon Analytics, 2003, at http://www.caslon.com.au/domainsprofile.htm
Ceruzzi P.E. (1998) 'A History of Modern Computing' MIT Press, 1998
Clarke R. (1993) 'AARNet Economics: How to Avoid Cooking the Golden Goose' Proc. Networkshop'93, Melbourne, December 1993, at http://www.rogerclarke.com/II/AARNetEcs.html
Clarke R. (1993b) 'Asimov's Laws of Robotics: Implications for Information Technology' Published in two parts, in IEEE Computer 26,12 (December 1993) 53-61 and 27,1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html
Clarke R. (1994a) 'Electronic Support for Research Practice' The Information Society 10,1 (March 1994), at http://www.rogerclarke.com/II/ResPractice.html
Clarke R. (1994b) 'Background Briefing on the Information Infrastructure' Policy 10,3 (Spring 1994), at http://www.rogerclarke.com/II/PaperIIPolicy.html
Clarke R. (1999b) 'Anonymous, Pseudonymous and Identified Transactions: The Spectrum of Choice', Proc. IFIP User Identification & Privacy Protection Conference, Stockholm, June 1999, at http://www.rogerclarke.com/DV/UIPP99EA.html
Clarke R. (1999c) 'Ethics and the Internet: The Cyberspace Behaviour of People, Communities and Organisations' Proc. 6th Annual Conf. Aust. Association for Professional and Applied Ethics, Canberra, 2 October 1999. Revised version published in Bus. & Prof'l Ethics J. 18, 3&4 (1999) 153-167, at http://www.rogerclarke.com/II/IEthics99.html
Clarke R. (1999d) 'Freedom of Information? The Internet as Harbinger of the New Dark Ages', Proc. Conf. 'Freedom of Information and the Right to Know', Melbourne, 19-20 August 1999. Republished in First Monday 4, 11 (November 1999), at http://firstmonday.org/issues/issue4_11/clarke/, at http://www.rogerclarke.com/II/DarkAges.html
Clarke R. (2001) 'Paradise Gained, Paradise Re-lost: How the Internet is being Changed from a Means of Liberation to a Tool of Authoritarianism' Mots Pluriels 18 (August 2001), at http://www.rogerclarke.com/II/PGPR01.html
Clarke R. & Dempsey G. (1998) 'Technological Aspects of Internet Crime Prevention', Proc. Conf. 'Internet Crime', Australian Institute for Criminology, Melbourne University, 16-17 February 1998, at http://www.rogerclarke.com/II/ICrimPrev.html (with Gillian Dempsey, Ooi Chuin Nee and Robert F. O'Connor)
Clarke R., Dempsey G., Ooi C.N. & O'Connor R.F. (1998b) 'The Technical Feasibility of Regulating Gambling on the Internet', Proc. Conf. Gambling, Technology & Society, 7 - 8 May 1998, at http://www.rogerclarke.com/II/IGambReg.html
Clarke R. & Worthington T. (1994) 'Vision for a Networked Nation: The Public Interest in Network Services' The Australian Computer Society, 17 May 1994, at http://www.acs.org.au/president/1997/acsnet/acsnet.htm
CMHC (2000-) 'Timeline of Computer History' The Computer Museum History Center, Boston, at http://www.computerhistory.org/timeline/
CMHC (2000-) 'A History of the Internet' The Computer Museum History Center, Boston, at http://www.computerhistory.org/exhibits/internet_history/index.page
Davies K. (2002) 'The Australian ISP List', various versions, 1995-2002, at http://www.cynosure.com.au/isp/
DCITA (1998) 'Australia's e-commerce report card', Dept of Communications, Information Technology & the Arts, December 1998, at http://www.dcita.gov.au/Article/0,,0_1-2_1-4_13791,00.html
DIST (1998) 'Electronic Commerce in Australia', Dept of Industry, Science & Tourism, April 1998, at http://www.noie.gov.au/publications/NOIE/statistics/ecomstat.pdf
Dodge M. (2000-) 'Some Useful References on the Geography of Cyberspace', at http://www.cybergeography.org/references.html
Dodge M. & Kitchin R. (2000a) 'An Atlas of Cyberspaces' October 2000, at http://www.cybergeography.org/atlas/atlas.html
Dodge M. & Kitchin R. (2000b) 'Historical Maps of ARPANET and the Internet' October 2000, at http://www.cybergeography.org/atlas/historical.html
Fist S. (1997) 'The TelstraNet plans to take-over the Internet' May 1997, at http://www.electric-words.com/telstra/telisp.html
Fist S. (2003) 'Submission to the House of Representatives Inquiry into the Structure of Telstra', January 2003, at http://www.aph.gov.au/house/committee/cita/telstra/subs/sub015.pdf
Froomkin M. (2001) 'How ICANN Policy Is Made (II)', ICANNWatch, 5 September 2001, at http://www.icannwatch.org/essays/dotau.htm
Gralla P. (2002) 'How the Internet Works' 6th Edition, Prentice-Hall, 2002
Griffiths R.T. (2002) 'History of the Internet, Internet for Historians (and just about everyone else)', at http://www.let.leidenuniv.nl/history/ivh/frame_theorie.html
Hafner K. (1994) 'The Creators' Wired 2.12 (December 1994), at http://www.wired.com/wired/archive/2.12/creators_pr.html
Hafner K. & Lyon M. (1996) 'Where Wizards Stay Up Late: The Origins of the Internet' Simon & Schuster, 1996
Hall E. (2000) 'Internet Core Protocols: The Definitive Guide' O'Reilly, 2000
Huston G. (1993) 'Trends in Communications Technologies: AARNet and the Internet' in Mulvaney J. & Steel C. (Eds.) 'Changes in Scholarly Communications Patterns' Aust. Academy of the Humanities, Canberra, 1993, pp.71-83
Huston G. (1998) 'ISP Survival Guide: Strategies for Running a Competitive ISP' Wiley 1998
Huston G., Cerf V.G. & Chapin L. (2000) 'Internet Performance Survival Guide: QoS Strategies for Multiservice Networks' Wiley, 2000
ICANNWatch (2001) 'ICANN for Beginners' ICANN Watch, 2001, at http://www.icannwatch.org/icann4beginners.shtml
IEEE (1979-) 'Annals of the History of Computing', Computer Society of the Institution of Electrical and Electronic Engineers, at http://www.computer.org/annals/
IETF (1990) 'The Internet Activities Board' Internet Engineering Task Force, RFC1160, May 1990, at http://www.ietf.org/rfc/rfc1160.txt
IETF (1994) 'Domain Name System Structure and Delegation' Internet Engineering Task Force, RFC1591, March 1994, at http://www.ietf.org/rfc/rfc1591.txt
IETF (2001) 'The Tao of IETF - A Novice's Guide to the Internet Engineering Task Force' Internet Engineering Task Force, RFC3160, August 2001, at http://www.ietf.org/rfc/rfc3160.txt
ISOC (2000-) 'Internet Histories', at http://www.isoc.org/internet/history/index.shtml
Johnston I. (2002) 'Domain Name System Reform: International and Australian Developments: Some Public Policy Perspectives' University of Canberra, June 2002
Kahin B. (1995) 'The Internet and the National Information Infrastructure' in Kahin B. & Keller J. (eds.) 'Public Access to the Internet' MIT Press, 1995
Karrenberg D., Ross G., Wilson P. & Nobile L. (2001) 'Development of the Regional Internet Registry System' The Internet Protocol Journal 4, 4 (December 2001), at http://www.cisco.com/warp/public/759/ipj_4-4/ipj_4-4_regional.html
Lance K. (1998) 'The Domain Name System: Engineering vs Economics' Proc. AUUG Conf., September 1998, at http://www.rogerclarke.com/II/LanceSep98.html
Lee J.A.N. (1998) 'Computing Machines', at http://ei.cs.vt.edu/~history/machines.html
Leiner et al. (2000) 'A Brief History of the Internet, version 3.31', at http://www.isoc.org/internet/history/brief.shtml
Mowery D.C. (2003) '50 Years of Business Computing: LEO to Linux' J. Strat. Inf. Sys. 12, 4 (December 2003) 295-308
Moyal A. (1984) 'Clear Across Australia: A History of Telecommunications' Nelson, Melbourne
NOIE (2000-) 'The Current State of Play: Australia and the Information Economy' National Office for the Information Economy, Resource Page, at http://www.noie.gov.au/projects/framework/Progress/ie_stats/state_of_play.htm
NOIE (2001) 'Current State of Play - June 2001' National Office for the Information Economy, June 2001, at http://www.noie.gov.au/projects/framework/Progress/ie_stats/CSOP_June2001/index.htm
NOIE (2003a) 'Pocket Stats' National Office for the Information Economy, July 2003, at http://www.noie.gov.au/publications/NOIE/statistics/pocket_stats.htm
NOIE (2003b) 'NOIE Information Economy Index' National Office for the Information Economy, August 2003, at http://www.noie.gov.au/publications/NOIE/NOIE_index/Aug03/index.htm
NUA (2002) 'How Many Online?', NUA Internet Surveys, at http://www.nua.ie/surveys/how_many_online/index.html
NUA (2003) 'U.S. E-Commerce 1998-2003', NUA Internet Surveys, at http://www.nua.com/surveys/analysis/graphs_charts/comparisons/ecommerce_us.html
Odlyzko A. (2001) 'Content is Not King' First Monday 6, 2 (February 2001), at http://www.firstmonday.dk/issues/issue6_2/odlyzko/index.html
Odlyzko A.M. (2003) 'Internet traffic growth: Sources and implications' in Optical Transmission Systems and Equipment for WDM Networking II, B. B. Dingel, W. Weiershausen, A. K. Dutta, and K.-I. Sato, eds., Proc. SPIE, vol. 5247, 2003, pp. 1-15, at http://www.dtc.umn.edu/~odlyzko/doc/itcom.internet.growth.pdf
Pew (2003) 'The Ever-Shifting Internet Population: A new look at Internet access and the digital divide' Pew Research Center, April 16, 2003, at http://www.pewinternet.org/reports/reports.asp?Report=88&Section=ReportLevel1&Field=Level1ID&ID=378
PRB (2003) '2002 World Population Data Sheet', Population Reference Bureau, at http://www.prb.org/Content/ContentGroups/Datasheets/wpds2002/2002_World_Population_Data_Sheet.htm
Sale A. (2001) 'Broadband Internet Access in Regional Australia' J. Res. & Practice in Infor. Technol. 33, 4 (December 2001) 346-355
Saleeba Z. (1994a) 'Network Access in Australia FAQ', March 1994, at http://www.rogerclarke.com/II/zik.faq.9403.html
Saleeba Z. (1994b) 'Network Access in Australia FAQ', September 1994, at http://www.rogerclarke.com/II/zik.faq.9409.html
Sinclair J. (1999a) 'Everybody's gone surfin'', The Age, 19 June 1999, at http://www.rogerclarke.com/II/Surfin.html
Sinclair J. (1999b) 'The Network Anniversary', The Age, 22 June 1999, at http://www.rogerclarke.com/II/Anniv.html
Sterling B. (1993) 'Internet' Science Column #5, The Magazine of Fantasy and Science Fiction (February 1993), at http://www.forthnet.gr/forthnet/isoc/short.history.of.internet
Telstra (2003) 'Telstra's History', at http://www.telstra.com.au/communications/corp/history.cfm
T&D (2001-) 'Measuring the Net', T&D Limitless Web, 2001-, at http://www.limitless.co.uk/net.lml
WSIS (2003) 'Declaration of Principles', World Summit on the Information Society, Geneva, Document WSIS-03/GENEVA/DOC/4-E, 12 December 2003, at http://www.itu.int/dms_pub/itu-s/md/03/wsis/doc/S03-WSIS-DOC-0004!!PDF-E.pdf
Zakon R.H. (1993-) 'Hobbes' Internet Timeline', at http://www.zakon.org/robert/internet/timeline/
This section lists unpublished papers by the author that are relevant to the topic. Published papers by the author appear in the previous section.
Clarke R. (1994c) 'Information Infrastructure for The Networked Nation' Xamax Consultancy Pty Ltd, at http://www.rogerclarke.com/II/NetNation.html
Clarke R. (1998) 'The Internet as a Postal Service: A Fairy Story' Xamax Consultancy Pty Ltd, February 1998, at http://www.rogerclarke.com/II/InternetPS.html
Clarke R. (1999a) 'Bigger Than Y2K; Much More Pressing Than a Constitutional Preamble; I-I-I-It's "Internet Issues"' Xamax Consultancy Pty Ltd, March 1999, at http://www.rogerclarke.com/II/Issues99.html
Clarke R. (2000) '"Information Wants to be Free"' Xamax Consultancy Pty Ltd, 24 February 2000, at http://www.rogerclarke.com/II/IWtbF.html
Clarke R. (2001) 'Defamation on the Web' Xamax Consultancy Pty Ltd, 2 October 2001, at http://www.rogerclarke.com/II/DefWeb01.html
Clarke R. (2002a) 'Overview of Internet Governance' Xamax Consultancy Pty Ltd, 14 August 2002, at http://www.rogerclarke.com/II/Governance.html
Clarke R. (2002b) 'The Birth of Web Commerce' Xamax Consultancy Pty Ltd, 21 October 2002, at http://www.rogerclarke.com/II/WCBirth.html
Clarke R. (2003a) 'Copyright: The Spectrum of Content Licensing' Xamax Consultancy Pty Ltd, 2 July 2003, at http://www.rogerclarke.com/EC/CCLic.html
Clarke R. (2003b) 'Wireless Transmission and Mobile Technologies' Xamax Consultancy Pty Ltd, 18 October 2003, at http://www.rogerclarke.com/EC/WMT.html
Clarke R., Dempsey G., Ooi C.N. & O'Connor R.F. (1998a) 'A Primer on Internet Technology' Xamax Consultancy Pty Ltd, February 1998, at http://www.rogerclarke.com/II/IPrimer.html
I made no contributions at all to the emergence of the Internet in Australia, but I was one of the early beneficiaries of other people's efforts. The preparation of this paper has drawn on the published literature, but to a very considerable extent also on information contributed by more than 100 correspondents. Errors, omissions and imprecisions are mine, however, and readers are encouraged to draw problems with the text to my attention.
Roger Clarke is a consultant in strategic and policy aspects of electronic business, information infrastructure, and dataveillance and privacy. He is neither a communications engineer nor a computer scientist, but holds degrees in Information Systems from U.N.S.W. and a doctorate from the A.N.U. He spent 1970-83 as an information systems professional, and 1984-95 as a senior academic. He is a Visiting Professor in eCommerce at the University of Hong Kong, a Visiting Professor in the Cyberspace Law & Policy Centre at U.N.S.W., and a Visiting Fellow in Computer Science at A.N.U. He has published scores of papers, all available since 1995 at http://www.rogerclarke.com/, a site that attracts over 2 million hits p.a. He is a Board member of Electronic Frontiers Australia and of the Australian Privacy Foundation.
Created: 29 November 1998
Last Amended: 29 January 2004