Principal, Xamax Consultancy Pty Ltd, Canberra
Visiting Fellow, Department of Computer Science, Australian National University
Version of 23 April 2002
© Xamax Consultancy Pty Ltd, 2002
This document is at http://www.anu.edu.au/people/Roger.Clarke/DV/NotesCFP02.html
Computers, Freedom and Privacy is the primary annual event that addresses the impacts of information technology on the often conflicting interests of freedoms and privacy. It has run each year since 1991, in a variety of major cities in the U.S., and once in Toronto, and attracts about 500 on-site delegates.
Most sessions at CFP feature multiple speakers, presenting very briefly on a fairly tightly defined topic, followed by interaction among panel-members, and active questioning from the floor. There is a small number of invited speakers, mostly during opening sessions, lunches and dinners; and there are several events that are held within the context of the conference.
For me, there's always a touch of the surreal about the event. It's partly the jet-lag, and the odd sleeping-hours that go with it. But the feeling is also induced by the topics (which are primarily about the virtual world, and about the rapidly emergent future virtual world), by the level of the addresses and conversations, and by the diversity of perspective.
For anyone who watches or participates in Internet policy issues, the program and delegates' list is such a 'who's who' that you keep an eye on the name-badges around you, in case you get the chance to finally meet the meat behind a long-known cyberspace persona. And meet them you do. Everyone's accessible, and interested in sampling the views of anyone who's interested enough to turn up.
These are personal notes prepared, on the fly, by a frequent participant and advisory committee-member, who has long been involved in privacy issues, in Internet policy issues, and in the community that is CFP. Because they reflect my own interests, and which sessions I attended, they vary from reasonably deep treatment of six or seven sessions to mere mention of most of the others. And - be warned - these are personal notes, unconstrained by any need to be impartial.
This document starts with an overall impression of CFP'02, followed by a partly thematic and partly sequential treatment of the various sessions.
I also published my notes on my previous attendances in 1993, 1994, 1995, 1997, 1999 and 2000.
Note that the conference site contains many presentation materials from the event.
Compared to previous events, the tone this year was rather subdued. I made some remarks along the following lines.
There was a lot less straight-talking at this Conference than was the case in previous years. Many speakers were hedging, especially the (many) lawyers. It seemed to me that Americans have been cowed, somewhat by the terrorist strike, but even more so by their own rhetoric. As a foreigner, I felt that I needed to remind them that this is supposed to be 'the land of the free'.
There were of course some honourable exceptions, not least those lawyers who did not seem to be concerned that their CLE points might be deducted, or their careers otherwise harmed, if they were seen to imply doubt about the wisdom of existing laws, about the impossibility of contravening any law no matter how obviously unreasonable, or about the policies of the present Administration. A few people said that they felt my remarks were a bit harsh; but most of the feedback I received was supportive.
The sessions varied from reasonable to excellent. The continual accidental meetings in the hall were critical for me, both in checking what some of the leading thinkers were working on, and in bouncing around and refining my own ideas.
For me, the main themes this year were in the areas of activism; of identity; of computers, freedom and privacy after the terrorist actions of 11 September and the gross over-reactions that followed them; of intellectual property; and of the digital divide. My notes on the various sessions are clustered under those headings, followed by catch-alls for the remainder, and for the several associated events.
There's been a drift towards excessive respectability among the body of delegates at CFP. There's now a predominance of lawyers, policy-wonks and respectable hackers (as distinct from crackers). The 'less respectable' hackers and crackers who participated in the conferences of 1991 to c. 1995 no longer see reasons to come, because the program no longer reaches out to them. Probably as a consequence, the FBI appears in smaller numbers. I regret that loss of vitality.
If CFP is to avoid middle-aged, middle-class mediocrity, it needs to sustain difference. If we've lost contact with most of the hacker community, one other opportunity lies with activists. At long last, a plenary session was held on the topic 'Activism Online'; and a tutorial was offered on legal constraints on activist behaviour. I hope this community is going to be a focus for future sessions at CFP.
See also the resources already offered at the Public Citizen site.
The Internet has become a fundamental tool for political organizing. This panel explored how Bay Area activists and others are using the Internet to mobilize local and global grassroots movements, and what barriers stand in their way.
A remarkable amount of information is available, but you need to tap into the right keywords if you want to locate it. Another such source is Net Action.
Proponents argue that digitizing the nation's social security card system to resemble a credit card system, and creating one national information database, are needed to protect against terrorism. Critics argue that such a tracking and/or monitoring system would violate the core freedoms of the nation's citizens and that what is needed is better procedures among agencies and standardization of data entry.
From the Chair, Peter Swire drew attention to the need for identity authentication in various contexts, and the recurring calls by some segments of government and business for a national ID card to cure all kinds of ills.
AAMVA is the association of motor vehicle administrators in c. 70 jurisdictions. It includes Canada and Mexico. The IT Division operates a private secure network, using AT&T. Commercial driver licensing systems are inter-linked in order to make suspension of licences effective. Problems that confront Dept. of Motor Vehicle (DMV) administrators include people with multiple licences, people with licences in multiple states, false breeder documents, modified licence documents (esp. age), counterfeit licence documents, and bribery of DMV employees.
They recognise responsibilities primarily in relation to driver-related offences, but also youth access to adult-only goods and services, crime facilitation, and identity theft. September 11 has created an opportunity for proponents of enhancement to DMV systems to address additional goals.
A Federal Driver Privacy Protection Act exists, as well as State statutes. The privacy impact is recognised, and a working group is writing model legislation for privacy. [But it can be expected to be restricted to privacy of personal data, and to be a 1970s-style draft, with vast numbers of exceptions to suit the wishes of bureaucracy]. He admitted that, after all these years of abuse and complaint, the penalties are very low, especially for corrupt employees. [In fact, I could see very little difference between this presentation and one I heard a decade ago].
A task force is currently working to reduce the ease with which drivers' licences are used to facilitate crime and fraud. [What, again? Or still?]. The task force is focussing on:
12 million drivers' licences in the U.S. already carry a biometric, but they're different biometrics, and mostly ineffectual. Challenges include:
Schulman drew attention to the many vague suggestions that there should be a national ID scheme. The burblings of Larry Ellison of Oracle were the most publicised. A sterile, abstract debate was being conducted, pitched as though privacy has to be traded away in order to achieve more security.
Would any such proposals work in the September 11 context, and prevent such disasters? The practicalities of the debate are run by the vendors, and are frequently inaccurate. Privacy advocates are quoted briefly at the end, and are seldom directly engaged in the debate about whether the proposal will work.
A study of an existing, smaller-scale scheme is instructive in understanding whether any such scheme could make sense. The US/Mexico Border Crossing Card (BCC) is a new card issued by the State Dept. since late 2001 (to Mexican nationals who frequently cross the U.S. border?), and used by the Immigration and Naturalization Service (INS). There are 4 million cards, rising to 6 million. The system evidences considerable complexity. A 40 pp. paper is available.
The card is manufactured by Drexler, which claims that this scheme gives a good insight into how such things work. See LaserCard.com. It features an optical stripe on the back, containing a digital signature, digital biometric, and counterfeit-resistant features.
Issue depends on the conventional breeder document, the birth certificate. [My 1994 paper on human identification refers to this as 'the entry-point paradox']. The manufacturing process for the card is slow, forcing an interim arrangement to be used. I understood Schulman to say that, at least at this stage, there are no machines to read the biometric, and matching of the card to the person is still manual. He said that it was still not resolved whether the scheme will involve a search against a remote and up-to-date database, or against a stop-list. He implied that the scheme was in some disarray.
See also Andrew's paper on 'Identification Card Conspiracy Theories'.
Deirdre provided an outline of the recently published National Academy of Sciences report called 'IDs - Not That Easy: Questions about Nationwide Identity Systems'.
A key contention is that there needs to be clarity about what the problem-set is that is meant to be being solved. Is it prevention of terrorist acts? Is it preclusion of particular people from particular areas? Is it investigation of events, in order to find the guilty parties?
A national ID system might, but might not, solve such problems. There needs to be cool, calm assessment of the technical and operational feasibility of such a scheme.
The card is a trivial manifestation of a complex system. Does there even need to be a card? Or should the focus be on a 'nationwide identity system' rather than a card?
Candidates that have been mentioned include drivers' licences, trusted traveller documents, and the social security card. All suffer significant problems. [And the SSN has been repeatedly rejected as a basis for a national ID scheme]. Some proposals include extensiveness across contexts, such as travel including air travel and accommodation.
Some key questions:
But what is identity anyway? Individuals often have multiple identities. What data is associated with an identifier? How is an identity established? (Within the U.S. alone, there are currently 80 kinds of drivers' licences, and thousands of kinds of birth certificates). How do we relate identifiers to individuals? How can exceptions be handled? What availability, what back-up? Would consolidation of other databases be necessary? If so, it creates a centralised point of failure, and a point of vulnerability both for denial of service, and for unauthorised access to and modification of data. Correlation is made much more feasible if a common identifier is used.
Any proposal needs to be coherent, a compelling case needs to be prepared for it, and it needs to be subjected to public review and debate.
The time-management was inadequate, and in a rare departure from CFP norms, there was no time left for questions.
Peter Swire asserted that ID schemes incorporating biometrics will be used. We need technical proposals that can work for all parties.
Biometric identification systems are being touted as one of the best methods for identifying terrorists and preventing future attacks. But what exactly are the capabilities of these systems? Are some biometric systems more reliable than others? And where will the use of these technologies stop? Can they be controlled?
[Caveat: As a panel member, my remarks in this section are probably yet more committed to a particular perspective than those in the rest of this document]
Barry Steinhardt reported on the utter inadequacy of facial recognition technology, and its successive complete failures at the Super Bowl in Tampa and in Ybor City FL. Not one person has been correctly identified by face-recognition technology in public places. This led to the inevitable conclusions that the vendors are misrepresenting the technology in a manner at least bordering on the fraudulent, and that the flock of airport administrators and other public- and private-sector organisations that have been foolish enough to commit to installing it is being wantonly gullible. Barry suggested a set of Fair Information Practices that could be applied to biometrics. Resources include an FAQ on Facial Recognition, and 'Drawing a Blank'.
Martin Huddart appeared to accept at least that the Tampa implementation was "inappropriate", and perhaps even that face recognition was terrible. Although invited as a representative of the industry association, he used the example of his own company's product, based on hand geometry. (This appeared to be less an unreasonably marketing-oriented manoeuvre, and more the quite reasonable ploy of focussing attention on one of the few technologies that has been proven, albeit in quite specific contexts). He claimed that 90% of nuclear industry installations used it, as do many operational areas within airports, anthrax laboratories, etc. It has even been applied on at least one university campus (Purdue?).
He promoted its use as a 'transaction-verifier', and asserted (without explanation) that "it can prevent identity theft". [Given that it 'reduces' (in the logical sense) a person's hand-geometry to what appears to be a 6-hexadecimal-character code, it would appear to risk duplicates, and to be potentially spoofable.] He stated that IBIA does not advocate storing images / bit-maps, but rather the storage only of "one-way hashed templates". [I have to express some doubt as to whether it's really a one-way hash]. He drew attention to the diversity of contexts, by distinguishing active/overt from passive/covert collection, and identification from what he and the industry [naively] call 'verification'.
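To make the duplicates concern concrete, here's a back-of-the-envelope sketch in Python. It rests on my assumption (not his) that the template really is a 6-hexadecimal-character code, i.e. 16^6, about 16.7 million distinct values, and applies the standard birthday-problem approximation; the function name and enrolment figures are purely illustrative.

```python
import math

TEMPLATE_SPACE = 16 ** 6  # 6 hex characters: ~16.7 million distinct templates

def duplicate_probability(enrolled: int, space: int = TEMPLATE_SPACE) -> float:
    """Birthday-problem approximation of P(at least two users share a template)."""
    return 1 - math.exp(-enrolled * (enrolled - 1) / (2 * space))

for n in (1_000, 10_000, 50_000):
    print(f"{n:>6} enrolled users: {duplicate_probability(n):.1%} chance of a clash")
```

On those assumptions, a clash is already more likely than not well before 10,000 enrolments, which is why the risk of duplicates seems to me very real.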
He pointed out that it's not the biometric that identifies a person, but rather the database that's used in conjunction with it. He claimed it was not true that "it can track where you go" [which seemed like sleight of hand, because, in his terms, it's the trail of transactions recorded against the database that can do the tracking]. He also argued that, because different biometrics are appropriate in different circumstances, and they're not compatible, they aren't as intrusive as privacy advocates think. [Once again that was misleading, because all of them can be mapped to the same identity in the same database, and hence the data that they generate can be compatible, thereby giving rise to every fear that advocates describe].
He drew attention to the fact that the IBIA has a set of principles. [He overlooked the facts that they are utterly inadequate, and that they were uttered ex cathedra rather than being the subject of meaningful negotiation with all relevant stakeholder groups]. All in all, however, he was a much more credible representative of the industry association than the snake-oil salesmen that have been crawling around Washington DC and every Council in the U.K.
Peter Hope-Tindall stressed that, even if the biometric was stored only on a token in the possession of the relevant individual, there were still significant privacy risks to be overcome. He drew attention to the 2% card-failure rate in the Mondex trials in Guelph, Ontario (which would, for example, create significant difficulties for 8-10 people in every jumbo-jetload of airline passengers). He was scathing about vendors' claims of 100% accuracy, and noted the ease with which John Cleese, in drag, got through a facial-recognition checkpoint.
He considered that the kinds of projects that were feasible were those which:
He stressed the need for open discussion, and for statutory protections. He bemoaned the current use of biometrics as a 'technology panacea' that is spuriously perceived as a solution to complex problems, and the opportunism of marketers who are seeking sales irrespective of the merits and inadequacies of their products.
In George Tomko's absence, Peter also read George's paper. George considered comparison against a database rather than a personally-carried reference measure to be not privacy-friendly. He also considers that the threat-model is commonly misunderstood (i.e. the organisation commissioning the system is unclear which ill(s) the system is meant to address). This linked with a quite general failure to appreciate the difference between privacy and security. George also made a case for biometric encryption. [A biometric is a measure that is very rarely identical on each occasion it is read; whereas an encryption-key must be a constant; so I'm sceptical about the feasibility of this].
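My scepticism can be illustrated with a minimal sketch in Python. The readings are invented, and SHA-256 stands in for whatever exact key-derivation one might use; nothing here is George's actual proposal.

```python
import hashlib

reading_1 = bytes([0x4F, 0xA2, 0x33, 0x91, 0x07, 0xC8])  # hypothetical biometric capture
reading_2 = bytearray(reading_1)
reading_2[5] ^= 0x01  # the same person, with one low-order bit of sensor noise

key_1 = hashlib.sha256(reading_1).hexdigest()
key_2 = hashlib.sha256(reading_2).hexdigest()

print("keys match:", key_1 == key_2)  # False: an exact derivation turns tiny
                                      # measurement noise into a completely
                                      # different key
```

Any scheme in this area therefore has to apply some form of error-correction to the reading before deriving a key, and whether that can be done without undermining either the security or the privacy claims is exactly where my doubts lie.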
Capt. Ron Davis expressed concern about the absence of law enforcement officers from the debates about biometrics, because they were the people who were going to eventually have to use the technologies, products and applications. He drew attention to the risks of over-use of 'national security' as a driving force, noting that it was much-loved by Hitler. His primary proposition concerned the prevalence of 'racial profiling' and stereotyping, i.e. the selective focus on minorities, and the expectation that they are the trouble-makers in any given situation.
Roger Clarke (that's me) was the final speaker on this panel. I acknowledged that a lot of points in my presentation had been already made, and hence focussed on a few. Using a slide-set, I argued that:
One question to the panel concerned the fuzziness of the measurements, the resultant certainty that for some subjects the process will produce unsatisfactory results, and hence the necessity for fallback (or 'backdoor') arrangements to exist, resulting in compromised security. 1-5% of a full Boeing 747 (roughly 400 passengers) is 4-20 people, so the scope for breach is very large.
Another point raised from the audience was that biometric schemes create a serious barrier for people who need to change their identity. The example given was victims of domestic violence, but other categories include 'celebrities', 'notorieties' and 'VIPs' (who are subject to widespread but excessive interest among sections of the media and the general public, including 'stalkers'), protected witnesses, and people in a variety of security-sensitive occupations.
I drew attention to the need for refinement of concepts and terminology in order to analyse such situations. In particular, we use the terms identity, identifier and identification ambiguously. We have many identifiers (our personally-chosen and externally-imposed names, and our many organisationally-imposed codes). But each of us is only one entity. To correspond with that, we need the concept of entifiers (for our biometrics), and entification (for the process of extracting an entifier).
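A minimal data-model sketch in Python may make the distinction concrete; the class and field names are mine, purely illustrative rather than a proposal.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One human being: exactly one entity, but many identities."""
    entifier: bytes  # a measure of the body itself, e.g. a biometric template
    identifiers: list[str] = field(default_factory=list)  # names and imposed codes

person = Entity(
    entifier=b"\x4f\xa2\x33\x91",  # hypothetical template
    identifiers=["R. Jones", "employee #40321", "rjones (cyberspace persona)"],
)

# identification: resolving one of person.identifiers to records about an identity
# entification: resolving a fresh biometric reading to person.entifier, the entity
```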
A more substantial paper is forthcoming. If it isn't there yet, feel free to fall back on my preparatory notes.
Many electronic and mobile commerce systems collect and transfer information about user identity and location. Are single-sign-on systems for Web users such as Microsoft's Passport, AOL's Magic Carpet and Sun's Liberty Alliance Project desirable conveniences, or unacceptable threats to privacy, or both? Is the logging and retention of cell phone users' travels across mobile telephony cells acceptable, or does this cross the line allowing pervasive surveillance? Are the information practices of the multiple organizations handling the information fair? Are the systems secure? What impact will these services have on anonymity of movement?
[Arbogast and Cochetti were so bad that I gave up on this session after about 10 minutes. Microsoft is blabbering. The Verisign speaker hadn't even taken the trouble to read any of the accumulated literature, and was trying to define 'privacy' to mean something that suits corporate goals. Neither had done their homework on the nature of CFP: the audience is too well-informed to swallow the kinds of pretences that were being put forward. At one stage I was going to be a member of this panel. It's fortunate that things were re-arranged, because I'd have been utterly scathing about my fellow-panellists. Fortunately, Dan, Avi and Jason are all gentlemen. Well, at least I thought Jason was a gentleman. He tells me he thought he was quite blunt].
My take on this topic is at Clarke (2001).
Civil discovery subpoenas issued to ISPs and OSPs such as Yahoo, Earthlink and AOL seeking the identities of their subscribers and users have become commonplace. These subpoenas raise serious First Amendment concerns. The panel discussed what legal tests ought to be applied when the identity of an Internet speaker or listener is sought, and the practical implications for Internet users. It also discussed possible legal and non-legal responses to the problem.
Continuing advancements in computer technology for storing, processing, and transferring personally identifiable information have vaulted privacy into the public consciousness. Though technology can be used to invade our privacy, various tools, techniques, and standards have emerged over the past five years aimed at protecting privacy on the Internet. These so-called privacy-enhancing technologies were reviewed by leading experts, who discussed what's currently available, areas that need further research, and predictions of future technologies.
My background paper on this is at Clarke (2001).
The panel discussed the legality of the many acts since 11 September 2001 to resist FOIA requests, to create new exemptions to FOIA to aid cyber-security, and to remove previously accessible information from the Internet, by both government agencies and other organisations. It also assessed the short-term and long-term ramifications of the restriction of public access to formerly public information.
Passage of the US PATRIOT Act will alter the way that law enforcement monitors communications, particularly on the Internet. Privacy advocates have argued that these new powers endanger the privacy rights of innocent citizens. Law enforcement argues that these tools are necessary to catch terrorists. Is this the proper balance, and what will it mean for the future?
Following the events of September 11th, the leaders of developed nations have moved quickly to establish new agreements for international security cooperation. Many of these agreements are being forged secretively, and with little democratic oversight. This session discussed the new era of control and surveillance that has arisen since that tragic day, and what it will mean for our privacy and for national security and law enforcement.
Computer professionals have been arrested for violating the Digital Millennium Copyright Act, and some have refused to attend conferences in the United States or to publish current research because of the threat of arrest. How does (or might) this affect you in your job?
First, a role-playing scenario depicted an author from abroad who begins to deliver his paper at CFP2002, describing a decryption algorithm that he has widely sold to individuals, who have used it to break the digital-rights-management protections on various commercial software content and applications. He is arrested for violation of the DMCA. The U.S. attorney has been encouraged to make the arrest by a major industry publishing association. A well-known attorney with experience in high-profile DMCA cases is engaged to defend the author. The publishing association says that it is not attacking speech but rather vigorously prosecuting violations of its intellectual property rights, and that this should be a lesson not only to the arrestee, but also to the organization that sponsors the conference. A reporter for a major newspaper witnesses the whole event and interviews all the parties involved.
Then a panel of participants with hands-on experience in these matters discussed and debated what has happened so far and what is likely to happen next. Can we have free speech and unfettered scientific research while protecting digital content? How can conflicting laws be resolved at the same time that technology for playing and broadcasting copyrighted work is becoming less expensive and in greater supply? How does the preceding scenario illustrate (and misrepresent) the real issues involved?
Scenario Participants:
Panel
This was a beautifully conceived and executed hypothetical. There were three factual bases behind the hypothetical setting:
A (possibly unplanned) fourth factual basis, pointed out during Question Time by Phil Zimmermann (Mr PGP), was that the prosecutor in the hypothetical, Bill Keane (now in the private sector), had been the prosecutor in Phil Zimmermann's case, in which the Justice Dept. used the prosecution as an instrument of oppression, keeping Zimmermann under serious threat of imprisonment for some 4 years.
JPB argued that 'intellectual property' is an oxymoron, and an obnoxious one. It's been a serious mistake to apply property notions to the non-physical, and we should stop doing it. Publishers add far too little value to justify their continued existence in the new context. Publishers also control access to information, and there is bias in the messages reaching the public about extensions to I.P.
Metalitz criticised JPB for adopting a Cloud-9 perspective, and mounted an orthodox defence based on the fact that Congress has passed laws to protect I.P. in both the distant and the recent past (and that therefore I.P. must be 'a good thing'). The justifications offered were old-style rationalist/industrial economics, ignoring the more relevant information economic analyses; protection encourages investment, publishing industries are big, and employ lots of people; and other countries do it too. It was a big-end-of-town answer, looked at the past, overlooked the politics of market power, and ignored the topic (which was meant to be 'the future'). He provided a splendidly confident rendition of dinosaur thinking. [I should of course have this whole paragraph in square brackets, in order to indicate that it embodies value judgements; but it's probably obvious enough ...]
My take on this topic is at Clarke (1999) and Clarke (2001).
Most of the core standards and protocols on which the Internet is built are in the public domain or available on a "royalty free" (RF) basis, and the open source software movement depends on such an approach. Increasingly, however, standards-setting organizations such as the IETF and the W3C have considered standards that would be covered by "reasonable and non-discriminatory" (RAND) patent licensing. Standards bodies are also increasingly facing claims by third parties that standards and protocols in development are covered by privately-held patents. These patent issues raise fundamental and difficult questions about the work of standards bodies, and the future of open source and the open Internet:
Regrettably I joined this important session far too late. The latter stages were discussing W3C's current policy in relation to patents, and specifically 'Reasonable And Non-Discriminatory' (RAND) patent licensing (which may involve fees) and Royalty-Free (RF) licences (which is a RAND licence without fees).
The P2P lawsuits are piling up: Napster, Scour, Aimster, Morpheus. Although the rhetoric is about piracy, the litigation is about technology. In every P2P case to date, copyright owners have targeted the technologists, instead of the end-users doing the infringing. What does this mean for the peer-to-peer industry, and what lessons should be drawn by other technology innovators? Are we entering a world where technologists will be held liable for the activities of their end-users?
Some history and teaching resources on this topic are at Clarke (2000-).
Lunchtime speaker on Thursday was Larry Irving, Assistant Secretary of Commerce for Communications and Information in the Clinton Administration, who is widely credited with coining the term "the digital divide".
(I emailed Larry before the event, to clarify whether the term was originally used in the internal U.S. context or the international context, i.e. northern hemisphere cf. southern hemisphere, developed cf. developing world. He affirmed the former. The division in the U.S. is multi-dimensional. In subsequent discussion, he said that "I want to make sure every American, regardless of race, is on line. Income and education are indicators of connectivity, and race is an additional factor that increases the likelihood of being on the wrong side of the Divide").
Larry admitted that, since the end of the Clinton era, he's spent more time in the marketplace than in the marketplace for ideas; but he was re-emerging, and glad to be back. His extempore address focussed primarily on the digital divide, followed by some comments on the threats involved in media concentration, and on privacy.
He launched the 'digital divide' notion, and spent some years committed to it, including a failed attempt at a market-based approach. During the Clinton period, the initiative made considerable progress, with investment by federal, state and local governments, and philanthropies. Under the Bush Administration, however, funding has pretty much ceased.
He'd subsequently backed off from the issue. The problem was that people had started to pigeonhole him, to overlook the many other things he did, and to discount what he said about it on the grounds that he seemed to be a single-issue person, he'd been saying it for years, and was an Afro-American, who would say that, wouldn't he? In any case, it needed fresh blood.
He came back to it this year, because of the many inane statements being made by public officials in the Bush Administration, such as:
It's a myth that the digital divide has been conquered. On the latest figures, Larry argued, quite a small proportion of whites have no access to the Internet, but 60% of Afro-Americans do, as do 70% of Hispanics, and 87% of Hispanics who speak Spanish at home. It's true, however, that the disparity of access is also closely correlated with income: of people in households with income above $80,000 p.a., 80% have access from at least one of home, work and school; whereas, below $25,000 p.a., the corresponding figure is only 25%. Statistically, however, the disparity is only about 50% explained by income, i.e. race is a critical determinant.
[Larry subsequently provided me with some further stats, which, combined with population data from the U.S. Census, provide the following (very rough!) pattern:

Ethnic Category             Population   Not Connected   % Not Connected
'White'                         210m          80m             38%
Black or African American        35m          20m             57%
Hispanic/Latino                  35m          20m             57%
Total                           280m         120m             43%
]
[During question-time, I suggested that access disparities in Australia and Western Europe were less seriously skewed along ethnic and income lines; but that differences among urban, regional and rural areas were a big concern. He hadn't mentioned geography in his address. So was there such an issue in the U.S.? He answered that the strongly Scandinavian States (Minnesota and the Dakotas) were, like Scandinavia itself, strongly wired; but that otherwise the rural States, especially the rural blackbelt, high-minority, southern States, were major problems, i.e. there's a strong correlation. The unconnected whites are mostly less educated, have lower income and often live in rural or centre-city (as opposed to suburban) communities.]
It's also a myth that market forces will shatter the divide. Although, in total, 'the black and brown markets' have big buying-power, his attempts (together with Magic Johnson and the CEO of Starbucks), using the mark 'UrbanMagic', couldn't get traction with the idea of a market-based approach. "It was not direct rejection, but a more subtle disbelief that the minority market was viable for technology ... Stereotypes die a tough death".
Larry related an anecdote about his ability to convey the value of information technology to his friends in Brooklyn and Queens (among them, policemen and baggage-handlers). He demonstrated his 2,000 Napster downloads and his Photoshop-morphed images; and they understood without explanation that they really wanted one of those things! But he'd been unable to convince Gateway, Dell, Sony, or even Apple to conduct advertising campaigns using that approach.
He also argued that the intuition that people need in order to use network-based tools can be acquired in any context (the home, the school or the workplace). It doesn't matter much which the context is, as long as the intuition is developed. But the infrastructure gap between universities and community colleges (in Australian terms, TAFE colleges) is enormous. [Community colleges are much lower cost than universities, and enrolments have high proportions of white lower-income or working-class people, blacks and Hispanics. In Australia (and in Western Europe), on the other hand, enrolments in the equivalents of community colleges are generally not as heavily ethnically biased; and they're generally well-equipped anyway.]
[On another aspect, I've got to express some doubt that intuitions about such things as P2P, video and audio, and even IRC/ICQ, could be developed in the workplace].
On the question of how much of this is a federal government policy question, Larry argued that it's not up to business (and as a Company Director he is well aware of the legal obligation to drive shareholder value and hence not be socially responsible). He also argued that local communities need to address local policy issues. The federal responsibility is to ensure that the infrastructure is in place, together with the capacity to use it. There's a paradox that the Bush Administration will fund infrastructure enhancement in developing countries, but not within the U.S.
A second broad topic that Larry briefly addressed was the considerable concentration that is currently occurring in the media industry, across TV, cable and wireless infrastructure, IAP and ISP services, and content; and within regions. He believes that there's an emergent broadband oligarchy, which is seriously threatening to the public interest. [Did he mean the more obvious 'oligopoly'? Or does he genuinely fear 'government by a dominant class comprising Murdoch, Turner and Disney'?!].
Larry's third and last topic-area was privacy. He expressed concern about the survival of privacy not just in 'an age of information technology', but in 'an age of convenience'. Consumers sell their data for far too low a price; and kids sell their family history and profile for a T-shirt. [The second one was new to me, and I should ask him some time!].
He also expressed concern about video surveillance, especially the democratic version (i.e. everyone can afford a camera, and a video-feed onto the web). Another worry was the invasive behaviour of technologies like Kazaa/Audio Galaxy [Kazaa, by the way, is nominally owned by an Australian company, although no-one seems to be able to find it. But because it's a bit smaller than Microsoft, it's a good example to use ...]. The problem is not just that it's utterly invasive, but also that the licence terms (which are complex small-print that of course no-one ever reads) authorise the invasive behaviour.
In his closing comments, he identified the need for not just theories, but also for real-world action. And he noted that he gets a better hearing overseas than in the U.S.A.
For another report on Larry's speech, see Robert MacMillan's piece in Newsbytes of 18 Apr 2002.
Recent studies have reported that the digital divide between technology "haves" and "have-nots" is closing in the US, meaning more minorities, the poor, and rural residents have access to the Internet than ever before. However, amidst signs of changing government policies and dwindling sources of corporate funding for digital divide programs, it is unclear whether current progress is sustainable. This session discussed what's working and what's not from the perspective of Bay Area organizations and companies working to bridge the digital divide.
On Tuesday 16 April, CFP commenced with six half-day Tutorials. These were on:
The Opening Speaker was James Bamford, investigative journalist, sometime Washington Investigative Producer for ABC's World News Tonight, and author of the only two books on the National Security Agency ('The Puzzle Palace' and more recently 'Body of Secrets').
Bamford provided background on his research into the National Security Agency. The first book was prepared in the face of considerable opposition from the agency, although the second was produced in less contentious circumstances.
NSA is a huge agency located on its own highly secure campus north of Washington, dubbed by its occupants as 'crypto-city' (with 50 buildings, including its own chip plant, its own paper-pulping plant, and 32 miles of roads). It hosts the world's most powerful collection of computers (about 120 supercomputers?). It produces 50-100 million documents each year, more than the rest of the U.S. government combined.
Its primary function is to eavesdrop on traffic from all over the world, analyse it, and pass it on to other agencies, in order to assist the U.S.A. to protect its national interests. The more interesting data (such as military and diplomatic traffic) may be compressed or encrypted. The volumes are vast, and filtering is essential. The example of a filtering rule that Bamford used was that 1-301-688 is the NSA's telephone prefix, and 202-456 is the White House; and these are meant to be indicators of a message of interest to spooks ...
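As a toy illustration of that kind of rule, the sketch below (in Python) flags call records by prefix; the matching logic is mine, and the real selectors are of course not public — these two prefixes are simply the ones Bamford quoted.

```python
WATCH_PREFIXES = ("1301688", "202456")  # NSA's own exchange; the White House

def is_of_interest(number: str) -> bool:
    """Flag a call record whose digit-string starts with a watched prefix."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return any(digits.startswith(p) for p in WATCH_PREFIXES)

print(is_of_interest("+1 (301) 688-6524"))  # True
print(is_of_interest("+61 2 6288 1472"))    # False
```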
NSA is increasingly focussing on access to content in databases. This is being conducted by the Special Collection Service, a CIA-NSA combine.
The paradox facing the NSA today is that some people criticise it for collecting too much information, and others criticise it for going deaf.
A special court has existed since 1978, the Foreign Intelligence Surveillance Court. It is highly secretive and barely known. It appears to have never yet declined a government request. In any case, it deems the US President to have the power to authorise bugs on any person and any organisation.
The Echelon system is an arrangement whereby the USA shares secrets with several other countries, in return for feeding traffic in to NSA.
The organisation's powers are astounding, and its behaviour insufficiently controlled. It may well not be monitoring Americans within the U.S., but it monitors people and organisations elsewhere in the world, whether the parties are Americans or not.
NSA has some degree of authority over telcos, and agreements with them, in order to facilitate collection of data. Technological change presents continual challenges. Technology is far less friendly to NSA than it once was. Satellite transmission was very easy to pick up, but fibre-optic cable is taking over now, and that makes it harder, because there are fewer access points. There has been some increase in the use of encryption, as the cost has come down.
The switch from a focus on Russia to a general worldwide interest means that there has been a proliferation of relevant languages, and the availability of qualified and security-cleared linguists is challenging. Examples mentioned were Haiti, Uganda and the Burmese-Thai border.
September 11 highlighted serious problems with the effectiveness of the NSA's techniques in relation to contemporary terrorism. NSA looks in particular for people on watch-lists, large transfers of money, and transfers of weapons. The September 11 terrorists evidenced none of them. For some time in the late 1990s, Osama bin Laden was monitored while using a satellite phone; but it was never used for terrorism-related conversation, but only for social calls.
Patrick Feng asked about the impact of the NSA on technology and standards. There's a chapter in the second book, but Bamford was rather vague about it. He mentioned only reverse-engineering of equipment such as Cisco switches, and hiring ex-employees of companies that supply relevant hardware and software. The talk was interesting, but hardly thrilling.
Patrick Ball of the American Association for the Advancement of Science gave an update on a project to provide evidence in support of the prosecution of Slobodan Milosevic at the International Criminal Tribunal for the Former Yugoslavia (ICTY). The report was presented in The Hague on 13-14 March 2002.
The report used techniques from historical demography, as well as multiple systems estimation, to model patterns of killing and migration flow. When killings and migration were compared to patterns of KLA activity and NATO airstrikes, the hypotheses advanced by the Yugoslav government were rejected. Key coincidences observed in the data were consistent with the hypothesis that Yugoslav forces were responsible for the violence.
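For readers unfamiliar with multiple systems estimation, its core idea can be sketched with the simplest two-list (Lincoln-Petersen) capture-recapture estimator, shown below in Python. The figures are hypothetical; the actual ICTY analysis used more lists and far more careful modelling.

```python
def lincoln_petersen(n_a: int, n_b: int, overlap: int) -> float:
    """Estimate the true total from two incomplete lists of the same events.

    If list A documents n_a deaths, list B documents n_b, and `overlap`
    deaths appear on both lists, the estimated true total is n_a * n_b / overlap.
    """
    return n_a * n_b / overlap

# Hypothetical figures, purely for illustration:
print(lincoln_petersen(n_a=4000, n_b=3000, overlap=1500))  # -> 8000.0
```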
Free and fair elections are the foundation of democracy. Computers will revolutionize the way we vote. This panel examined the challenges that are introduced when people use computers or the Internet to vote, and whether adequate solutions exist to meet those challenges.
The U.S. has greatly varying legislation in 50 jurisdictions, and 10,000 electoral areas. There is considerable evidence of errors in existing manual and automated election systems. Internet elections are going to have to confront many challenges. Even electronic tools in election booths are seriously challenging. The session investigated some of the risks, and the prospects of progress being made.
This panel considered policy challenges for activists from around the globe, and provided perspective on how the issues are being debated and regulated in different regions of the world.
ICANN was created 3 years ago as a unique experiment in Internet self-governance. Could a private, non-government, global organization coordinate critical Internet naming and numbering functions in a legitimate way? Increasingly, critics complain that ICANN has not fulfilled its promise. This panel debate examined whether ICANN's vision of bottom-up global self-governance for the Internet is a myth or a reality.
"Medical records are beacons into our past [and] windows into our future," wrote Simson Garfinkel in 'Database Nation: The Death of Privacy in the 21st Century'. With the growing power of information technologies, medical diagnostics, predictive medicine, and genome science to obtain sensitive information about individuals, the questions we need to ask are: Who should have access to this information? And under what conditions? Moreover, to what degree should the public be aware of -- and actively involved in -- the dialogue concerning the generation, use, and disclosure of health, medical, and genetic information? These questions as viewed through the lenses of the law, policy, ethics, business and the public were the focus of this session.
Public institutions such as libraries, election agencies and courts routinely gather and store records containing personal information on individuals. Computers, databases and the Internet are facilitating greater access to public records, including those records that contain personal information on individuals. This panel examined current practices and protections, and sought ways to balance an individual's privacy with the public's right to know.
For my take on this topic, see Clarke (1997).
There seems to be universal agreement that consumers need to have a better understanding of data privacy and commercial business practices to enable them to make smart choices. However, there is also general agreement that relatively little progress has been made. Who is to blame for the lack of consumer privacy information? How can things improve in the future?
Litigation, particularly class-action litigation, is becoming a significant tool to enforce privacy in the private sector. There have been few, if any, class certifications in the privacy arena. However, the courts are now struggling with several critical issues that could alter that outcome. This panel included counsel for the respective sides in several of these important cases under the Cable Act and the Electronic Communications Privacy Act.
Timothy J. Muris, Chairman of the US Federal Trade Commission (FTC), tried to convince delegates that the US government had their interests at heart, and was doing good things for e-privacy. He claimed to have made more resources available, to have been responsible for new initiatives, and to have launched new cases. [He was self-confident and interesting, but the message was every bit as vacuous as I'd feared. The Democrats were fairly poor at addressing consumers' needs, but the Republicans are dreadful]. He referred specifically to:
Bill Lockyer, California Attorney General. A very relaxed speaker, and sometime labor man, with some nice one-liners:
Jackie Speier, California State Senator
Dewayne Hendricks, Dandin Group, on 'Are the Tools the Rules?: The Future of the Digital Commons', introduced by Bruce Koball, Technical Consultant & Information Director, CFP Steering Committee
Bruce Sterling, sci-fi author and literary critic, journalist, and presenter extraordinaire. Having chaired Bruce at a couple of earlier CFPs, I very much regretted having to leave before his closing address. But you can read the version he's posted on his Viridian site. Now, did he really write the original version with a fountain pen?
Each year, the Electronic Frontier Foundation (EFF) provides awards to individuals and organisations that have made significant contributions to the advancement of rights and responsibilities in the information society. Details are on EFF's site. This year's awards were made to:
Some years ago, Privacy International (PI) instigated the Orwell Awards for privacy-invasive behaviour. These are national awards, and recent events have been held in Austria, Denmark, France, Germany, Hungary, The Netherlands, Switzerland, the United Kingdom and the United States, with more coming soon.
This year's awards for the U.S. were announced at the Conference. PI also sends positive signals about privacy-protecting behaviour, under the name 'the Brandeis Awards'.
I had to leave just as the awards were being made, but the winners should become visible on the above sites.
CFP is a hard-working conference. After dinner each night, clusters of people with common interests gather to swap notes in a semi-structured environment. There were about eight of these running at once, on topics as diverse as: