Version of 17 April 2017
Forthcoming in Proc. 30th Bled eConference, June 2017
© Xamax Consultancy Pty Ltd, 2017
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/SOS/CACT.html
The supporting slide-set is at http://www.rogerclarke.com/SOS/CACT.pdf
The notion that the powerful shoot messengers who bear unwelcome messages goes back at least to Plutarch and perhaps as far as Sophocles. Researchers whose work is adjacent to, rather than directly within, the disciplinary mainstream may at times feel that this applies even in academic disciplines. This paper reports on a journey undertaken in order to achieve publication of a critique of papers published in a Special Issue of a leading eCommerce journal. The literature on content analysis was first examined, with particular reference to a range of approaches to literature reviews. Conventional, directed, summative and computational content analysis techniques were considered, and exemplars in the IS literature identified. Because the critique was undertaken in the critical theory research tradition, the role of criticism in research was also reviewed. The findings enabled refinements to be made to the protocol used for conducting the content analysis, together with strengthening of the robustness of the paper's research method section and improvements to the expression of the research findings.
This paper reports on experience gained during a research project. The project involved the use of a new body of theory to critique the papers in a Special Issue of a leading eCommerce journal. The resulting paper was submitted to the same journal, and rejected. The grounds were a combination of claimed lack of robustness of the research method and dismay about the fact that the Special Issue papers had been subjected to criticism.
I found both of these grounds bewildering. The research method had been carefully prepared, had been previously applied and the results published, and it was, I considered, suitably documented. Moreover, the suggestion that papers should not be subjected to criticism sounded to me like the antithesis of the scientific method to which the journal and the management disciplines generally claim to aspire.
I accordingly set out on a deeper study of meta-questions that were affecting the project. What guidance is available in relation to secondary research whose raw data is published academic papers? What particular approaches need to be adopted when the theory-lens through which the observation is being performed arises from critical theory research? What guidance exists for expressing the outcomes of research of this nature? This paper's objective was accordingly to enhance the publishability of the underlying research, by grounding the content analysis technique more firmly in the research methods literature, demonstrating the appropriateness of constructive criticism of published works, and improving the expression of the results.
The paper is structured as follows. Brief explanations are provided of the underlying theory, the Special Issue to which it was applied, the research method adopted, and key aspects of the review process. A series of investigations is then outlined, involving searches of relevant methods literatures. This encompasses several variants of literature reviews and content analysis. The nature of criticism is discussed, and critical theory research reviewed. It is concluded that two particular techniques provide the most useful guidance on how to approach a project of this nature.
The paper concludes by showing how the insights arising from the journey have enabled enhancements of the research method, and of the manner in which the method and the findings are communicated to the reader.
The underlying research project adopted the particular theoretical lens of 'researcher perspective'. This was defined in Clarke (2015, 2016) as:
the viewpoint from which phenomena are observed
The papers postulated that:
In each research project, at least one 'researcher perspective' is adopted, whether expressly or implicitly, and whether consciously or unconsciously.
The researcher perspective influences the conception of the research and the formulation of the research questions, and hence the research design, the analysis and the results.
Each particular perspective is specific, not universal.
Because the interpretation of phenomena depends on the perspective adopted, the adoption of any single researcher perspective creates a considerable risk of drawing inappropriate conclusions.
IS researchers generally adopt the perspective of a participant in an information system (IS) - commonly the organisation that runs it, or an organisation that is connected to it, but sometimes the individuals who use it. Occasionally, researchers may adopt the perspective of an external stakeholder or 'usee', by which is meant a party who is affected by the IS but is not a participant in it.
Studies of several samples of refereed publications in the IS literature have shown that a very large proportion of research adopts solely one particular perspective - that of 'the system sponsor'. By that term is meant the organisation that develops, implements or adapts a system, process or intervention, or for whose benefit the initiative is undertaken.
The theory advanced in Clarke (2015, 2016) argues firstly that the single-mindedness of IS researchers is frequently harmful to the interests of other stakeholders, but secondly that the interests of system sponsors are also badly-served by such single-perspective research. Higher-quality research will be achieved through greater diversity in single-perspective research, by dual-perspective research, and by multi-perspective research.
The above theory relating to researcher perspective was applied to a Special Issue of the journal Electronic Markets, on 'Personal Data Markets', which was published in Volume 25, Issue 2 (June 2015).
A market is a context in which buyers and sellers discover one another and transact business, and inherently involves at least two participants, but usually considerably more participants and other stakeholders. The digital surveillance economy that has emerged since c. 2000 is a complex web of markets. Moreover, it involves vastly more capture of consumer data than has ever previously been the case, expropriation of that data for a wide variety of purposes by a wide variety of corporations, and its application to narrowcasting of advertisements, behaviour manipulation and micro-pricing. It would therefore appear reasonable to anticipate that projects would adopt varying researcher perspectives.
In order to investigate the researcher perspectives adopted in the papers in the Special Issue, a research method was applied that had been developed and refined in several previous studies, some of them reported in Clarke (2015, 2016). The process specification used is in Annex 1.
One important aspect is the extraction of the Research Question (or in the case of constructivist approaches such as Design Science Research, the Objective). In some papers this is explicit, and in others implied, but in some it needs to be inferred. The most vital part of the study is the identification and interpretation of passages of text that disclose the perspective adopted by the researcher. Again, this may be explicit, but it is more commonly implicit, and in many cases it has to be inferred. In order to enable audit, the process includes the recording of the key passages that led to the interpretations made, and publication as Supplementary Materials of the process specification, key passages, codings and interpretations for each paper.
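To make the audit trail concrete, below is a minimal sketch (in Python) of a per-paper coding record of the kind such a process might maintain. The field names are illustrative assumptions only; the actual protocol is the one specified in Annex 1.

```python
# A per-paper coding record supporting audit, with hypothetical field
# names (the actual protocol is specified in Annex 1).
import csv
from dataclasses import dataclass, asdict

@dataclass
class CodingRecord:
    paper_id: str           # e.g. a DOI or short citation
    research_question: str  # the RQ or, for design research, the Objective
    rq_status: str          # 'explicit' | 'implied' | 'inferred'
    key_passages: str       # verbatim text disclosing the perspective
    perspective: str        # e.g. 'system sponsor', 'user', 'usee'
    interpretation: str     # the coder's reasoning, recorded for audit

def publish_supplementary(records, path="codings.csv"):
    """Write the codings out so that other parties can audit them."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=asdict(records[0]).keys())
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)
```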
The paper was submitted to Electronic Markets in February 2016, went through two rounds of reviews, and was rejected in January 2017. The primary grounds were "[the research article format is not] appropriate, legitimate, or even warranted", "[inadequate] description of the research method used" and "overstated criticism". Each of these was a major surprise, given that copious information was provided about the research method, and critiquing of the existing state of theory is fundamental to any discipline that claims to be scientific.
It was plainly necessary for me to assume hostility on the part of reviewers, step back, and gather the information needed to convey to reviewers the appropriateness of the research method and of criticising prior published works. This led to works on content analysis in its many forms, and on the role of criticism in the IS discipline.
A significant proportion of research involves the appraisal of content previously uttered by other people. This section briefly reviews categories of research technique whose focus is adjacent to the topic addressed in this paper.
Qualitative research techniques such as ethnography, grounded theory and phenomenology involve the disciplined examination of content, but content of a kind materially different to refereed papers. The text may be generated in natural settings (field research), in contrived settings (laboratory experiments), or in a mix of the two settings (e.g. interviews conducted in the subject's workplace). The materials may originate as text, or as communications behaviour in verbal form (speech in interviews that is transcribed into text), as natural non-verbal behaviour ('body-signals'), or as non-verbal, non-textual communications behaviour (such as answering structured questionnaires). In other cases, text that arises in some naturalistic setting is exploited by the researcher. Commonly-used sources of this kind include social media content, electronic messages, and newspaper articles.
The issues arising with analysis of these kinds of content are very different from those associated with the analysis of carefully-considered, formalised content in refereed articles.
A context that is more closely related to the present purpose is the examination of substantial bodies of published research. "Generally three broad categories of literature reviews can be distinguished. Firstly, literature reviews are an integrative part of any research thesis ... Secondly, literature reviews can be an important type of publication in their own right ... However, the most common form of literature review appears as a part of research publications. ... As part of research articles, literature reviews synthesize earlier relevant publications in order to establish the foundation of the contribution made by an article" (Boell & Cecez-Kecmanovic 2014, p.260).
A succinct, although rather negative, description of the approach that was common until c. 2000 is as follows: "Traditional literature reviews ... commonly focus on the range and diversity of primary research using a selective, opportunistic and discursive approach to identifying and interpreting relevant literature (Badger et al., 2000; Davies, 2000). In traditional 'narrative' reviews, there is often no clear audit trail from primary research to the conclusions of the review, and important research may be missing, resulting in biased and misleading findings, and leading to puzzling discrepancies between the findings of different reviews" (Oakley 2003, p.23).
In 2002, the Guest Editors of an MISQ Special Issue expressly set out to drive improvements in literature review techniques in IS. Their declared aim was "to encourage more conceptual structuring of reviews in IS" (Webster & Watson 2002, p.xiv). The Editorial is highly-cited and appears to have had considerable impact on literature reviews published in the IS field.
The conduct and presentation of literature reviews has subsequently been influenced by the 'evidence-based' movement in the health care sector. This adopts a structured approach to the task: "Systematic reviews ... synthesise the findings of many different research studies in a way which is explicit, transparent, replicable, accountable and (potentially) updateable" (Oakley 2003, p.23, emphasis added).
It was subsequently argued within the IS literature that a "rigorous, standardized methodology for conducting a systematic literature review" was still needed within IS (Okoli & Schabram 2010), and the authors proposed the following 8-step guide:
This appears to be the current mainstream in literature reviews in IS, and it provides some relevant guidance for the current project.
The focus in this paper is on the appraisal of published research papers. In some cases, the body of work is large. For example, many researchers have studied all articles (or at least the abstracts of all articles) in large sub-sets of papers. The sampling frame is typically one or more journals, most commonly the (atypical, but leading) 'Basket of 8' IS journals. In other cases, the body of work whose content is analysed is smaller, carefully-selected collections, perhaps as small as a single article, book or journal Issue.
In order to understand approved practices in this field of research, I adopted a two-pronged approach. Firstly, I searched out papers on the research technique. The findings are outlined in this section. In parallel, I identified relevant exemplars. Extracts from 10 such papers are in Annex 3.
Citing Weber (1990), Indulska et al. (2012, p.4) offer this definition:
Content Analysis is the semantic analysis of a body of text, to uncover the presence of strong concepts
A critical aspect of content analysis is that it seeks to classify the text, or specific aspects of the text, into a manageable number of categories. In Hsieh & Shannon (2005), the following definition is adopted (p.1278):
Content Analysis is the interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns
The authors indicate a 7-step process which they attribute to Kaid (1989). See also vom Brocke & Simons (2008):
As with any research technique, all aspects need to be subject to quality controls. Krippendorff (1980), Weber (1990) and Stemler (2001) emphasise steps 3-5 in relation to the coding scheme and its application. They highlight the importance of achieving reliability. Possible approaches include coding by individuals with strong experience in both the review of articles and the subject-matter, parallel coding by multiple individuals, review of individuals' coding by other parties, and publication of both the source materials and the detailed coding sheets, in order to enable audit by other parties.
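As a concrete illustration of one such control, the following sketch computes Cohen's kappa, a standard chance-corrected measure of agreement between two parallel coders. The category labels and codings are hypothetical.

```python
# A minimal reliability check: Cohen's kappa for two parallel coders,
# computed from their category assignments (invented data).
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Example: two coders classifying ten papers by researcher perspective
a = ["sponsor", "sponsor", "user", "usee", "sponsor", "user",
     "sponsor", "sponsor", "usee", "user"]
b = ["sponsor", "user", "user", "usee", "sponsor", "user",
     "sponsor", "sponsor", "sponsor", "user"]
print(f"kappa = {cohens_kappa(a, b):.2f}")   # 0.67 for this toy data
```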
Content analysis techniques exhibit varying degrees of structure and rigour, from impressionistic to systematic, and they may involve qualitative and/or quantitative assessment elements. Quantitative data may be on any of several scales: nominal, ordinal, cardinal or ratio. Data collected on higher-level scales, especially on a ratio scale, is able to be subjected to more powerful inferencing techniques. Qualitative data, on the other hand, may be gathered on a nominal scale (whereby differences are distinguished, but no ordering is implied) or on an ordinal scale (such as 'unimportant', 'important', 'very important').
Quantification generally involves measurement, most fundamentally by counting - which raises questions about the arbitrariness of boundaries, and about configuration and calibration of the measuring instrument(s). Some research methods involve sleight of hand, most commonly by making the largely unjustified assumption that 'Likert-scale' data is not merely ordinal, but is cardinal (i.e. the spaces between the successive terms are identical), and even ratio (i.e. the scale also features a natural zero).
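The point can be made concrete with a small worked example: the mean of Likert codes depends on the (arbitrary) assumption of equal spacing between labels, whereas an ordinal summary such as the median label does not. The labels and responses below are purely illustrative.

```python
# A small illustration (labels and responses are invented): the mean of
# Likert codes presupposes equal spacing between categories, whereas an
# ordinal summary such as the median label needs only their ordering.
from statistics import mean

labels = ["unimportant", "important", "very important"]
responses = ["unimportant", "important", "important",
             "very important", "very important", "very important"]

# Cardinal treatment: assign equal-spaced codes 1, 2, 3 ...
equal = [labels.index(r) + 1 for r in responses]
# ... but any other order-preserving coding is equally defensible:
stretched = [[1, 2, 10][labels.index(r)] for r in responses]
print(mean(equal), mean(stretched))  # 2.33 vs 5.83: coding-dependent

# Ordinal treatment: the median label is invariant under monotone recoding.
ranked = sorted(responses, key=labels.index)
print(ranked[len(ranked) // 2])      # 'very important'
```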
Many authors implicitly equate quantification with rigour, and qualitative data with subjectivity. They accordingly deprecate qualitative analysis, or at least relegate it to pre-theoretical research, which by implication should be less common than research driven by strong theories. The majority of authors spend only limited time considering the extent to which the assumptions and the processes underlying the act of quantification may be arbitrary or themselves 'subjective'. Positivism embodies an implicit assumption that computational analysis necessarily leads to deep truth. The assumption needs to be tested in each particular circumstance, yet such testing is seldom evident.
A positivist approach to categorising content analysis "along a continuum of quantification" distinguishes "narrative reviews, descriptive reviews, vote counting, and meta-analysis" (King & He 2005, p.666):
King & He's categorisation is helpful, but it involves a switch from largely textual source-materials in the first three categories to wholly quantitative source-materials in the fourth.
More usefully still, three approaches are distinguished by Hsieh & Shannon (2005). These are examined in the following sub-sections.
Conventional Content Analysis

In this approach, "coding categories are derived directly from the text data". The approach is effective when used "to describe a phenomenon [particularly] when existing theory or research literature on a phenomenon is limited" (p.1279). In such preliminary research, it is normal to allow "the categories and names for categories to flow from the data".
Hsieh & Shannon suggest that only selected text is examined (although that appears not necessarily to be the case), and that the context may not be well-defined. The external validity of conclusions arising from this approach may therefore be limited. They conclude that the technique is more suited to concept development and model-building than to theory development. Depending on the degree of generality of the conclusions claimed by the author, full disclosure of the text selection, coding and inferencing procedures may be merely desirable or vital.
Directed Content Analysis

In this case, "analysis starts with a theory or relevant research findings ... to help focus the research question ... and as guidance for [establishing and defining] initial codes" (pp. 1277, 1281).
Segments of the text that are relevant to the research question are identified, and then coded. To the extent that the declared or inferred content of the text does not fit well to the predefined categories, there may be a need to consider possible revisions of the coding scheme, or even of the theory on which the research design was based.
It may be feasible to draw inferences based on counts of the occurrences of categories and/or on the intensity of the statements in the text, such as the confidence inherent in the author's choice of language (e.g. "this shows that" cf. "a possible explanation is that").
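As a toy illustration of such an intensity measure, the sketch below counts confident versus hedged phrasings in a coded segment. The marker lists are illustrative assumptions, not a validated instrument.

```python
# A toy intensity measure for directed content analysis: counting
# high-confidence versus hedged phrasings in coded text segments.
# The marker phrases are illustrative assumptions, not a validated list.
HIGH = ["this shows that", "demonstrates that", "proves that"]
HEDGED = ["a possible explanation is that", "may suggest", "it appears that"]

def intensity(segment):
    """Return (high, hedged) marker counts for one coded segment."""
    s = segment.lower()
    return (sum(s.count(m) for m in HIGH),
            sum(s.count(m) for m in HEDGED))

print(intensity("This shows that sponsors dominate; it appears that "
                "user interests may suggest a different design."))
# (1, 2)
```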
As with any theory-driven research, the evidence extracted from the text may have a self-fulfilling-prophecy quality about it, i.e. there is an inevitable tendency to find more evidence in support of a theory than in conflict with it, and contextual factors may be overlooked. In order to enable auditability, it is important that not only the analysis be published, but also the raw material and the coding scheme.
This "involves counting and comparisons, usually of keywords or content, followed by the interpretation of the underlying context" (p.1277). The first step is to explore usage, by "identifying and quantifying certain words or content in text with the purpose of understanding the contextual use of the words or content" (p.1283).
Because of the complexity and variability of language use, and the ambiguity of a large proportion of words and phrases, a naive approach to counting words is problematic. At the very least, a starting-set of terms needs to be established and justified. A thesaurus of synonyms and perhaps antonyms and qualifiers is needed. Allowance must be made for both manifest or literal meanings, on the one hand, and latent, implied or interpreted meanings, on the other. Counts may be made not only of the occurrences of terms, but also of the mode of usage (e.g. active versus passive voice, dis/approval indicators, associations made).
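A minimal sketch of these mechanics follows: counting canonical terms via a small synonym thesaurus, capturing only manifest (literal) occurrences. The term-set is purely illustrative; a real study would need to justify its starting-set and handle latent meanings separately.

```python
# Summative term counting with a starting-set of terms and a small
# synonym thesaurus; both term lists are illustrative only, and only
# manifest (literal) occurrences are counted.
import re
from collections import Counter

thesaurus = {
    "consumer": "consumer", "customer": "consumer", "user": "consumer",
    "firm": "organisation", "company": "organisation",
    "organisation": "organisation", "organization": "organisation",
}

def summative_counts(text):
    """Count occurrences of canonical terms, mapping synonyms together."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(thesaurus[t] for t in tokens if t in thesaurus)

sample = ("The firm captures customer data; each user, or consumer, "
          "transacts with the organization.")
print(summative_counts(sample))
# Counter({'consumer': 3, 'organisation': 2})
```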
The degree of analytical rigour that quantification can actually deliver depends a great deal on a number of factors. Critical among them are:
Publication of details of text selection and the analytical process is in all cases important, and essential where a degree of rigour and external validity is claimed.
A decade later, it is useful to break out a fourth approach from Hsieh & Shannon's third category.

Computational Content Analysis
This approach obviates manual coding by performing the coding programmatically. This enables much larger volumes of text to be analysed. The coding scheme may be defined manually, cf. directed content analysis / a priori coding. However, some techniques involve purely computational approaches to establishing the categories, cf. 'machine-intelligent' (rather than human-intelligent) emergent coding. The processing depends, however, on prior data selection, data scrubbing and data-formatting. In addition, interpretation of the results involves at least some degree of human activity.
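As a sketch of the purely computational variant, the following uses topic modelling (latent Dirichlet allocation, via scikit-learn) to let categories 'emerge' from a toy corpus. The documents and parameter values are placeholders, and interpretation of the resulting topics still falls to the human analyst.

```python
# Categories 'emerge' computationally rather than from a human coder.
# The corpus and parameter values are placeholders only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "personal data markets and consumer privacy",
    "platform business models for data monetisation",
    "privacy risks of behavioural advertising",
    "market design for trading personal information",
]

# Prior data selection, scrubbing and formatting (here: bag-of-words)
vectoriser = CountVectorizer(stop_words="english")
dtm = vectoriser.fit_transform(abstracts)

# Purely computational establishment of the categories
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Interpretation of the results still involves human activity
terms = vectoriser.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```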
In Indulska et al. (2012, p.4), a distinction is made between:
Debortoli et al. (2016), on the other hand, distinguish three alternative approaches:
Given that the 'big data analytics' movement is highly fashionable, vast volumes of data are available, and there is a comfort factor involved in office-based work much of which is automated, it would appear reasonable to anticipate that Quantitative Computational Content Analysis techniques will be a growth-area in the coming few years - at least until their limitations are better appreciated (Clarke 2016a, 2016c).
Content analysis is accepted as a research technique within the IS discipline, but its use has been somewhat limited. For example, in a survey of the papers published in six leading IS journals during the 1990s, Mingers (2003) found that the use of content analysis as a research technique was evident in only four of the journals, and even in those four in only 1-3% of all papers published during that time.
In February 2017, of the nearly 15,000 refereed papers indexed in the AIS electronic library, 13 had the term 'content analysis' in the title, and 69 in the Abstract. Annex 3 presents 10 instances which together provide an indication of the range of applications and approaches. A total of 770 papers of the 15,000 contained the term - c. 5%. This is, however, subject to over-inclusiveness (e.g. where the technique is merely mentioned in passing, where the term is used in a manner different from that applied in this paper, and where the technique is applied to interview transcripts rather than to published papers). It is also subject to under-inclusiveness (e.g. where some other term is used for essentially the same technique). In recently-published papers, the most common forms of text that have been subjected to content analysis appear to be social media and other message content, with other categories including newspaper articles and corporations' 'letters to shareholders'.
The literature relating to the above four categories of content analysis provides a considerable amount of information relevant to the current project. However, there is a dimension of the project that is not addressed by these techniques, and guidance needed to be sought elsewhere.
The previous sections have considered the analysis of content. The other area in which further insight was sought relates to the purpose for which the analysis is undertaken.
In some cases, the purpose of undertaking content analysis may be simply exposition, that is to say the identification, extraction and summarisation of content, without any significant degree of evaluation. There are benefits in undertaking content analysis in a positive frame of mind, assuming that all that has to be done is to present existing information in brief and readily-accessible form (as indeed much of the present paper does).
Alternatively, the researcher can bring a questioning and even sceptical attitude to the work. Is it reasonable, for example, to assume that all relevant published literature is of high quality? That the measurement instruments and research techniques have always been good, well-understood by researchers, and appropriately applied? That there have been no material changes in the relevant phenomena? That there have been no material changes in the intellectual contexts within which research is undertaken?
Criticism is the analysis of the merits and faults of a work. The word can be applied to the process (the sequence of actions) or the product (the expression of the analysis and the conclusions reached). There are also common usages of the term 'criticism' in a pejorative sense, implying that the critic is finding fault, is being destructive rather than constructive, and is failing to propose improvements to sustain the merits and overcome the faults. The term 'critique' is sometimes substituted, in an endeavour to avoid the negative impressions, to indicate that the work is systematic, and to bring focus to bear on the contribution being made by both the criticism and the work that is being subjected to it.
Criticism plays a vital role in scientific process. The conventional Popperian position is that the criterion for recognising a scientific theory is that it deals in statements that are empirically falsifiable, and that progress depends on scrutiny of theories and attempts to demonstrate falsity of theoretical statements: "The scientific tradition ... passes on a critical attitude towards [its theories]. The theories are passed on, not as dogmas, but rather with the challenge to discuss them and improve upon them" (Popper 1963, p.50).
However, senior members of a discipline commonly behave in ways that are not consistent with the Popperian position. This might be explained by the postulates of 'normal science', which view the vast majority of research work as being conducted within a 'paradigm' and subject to its conventions (Kuhn 1962). In more practical terms, the problem may arise because senior members of any discipline have strong psychic investment in the status quo, and - no matter how cogent and important the argument - react negatively against revolutionary propositions. Sharply-worded criticisms appear to be more likely to be published if they are uttered by a senior about a contrarian idea, whereas they seem more likely to be deplored when they are made by an outsider about the contemporary wisdom.
Two examples are commonly cited within the IS discipline as suggesting that conservatism is important and criticism is unwelcome. In a section on the tone to be adopted in a Literature Review, Webster & Watson (2002) recommended that "A successful literature review constructively informs the reader about what has been learned. In contrast to specific and critical reviews of individual papers, tell the reader what patterns you are seeing in the literature" (p.xviii, emphasis added). The recommendation to concentrate on 'patterns in the literature' is valuable, because it emphasises that the individual works are elements of a whole. On the other hand, the use of 'in contrast to' is, I contend, an overstatement. To make assertions about a population without providing sufficient detail about the individual instances invites reviewers to dismiss the analysis as being methodologically unsound. It is, in any case, essential to progress in the discipline that each of us be prepared to accept criticism.
The advice continued: "Do not fall into the trap of being overly critical ... If a research stream has a common 'error' that must be rectified in future research, you will need to point this out in order to move the field forward. In general, though, be fault tolerant. Recognize that knowledge is accumulated slowly in a piecemeal fashion and that we all make compromises in our research, even when writing a review article" (p.xviii, emphasis added). Here, the authors' expression failed to distinguish between the two senses of the word 'critical'. The authors' intention appears to me to have been to warn against 'overly critical expression'. On the other hand, it is an obligation of researchers to 'think critically' and to 'apply their critical faculties'. I submit that it would be inappropriate for readers of the article to interpret the quotation as valuing politeness among researchers more highly than scientific insight and progress.
In the second example, a senior journal editor, providing advice on how to get published in top journals, wrote that "the authors' contributions should be stated as gaps or new perspectives and not as a fundamental challenge to the thinking of previous researchers. To reframe, papers should be in apposition [the positioning of things side by side or close together] rather than in opposition" (Straub 2009, p.viii, emphasis added). This is Machiavellian advice, in the positive, or at least amoral, sense of 'if the Prince wishes to be published in top journals, then ...'. Unfortunately, it is all-too-easily interpreted as expressing a moral judgement that 'criticism is a bad thing'.
The inferences that I draw from the above analysis are as follows:
Positivism and interpretivism are well-established schools of research in IS. They have been joined by design science. And they have an odd bedfellow, in the form of what is variously termed 'critical research' and 'critical theory research'. The term 'critical' in this context is different from, but related to, the sense of 'analysis of the merits and faults of a work' discussed in the previous section.
Design research is concerned with constructing an artefact, variously of a technological or an intellectual nature. Both positivism and interpretivism, on the other hand, are concerned with description and understanding of phenomena. Sometimes the focus is on natural phenomena, but frequently the interest is in natural phenomena that have been subjected to an intervention. Importantly for the present project, however, both positivism and interpretivism involve strenuous avoidance of moral judgements and of 'having an agenda'.
Critical theory research, on the other hand, recognises the effects of power and the tendency of some stakeholders' interests to dominate those of other stakeholders. It brings to light "the restrictive and alienating conditions of the status quo" and expressly sets out to "eliminate the causes of alienation and domination" (Myers 1997). "Critical research generally aims to disrupt ongoing social reality for the sake of providing impulses to the liberation from or resistance to what dominates and leads to constraints in human decision-making. Typically critical studies put a particular object of study in a wider cultural, economic and political context, relating a focused phenomenon to sources of broader asymmetrical relations in society ..." (Alvesson & Deetz 2000, p.1). "Critical IS research specifically opposes technological determinism and instrumental rationality underlying IS development and seeks emancipation from unrecognised forms of domination and control enabled or supported by information systems" (Cecez-Kecmanovic 2005, p.19).
In Myers & Klein (2011), three elements of critical research are identified: insight, critique and transformation.
Appropriate approaches to critical theory research are highly inter-related with the subject-matter, and hence theorists of critical research method avoid offering a recipe or even a process diagram. Myers & Klein (2011) do, however, offer guidance in the form of Principles for Critical Research (pp.24-29):
The Element of Critique
The Element of Transformation
The original theoretical work on 'researcher perspective', on which my current paper is based, is appropriately framed within a critical theory research design. The paper whose rejection stimulated these Notes, on the other hand, uses content analysis to apply that theory to a set of papers in a new and potentially very important research domain. The notions discussed in this section are therefore of general relevance to the establishment of a satisfactory content analysis research design, but do not directly address the issues that I am confronting.
Although the references discussed above are of relevance to the problem, they fell short of the need. Two particular sources appeared to provide an appropriate foundation for content analysis of the kind that my research project undertakes. One is an approach to literature review, and the other an approach to content analysis that the authors in question refer to as 'Critical Discourse Analysis'.
It has been argued that the emphasis on 'systematic' literature reviews noted in s.4.3 above "suppresses aspects of quality in research and scholarship that are at least as important as clarity, countability and accountability - such as intertextual connectivity, critique, interest, expertise, independence, tacit knowledge, chance encounters with new ideas, and dialogic interactions between researcher, 'literature' and 'data'" (MacLure 2005, p.394).
In Boell & Cecez-Kecmanovic (2014) it is argued that a constructively loose and iterative process is needed, to avoid undue constraints and unlock insight and creativity: "Highly structured approaches downplay the importance of reading and dialogical interaction between the literature and the researcher; continuing interpretation and questioning; critical assessment and imagination; argument development and writing - all highly intellectual and creative activities, seeking originality rather than replicability [MacLure, 2005, Hart, 1998]" (p.258, emphasis added).
The authors "propose hermeneutic philosophy as a theoretical foundation and a methodological approach for studying literature reviews as inherently interpretive processes in which a reader engages in ever exp[a]nding and deepening understanding of a relevant body of literature. Hermeneutics does not assume that correct or ultimate understanding can be achieved, but instead is interested in the process of developing understanding" (p.259). The framework, reproduced in Figure 1, comprises two intertwined cycles: a search and acquisition circle, and a wider analysis and interpretation circle (p.263).
The authors perceive the mapping and classification of literature as being "a creative process that builds on a deeper understanding of the body of literature achieved through analytical reading. This process may lead to new questions and identify new relevant publications to be included in the body of knowledge" (p.267). The approach embodies "questioning and critical assessment ... of previous research" (p.258), and analysis of "connections and disconnections, explicit or hidden contradictions, and missing explanations" and thereby the identification or construction of "white spots or gaps" (p.267, emphasis added).
"A critical assessment of the body of literature ... demonstrates that literature is incomplete, that certain aspects/phenomena are overlooked, that research results are inconclusive or contradictory, and that knowledge related to the targeted problem is in some ways inadequate [Alvesson and S[an]dberg, 2011]. Critical assessment, in other words, not only reveals but also, and more importantly, challenges the horizon of possible meanings and understanding of the problem and the established body of knowledge" (p.267).
Wall et al. (2015) propose an approach to content analysis that the authors refer to as 'Critical Discourse Analysis'. Their starting-point is that "the information systems (IS) discipline is subject to ideological hegemony" (p.258). They see this as being harmful, and they argue that "review papers can ... challenge ideological assumptions by critically assessing taken-for-granted assumptions" (p.257).
They explain the idea of 'ideological hegemony' as being "the conscious or unconscious domination of the thought patterns and worldviews of a discipline or subdiscipline that become ingrained in the epistemological beliefs and theoretical assumptions embedded in scientific discourse (Fleck, 1979; Foucault, 1970; Kuhn, 2012). In academic literature, a hegemony may manifest as common framing of research topics and research questions, the domination of theories and research methods that carry similar assumptions, common beliefs about what constitutes the acceptable application of research methods, and common beliefs about how research results should be interpreted.
"By ideology, we mean those aspects of a worldview that are often taken for granted and that disadvantage some and advantage others. Ideologies are not falsehoods in an empirical sense, but are a constitutive part of researchers' and research communities' worldview ... that are removed from scrutiny (Freeden, 2003; Hawkes, 2003). Thus, ideologies can be harmful to individuals who are disadvantaged or marginalized by them, and they can be problematic to scientific research because they represent blind spots" (p.258, emphases added).
Wall et al. propose that a critical review method "based on Habermasian strains of critical discourse analysis (CDA) (Cukier, Ngwenyama, Bauer, & Middleton, 2009; Habermas, 1984)" (p.259) can overcome the limitations of working only within ideological assumptions. CDA "examines more than just a communicative utterance. Foucauldian analysis also examines the context in which an utterance was uttered by assessing power relationships between actors and the structures and processes that guide behavior and constrain the development of knowledge (Kelly, 1994; Stahl, 2008)" (p.261, emphasis added).
The process involves the assessment of "violations of four validity claims" (p.261): claims to truth, legitimacy, sincerity and comprehensibility.
The authors identify four principles (pp.263-4):
They propose a seven-step process (pp. 265-9):
The hermeneutic approach to literature review and the CDA approach to content analysis, overlaid on the prior literature, enable the design of a content analysis research method with the desired attributes. That research method has a good fit with theory developed using critical theory research. It prioritises depth of insight over narrow, positivist quantification. It encourages the analyst to focus on key validity claims and the hidden assumptions within the text under study. It forces the researcher to confront, and to take into account, their own ideology and agenda. It pushes the researcher in the direction of critique for the purposes of theory construction or re-construction, rather than criticism for its own sake.
The preceding sections provide a basis for adapting the research method for my research project. The most significant implications for my work were as follows:
The adapted process specification is in Annex 2.
This paper has reported on the results of a study of meta-questions affecting a content analysis project. A range of guidance has been located and summarised in relation to secondary research whose raw data is published academic papers. The role of criticism (or critiquing) in IS research has been clarified. The particular challenge has been confronted of how to perform content analysis when the theory-lens through which the observation is being performed arises from critical theory research.
The primary purpose of the work has been fulfilled, in that the process specification for the analysis of the relevant papers has been adapted in order to better reflect existing theory relating to content analysis of published works, particularly in a critical theory context. Further, a set of changes to the research method section has been identified, which have implications for the interpretation of the papers and the expression of the critique. In addition, guidance has been assembled on how to, and how not to, communicate the results.
This paper has implications for IS researchers generally. Much of the material that has been summarised applies to all content analysis of published papers, no matter whether the research approach adopted is positivist, interpretivist, design science or critical theory. A small qualification is appropriate, in that the majority of the material relates to research that goes beyond mere exposition of existing literature and is at least modestly questioning about that literature's quality and/or continuing relevance.
This paper goes further, however, in that it contains guidance in relation to constructive criticism of existing works. I contend that IS will become increasingly static, and its outputs will be decreasingly valuable, if it values politeness to authors too highly and puts too little emphasis on constructive criticism of existing literature. The method adopted includes proposals about how a researcher can detect and avoid excessively sharp expression, focus the discussion on the message, avoid shooting the original messenger, and in turn avoid being shot themselves.
Alvesson M. & Deetz S. (2000) 'Doing Critical Management Research' Sage, 2000
Alvesson M. & Sandberg J. (2011) 'Generating Research Questions Through Problematization' Academy of Management Review, 36, 2 (2011) 247-271
Boell S.K. & Cecez-Kecmanovic D. (2014) 'A Hermeneutic Approach for Conducting Literature Reviews and Literature Searches' Communications of the Association for Information Systems 34, 12, at http://tutor.nmmu.ac.za/mgerber/Documents/ResMeth_Boell_2014_Literature%20Reviews.pdf
vom Brocke J. & Simons A. (2008) 'Towards a Process Model for Digital Content Analysis - The Case of Hilti' BLED 2008 Proceedings. Paper 2, http://aisel.aisnet.org/bled2008/2
Cecez-Kecmanovic D. (2001) 'Doing Critical IS Research: The Question of Methodology' Ch. VI in 'Qualitative Research in IS: Issues and Trends' (ed. Trauth E.M.), pp. 141-163, Idea Group Publishing, 2001, at https://pdfs.semanticscholar.org/37b1/e4c060b93fcfa04d81f03b750e746ba42f2d.pdf
Cecez-Kecmanovic D. (2005) 'Basic assumptions of the critical research perspectives in information systems' Ch. 2 in Howcroft D. & Trauth E.M. (eds) (2005) 'Handbook of Critical Information Systems Research: Theory and Application', pp.19-27, Edward Elgar, 2005
Clarke R. (2015) 'Not Only Horses Wear Blinkers: The Missing Perspectives in IS Research' Keynote Presentation, Proc. Austral. Conf. in Infor. Syst. (ACIS 2015), at https://arxiv.org/pdf/1611.04059, Adelaide, December 2015, PrePrint at http://www.rogerclarke.com/SOS/ACIS15.html
Clarke R. (2016a) 'Big Data, Big Risks' Information Systems Journal 26, 1 (January 2016) 77-90, PrePrint at http://www.rogerclarke.com/EC/BDBR.html
Clarke R. (2016b) 'An Empirical Assessment of Researcher Perspectives' Proc. Bled eConf., Slovenia, June 2016, at http://www.rogerclarke.com/SOS/BledP.html
Clarke R. (2016c) 'Quality Assurance for Security Applications of Big Data' Proc. EISIC'16, Uppsala, 17-19 August 2016, PrePrint at http://www.rogerclarke.com/EC/BDQAS.html
Debortoli S., Müller O., Junglas I. & vom Brocke J. (2016) 'Text Mining For Information Systems Researchers: An Annotated Topic Modeling Tutorial' Communications of the Association for Information Systems 39, 7, 2016
Hsieh H.-F. & Shannon S.E. (2005) 'Three Approaches to Qualitative Content Analysis' Qualitative Health Research 15, 9 (November 2005) 1277-1288, at http://www33.homepage.villanova.edu/edward.fierros/pdf/Hsieh%20Shannon.pdf
Indulska M., Hovorka D.S. & Recker J.C. (2012) 'Quantitative approaches to content analysis: Identifying conceptual drift across publication outlets' European Journal of Information Systems 21, 1, 49-69, at http://eprints.qut.edu.au/47974/
Kaid L.L. (1989) 'Content analysis' In Emmert P. & Barker L.L. (Eds.) 'Measurement of communication behavior', pp. 197-217, Longman, 1989
King W.R. & He J. (2005) 'Understanding the Role and Methods of Meta-Analysis in IS Research' Communications of the Association for Information Systems 16, 32
Krippendorff K. (1980) 'Content Analysis: An Introduction to Its Methodology' Sage, 1980
Kuhn T.S. (1962) 'The Structure of Scientific Revolutions' University of Chicago Press, 1962
MacLure M. (2005) ''Clarity bordering on stupidity': where's the quality in systematic review?' Journal of Education Policy 20, 4 (2005) 393-416, at http://www.esri.mmu.ac.uk/respapers/papers-pdf/Paper-Clarity%20bordering%20on%20stupidity.pdf
Mingers J. (2003) 'The paucity of multimethod research: a review of the information systems literature' Information Systems Journal 13, 3 (2003) 233-249
Myers M.D. (1997) 'Qualitative research in information systems' MISQ Discovery, June 1997, at http://www.academia.edu/download/11137785/qualitative%20research%20in%20information%20systems.pdf
Myers M.D. & Klein H.K. (2011) 'A Set Of Principles For Conducting Critical Research In Information Systems' MIS Quarterly 35, 1 (March 2011) 17-36, at https://pdfs.semanticscholar.org/2ecd/cb21ad740753576215ec393e499b1af12b25.pdf
Oakley A. (2003) 'Research evidence, knowledge management and educational practice: early lessons from a systematic approach' London Review of Education 1, 1 (2003) 21-33, at http://www.ingentaconnect.com/contentone/ioep/clre/2003/00000001/00000001/art00004?crawler=true&mimetype=application/pdf
Okoli C. & Schabram K. (2010) 'A Guide to Conducting a Systematic Literature Review of Information Systems Research' Sprouts: Working Papers on Information Systems, 10, 26 (2010), at http://sprouts.aisnet.org/10-26
Popper K. (1963) 'Conjectures and Refutations: The Growth of Scientific Knowledge' Harper & Row, 1963
Stemler S. (2001) 'An overview of content analysis' Practical Assessment, Research & Evaluation 7, 17 (2001), at http://PAREonline.net/getvn.asp?v=7&n=17
Straub D. W. (2009) 'Editor's comments: Why top journals accept your paper' MIS Quarterly, 33, 3 (September 2009) iii-x, at http://misq.org/misq/downloads/download/editorial/3/
Wall J.D., Stahl B.C. & Salam A.F. (2015) 'Critical Discourse Analysis as a Review Methodology: An Empirical Example' Communications of the Association for Information Systems 37, 11 (2015)
Webster J. & Watson R.T. (2002) 'Analyzing The Past To Prepare For The Future: Writing A Literature Review' MIS Quarterly 26, 2 (June 2002) xiii-xxiii, at http://intranet.business-science-institute.com/pluginfile.php/247/course/summary/Webster%20%20Watson.pdf
Weber R.P. (1990) 'Basic Content Analysis' Sage, 1990
Gallivan M.J. (2001) 'Striking a balance between trust and control in a virtual organization: a content analysis of open source software case studies' Information Systems Journal 11, 4 (October 2001) 277-304, at http://heim.ifi.uio.no/~jensj/INF5700/TrustControlVirtualorganization.pdf
"I employed content analysis, that is techniques for making replicable and valid inferences from data to their context. I used a traditional form of content analysis, whereby I approached the data with a predefined set of content variables and searched for passages that embodied these themes (Carney, 1972; Andren, 1981)" (p.289).
"Appendix A contains the detailed results of the content analysis, showing the relevant, coded passages for each of the five constructs discussed above. I have used direct quotes from the original document in these results, paraphrasing only where necessary to make explicit some information assumed in the original source material or to summarize material that would have been too lengthy to quote directly in the table. This table is structured first by the source and, within each source, according to the five key themes (efficiency, predictability, calculability, control and trust)" (p.291).
"SUPPLEMENTARY MATERIAL
The following material is available
from
http://www.blackwell-science.com/products/journals/suppmat/isj/isj108/is108sm.htm
Appendix A: Content analysis of open source software case studies"
(p.300).
(However, in February 2017, the domain-name no longer existed).
Rivard S. & Lapointe L. (2012) 'Information Technology Implementers' Responses to User Resistance: Nature And Effects' MIS Quarterly 36, 3 (September 2012) 897-920
"When a response category was not a sufficient condition, we drew on qualitative content analysis of the cases to identify additional candidate conditions that, combined with the implementers' response, would form a configuration of conditions sufficient for a given outcome. Through careful coding and examination of the data, content analysis allows inferences to be made from data to their context in order to provide new knowledge and insights (Hsieh and Shannon 2005). This helped us further capitalize on the richness of the case survey material (Reuver et al. 2009). Here again, we relied on consensus to resolve any inter-coder discrepancies" (p. 902).
Arnott D. & Pervan G. (2012) 'Design Science in Decision Support Systems Research: An Assessment using the Hevner, March, Park, and Ram Guidelines' Journal of the Association for Information Systems 13, 11 (November 2012) 923-949
"Content analysis involves the coding and analysis of a representative sample of research articles. In this approach, data capture is driven by a protocol that can have both quantitative and qualitative aspects. This form of data capture is labour intensive but has the advantage that it can illuminate the deep structure of the field in a way that is impossible to achieve with other literature analysis approaches" (p.926)
"In general IS research, content analysis has been used by Alavi and Carlson (1992) in their analysis of management information system's intellectual evolution, by Farhoomand and Dury (1999) in what they termed an "historiographical" examination of IS research, and by Chen and Hirschheim (2004) in their paradigmatic and methodological examination of IS research from 1991 to 2001.
"In specific segments of IS research, Guo and Sheffield (2008) used content analysis to examine knowledge management research, while Palvia, Pinjani, and Sibley (2007) analyzed all articles published in Information & Management. In DSS literature analysis, Arnott and Pervan (2005, 2008) used content analysis in overall reviews of the field, while Benbasat and Nault (1990) used content analysis to critically assess empirical DSS research. Fjermestad and Hiltz followed this approach to analyze group support systems research both in the laboratory (Fjermestad & Hiltz, 1998/1999) and in the field (Fjermestad & Hiltz, 2000/2001). Pervan (1998) used content analysis in a general review of GSSs research.
"Following this tradition, the research in this paper adopted a content-analysis method to help understand the nature of DSS design-science research and to assess its strengths and weaknesses (p.926).
"We coded each of the 1,167 articles using the Alavi and Carlson (1992) taxonomy as modified by Pervan (1998) to include action research and to distinguish between positivist and interpretive case studies. Table 2 shows the result of this coding.
"Both researche[r]s inspected the articles from the article types "tools", "techniques", "methods", "model applications", "conceptual frameworks and their application", "description of type or class of product"; "technology, systems, etc", "description of specific application", "system, etc", and "action research", to determine whether they met Hevner et al.'s (2004) design-science research definition. In particular, we inspected each paper for a focus on an innovative artifact instead of providing a description of an existing commercial product.
"This yielded a DSS design-science research sample of 362 articles. A list of
the articles in the sample is available at
http:dsslab.infotech.monash.edu.au/index.php/projects/dss-foundations.
This
sample shows the importance of design-science research because it is the
primary strategy of 31 percent of DSS articles" (p.928).
[In February 2017, the link is broken.]
"We coded the 362 DSS design-science research articles using the protocol that Appendix A shows. We based the protocol on the guidelines proposed by Hevner et al. (2004). The time taken to code each article varied from 20 minutes to over one hour. To ensure coding validity, both researchers coded each paper, with disagreements in coding discussed and resolved. This approach has been used in prior studies (e.g., Eierman, Niederman, & Adams, 1995). It was important to keep re-reading Hevner et al. (2004) during the coding process in order to remain calibrated to their definitions, implied constructs, and meanings. An important aspect of coding validity is that the two researchers have decades of experience in the DSS area, are experienced journal reviewers and editors, and have published DSS design-science research projects" (p.929).
Weigel F.K., Rainer R.K., Hazen B.T., Cegielski C.G. & Ford F.N. (2013) 'Uncovering Research Opportunities in the Medical Informatics Field: A Quantitative Content Analysis' Communications of the Association for Information Systems 33, 2 (2013)
"In this study, we conduct a systematic screening of the major academic sources of medical informatics literature in order to establish a baseline perspective of the extant research. The purpose of this article is to motivate cross-disciplinary scholarship in healthcare by creating a typology of topics that might be used as the basis for future investigation." (p.16).
"Content analysis has been used in past research aimed at creating typologies or examining research directions [Tangpong, Michalisin and Melcher, 2008; Zhao, Flynn and Roth, 2007]. One of the key concepts behind content analysis is that large bodies of text are grouped into a relatively small number of categories based on some criteria so that the large bodies of text can be managed and understood. We synthesized the procedures outlined in content analysis methods literature to establish the content analysis process we performed [Corman, Kuhn, McPhee and Dooley, 2002; Krippendorff, 2004; Neuendorf, 2002]. In this section, we describe each step of this procedure" (p.18)
The content was limited to "noun phrases" in c. 2,000 article abstracts from 4 health profession and 3 IS journals. The analysis used 'centering resonance analysis', which identifies the 'centers' - words or noun phrases that form the subjects of the discussion. This identified 17,000 nodes, which were then reduced to 10 themes.
Clarke R. & Pucihar A. (2013) 'Electronic Interaction Research 1988-2012 through the Lens of the Bled eConference' Electronic Markets 23, 4 (December 2013) 271-283, PrePrint at http://www.rogerclarke.com/EC/EIRes-Bled25.html
"This paper, and the preparatory analysis reported in Clarke (2012a), needed a means whereby some structure could be imposed on the large amount of material within the scope of the review. A small number of classification schemes have been previously published that were of potential relevance to this study, e.g. Barki & Rivard (1993) and Galliers & Whitley (2002, 2007) re the field of Information Systems (IS) generally, Elgarah et al. (2005) and Narayanan et al. (2009) both re EDI adoption, Ngai & Wat (2002) re eCommerce, and Titah & Barki (2006) re eGovernment. However none was found that addressed the electronic interaction field both in a comprehensive manner and at an appropriate level of detail.
"A classification scheme was developed through visual inspection of the titles of all [824] papers in the Bled eConference Proceedings during the refereed period 1995-2011, identifying 63 keywords with 972 mentions, and then clustering them on the basis of the first-named author's familiarity with the subject-matter and his particular world-view. This gave rise to 34 keyword-clusters within 3 major groups. Details are in Appendix 6 to Clarke (2012a). No authoritative basis for the clustering is, or can be, claimed".
Niemimaa M. (2015) 'Interdisciplinary Review of Business Continuity from an Information Systems Perspective: Toward an Integrative Framework' Communications of the Association for Information Systems 37, 4 (2015)
"In this paper, I use a narrative review with some descriptive elements (King & He, 2005). King and He view narrative and descriptive reviews along a continuum of different types of approaches to analyzing past research. The narrative approaches present "verbal descriptions of past studies" that are "of great heuristic value, and serve to postulate and advance new theories and models...and direct further development in a research domain" (p. 667). The descriptive approaches:
"introduce some quantification" and "often involves a systematic search of as many relevant papers in an investigated area, and codes each selected paper on certain characteristics, such as publication time, research methodology, main approach, grounded theory, and symbolic research outcomes (e.g., positive, negative, or non-significant)" (p. 667).
"I chose the narrative approach because I sought to integrate past contributions to an IS continuity framework and direct further developments of the topic (King & He, 2005). Thus, I present the paper's contributions (see Section 4) and the elements of the integrative framework (see Section 5) in a more elaborate fashion (in narrative-like format) than is typical for descriptive reviews. Accordingly, I illustrate the previous studies' main themes and the related elements of the integrative framework with interesting examples instead of systematically listing all studies under each result category. Further, to present the distribution of research approaches in the IS continuity literature, to present the distribution of papers per theme, and to indicate where and when the most research efforts have been made, I include quantifications that are typical for descriptive studies" (p.74).
"To thematize the papers, I classified the contributions into themes to identify the most common themes across contributions, which I did by avoiding predefined categories. Instead, the themes emerged from the papers themselves (Bacon & Fitzgerald, 2001). Allowing the themes to emerge was necessary because no predefined categories existed due to the topic's multidisciplinary nature. Instead, the themes resulted from an iterative literature analysis in the spirit of hermeneutic analysis (Myers, 2004; Boell & Cecez-Kecmanovic 2014)" (p.75).
"The fundamental tenet of hermeneutic analysis is that correct understanding emerges from the interplay between the parts and the whole (Klein & Myers, 1999). The "whole" refers here to the understanding that one gains through reading and analyzing the papers; that is, the "parts". The interdisciplinary focus of the research further supported using hermeneutics because "(i)nterdisciplinary integration brings interdependent parts of knowledge into harmonious relationships through strategies such as relating part and whole or the particular and the general" (Stember, 1991, p. 4). To understand the whole, I read each paper through and wrote notes down about it. Subsequently, I coded each paper by using qualitative coding techniques (Miles & Huberman, 1994). The notes included emerging categories and other notes that I felt were important for the research (e.g., interesting findings, representative papers for each category). For example, the codes included "methodologies", "frameworks", "lifecycle" that, I assimilated after several iterations of hermeneutic interpretation and qualitative coding into a single category. Section 4 presents the emerged categories, their definitions, and their respective content" (p.75).
Palvia P., Daneshvar Kakhki M., Ghoshal T., Uppala V. & Wang W. (2015) 'Methodological and Topic Trends in Information Systems Research: A Meta-Analysis of IS Journals' Communications of the Association for Information Systems 37, Article 30
"In this paper, we present trends in IS research during a 10-year period (2004 to 2013). ... , we provide a long-overdue update. We reviewed all papers from seven major IS journals and Coded them based on topics studied, methodologies used, models rendered, and paradigmatic approaches taken" (Abstract).
"We developed a four-dimensional framework comprising topics, methodologies, models, and paradigmatic approaches for classifying each research paper. We assigned each paper with codes for each of the four dimensions" (p.632).
"The majority of meta-analysis projects code papers based on a combination of two or more of the following four dimensions: research topic, research model, research methodology, and paradigmatic research approach. For example, Claver, González, and Llopis (2000) study the research topics and research methodology in IS research. Gonzalez et al. (2006) examine the literature on IS outsourcing based on topic and methodology. Alavi and Carlson (1992) examine IS papers for research topics, theme, and research approach. Chen and Hirschheim (2004) use two dimensions of research methodology and research approach to examine IS publications. Our review and coding captures all of these four dimensions" (p.633).
Cram W.A. & D'Arcy J. (2016) 'Teaching Information Security in Business Schools: Current Practices and a Proposed Direction for the Future' Communications of the Association for Information Systems 39, Article 3
"After collecting the available syllabi, we descriptively coded the documents, which refers to classifying segments of text as a particular phenomenon (Miles & Huberman, 1994). We coded the papers into both higher-level categories and a series of underlying subcategories (see Table 1). We established a preliminary set of categories at the beginning of the coding process based on the domains existing in two popular security certifications: the Certified Information Systems Security Practitioner (CISSP) and Certified Information Security Manager (CISM) (Hernandez, 2012; ISACA, 2014). The first author coded a preliminary sample of syllabi, which the second author reviewed. We discussed the initial approach and results and refined the categories. We then coded the remainder of the syllabi and, as we identified new characteristics in the data, iteratively defined the categories. By the end of the coding, we did not encounter any new coding categories, which suggests that we reached a saturation point in the taxonomy. We coded a total of 542 passages into four main categories and thirty subcategories (i.e., an average of 12 passages per syllabus). Appendix B provides representative examples of data coded to each category" (p.35).
Adelmeyer M., Walterbusch M., Biermanski P., Seifert K. & Teuteberg F. (2017) 'Rebound Effects in Cloud Computing: Towards a Conceptual Framework' in Leimeister J.M. & Brenner W. (eds.) Proceedings of the 13th International Conference on Wirtschaftsinformatik (WI 2017), St. Gallen, pp. 499-513
"47 papers contained concepts characterizing rebound effects in several ways. In order to uncover and name the underlying concepts, we applied a conceptual content analysis approach when analyzing the literature [22], placing a strong focus on the general definition of rebound effects. In total, the following eight characterizing concepts of rebound effects in general were extracted from the identified literature: efficiency improvement; growth in consumption; direct/indirect; micro/macro level; offset; unintended; short-/long term; behavioral response" (p.504).
[22] Recker J. (2012) 'Scientific Research in Information Systems: A Beginner's Guide' Springer, Heidelberg
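Conceptual content analysis of this kind can be partially mechanised by matching indicator terms for each characterizing concept against the definitions found in the literature. A minimal sketch, in which both the indicator terms and the sample definition are invented:

```python
# Sketch of conceptual content analysis: indicator terms for each
# characterizing concept are matched against a definition of rebound
# effects. Indicator terms and the definition are invented examples.
CONCEPT_INDICATORS = {
    "efficiency improvement": ["efficiency", "efficient"],
    "growth in consumption": ["consumption", "demand"],
    "unintended": ["unintended", "unexpected"],
}

definition = ("An unintended growth in consumption that offsets "
              "an efficiency improvement.")

found = {
    concept
    for concept, terms in CONCEPT_INDICATORS.items()
    if any(term in definition.lower() for term in terms)
}
print(sorted(found))
```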
The author thanks Professors Robert Davison of the City University of Hong Kong, Dubravka Cecez-Kecmanovic of UNSW, and the reviewers, for constructive suggestions for improvements to the paper.
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in Computer Science at the Australian National University.