
A Generic Theory of Authorization, Access Control and Id/Entity Management

Version of 2 May 2024

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2023-24

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://rogerclarke.com/ID/GT-IdEM.html

An early version of the theory in sections 3-6 was presented at ACIS'23, Wellington, 5-8 December 2023, with a supporting slide-set


Abstract

The term Authorization refers to a key element within the process whereby control is exercised over access to resources by means of information and communications technologies. It involves the assignment of a set of permissions or privileges to particular users or categories of users. The term Access Control refers to a further element, which provides users with the capability to exercise those permissions. The administration of data about users is the subject of a family of business processes, collectively referred to as Identity Management (IdM). The conventional approaches to IdM, despite a quarter-century of experience and refinements, are still far from satisfactory, with widespread abuse by both authorized users and imposters. Additional challenges have arisen as artefacts are increasingly being provided with the means to act with varying degrees of autonomy directly on real world things. This article builds on a previously-published pragmatic metatheoretic model to develop and present a generic theory whose scope, and some of whose particulars, overcome key limitations of currently mainstream approaches. The generic theory is then applied to identify weaknesses in current practice, and to show how the theory enables the conception, design and implementation of more effective products and services to manage entities and identities.


1. Introduction

Information and Communications Technology (ICT) facilities have become central to the activities not only of organisations, but also of communities, groups and individuals. The end-points of networks are pervasive, and so is the dependence of all parties on the resources that the facilities provide access to. ICT moved long ago beyond the processing of data and its use for the production of information. Support for inferencing has become progressively more sophisticated, some forms of decision-making are being automated, and there is increasing delegation to artefacts of the scope for autonomous action in the real world. Humanity's increasing reliance on machine-readable data, and on computer-based data processing, inferencing, decision and action, is giving rise to a high degree of vulnerability and fragility, because of the scope for misuse, interference, compromise and appropriation. There is accordingly a critical need for effective management of access to ICT-based facilities.

Conventional approaches within the ICT industry have emerged and matured over the last half-century. Terms in common usage in the area include identity management (IdM), identification, authentication, authorization and access control. The adequacy of current techniques has been in considerable doubt throughout the first two decades of the present century. A pandemic of data breaches has spawned notification obligations in many jurisdictions since the first Security Breach Notification Law was enacted in California in 2003 (Karyda & Mitrou 2016), and the resources of many organisations have proven to be highly susceptible to unauthorised access, including some of those apparently most strongly motivated and resourced to maintain high standards in the area (ITG 2023). The topic-area continues to attract attention in the ICT security literature. Recent articles are, however, primarily concerned with specific issues or contexts, and only secondarily with conceptual or technical weaknesses in products and services, or empirical evidence of their effectiveness. For example, Pritee et al. (2024) and Aboukadri (2024) report on the use of machine learning (ML) techniques to incorporate empirically-based assessment of user behaviour into various phases of IdM. Other articles have had their focus on specific areas of vulnerability, such as 'privileged accounts' (Sindiren & Ciylan 2019) and 'delegated authorization' in health contexts (Zhao & Su 2024).

I contend that the many vulnerabilities in contemporary IdM arise from inadequacies in the conventional conception of the problem-domain, and in the models underlying architectural, infrastructural and procedural designs to support authorization and access control. My motivation in conducting the research reported here has been to contribute to improved information systems (IS) practice and practice-oriented IS research. The IS notion is not interpreted in the narrowly technological sense evident in, for example, ISO/IEC 27001 2018: "[a] set of applications, services, information technology assets, or other information-handling components" (p.5). Most IS have significant human elements, and making sense of an IS accordingly demands adoption of a socio-technical perspective.

The method adopted in this article is to extend a previously-published pragmatic metatheoretic model into the authentication and authorization spaces, develop a generic model of what is referred to in this article as Id/Entity Management (Id/EM), and demonstrate the model's efficacy in clarifying weaknesses in conventional IdM and in addressing them. The final step in the Id/EM process is access control, whose function is to enable the exercise of permissions by, but only by, authorized users. The access control step is dependent on an earlier authorization step, which establishes the permissions. Id/EM refers to the combination of architecture, infrastructure and process, and involves several additional steps in support of authorization and access control.

The article commences by reviewing the context and nature of authorization and access control, within their broader context of conventional identity management. This culminates in initial observations on issues that are relevant to the vulnerability to unauthorised access. An outline is then provided of a pragmatic metatheoretic model (PMM), highlighting the aspects of relevance to the analysis. Generic theories of authentication (GTA) and of authorization (GTAz) are summarised, reflecting the insights of the PMM meta-model. This lays the foundations for proposals for adaptations to IS theory and practice in all aspects of identity management, including identification and authentication, with particular emphasis placed in this article on authorization and access control. The resulting Id/Entity Management (Id/EM) theory is then used as a lens whereby weaknesses in conventional theory and practice in the area can be articulated, and means of addressing them can be proposed.


2. The Conventional Approach to Authorization and Access Control

This preliminary section provides a brief overview of conventional IdM, with particular reference to the key steps in the process. This is necessary to lay the foundation for an understanding of the differences that arise from the theory advanced later in the article.

A dictionary definition of authorization is "The action of authorizing a person or thing ..." (OED 1, first part); and authorize means "To give official permission for or formal approval to (an action, undertaking, etc.); to approve, sanction" (OED 3a) or "To give (a person or agent) legal or formal authority (to do something); to give formal permission to; to empower" (OED 3b). OED also recognises uses of 'authorization' to refer to the result of an authorization process: " ... formal permission or approval; an instance of this" (OED 1, second part). Overloading a term within a body of technical terminology creates unnecessary linguistic confusion, and hence separate terms are preferable.

The remainder of this section outlines conventional usage of the term within the ICT industry, with an emphasis on the underpinnings provided by industry standards organisations, clarifies several aspects of those standards, summarises the various approaches in current use, and highlights a couple of aspects that need some further articulation.

2.1 ICT Industry Definitions

The authorization notion was first applied to computers, data communications and IS in the 1960s. It has of course developed considerably since then, both deepening and passing through multiple phases. However it has mostly been treated as being synonymous with the selective restriction of access by some entity to a resource, an idea usefully referred to as 'access control'. Originally, the resource being accessed was conceived as a physical space, such as enclosed land, a building or a room; but, in the context of ICT, the focus has conventionally been on categories of IS resource, such as a service, data, software, or a device, local or remote.

The following quotations and paraphrases provide representative, short statements about the nature of the concept as it has been practised in ICT during the period c.1970 to 2020:

Authorization is a process for granting approval to a system entity to access a system resource (IETF 2007, at 1b(I), p.29)

Access control is granting or denying an operation to be performed on a resource (ISO/IEC 27001 2018, p.1, ISO/IEC 29146 2024)

Access control or authorization ... is the decision to permit or deny a subject access to system objects (network, data, application, service, etc.) ... The terms access control and authorization are used synonymously throughout this document (NIST800-162 2014, p.2)

NIST's use of 'access control' and 'authorization' as synonyms for one another is problematic. Josang (2017, pp.135-142) draws attention to ambiguities in the mainstream definitions in all of the ISO/IEC 27000 series, the X.800 Security Architecture, and the NIST Guide to Attribute Based Access Control (ABAC). To overcome the problems, Josang distinguishes between:

Josang's approach is consistent with, but clearer than, that of a leading textbook, which uses 'access control' at times to refer to a suite of processes, but then narrows its application to the operational aspect of limiting access to those that have been authorized to have access, and uses 'authorization' for the preparatory step of deciding who or what has which permissions:

Authorization: [ The function of determining the grant ] of a right or permission to a system entity to access a system resource. This function determines who is trusted for a given purpose (Stallings & Brown 2015, p.116)

Access control: A collection of mechanisms that work together to create a security architecture to protect the assets of the information system. ... (Stallings & Brown 2015, p. 5) [ cf. Access management? ]

Access Control: [ The limitation of ] information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems) and to the types of transactions and functions that authorized users are permitted to exercise ... We can view access control as the central element of computer security ... (Stallings & Brown 2015, pp. 26, 114)

In relation to the notion of what it is that a user is entitled to do, many terms are applied in somewhat inconsistent and overlapping ways. In particular, IETF uses '[an] authorization'; NIST uses 'privilege' and '[an] authorization'; and ISO uses four terms as synonyms: "privilege [aka] access right [aka] permission mean [an] authorization to a subject to access a resource" (ISO/IEC 29146 2024). In this article, with the intention of overcoming the resulting confusions, the term Permission is used to refer to a declaration of allowed Actions on a particular IS Resource by a particular Actor. This approach is once again consistent with the approach of the leading textbook:

Permission: An approval of a particular mode of access to one or more objects. Equivalent terms are access right, privilege, and authorization (Stallings & Brown 2015, p.130)
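
To make the notion concrete, a Permission can be expressed as a declaration of allowed Actions on a particular IS Resource by a particular Actor, as in the following minimal sketch (in Python; the names are illustrative only, and are not drawn from any of the cited standards):

    from dataclasses import dataclass
    from typing import FrozenSet

    @dataclass(frozen=True)
    class Permission:
        actor: str                # the Actor (or Role) to which the Permission is granted
        resource: str             # the IS Resource to which the Permission applies
        actions: FrozenSet[str]   # the allowed Actions, e.g. frozenset({"read", "amend"})

    def allows(p: Permission, actor: str, resource: str, action: str) -> bool:
        # A request falls within the Permission only if all three elements match
        return p.actor == actor and p.resource == resource and action in p.actions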

In Table 1, correspondences are indicated among the terms used in important standards documents and in the theory proposed in this article. In standards documents, the terms 'subject' and 'system resource / object' are intentionally generic. NIST800-162 (2014, p.3) refers to an 'object' as "an entity to be protected from unauthorized use". Examples of IS resources referred to in that document include "a file" (p.vii), "network, data, application, service" (p.2), "devices, files, records, tables, processes, programs, networks, or domains containing or receiving information; ... anything upon which an operation may be performed by a subject including data, applications, services, devices, and networks" (p.7), "documents" (p.9), and "operating systems, applications, data services, and database management systems" (p.20).

Table 1: Equivalent Terms

2.2 The Broad Field of Identity Management

Authorization processes depend on reliable information. Identity Management (IdM) and Identity and Access Management (IAM) are ICT-industry terms for frameworks comprising architecture, infrastructure and processes that enable the management of user identification, authentication, authorization and access control processes. IdM was an active area of development c.2000-05, and has been the subject of a considerable amount of standardisation, in particular in the ISO/IEC 24760 series (originally of 2011, completed by 2019). A definition provided by the Gartner consultancy is:

Identity management ... concerns the governance and administration of a unique digital representation of a user, including all associated attributes and entitlements (Gartner, extracted 29 Mar 2023, emphasis added)

The process flow specified in NIST-800-63-3 (2017, p.10) is in Figure 1. This is insufficiently precise to ensure effective and consistent application. Josang (2017, p.137, Fig. 1) provides a better-articulated overview of the functions, reproduced in Figure 2. This distinguishes the configuration (or establishment) phase from the operational activities of each of Identification, Authentication and Access. This is complemented by a mainstream scenario (Josang 2017, p.143, Fig. 2) that illustrates the practical application of the concepts, and is reproduced in Figure 3.

Figure 1: NIST's Digital Identity Model

Extracted from NIST-800-63-3 2017 (p.10)

Figure 2: Phase Model of Identity Management

Extracted from https://en.wikipedia.org/wiki/File:Fig-IAM-phases.png
See also Josang (2017, p.137), Fig. 1

Figure 3: Process Flow for Identity Management

Extracted from Josang (2017, p.143), Fig. 2

The IdM industry long had a fixation on public key encryption, and particularly X.509 digital certificates. This grew out of single sign-on facilities for multiple services within a single organisation, with the approach then being generalised to serve the needs of multiple organisations. The inadequacies of monolithic schemes gave way to federation across diverse schemes by means of common message standards and transmission protocols. Multiple alternative approaches are adopted on the supply side (Josang & Pope 2005). These are complemented and challenged by approaches on the demand-side that contest the dominance of the interests of corporations and government agencies and seek to also protect the interests of users. These approaches include user-selected intermediaries, own-device as identity manager, and nymity services (Clarke 2004).

The explosion in user-devices (desktops from c.1980, laptops from c.1990, smartphones from 2007 and tablets from 2010) has resulted in the present context in which two separate but interacting processes are commonly involved. Individuals authenticate locally to their personal device using any of several techniques designed for that purpose; and the device authenticates itself to the targeted service(s) through a federated, cryptography-based scheme (FIDO 2022). The Identity Management model is revisited in the later sections of this article.

2.3 Families of Authorization Models

Whether a request is granted or denied is determined by an authority. In doing so, the authority applies decision criteria. From the 1960s onwards, a concept of Mandatory Access Control (MAC) has existed, originating within the US Department of Defense. Instances of data are assigned a security-level, each user is assigned a security-clearance-level, and processes are put in place whose purposes are to enable user access to data for which they have a requisite clearance-level, and to disable access in relation to all other data. The security-level notion is not an effective mechanism for IS generally. Instead, the criteria may be based on any of the following (with the acronyms for particular models of Access Control listed for each of the alternatives, and key features of each model outlined below):

An early approach of general application was Discretionary Access Control (DAC), which restricts access to IS Resources based on the identity of users who are trying to access them (although it may also provide each user with the power, or 'discretion', to delegate access to others). DAC matured into Identity Based Access Control (IBAC), which employs mechanisms such as access control lists (ACLs) to manage the Actors' Permissions to access the IS Resources. The Authorization process assumes that the identity of each Actor has been authenticated.
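
As an illustration, an IBAC-style check against an access control list can be sketched as follows (a schematic of the mechanism only, not the API of any particular product):

    # IBAC sketch: an access control list (ACL) maps each IS Resource to the
    # set of authenticated Actor identities permitted to access it.
    acl = {
        "payroll-db": {"alice", "bob"},
        "audit-log":  {"carol"},
    }

    def ibac_check(actor_id: str, resource: str) -> bool:
        # Assumes the Actor's identity has already been authenticated
        return actor_id in acl.get(resource, set())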

IBAC is effective in many circumstances, and continues to be used. It scales poorly, however, and large organisations have sought greater efficiency in managing access. From the period 1992-96 onwards, Role Based Access Control (RBAC) became mainstream in large systems (Pernul 1995, Sandhu et al. 1996, Lupu & Sloman 1997, ANSI 2012). In such schemes, an Actor has access to an IS Resource based on a Role they are assigned to. This offers efficiency where there are significant numbers of individuals performing essentially the same functions, whether all at once, or over a period of time. Application of RBAC in the highly complex setting of health data is described by Blobel (2004). See also ISO 22600 Parts 1 and 2 (2014). Blobel provides examples of roles, including (p.254):

Two significant weaknesses of RBAC are that Role is a construct and lacks the granularity needed in some contexts, and that environmental factors are excluded. To address those weaknesses, Attribute Based Access Control (ABAC) has emerged since c.2000 (Li et al. 2002): "ABAC ... controls access to [IS Resources] by evaluating rules against the attributes of entities ([Actor] and [IS Resource]), operations, and the environment relevant to a request" (NIST-800-162 2014, p.vii). "Attribute-based access control (ABAC) [makes] it possible to overcome limitations of traditional role-based and discretionary access controls" (Schlaeger et al. 2007, p.814). This includes the capacity to be used with or without the requestor's identity being disclosed, by means of an "opaque user identifier" (p.823).

NIST800-162 (2014) provides, as examples of Attributes of an Actor, "name, unique identifier, role, clearance" (p.11), all of which relate to human users, and "current tasking, physical location, and the device from which a request is sent" (p.23). Examples of IS Resource attributes given include "document ... title, an author, a date of creation, and a date of last edit, ... owning organization, intellectual property characteristics, export control classification, or security classification" (p.9). The examples of environmental conditions that are provided include "current date, time, [actor/IS resource] location, threat [level], and system status" (pp.24, 29).
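
The contrast between the two models can be sketched as follows (illustrative Python; the rules and attribute names are invented for the example, and are not taken from NIST800-162):

    # RBAC sketch: Permissions attach to Roles, and an Actor inherits them
    # through Role assignment.
    role_permissions = {"clinician": {("patient-record", "read")}}
    actor_roles = {"alice": {"clinician"}}

    def rbac_check(actor: str, resource: str, action: str) -> bool:
        return any((resource, action) in role_permissions.get(role, set())
                   for role in actor_roles.get(actor, set()))

    # ABAC sketch: a rule is evaluated against attributes of the Actor, the
    # IS Resource and the environment at the time of the request.
    def abac_check(actor_attrs: dict, resource_attrs: dict, env: dict, action: str) -> bool:
        return (action == "read"
                and actor_attrs.get("role") == "clinician"
                and actor_attrs.get("ward") == resource_attrs.get("ward")
                and env.get("threat_level") != "high")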

Industry standards and protocols exist to support implementation of authorization processes, and to enable interoperability among organisationally and geographically distributed elements of information infrastructure. Two primary examples are OASIS SAML (Security Assertion Markup Language), a syntax specification for Assertions about an Actor, supporting the Authentication of Identity and Attribute Assertions, and Authorization; and OASIS XACML (eXtensible Access Control Markup Language), which provides support for Authorization processes at a deeper level of granularity.

Beyond ABAC, an even more finely grained approach is adopted by Task-Based Access Control (TBAC). This associates Permissions with a Task, e.g. in a government agency that administers welfare payments, with a case-identifier; and in an incident management system, with an incident-report identifier; in each instance combined with some trigger, such as a request by the person to whom the data relates (the 'data-subject'), or a Task allocation to an individual staff-member by a workflow algorithm. See Thomas & Sandhu (1997) and Fischer-Huebner (2001, p.160). To date, however, TBAC appears to have achieved limited adoption.

2.4 Deeper Issues in Authorization Models

The present article treats the following aspects as being for the most part out-of-scope:

However, the following two aspects are relevant to the analysis that follows.

(1) The Granularity of IS Resources

The above broad description of the conventional approach to Authorization adopts an open interpretation of the IS Resource in respect of which an Actor is granted Permissions. In respect of processes, a Permission might apply to all available functions, or each function (e.g. view data, create data, amend data, delete data) may be the subject of a separate permission.

In respect of Data, a hierarchy exists. For example, a structured database may contain data-files, each of which contains data-records, each of which contains data-items. The Unix file-system, for example, distinguishes separate functions of file-level read, write and execute, with write encompassing all of create, amend, delete and rename. A Permission may apply to all Data-Records in a Data-File, but it may apply to only some, based on criteria such as a Record-Identifier, or the content of individual Data-Items. Hence, visualising a Data-File as a table, a Permission may exclude some rows (Records) and/or some columns (Data-Items). There is a modest literature on the granularity of data-access permissions, e.g. Karjoth et al. (2002), Zhong et al. (2011).
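
A fine-grained Permission of this kind can be sketched as a filter over rows and columns (an illustration only; the field names are invented):

    # Visualising a Data-File as a table: a Permission may expose only some
    # Data-Records (rows) and some Data-Items (columns).
    records = [
        {"id": 1, "name": "Lee",   "ward": "A", "diagnosis": "..."},
        {"id": 2, "name": "Singh", "ward": "B", "diagnosis": "..."},
    ]

    def visible(records, allowed_items, row_criterion):
        # Apply the row-level filter, then project the permitted columns
        return [{k: v for k, v in record.items() if k in allowed_items}
                for record in records if row_criterion(record)]

    # e.g. a Permission limited to Ward A records, excluding the diagnosis item
    view = visible(records, {"id", "name", "ward"}, lambda r: r["ward"] == "A")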

(2) Authority Exercised by the Individual to Whom Data Relates

The Authorization process assumes the existence of an Authority that can and does make decisions about whether to grant Actors Permissions in relation to IS Resources. In many cases, the Authority is simply assumed to be the operator that manages the relevant data-holdings and/or exercises Access Control over those data-holdings. In other cases, the operation may be outsourced to an agent but the decisions retained by the principal. However, contexts exist in which the Authority is some other party entirely. One example is a regulatory agency. Another example is the person to whom the data relates. This is the case in schemes that include an opt-out facility at the option of that person, and in consent-based (also sometimes referred to as opt-in) schemes. In such schemes, the Authority with respect to each Record in the data-holdings is the person to whom it relates, and the system operator implements the criteria set by that person. Although such patterns are most common in the case of human entities, they also arise with organisational entities.

Important examples are in health-care settings, where highly-sensitive health data is involved, such as that relating to mental health, sexually-transmitted diseases and genetic material. A generic model is described in Clarke (2002, at 6.) and Coiera & Clarke (2004). In the simplest case, each individual has a choice between the following two criteria:

  1. An unqualified, general consent (although personal data continues to be subject to the protections that the law provides for health care data); and
  2. An unqualified, general denial of consent (which is subject to such constraints as the law places on that right).

Further articulation might provide each individual with a choice between the following two criteria:

  1. A general consent, subject to zero or more specific denials; and
  2. A general denial, subject to zero or more specific consents.

Each specific denial or consent is expressed in terms of specific attributes, which may define:

A fully articulated model supports a nested sequence of consent-denial or denial-consent pairs ('Yes/No, except ... unless ...'). These more complex alternatives enable a patient to have confidence that some categories of their health data are subject to access by a very limited set of treatment professionals. However, most of the conventional models lack the capacity to support consent as an Authorization decision criterion. Schemes of this kind are reviewed in Iyer et al. (2021), who investigate the expressiveness of negated conditions (e.g. not on leave) compared with negative authorizations (default denial unless one of a set of authorizing rules applies).
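
A nested consent-denial pair of the kind just described can be evaluated as in the following sketch (the rule structure is an assumption adopted for exposition, not drawn from Iyer et al.):

    # 'Yes, except ..., unless ...': a nested sequence of consent/denial rules.
    # Each rule is (effect, predicate); later rules qualify earlier ones, so
    # the last matching rule in the chain determines the outcome.
    rules = [
        ("consent", lambda req: True),                                       # general consent
        ("deny",    lambda req: req.get("category") == "mental-health"),     # specific denial
        ("consent", lambda req: req.get("requester") == "treating-gp"),      # specific exception
    ]

    def consented(req: dict) -> bool:
        outcome = False
        for effect, predicate in rules:
            if predicate(req):
                outcome = (effect == "consent")
        return outcome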

The next section outlines a meta-model that has been devised to support IS practice and practice-oriented IS research. The sections after that extend that meta-model to express Generic Theories of Authentication (GTA) and of Authorization (GTAz).


3. The Pragmatic Metatheoretic Model (PMM)

In previously-published work (Clarke 2021, 2022, 2023a, 2023b), a model is proposed that reflects the viewpoint adopted by Information Systems (IS) practitioners, and that is designed to support understanding of and improvements to IS practice and practice-oriented IS research. The model embodies the socio-technical system view, whereby organisations are recognised as comprising people using technology, each affecting the other, with effective design depending on integration of the two. The model is 'pragmatic', as that term is used in philosophy, that is to say it is concerned with understanding and action, rather than merely with describing and representing. It is also 'metatheoretic' (Myers 2018, Cuellar 2020), on the basis that it builds on a working set of assumptions in each of the areas of ontology, epistemology and axiology. Care is taken in the choice of terms, the expression of clear definitions of them, and the interweaving of them into a comprehensive, coherent and internally consistent terminology. This section provides a brief overview of the meta-model. Defined terms are highlighted using Capitals and Italics, and all definitions are provided in Figures, and consolidated into an associated Glossary.

As depicted in Figure 4, the Pragmatic Metatheoretic Model (PMM) distinguishes a Real World from an Abstract World. The Real World comprises Phenomena of two kinds, Things and Events, each of which has Properties. These can be sensed by humans and artefacts with varying reliability. Abstract Worlds are depicted as being created at two levels. The Conceptual Model level reflects the modeller's perception of Real World Phenomena. At this level, the notions of Entity and Identity correspond to the category Things, and Transaction to the category Events. Some key definitions are provided in Figure 5A.

Figure 4: A Pragmatic Metatheoretical Model

After Clarke (2021)

A vital aspect of this meta-model is the distinction between Entity and Identity. An Entity corresponds with a Physical Thing. An Identity, on the other hand, corresponds to a Virtual Thing, which is a particular presentation of a Physical Thing, most commonly when it performs a particular Role, i.e. adopts a particular pattern of behaviour. For example, the NIST (2006) definition of authentication distinguishes a "device" (in the terms of this model, an artefactual Entity) from a "process" (an artefactual Identity), and the Gartner definition of IdM refers to "a digital representation [i.e. a human Identity] of a user [i.e. a human Entity]". An Id/Entity-Instance is a particular occurrence of an Id/Entity. An Entity-Instance may adopt one Identity-Instance in respect of each role it performs, or it may use the same Identity-Instance when performing multiple and even all roles. Conversely, an Identity-Instance may be assumed by multiple Entities at the same time, and/or over time. For example, within a corporation, different human Entity-Instances succeed one another in adopting the Identity CEO, whereas the Identity Company Director is adopted by multiple human Entity-Instances not only successively, but also at the same time. For simplicity of expression, the remainder of this article uses Id/Entity for both the generic concepts and instances of them.
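
The m:n relationship between Entities and Identities can be sketched as a simple data model (an illustration of the cardinality only, not of any published schema):

    # An Entity may adopt many Identities, and an Identity (e.g. CEO) may be
    # adopted by many Entities, successively or simultaneously. Each adoption
    # is therefore a separate link record.
    entities   = {"E1": "human: J. Citizen", "E2": "human: P. Exemplar"}
    identities = {"I1": "role: CEO", "I2": "role: Company Director"}

    adoptions = [
        # (entity, identity, from, to)  --  None means still current
        ("E1", "I1", "2015", "2019"),
        ("E2", "I1", "2019", None),   # successive adoption of the CEO Identity
        ("E1", "I2", "2015", None),   # simultaneous adoption of the Director
        ("E2", "I2", "2019", None),   #   Identity by multiple Entities
    ]

    def current_entities(identity_id: str) -> list:
        return [e for (e, i, _, to) in adoptions if i == identity_id and to is None]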

The Data Model Level enables the operationalisation of the relatively abstract ideas in the Conceptual Model level. This moves beyond a design framework to fit with data-modelling and data management techniques and tools, and to enable specific operations to be performed to support organised activity. The PMM uses the term Information specifically for a sub-set of Data: that Data that has value (Davis 1974, p.32, Clarke 1992b, Weber 1997, p.59). Data has value in only very specific circumstances. Until it is in an appropriate context, Data is not Information, and once it ceases to be in such a context, Data ceases to be Information. Assertions are putative expressions of knowledge about one or more elements of the metatheoretic model. Figure 5A provides definitions of key terms.

A further notion that assists in understanding models of human beings is the Digital Persona. This means, conceptually, a model of an Id/Entity's public personality based on Data and maintained by Transactions, and intended for use as a proxy for the Id/Entity; and, operationally, a Data-Record that is sufficiently rich to provide the record-holder with an adequate image of the represented Id/Entity. A Digital Persona may be Projected by the Id/Entity using it, or Imposed by some other party, such as an employer, a marketing corporation, or a government agency (Clarke 1994a, 2014). As the term 'identity' is used in conventional IdM, to refer to "a unique digital representation of a user", it is an Imposed Digital Persona.

The concepts in the preceding paragraphs declare the model's ontological and epistemological assumptions. A third relevant branch of philosophy is axiology, which deals with 'values'. The values in question are those of both the system sponsor and stakeholders. The stakeholders include human participants in the particular IS ('users'), but also those people who are affected even though they are not themselves participants ('usees' -- Berleur & Drumm 1991 p.388, Clarke 1992a, Fischer-Huebner & Lindskog 2001, Baumer 2015). The interests of users and usees are commonly in at least some degree of competition with those of social and economic collectives (groups, communities and societies of people), of the system sponsor, and of various categories of formalised organisations. Generally, the interests of the most powerful of those players dominate.

Figure 5A: Terms from the Pragmatic Metatheoretic Model

Further developed from Clarke (2021)

The basic PMM was extended in Clarke (2022), by refining the Data Model notion of Record-Key to distinguish two further concepts: Identifiers as Record-Keys for Identity-Records (corresponding to Virtual Things in the Real World), and Entifiers as Record-Keys for Entities (corresponding to Physical Things).

A computer is an artefactual Entity, for which a Processor-ID may exist, failing which its Entifier may be a proxy, such as the Network Interface Card Identifier (NIC ID) of, say, an installed Ethernet or Wifi card, or its IP-Address. A process is an artefactual Identity, for which a suitable Identifier is a Process-ID, or a proxy such as its IP-Address concatenated with its Port-Number. For human Entities, the primary form of Entifier is a biometric, although the Processor-ID of an embedded chip is another possibility (Clarke 1994b p.31, Michael & Michael 2014). For authorized users (whether a human or an artefact), a UserID or LoginID is a widely-used proxy Identifier.

This leads to distinctions between Identification processes, which involve the provision or acquisition of an Identifier, and Entification processes, for which an Entifier is needed. The acquired Id/Entifier can then be used as the Record-Key for a new Data-Record, or as the means whereby the Id/Entity can be associated with a particular, already-existing Id/Entity-Record. The terms Entifier and Entification are uncommon, but have been used by the author in refereed literature since Clarke (2002) and applied in over 30 articles within the Google Scholar catchment, which together have over 500 citations. Key terms are defined in Figure 5B.
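
By way of illustration, the following sketch obtains proxy Id/Entifiers of the kinds mentioned above for a running process and its host device (Python standard library; which values are actually available varies by platform):

    import os, socket, uuid

    # Identifier for an artefactual Identity (a process): the Process-ID,
    # or a proxy such as IP-Address concatenated with Port-Number.
    process_identifier = os.getpid()

    # Proxy Entifier for an artefactual Entity (the host device): the MAC
    # address of a network interface card, as a 48-bit number (uuid.getnode()
    # falls back to a random value if no hardware address can be found).
    device_entifier = uuid.getnode()

    # A further proxy Identifier at the network level: the host's IP-Address.
    ip_address = socket.gethostbyname(socket.gethostname())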

Figure 5B: Terms Relating to Id/Entifiers and Id/Entification

Further developed from Clarke (2022)


4. The Generic Theory of Authentication (GTA)

Two further papers extend the PMM in relation to Authentication. In Clarke (2023b), it is argued that the concept needs to encompass Assertions of all kinds, rather than just Assertions involving Id/Entity. That article presents a Generic Theory of Authentication (GTA), defining it as a process that establishes a degree of confidence in the reliability of an Assertion, based on Evidence. The GTA distinguishes various categories of Assertion that may or may not involve Id/Entity, including Assertions of Fact, of Content Integrity and of Value. An item of Evidence is referred to as an Authenticator. A Credential is a category of Authenticator that carries the imprimatur of some form of Credential Authority. A Token is a recording medium on which useful Data is stored. Examples of 'useful Data' in the current context include Id/Entifiers, Authenticators and Credentials.

The logically second of the two papers, (Clarke 2023a), defines an Id/Entity Assertion as a claim that a particular Virtual Thing or Physical Thing is appropriately associated with one or more Id/Entity-Records. An Id/Entity Assertion is subjected to Id/Entity Authentication processes, in order to establish the reliability of the claim. Also of relevance is the concept of a Property Assertion, whereby a particular Data-Item-Value in a particular Id/Entity Record is claimed to be appropriately associated with, and to reliably represent, a particular Property of a particular Virtual Thing or Physical Thing.

Real-World Properties, and Abstract-World Id/Entity Attributes, represented by Data-Items, are of many kinds. A category of especial importance in commercial transactions is a Principal-Agent Relationship Assertion, whereby a claim is made that a particular Virtual Thing or Physical Thing has the Property of authority to perform an Action on behalf of another particular Thing. An agent may be a Physical Thing (a person or a device), or a Virtual Thing (a person currently performing a particular role, or a computer process). Chains of principal-agent relationship assertions are common, each of which may require Authentication. Definitions of key terms are provided in Figure 5C.

Figure 5C: Terms Relating to Authentication

From Clarke (2023b, 2023a)

The theory reviewed in this section is extended in the following section to encompass Authorization, in order to lay the foundation for an assessment of the suitability of the conventional approaches to authorization described earlier in this article.


5. A Generic Theory of Authorization (GTAz)

This section applies the Pragmatic Metatheoretic Model (PMM) and the Generic Theory of Authentication (GTA), outlined above, and presents a new Generic Theory of Authorization (GTAz). Some aspects of this theory have been previously presented, in Clarke (2023c). Additional terms are defined in Figure 5D. All definitions are reproduced in an associated Glossary (Clarke 2024).

An Actor may or may not be granted a Permission to perform an Action on a Resource. A process needs to be performed in order to determine whether or not that will be the case. The term Authorization is used to refer to that process. The entity that makes the determination is the Authorization Authority. The decision has no outcomes other than the recording of the determination for future use.

An Actor that has a Permission needs to be provided with the capability to exercise it. An Entity that can provide that capability needs to receive a request from the Actor, check the request against the recorded information about the Actor and its Permissions, and, if the checks are satisfied, establish a Session that enables the Permissions to be exercised. That process is referred to as Access Control.

A comprehensive model of these processes needs to encompass the possibility (and, in practice, likelihood) that an Actor other than the intended Actor may endeavour to take advantage of the Permissions that the intended Actor has been granted. Such an Actor is referred to as an Imposter. Checks undertaken as part of the Operational Phase need to ensure that the intended Actors are provided with a Session and appropriate Permissions, and Imposters are not. The term Masquerade refers to Actions that are performed by an Imposter.

A further category of abuse needs to be recognised and defined. A Permission may be absolute, but in many cases it is qualified in some way. It may apply to all Instances of a Resource, some, or just one (e.g. a single Data Record). It may apply to any Purpose or Task, or a quite specific Purpose or Task (e.g. display of, or modification of, some Data-Items but not others). A Permission Breach occurs if the Actor has broader access than that defined in the Permissions, or performs an Action for a Purpose or Task that is not encompassed by the Permission.
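
The distinction between the two processes, and the two categories of abuse, can be summarised in a sketch (a schematic of the theory's terms, not an implementation of any particular product):

    permissions = {}   # determinations recorded by the Authorization Authority

    def authorize(actor, resource, actions, purpose):
        # Authorization: the Authority records a determination for future use;
        # the decision has no other outcome.
        permissions[(actor, resource)] = {"actions": set(actions), "purpose": purpose}

    def access_control(claimed_actor, authenticated, resource, action, purpose):
        # Access Control: check the request against the recorded Permissions
        # and, only if the checks are satisfied, establish a Session.
        if not authenticated:
            return None   # a claimant failing Authentication may be an Imposter
        p = permissions.get((claimed_actor, resource))
        if p is None or action not in p["actions"] or purpose != p["purpose"]:
            return None   # exceeding the Permission would be a Permission Breach
        return {"session": (claimed_actor, resource, action, purpose)}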

Figure 5D: Terms Relating to Authorization

Further developed from Clarke (2023c)

With key underlying concepts defined, a comprehensive business process can be defined that applies them to the purpose of providing appropriate users with appropriate access to appropriate resources, and denying inappropriate access.


6. Id/Entity Management (Id/EM)

The notion of Identity Management (IdM) was discussed earlier in this article. It is intended as a comprehensive architecture for relevant infrastructure and processes. The earlier discussion noted various inadequacies in conventional conceptualisations, models and terminologies. Josang's (2017) Phase Model, reproduced in Figure 2, endeavoured to address many of the issues. Building on Josang's work, and applying the Pragmatic Metatheoretic Model (PMM), Generic Theory of Authentication (GTA) and Generic Theory of Authorization (GTAz) outlined above, this section presents a refinement and further articulation of Josang's model, which is referred to here as Id/Entity Management (Id/EM).

6.1 Overview of Id/EM

In Figure 6, a diagrammatic overview of the field as a whole is provided. Within Id/EM, a Registration Phase and an Operational Phase are distinguished.

The Registration Phase is conducted when new, or renewed, Permissions are sought for a new, or renewing, Actor. The Phase comprises four Steps:

The Operational Phase is conducted on each occasion on which an Actor seeks to exercise previously-established Permissions. It comprises:

Figure 6: A Generic Process Model of Id/Entity Management (IdEM)

In Figure 5E, definitions of further terms are provided, to complete the exposition of the theoretical model of Authorization, Access Control and Id/Entity Management.

Figure 5E: Further Terms to Support IdEM

Further developed from Clarke (2023c)

The following section further develops some aspects of the Id/EM Process that are of particular significance.

6.2 The Authorization Step

Authorization, the third step of the Registration Phase depicted in Figure 6, is a primary focus of this article. Adopting the modifications proposed in Josang (2017) to the definitions in the IETF and NIST standards, a clear distinction is drawn between the Authorization process (discussed in this section) and the final step of the Operational Phase, Access Control. Authorization is the process whereby an Authorization Authority decides whether or not to determine that an Actor has one or more Permissions in relation to a particular Resource. A Permission may be specific to an Actor-Instance, or the Actor-Instance may be assigned to a previously-defined Role and inherit Permissions associated with that Role. A Permission may be, and to represent an effective safeguard needs to be, provided for, and only for, a particular Purpose, Use, Function or Task.

Categories of Role include:

The Authorization Authority is commonly the operator of an Information System, as principal, or the operator of an Id/Entity Management service acting as an agent for a principal. Many other possibilities exist, however, such as a regulatory agency, a professional registration board and an individual to whom personal data relates.

An Actor that is assigned Permissions may be any Physical Thing or Virtual Thing, provided that the Actor has the capability to perform relevant Actions in relation to relevant Resources, or alternatively has an agent that can do so on behalf of the Actor. Generally, human Actors have the capability to act. Many artefacts lack suitable actuators, but devices are increasingly being provided with the capacity to act, and hence artefactual Actors are increasingly common.

An Actor may therefore take many forms, in particular:

Generally, any Actor capable of Action can perform as an agent for a principal. Generally, however, an artefactual Actor cannot act as a principal. This is because legal regimes generally preclude artefacts from bearing responsibility for actions and outcomes, and from liability for harm arising from an action. They cannot be subject to provisions of the criminal law nor be bound by contract.

Two broad categories of Resources need to be distinguished, which are subject to distinct forms of Actions:

6.3 The Operational Phase

The Registration Phase paves the way for an Actor to be given the capacity to perform an Action, expressed as the entitlement called a Permission. The Operational Phase may be instigated at any time after Registration is complete, and as many times and as frequently as suits the circumstances. The first step, Id/Entification, exhibits no material differences from the first step in the Registration Phase. The second step, Operational Authentication, takes advantage of the investment undertaken during the Pre-Authentication process, in order to achieve both effectiveness and efficiency.

The Authenticator(s) used in the Operational Phase may be the same as one or more of those used in the Pre-Authentication step of the Registration Phase. More commonly, however, a purpose-designed arrangement is implemented to enable a quick and convenient process. One approach of long standing is for a 'shared secret' (password, PIN, passphrase, etc.) to be nominated by the User, or provided to the User by the operator. Another mechanism is a one-time password (OTP) provided to the User. This may be delivered just-in-time, via a separate and previously-agreed communications channel. Other currently mainstream approaches involve a one-time password generator that is pre-installed on the User's device(s) by a remote process, or physically delivered to them sufficiently in advance of the Login activity. The earlier steps in the Id/EM process enable the third step, Access Control, to establish a Session in which the User can exercise the Permissions it is entitled to.
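
As an illustration of the one-time password generator approach, the following sketch implements time-based OTP generation in the manner of RFC 6238 (Python standard library; the parameter values are the common defaults, not a requirement of Id/EM theory):

    import base64, hashlib, hmac, struct, time

    def totp(shared_secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # The User's generator and the operator derive the same code from a
        # shared secret and the current time-step (RFC 6238, building on the
        # HOTP construction of RFC 4226).
        key = base64.b32decode(shared_secret_b32)
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The operator verifies a submitted code by computing the same value for
    # the current (and typically the adjacent) time-step.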

This section has built on the prior presentations of PMM and GTA, and has presented new bodies of theory relating to Authorization (GTAz) and Id/Entity Management (Id/EM). The remainder of the article assesses the potential value that the theory can deliver to IS practice. The term 'Id/EM theory' is used in the remainder of this document to refer to the combination of the elements described above as PMM, GTA, GTAz and Id/EM.


7. Application of the Theory

The purpose of this article is to present a theory of Id/Entity Management (Id/EM) that is derived from a Pragmatic Metatheoretic Model of the field addressed by Information Systems practice, extended to embody theoretical treatments of Id/Entities, the Authentication of Assertions, and the critical elements of Authorization and Access Control. The theory has many features that evidence some divergence from conventional Identity Management (IdM) practices. This section considers how the theory can be applied in order to achieve improvements in the quality of Authorization and Access Control.

In the Identity Management field, no single Industry Standard dominates, and no single Identity Management product or service dominates the market. The approach that has been adopted is to consider several key sources that have had, and continue to have, substantial impact on industry practices. A few references are made to the IETF Security Glossary (IETF 2000, 2007), but the primary focus is on standards documents that have significant influence and whose guidance has been available for a sufficient period of time to have been reflected in contemporary products, services and practices. The following are the primary sources considered in this section:

The various standards differ considerably in their purposes, approaches, language and structures. This section accordingly adopts a thematic approach, commencing with foundational matters, then moving on to specific aspects.

7.1 Conceptual Foundations

The Id/EM theory presented in this article is based on express metatheoretic assumptions that adopt a truce between the two alternative ontological assumptions, the 'materialist' assumption (that phenomena exist in a real world) and the 'idealist' approach (that everything exists in the human mind). It adopts the notion that a Real World of Phenomena exists, on one plane, which humans cannot directly know or capture, but which humans can sense and measure, such that they can construct manipulable Abstract-World models of those Phenomena, on a plane distinct from that of the Real World. Terms are then adopted and defined that apply in, and only in, respectively the Real World and the Abstract World.

A further ontological commitment is to a clear distinction between two categories of Real-World Phenomena: Physical Things and Virtual Things. This is depicted in Figure 4. That distinction is then carried across into an epistemological commitment to the Abstract World, in the conceptual-model elements of Entities and Identities, and in the data-model elements of Entity Records and Identity Records. Entities and Identities need to be teased apart, each with one or more (Id)Entifiers to distinguish (Id)Entity Instances. A human Entity must be able to map to multiple human Identities, such that a human who acts using one Identity (such as prison warder, undercover agent or protected witness) is able to keep that Identity distinct from other Identities (such as householder, parent and football coach).

Conventional IdM practice, on the other hand, has long featured the conflation of Physical Things with Virtual Things, and Entities with Identities. For example, the IETF's Glossary defines identification as "an act or process that presents an identifier to a system so that the system can recognize a system entity and distinguish it from other entities" (IETF 2000, p.83, IETF 2007, p.144, emphases added).

ISO 24760-1 carries the conflation of ideas over into international standards. Even after the 2019 revision, it defines "identification" as "process of recognizing an entity" (all quotations from p.1, emphases added), and verification as "process of establishing that identity information ... associated with a particular entity ... is correct" (p.3, emphases added). This is despite the document having earlier distinguished 'entity' (albeit somewhat confusingly, as "item relevant for the purpose of operation of a domain [or context] that has recognizably distinct existence") from 'identity' ("set of attributes ... related to an entity ...").

The NIST (2017) treatment of Real-World and Abstract-World elements, and of 'entity' and 'identity', is incomplete, inconsistent and confusing, in several ways:

FIPS-201-3 (2022), the US government's Standard for Personal Identity Verification, defines identity as "The set of physical and behavioral characteristics by which an individual is uniquely recognizable" (FIPS-201-3, p.98, emphases added). Firstly, this is a Real-World definition, whereas the widely-used conception of an Identity is an Abstract-World one, defined not in terms of physical characteristics (in Id/EM terms, Properties) but of Attributes and/or Data-Items. Secondly, it is about a Physical Thing ("an individual"), but it refers to it as an "identity", which more usefully represents a Virtual Thing that arises from a Role performed by a Physical Thing.

These multiple layers of fog in the NIST documents could be overcome, the clarity of communication to practitioners greatly enhanced, and the effectiveness of implementations within government agencies and business enterprises much-improved, by revising the NIST terms and definitions either to reflect the suite used in Id/EM theory, or those of some other scheme that features completeness, consistency and coherence.

Standards and associated practices also feature mis-handling of relationship cardinality. Entities generally have multiple Identities, and Identities may be performed by multiple Entities, both serially and simultaneously. A common example is the practice of parents sending their young children to an automated teller machine or EFTPOS terminal with the parent's payment-card and PIN. Another is a common practice among aged parents, who depend on their adult children to perform Internet Banking tasks on their behalf. Similarly, in many organisations, employees share LoginIDs in order to save time switching between users. An enterprise model is inadequate if it does not encompass these common activities. It is also essential that there be a basis for distinguishing between authorised delegations that may breach terms of contract with the card-issuer or of organisational policies and procedures, on the one hand, and criminal behaviour, on the other.

The ISO-24760-1 standard accepts that an Entity may have multiple Identities, e.g. "An entity can have more than one identity" (p.1), and "This document considers any set of attributes that describe a particular entity as an identity for the entity ... Different sets of ... attributes form different identities for the same human entity" (p.8). It also expressly recognises that the relationship is not 1:n but m:n, stating that "Several entities can have the same identity" (p.1, see also pp.8-9). Yet it fails to reflect those statements in the remainder of the document. Any Authorization scheme built on a model that fails to actively and consistently support those realities is doomed to embody confusions, and evidence errors and insecurities.

In NIST (2017), it is challenging to interpret the intentions regarding the cardinality of the relationship between 'subject' (presumably equivalent to Entity) and 'identity' (cf. Identity). In a single passage, the NIST document acknowledges the existence of 1-to-many relationships: "a subject can represent themselves online in many ways. An individual may have a digital identity for email, and another for personal finances. A personal laptop can be someone's streaming music server yet also be a worker-bot in a distributed network of computers performing complex genome calculations" (p.iv). Generally, however, the document appears to implicitly assume a one-to-one relationship, i.e. that a Real-World human can only have a single Abstract-World Identity, and an Identity is only adopted by one human.

For example, there is a reference to "the subject's [singular] real-life identity [equivalent to Entity] ... " (p.iv), the term 'identity' is defined as "An attribute or set of attributes that uniquely describe a subject [Entity] within a given context" (p.47), and the term 'digital identity' is described as "the [i.e. singular] online persona [Identity] of a subject [Entity]", is defined as "the unique representation [Identity] of a subject [Entity] ..." (p.9), and there is a reference to verifying "a subject's [Entity] association with their [singular] real-world identity [Entity]" (p.9, all emphases added). To the extent that NIST's guidance is interpreted by its users as limiting each human or artefactual Entity to a singular Identity, it is unworldly, and hence that aspect of the guidance needs to be ignored or worked around.

NIST (2022) is concerned specifically with US federal employees and contractors, and does not appear to contemplate the possibility of anything other than a 1:1 relationship between an identity and an entity. Its application within that context will map poorly to the needs of many government agencies. It is even more out of touch with the realities of private sector activities.

FIDO (2018), with its focus on the authentication of humanly-used devices, can support multiple processes running in a single device, because registration is not based on a physical device-identifier, but rather on an asymmetric cryptographic key-pair. FIDO's scope, however, does not extend to the modelling of the human user and their relationship with the device, leaving that to the service-provider.
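
The registration pattern that FIDO relies on can be sketched as follows (Python, using the third-party 'cryptography' package; a schematic of challenge-response with an asymmetric key-pair, not of the FIDO protocols themselves):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the device generates a key-pair and registers only the
    # public key with the service; no physical device-identifier is involved.
    device_private_key = ec.generate_private_key(ec.SECP256R1())
    registered_public_key = device_private_key.public_key()

    # Authentication: the service issues a fresh challenge, the device signs
    # it, and the service verifies the signature against the registered key.
    challenge = os.urandom(32)
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises on failure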

Id/EM theory enables the detection of conceptual conflations, ambiguities and imprecisions in standards and guidance documents. Further, it provides a suite of terms and definitions, situated within a coherent and comprehensive model and associated terminology, that can together enable adaptation of the standards and guidance documents to overcome the deficiencies discussed in this section.

7.2 Scope

The model in this article has comprehensive scope, including:

As regards the first aspect, the scope of ISO 24760-1 is similar, in that it defines identity as "set of attributes ... related to an entity", and entity as "item ... that has recognizably distinct existence", notes that "An entity can have a physical or a logical embodiment", and the examples provided include "a person, an organization, a device, a group of such items, a human subscriber to a telecom service, a SIM card, a passport, a network interface card, a software application, a service or a website" (p.1). Because that standard does not extend to authorization (which it refers to as "establishing entitlements for the entity to access resources and interact with services", p.16), the Resources and Actions aspects are out-of-scope.

NIST declares its scope as being humans only: "The guidelines cover identity proofing and authentication of users (such as employees, contractors, or private individuals) interacting with government IT systems over open networks" (NIST 2017, p.iii). It later qualifies the statement with "That said, these guidelines are written to refer to generic subjects wherever possible to leave open the possibility for applicability to devices" (p.5). It wanders from this commitment, e.g. in its definition of authentication as "Verifying the identity of a user, process, or device ..." (p.41, emphasis added), yet the specific term 'digital authentication' reverts to "the process of establishing confidence in user identities presented digitally to a system" (p.45), where 'user identities' by implication encompass only human users. Later, it defines subject as "A person, organization, device, hardware, network, software, or service" (p.55, emphases added), encompassing both humans and artefacts.

All references in NIST (2017) are to 'access to a digital service', and are non-specific about the nature of the service, defining access as "To make contact with one or more discrete functions of an online, digital service" (p.55). Such examples as are provided are limited to Data and Processes, without mention of Physical Resources or Actions in the Real World. Recent literature, on the other hand, is concerned with features of cyber-physical systems such as avionics, electricity grids and medical devices, aiming to combat opportunistic attacks on physical detector and actuator designs (e.g. Akhuseyinoglu & Joshi 2020). Several of these articles focus on the limited capacity of early-generation IoT devices to support effective security measures (e.g. Liu et al. 2020), with a particular focus on Wireless Body Area Network (WBAN) contexts supporting communications from and to implanted medical devices (Bu et al. 2019, Arfaoui et al. 2020).

The terms 'masquerade' and 'attack' are used, but generally the assumption appears to be made that safeguards against malbehaviour are by definition successful, and hence the model does not encompass imposters within the system. There is also no notion of limitation of the purpose, use, task or function being performed by an authorized user, and hence Permission Breach is also out-of-scope of the NIST guidance.

FIDO is motivated by the desire to overcome Man-in-the-Middle attacks, and is almost entirely concerned with the Authentication of client-devices to server-devices without the need for transmission of an Authenticator such as a password. On the client side, a user must demonstrate presence by gesture, making clear that the scope does not extend to autonomous artefacts. A service may specify stronger Authentication of the human user than a mere demonstration of presence, e.g. by locally-validated password/PIN, by challenge-response procedures, or using a locally-validated biometric. Those specifications, however, lie outside the scope of FIDO itself.

The comprehensiveness of Id/EM theory provides means whereby the many weaknesses in existing mechanisms can be recognised and addressed.

7.3 Conceptual Clarity

Id/EM theory features careful attention to details concerning each of the elements of a comprehensive model. This sub-section considers a number of specific elements and clusters of elements, in each case reviewing standards and guidance documents in light of the theory.

(a) Id/Entifier and Id/Entification

The foundational concepts of Id/Entity were discussed in an earlier sub-section. Two further, closely-related elements are the Id/Entifier, an Attribute or set of Attributes that distinguishes an Id/Entity from other Id/Entities in a particular context, and Id/Entification, the process whereby an Id/Entifier is associated with a particular Id/Entity.

IETF (2007) defines identification as "An act or process that presents an identifier to a system so that the system can recognize a system entity and distinguish it from other entities" (p.144). This is in some ways consistent with the Id/EM definition, but it fails to separate references to the Real World and the Abstract World, and conflates Physical Things with Virtual Things, and Entity with Identity.

The ISO 24760-1 standard was noted above as confusing the notions of Entity and Identity. An even stranger aspect of the standard is that, having defined 'identifier' as "attribute or set of attributes ... that uniquely characterizes an identity ... in a domain [or context]" (p.1), it defines 'identification' without reference to 'identifier'. Moreover, the definition of 'identification' as "process of recognizing an entity ... in a particular domain" (p.3) is immediately followed by "Note 1 to entry: The process of identification applies verification ..." (p.3). Given that 'verification' is defined as "process of establishing that identity information ... associated with a particular entity is correct" (p.3), the definitions not only conflate the notions of Identity and Entity but also the processes of Identification and Authentication.

NIST (2017) does not discuss 'identification' or 'identifier', despite the document's scope extending to "the online persona of a subject" (p.iv). The document does provide a definition of 'pseudonymous identifier', as "A meaningless but unique number that does not allow the [Relying Party (RP)] to infer anything regarding the subscriber but which does permit the RP to associate multiple interactions with the subscriber's claimed identity" (p.57), but the term is used only twice, there is no indication of the purpose for it, and no recognition of the related but stronger notion of anonymity.

In FIPS-201-3 (2022), identification is defined as "The process of discovering the identity (i.e., origin or initial history) of a person or item from the entire collection of similar persons or items" (p.98). This has some broad consistency with the approach in Id/EM theory. NIST defines an identifier as "Unique data used to represent a person's identity and associated attributes. A name or a card number are examples of identifiers" (p.98). This appears to exclude biometrics, and no term is suggested for the means of distinguishing a person from other people, as distinct from an identity from other identities.

(b) Authentication

In Id/EM theory, Authentication is a process that uses Evidence to establish a degree of confidence in the reliability of an Assertion. The theory rejects the epistemological assumption of 'accessible truth' in favour of a relativistic interpretation, with degrees of reliability (or of 'strength', as security theory expresses it). Truth / verification / proof / validation notions are applicable within tightly-defined mathematical models, but not in the Real World in which Identity Management is applied. The complexities inherent in schemes that inter-relate humans and artefacts are such that socio-technical perspectives are essential to understanding, and to effective analysis and design of IS.
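The contrast between a binary truth-judgement and a graded degree of confidence can be made concrete. The following sketch is merely illustrative: the evidence categories, reliability values, combination rule and thresholds are assumptions adopted for the purpose of the example, not elements of the theory.

```python
# A sketch of Authentication as a graded judgement rather than a
# truth-value: Evidence items are combined into a degree of confidence,
# which is then mapped onto an assurance level. All numbers are assumed.
import math
from dataclasses import dataclass
from enum import Enum

class Assurance(Enum):
    NONE = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3   # a high degree of confidence; never 'proof'

@dataclass
class Evidence:
    kind: str           # e.g. 'password', 'hardware-key', 'referee-statement'
    reliability: float  # the assessor's judgement, in [0.0, 1.0]

def authenticate(evidence: list[Evidence]) -> Assurance:
    """Combine items of Evidence into a degree of confidence in an Assertion.
    Treats items as independent; the thresholds are illustrative only."""
    confidence = 1.0 - math.prod(1.0 - e.reliability for e in evidence)
    if confidence >= 0.99:
        return Assurance.HIGH
    if confidence >= 0.9:
        return Assurance.MODERATE
    if confidence >= 0.5:
        return Assurance.LOW
    return Assurance.NONE

print(authenticate([Evidence('password', 0.7), Evidence('hardware-key', 0.97)]))
# -> Assurance.HIGH  (1 - 0.3 * 0.03 = 0.991)
```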

The IETF Security Glossary, on the other hand, defines 'authenticate' to mean "verify (i.e., establish the truth of) an attribute value claimed by or for a system entity or system resource" (2000, p.15, unchanged in 2007, p.26, emphasis added). Hence authentication is the process of verifying a claim. Yet an adjacent passage states that "An authentication process consists of [the] Identification step: Presenting the claimed attribute value (e.g., a user identifier) to the authentication subsystem [and the] Verification step: Presenting or generating authentication information ... that acts as evidence to prove the binding between the attribute and that for which it is claimed" (IETF 2007 p.26, emphases added). The inconsistency between the two aspects has become embedded in the language adopted by standards and guidance documents.

The problem is also inherent in ISO 24760-1, where 'verification' is defined as "process of establishing that identity information ... associated with a particular entity ... is correct", 'authentication' is defined as "formalized process of verification ... that, if successful, results in an authenticated identity ... for an entity" (p.3, emphases added), and 'identity proofing' (synonym: 'initial entity authentication') is defined as "verification ... based on identity evidence ..." (p.5, emphases added).

The ISO standard at one point appears to acknowledge that the 'accessible truth' postulate is inappropriate, by observing that authentication involves tests "to determine, with the required level of assurance, their correctness" (p.3, emphases added). On the other hand, a qualification to an absolute term like 'correctness' is incongruous. The inconsistency survived the standard's authoring, review and approval processes, but it is unclear what practitioners who consult the standard make of it. At the very least, the ambiguities appear likely to sow seeds of doubt, and cause confusions.

In NIST (2017), truth-related notions such as 'proofing', 'verification' and 'determination of validity' occur throughout the document. For example, the string 'verif' occurs more than 100 times in the document's 60 pp. The implication is that something approaching infallibility is achievable. However, a number of more circumspect expressions are also used in relation to the authentication process, such as 'reasonable risk based assurance' (p.2), and 'levels of assurance' and 'establishing confidence in' (in the definition of Digital Authentication on p.45).

Misinterpretations are invited by the conflicting definitions of the general term 'authentication' ("verifying the identity of a user, process, or device ...", p.41, emphasis added) and of the specific term 'digital authentication' ("the process of establishing confidence in user identities presented digitally to a system", p.45). The former uses a truth-related word, and the latter a practical and relativistic term. One encompasses both humans and artefacts, whereas the other refers only to users, which by implication means only human users. This is not just a minor editorial flaw, because the word 'authentication' appears 219 times in the document. The specific term accounts for 21 of them, but from the various contexts it appears likely that many others are actually intended to invoke the more specific concept.

NIST refers to the "classic paradigm" for authentication factors (what you know/have/are), without consideration of the Real-World nature of "what you are" and "what you have" compared with the Abstract-World nature of "what you know", and without distinguishing humans from active artefacts (NIST 2017, p.12). Further, NIST's definition of biometric characteristics ("unique personal attributes that can be used to verify the identity of a person who is physically present at the point of verification", on p.13) again conflates the notions of Physical Thing (modelled as an Entity) and Virtual Thing (modelled as an Identity). The quality of uniqueness is merely asserted without discussion, despite the assertion's shaky foundations and the challenges involved in gathering biometric data, and in comparing biometric samples gathered at different times under different conditions. Further confusion is created by a declaration that "In this volume, authenticators always contain a secret", followed three lines later by "Authentication factors classified as something you know are not necessarily secrets" (p.13).

There are many circumstances in which an Authenticator pre-exists its use to evaluate any particular Assertion, having been designed for purposes additional to, and even different from, the Authentication process. Examples include registries of people in controlled professions such as health care, driving licensing registries, and tertiary education testamurs. NIST appears to be concerned only with Authenticators generated specifically for the purpose of the Id/Entity Authentication process.

In the glossary, meanwhile, NIST defines biometrics as "automated recognition of individuals based on their biological and behavioral characteristics" (p.43, emphasis added). The term 'recognition' depicts biometrics as a tool for Identification (selecting 1-among-many) rather than for Authentication (which involves the more reliable process of 1-to-1 comparison) or, in NIST's terms, verification. Further, the text does not link the notion of biometrics to the term 'subject', and does not consider any equivalent to biometrics in the case of artefacts. The many and varied ambiguities arising from the imprecisions of the NIST text are bound to result in differing interpretations of various passages by different readers.
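The reliability difference between the two modes can be illustrated. In the following sketch, match_score() is a stand-in for a real biometric comparison, and the data is contrived purely to show how a 1-among-many search multiplies the opportunities for a false match.

```python
# A toy contrast between 1-to-1 Authentication and 1-among-many
# Identification. The 'templates' and scoring rule are stand-ins,
# not a real biometric scheme.
templates = {"alice": 0.92, "bob": 0.31, "carol": 0.88}

def match_score(sample: float, template: float) -> float:
    """Stand-in for a real biometric comparison (1.0 = identical)."""
    return 1.0 - abs(sample - template)

def authenticate_1_to_1(sample: float, claimed: str, threshold: float = 0.95) -> bool:
    """Authentication: a single comparison, against the claimed template only."""
    return match_score(sample, templates[claimed]) >= threshold

def identify_1_among_many(sample: float, threshold: float = 0.95) -> list[str]:
    """Identification: every enrolled template is a further opportunity
    for a false match, so reliability degrades as the collection grows."""
    return [who for who, t in templates.items()
            if match_score(sample, t) >= threshold]

print(authenticate_1_to_1(0.90, "alice"))   # True: one comparison, one decision
print(identify_1_among_many(0.90))          # ['alice', 'carol']: an ambiguous result
```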

NIST (2017) defines the general term 'authentication' as "verifying the identity of a user, process, or device ..." (p.41, emphasis added). It is clear that the undefined term 'user' is intended to refer only to humans, in that the examples provided are "employees, contractors, or private individuals" (p.iii) and "employees and contractors" (p.4). The contexts of all of the 48 occurrences of the term are consistent with the interpretation that computing devices and processes running in them are not within-scope, e.g. "this revision of these guidelines does not explicitly address device identity", and "specific requirements for issuing authenticators to devices when they are used in authentication protocols with people" are also excluded (p.5).

(c) Credential

In Id/EM theory, Credential means an Authenticator that carries the imprimatur of an Authentication Authority. This is consistent with dictionary definitions of the term.

The IETF set the industry off down an inappropriate path, defining the term as "A data object that is a portable representation of the association between an identifier and a unit of authentication information" (RFC4949 2007, p.84). This is consistent with IT industry usage of 'token', rather than with the notion of Credential.

The ISO 24760-1 standard invites confusions in this area. Despite defining the term 'evidence of identity', the document fails to refer to it when it defines credential, which is said to be "representation of an identity ... for use in authentication ... A credential can be a username, username with a password, a PIN, a smartcard, a token, a fingerprint, a passport, etc." (p.4). This muddles all of Evidence, Entity, Identity, Attribute, Identifier and Entifier, and omits any sense of a Credential being evidence of high reliability, having been issued or warranted by a Credential Authority.

The confusions generated by the standard are increased by the definition of 'identity proofing' as involving "a verification of provided identity information and can include uniqueness checks, possibly based on biometric techniques" (p.5, emphases added). A biometric cuts through all of a person's Identities, by providing evidence concerning the underlying Entity.

Early versions of NIST's guidance adopted the same approach to 'credential' as IETF. The confusion between credential and token was acknowledged in 2017, but the new version limits its use of credential solely to electronic means of associating an authenticator with a user (p.18). It also omits the notion of an authority. The NIST document makes the claim that "a credential binds an authenticator to the subscriber, via an identifier" (pp.15, 44). It defines a credential as "an object or data structure" that has the mandatory characteristic that it "authoritatively binds an identity ... to [an] authenticator" (NIST 2017 p.44, emphasis added). Unbreakable association may be achievable with artefacts; but the only way to implement it with humans is to reduce the human to an artefact, e.g. through chip-implantation. A more appropriate statement would be that 'a credential provides a considerable degree of confidence in an assertion, because it entails a combination of an assurance from a third party in which reliance is placed, together with a technical design that reliably associates the assertion with the party that is being authenticated'. Further uncertainty arises from ambiguity in the use of the expression "bound to". It appears to refer to binding between four different pairs of concepts, in the definition on p.43 and in passages on pp.10 and 14.

Another difficulty with the NIST approach is that a Credential may contain multiple items of information (e.g. a testamur evidencing not only the award of a degree, but also the units studied and the results achieved in each, any or all of which may be relevant in any particular context). The Credential is the testamur. The various Data-Items it contains may be Authenticators for particular Assertions as to Fact, or as to Identity.

(d) Token

In Id/EM theory, a Token is defined as "a recording medium on which useful Data is stored ...".

IETF (2007) deprecated the (early) use of the term 'token' in the 2004 version of NIST 800-63, recommending the use of the terms 'NIST hard token' for "a hardware device that contains a protected cryptographic key" and 'NIST one-time password device token' for "a personal hardware device that generates one-time passwords" (p.308).

ISO 24760-1, on the other hand, adopts the same approach to a Token as Id/EM theory: "The identity information represented by a credential can ... be printed on human-readable media, or stored within a physical token" (p.6). Immediately afterwards, however, it conflates the data with the storage-medium: "A credential can be ... a token" (p.6).

In NIST (2017), the previous editions' conflation of authenticators and tokens is acknowledged and removed (pp. 6, 42). However, no definition is provided of the term 'token', and hence ambiguity and the scope for confusion remain.

(e) Authorization

Authorization is defined in Id/EM theory as a process whereby an Authorization Authority decides whether or not to declare that an Actor has one or more Permissions in relation to a particular IS Resource or Physical Resource.

IETF (2007) defines authorization firstly as an Id/EM Permission, but secondly as "a process for granting approval to a system entity to access a system resource" (p.28). The Id/EM definition is very similar to that, but more precise (because it distinguishes the prior act of determining what Permissions an Actor has, from the later, operational activity of Access Control, which provides a Session in which those Permissions can be exercised).
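The separation the theory draws can be expressed in outline. The following is a minimal sketch with invented names: Authorization writes Permissions into an Account as a prior decision, and Access Control later consults that record, possibly many times, when establishing Sessions.

```python
# A sketch of the separation of Authorization (a prior decision,
# recorded as data) from Access Control (a later, operational step
# that may be performed many times per decision). All names assumed.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Permission:
    action: str     # e.g. 'read'
    resource: str   # an IS Resource or Physical Resource

@dataclass
class Account:
    actor: str
    permissions: set[Permission] = field(default_factory=set)

def authorize(account: Account, permission: Permission) -> None:
    """Authorization: the Authority's decision, recorded against the Account."""
    account.permissions.add(permission)

def open_session(account: Account, requested: set[Permission]) -> bool:
    """Access Control: checks previously recorded Permissions before
    providing a Session in which they can be exercised."""
    return requested <= account.permissions

acct = Account("nurse-0117")
authorize(acct, Permission("read", "ward-roster"))
print(open_session(acct, {Permission("read", "ward-roster")}))   # True
print(open_session(acct, {Permission("write", "ward-roster")}))  # False
```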

ISO 24760-1 makes almost no mention of authorization, appearing to treat it as being out-of-scope of 'A framework for identity management'.

NIST (2017) does not define the term authorization, and an ambiguous sentence appears to say that determination of the claimant's authorizations or access privileges is outside the guidelines' scope (p.10). However, no cross-reference to authorization guidance is provided. The document makes use of the word 'authorization' to refer to the output of a process: "authorizations or access privileges" (p.10). Its main use, however, is as part of the compound noun 'authorization decision', which refers to the final action in a process that determines what permissions a user is to be granted (pp. 9, 10, 16). An ungrammatical definition is provided of (the verb) 'authorize' to mean (the noun) "a decision to grant access, typically automated by evaluating a subject's attributes" (p.42).

As noted earlier, another NIST document defines "Access control or authorization [as] the decision to permit or deny a subject access to system objects (network, data, application, service, etc.)" (NIST800-162 2014, p.2). Id/EM theory separates the two notions, with Authorization being restricted to the making of the decision, and Access Control applying to its application, which may occur many times in respect of the outcome of any one Authorization decision-process.

The vagueness and ambiguity of the NIST guidance invites variable interpretations and hence variability in the implementations of IdM schemes. More precise definitions within a coherent model and associated terminology could greatly reduce the variability in interpretation and implementation, improve access control quality, and facilitate federation among schemes.

(f) Authorization Criteria

The primary factors influencing the making of decisions by Authorization Authorities are the Identity of the Actor, the basis of Identity-Based Access Control (IBAC); the Role the Actor is performing, in Role-Based schemes (RBAC); the Attributes of the Actor and of the Resource, in Attribute-Based schemes (ABAC); and the Task being undertaken, in Task-Based schemes (TBAC).

The intentions of those implementing IdM or Id/EM schemes are confronted by challenges of both a technical and an economic nature. IBAC approaches do not scale well. RBAC schemes sacrifice precision for economy. ABAC schemes are complex, require customisation, and are expensive. TBAC approaches appear to have achieved little market penetration.

In some contexts, RBAC offers a satisfactory trade-off among the objectives, while in others it manifestly falls short of the need. One issue is that RBAC was conceived at a time when most IS operated inside organisational boundaries. On the other hand, the notions of inter- and multi-organisational systems were already strongly in evidence, and extra-organisational systems extending out to individuals have been operational since c.1980 (Clarke 1992a). Yet the IETF Glossary of 2000, even after revision in 2007, defines role to mean "a job function or employment position" (IETF 2007, p.254).

Authorization is treated by ISO 24760-1 as being out-of-scope.

The NIST exposition on ABAC adopts the narrow view that "a role has no meaning unless it is defined within the context of an organization" (NIST800-162 2014, p.26). Further, although the document suggests that ABAC supports "arbitrary attributes of the user and arbitrary attributes of the [IS resource]" (p.vii), the only examples provided for actor-attributes in the entire 50-page document are position descriptions internal to an organisation: "a Nurse Practitioner in the Cardiology Department" (pp.viii, 10), "Non-Medical Support Staff" (p.10).

An assumption implicit in many interpretations of the NIST document is that a one-to-one relationship exists between organisational identity and organisational role. It is common for an Identity to have a primary organisational Role, in the form of a job-title and associated job description. However, employees and on-site contractors commonly have additional Roles, e.g. as a fire warden, as a mentor to a junior assistant, as a member of an interview panel. Particular Permissions are needed for each Role (e.g. for access to messages intended only for Fire Wardens, and for access to the personal details of mentees and of job-applicants).

A scheme that limits Role to organisational positions or even part-time functions is inherently incapable of coping with boundary-spanning applications of IT. From the earliest period of online inter-organisational systems (IOS), onwards to multi-organisational systems (MOS), and outwards to extra-organisational systems, large numbers of individuals with no more formalised role than customer, client, supplier, registrant or enquirer have been granted Permissions. Such systems, particularly when dealing with sensitive data or enabling direct action on the Real World, need far more sophistication in Access Control Lists than most RBAC implementations offer. Further, to the extent that Attributes rather than Roles are used as the organising concept, Attributes must also reflect the whole context. Id/EM theory accommodates these ideas.

(g) Permission Specificity

Id/EM theory defines a Permission as (a) an entitlement granted to an Id/Entity-Instance (b) to be provided with the capability to perform a specified Action (c) in relation to a specified IS Resource or Physical Resource, (d) for a particular Purpose, Use, Function or Task. This is a relatively expansive notion. It goes beyond the scope of the Identity-based IBAC approach, and Role-based RBAC. It extends aspects of the ABAC notion, by defining the 'environmental variables' recognised in that approach to extend to the placing of limitations on the reasons why an Actor may be granted access. It can also be seen as broadening the notion of Task in TBAC.
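A sketch of the four-part structure may assist. The field names below are invented to mirror elements (a)-(d) of the definition, and the comments relate each element to the approaches discussed above.

```python
# A sketch of the four-part Permission defined by Id/EM theory;
# field names and the example values are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Permission:
    id_entity_instance: str  # (a) the Id/Entity-Instance granted the entitlement
    action: str              # (b) the Action that may be performed
    resource: str            # (c) the IS Resource or Physical Resource concerned
    purpose: str             # (d) the Purpose, Use, Function or Task that justifies it

# IBAC captures only (a); RBAC substitutes a Role for (a); ABAC generalises
# (a)-(c) to attribute-expressions; element (d) is what places a limit on
# the reason for which the access may be exercised.
grant = Permission("identity:jbloggs/ward-7-nurse", "read",
                   "patient-record/summary", "triage")
```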

A common weakness in authorization schemes is inadequate attention to the granularity of Data and Processes and/or of Actions that are the subject of the Permission. Many Permissions are provided at a gross level, with entire records accessible, well in excess of the Data justified on the bases of the 'need-to-know principle' and the 'principle of least privilege'. Excessive scope of Permissions invites Permission Breach, in the forms of misappropriation of data and performance of functions for purposes other than those for which the Permissions were intended. This may be done out of self-interest (commonly: curiosity, electronic stalking of celebrities, or the identification and location of individuals), as a favour for a friend, or for a fee. The threats of insider attack and data breach are inadequately controlled by RBAC. Nor do the standards acknowledge the need for proactive and reactive safeguards and mitigation measures. This is hardly a new insight. See, for example, Clarke (1992b).

A further concern arises with implied powers. As Lupu & Sloman (1997) put it, "The RBAC object oriented model (Sandhu 1996) organises roles in a set/subset hierarchy where a senior role inherits access permissions from more junior roles" (p.135). Permission inheritance is a serious weakness. A superior does not normally have the need to exercise a subordinate's Permissions, and hence should not normally hold them. Access to such data is necessary for the performance of review and audit, but not for the performance of supervision.
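The effect of the quoted inheritance rule, and the reason it is treated here as a weakness, can be shown in a toy example; the roles and permissions below are invented.

```python
# A toy illustration of set/subset role inheritance: the senior role
# automatically acquires every permission of the roles beneath it,
# whether or not supervision actually requires them.
hierarchy = {                 # role -> roles it is senior to (assumed example)
    "manager": ["case-officer"],
    "case-officer": [],
}
direct_permissions = {
    "manager": {"approve-case"},
    "case-officer": {"read-client-record", "update-client-record"},
}

def effective_permissions(role: str) -> set:
    """Permissions held directly, plus everything inherited from juniors."""
    perms = set(direct_permissions.get(role, set()))
    for junior in hierarchy.get(role, []):
        perms |= effective_permissions(junior)   # the inheritance rule
    return perms

# The manager needs 'approve-case', but inheritance also delivers direct
# access to client records, which audit rather than supervision requires:
assert "read-client-record" in effective_permissions("manager")
```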

The inheritance problem is one instance of a more general issue. RBAC approaches are unable to accommodate the reason for access as a determinant of Permission. A related need, seen from an organisation's own perspective, is to limit access according to the general Function being performed (e.g. at a managerial or professional level) and/or the specific Task being undertaken (particularly at the level of operational or administrative staff-members). This may be looming as a larger issue in enterprise management, as the dream of 'data as the new oil' becomes complemented and perhaps even supplanted by notions of 'data risk' and 'toxic data'. There are already contexts in which a legal justification for access needs to be demonstrated. For example, in data protection law, access to personal data must be limited on the basis of the general Purpose and/or specific Use.

Currently, it appears to be uncommon for IdM schemes to embody even a proxy approach to safeguard against inappropriate access. For example, each User could be required to provide a brief declaration of the reason for each exercise of a Permission. In many IS, this need be no more than a Case-Id or other reference-number to a formal organisational register. Once that declaration is logged, along with the Username, Date-Time-Stamp, Record(s) accessed and Process(es) performed, a sufficient audit trail exists. That, plus the understanding that log-analysis is undertaken, anomalies are investigated, and sanctions for misuse exist and are applied, could act as a substantial deterrent safeguard against Users abusing their Permissions, and as an enabler of ex post facto detection, investigation and correction safeguards.
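A sketch of such a proxy safeguard follows. The record layout and all names are assumptions; the essential points are that a declared reason must accompany each exercise of a Permission, and that the declaration is logged for later analysis.

```python
# A minimal sketch of the declared-reason safeguard described above:
# a Case-Id referencing a formal organisational register is required
# before access, and the declaration is logged with the other details.
from datetime import datetime, timezone

audit_log = []

def exercise_permission(username: str, case_id: str,
                        records: list[str], processes: list[str]) -> None:
    """Refuse access unless a reason is declared; log the declaration."""
    if not case_id:
        raise PermissionError("a reason (Case-Id) must be declared")
    audit_log.append({
        "username": username,
        "case_id": case_id,     # reference to a formal organisational register
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "records": records,
        "processes": processes,
    })
    # ... the access itself would be performed here ...

exercise_permission("jbloggs", "CASE-2024-0417",
                    ["client/9021"], ["read-contact-details"])
```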

Inadequately specific Permissions invite Permission Breach by authorized Users. In addition, they enable Imposters to more easily gain access to relatively broad Permissions. Id/EM theory encompasses this, whereas none of IETF, NIST, ISO or FIDO embody such limitations.

(h) Access Control

In Id/EM theory, Access Control is defined as "a process that utilises previously recorded Permissions to establish a Session that enables a User to exercise the appropriate Permissions".

ISO 24760-1 states that "identity-based decisions can concern access to applications or other resources" (p.v), and mentions that the function of what Id/EM theory refers to as Enrolment is "to enable the entity to access resources and interact with services provided by a domain" (p.15). However, it defers access control matters to ISO/IEC 29146 'A framework for access management'.

The sole occurrence of the expression 'Access Control' in NIST (2017) is in an Abbreviations list for ABAC (Attribute Based Access Control) (p.58). It is implicit, however, that permissions granted to a subscriber are exercised in a 'session', defined as "a persistent interaction between a subscriber and an endpoint ..." (p.53), and hence access control can be reasonably inferred to be the step that enables a session to come into existence, or the function of managing a subscriber's access to a session.

The lack of guidance in NIST (2017) in relation to the use of Permissions granted in an Authorization process creates uncertainties. These would have been relieved if a cross-reference had been provided to an appropriate document on the subject. A variety of ambiguities and uncertainties would still remain, however, and these could be addressed by adapting the model and terminology in the NIST document by reference to those in the theory presented in this article.

The terminology used in expressing Id/EM theory was devised with the intention of achieving completeness, consistency and coherence. If a similarly careful approach were adopted in revising the ISO and/or NIST model, terminology and guidance, many interpretation difficulties would be surfaced and addressed, and the next revisions would deliver far greater clarity and invite far fewer misunderstandings.

7.4 Architecture and Process Flow

Id/EM theory includes a generic process model, depicted in Figure 6, to distinguish two Phases, of respectively four and three steps, with Data stored in an Account intermediating between the core Authorization and Access Control activities, and with all terms defined in a clear and consistent manner, intended to minimise syntactic and semantic ambiguities.

Depictions of conventional authorization theory exhibit many variants and inconsistencies in architecture, process flow, terminology and definitions. Even at the most abstract level, there are considerable differences in interpretations of the notions of identity management and access control. The descriptions of identification, authentication and authorization functions evidence many overlaps. An early example of the conflation of the identification and authentication processes is "An authentication process consists of two basic steps: Identification step: Presenting the claimed attribute value (e.g., a user identifier) [and] Verification step ..." (IETF 2007 p.27, emphases added). Similar problems are evident in the international standard, as depicted in Figure 1. It states that "The process of identification applies verification to claimed or observed attributes" (ISO24760-1, p.3, emphases added).

The NIST document provides limited guidance in relation to the process, with the diagrammatic version of the Digital Identity Model showing three players (a 'relying party' seeking to authenticate a subject claiming to have permissions, a service that authenticates that claim, and a credential service provider) and three presentations of the subject (as the claimant of permissions, but previous to that as an applicant and, once pre-authenticated, as a subscriber). The textual description goes somewhat further towards a process definition, saying "the left side of the diagram shows the enrollment, credential issuance, lifecycle management activities, and various states of an identity proofing and authentication process" and "the right side of [the diagram] shows the entities and interactions involved in using an authenticator to perform digital authentication" (pp.10-11). The preliminary phase is referred to as Enrollment and Identity Proofing, and the second phase is called Digital Authentication, with little clarification of the phases' sub-structure (pp.10-12). Moreover, authorization is sometimes described in ways that suggest it occurs at the time a user is provided with a session in which they can exercise their permissions; whereas its primary usage has to do with a preparatory act: the making of a decision about what permissions a user is to be granted, resulting in data being created to enable effective and efficient performance during the operational phase.

Id/EM theory brings order to the use of terms and to the distinctions between the many functions that need to be performed, and avoids the ambiguities, uncertainties and divergent interpretations that arise from the NIST presentation.


8. Conclusions

The purpose of the research reported in this article was to contribute to improved IS practice and practice-oriented IS research in relation to the authorization process, within its broader context of identity management. The analysis has demonstrated that conventional theory relating to Identity Management (IdM) embodies inadequate modelling of the relevant real-world phenomena, internal inconsistencies, unhelpful terms, and confused definitions. It has demonstrated that by extending a previously-published pragmatic metatheoretic model (PMM), those inadequacies and inconsistencies can be overcome.

The practice of IdM since 2000 has been undermined by the many flaws in the underlying theory. The replacement Id/EM framework defined in this article provides a reference-point against which existing practices and designs can be reviewed, and consideration given to adaptations to address their weaknesses. To the extent that practices and designs are not capable of adaptation, the replacement theory supports the alternative approach of quickly and cleanly conceiving and implementing replacement products and services. Developments of such kinds would bring with them the opportunity for upgrading or replacement of defective standards, both internationally (ISO/IEC) and nationally (e.g. NIST/FIPS).

The benefits of substantial changes in this field would accrue to all stakeholders. Organisations would achieve greater effectiveness in their operations and better manage business risks, and could do so in an efficient manner, by authenticating the Assertions that actually matter. Individuals would be relieved of the intrusions and inconveniences that are unnecessary or disproportionate to the need, and subjected only to the effort, inconvenience and costs that are justified by the nature of their interactions and dependencies. For this to be achieved, this research needs to be applied in the field, and the theory used as a lens by theorists, standards-producers, public policy organisations, designers and service-providers.


Reference List

Aboukadri S., Ouaddah A. & Mezrioui A. (2024) 'Machine learning in identity and access management systems: Survey and deep dive' Computers & Security 139 (2024) 103729

Akhuseyinoglu B. & Joshi J. (2020) 'A constraint and risk-aware approach to attribute-based access control for cyber-physical systems' Computers & Security 96 (September 2020) 101802

ANSI (2012) 'Information Technology - Role Based Access Control' INCITS 359-2012, American National Standards Institute, 2012

Arfaoui A., Boudia O.R.M., Kribeche A., Senouci A.-M. & Hamdi M. (2020) 'Context-aware access control and anonymous authentication in WBAN' Computers & Security 88 (January 2020) 101496

Baumer E.P.S. (2015) 'Usees' Proc. 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI'15), April 2015, at http://ericbaumer.com/2015/01/07/usees/

Berleur J. & Drumm J. (Eds.) (1991) 'Information Technology Assessment' Proc. 4th IFIP-TC9 International Conference on Human Choice and Computers, Dublin, July 8-12, 1990, Elsevier Science Publishers (North-Holland), 1991

Brennen S. & Kreiss D. (2016) 'Digitalization and Digitization' International Encyclopedia of Communication Theory and Philosophy, October 2016, PrePrint at http://culturedigitally.org/2014/09/digitalization-and-digitization/

Bu L., Karpovsky M.G. & Kinsy M.A. (2019) 'Bulwark: Securing implantable medical devices communication channels' Computers & Security 86 (September 2019) 498-511

Cameron K. (2005) 'The Laws of Identity' Microsoft, 2005, at https://www.identityblog.com/stories/2005/05/13/TheLawsOfIdentity.pdf

Clarke R. (1992a) 'Extra-Organisational Systems: A Challenge to the Software Engineering Paradigm' Proc. IFIP World Congress, Madrid, September 1992, PrePrint at http://www.rogerclarke.com/SOS/PaperExtraOrgSys.html

Clarke R. (1992b) 'Practicalities of Keeping Confidential Information on a Database With Multiple Points of Access: Technological and Organisational Measures' Invited Paper for a Seminar of the Independent Commission Against Corruption (ICAC) of the State of N.S.W. on 'Just Trade? A Seminar on Unauthorised Release of Government Information', Sydney Opera House, 12 October 1992, at http://rogerclarke.com/DV/PaperICAC.html

Clarke R. (1994a) 'The Digital Persona and Its Application to Data Surveillance' The Information Society 10,2 (June 1994), PrePrint at http://www.rogerclarke.com/DV/DigPersona.html

Clarke R. (1994b) 'Human Identification in Information Systems: Management Challenges and Public Policy Issues' Information Technology & People 7,4 (December 1994) 6-37, PrePrint at http://www.rogerclarke.com/DV/HumanID.html

Clarke R. (2002a) 'Why Do We Need PKI? Authentication Re-visited' Proc. 1st Annual PKI Research Workshop, at NIST, Gaithersburg MD, April 24-25, 2002, at http://rogerclarke.com/EC/PKIRW02.html

Clarke R. (2002b) 'e-Consent: A Critical Element of Trust in e-Business' Proc. 15th Bled Electronic Commerce Conference, Bled, Slovenia, June 2002, PrePrint at http://www.rogerclarke.com/EC/eConsent.html

Clarke R. (2004) 'Identity Management: The Technologies, Their Business Value, Their Problems, Their Prospects' Xamax Consultancy Pty Ltd, March 2004, ISBN 0-9589412-3-8, 66pp., at http://www.xamax.com.au/EC/IdMngt.html

Clarke R. (2009) 'A Sufficiently Rich Model of (Id)entity, Authentication and Authorisation' Proc. IDIS 2009 - The 2nd Multidisciplinary Workshop on Identity in the Information Society, LSE, London, 5 June 2009, at http://www.rogerclarke.com/ID/IdModel-090605.html

Clarke R. (2014) 'Promise Unfulfilled: The Digital Persona Concept, Two Decades Later' Information Technology & People 27, 2 (Jun 2014) 182-207, PrePrint at http://www.rogerclarke.com/ID/DP12.html

Clarke R. (2019) 'Risks Inherent in the Digital Surveillance Economy: A Research Agenda' Journal of Information Technology 34,1 (Mar 2019) 59-80, PrePrint at http://www.rogerclarke.com/EC/DSE.html

Clarke R. (2021) 'A Platform for a Pragmatic Metatheoretic Model for Information Systems Practice and Research' Proc. Austral. Conf. Infor. Syst, December 2021, PrePrint at http://rogerclarke.com/ID/PMM.html

Clarke R. (2022) 'A Reconsideration of the Foundations of Identity Management' Proc. Bled eConference, June 2022, PrePrint at http://rogerclarke.com/ID/IDM-Bled.html

Clarke R. (2023a) 'The Theory of Identity Management Extended to the Authentication of Identity Assertions' Proc. 36th Bled eConference, June 2023, PrePrint at http://rogerclarke.com/ID/IEA-Bled.html

Clarke R. (2023b) 'How Confident are We in the Reliability of Information? A Generic Theory of Authentication to Support IS Practice and Research' Proc. ACIS'23, December 2023, PrePrint at http://rogerclarke.com/ID/PGTA.html

Clarke R. (2024) 'A Pragmatic Model of (Id)Entity Management (IdEM) — Glossary' Xamax Consultancy Pty Ltd, April 2024, at http://rogerclarke.com/ID/IDM-G.html

Coiera E. & Clarke R. (2002) 'e-Consent: The Design and Implementation of Consumer Consent Mechanisms in an Electronic Environment' J Am Med Inform Assoc 11,2 (Mar-Apr 2004) 129-140, at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC353020/

Cuellar M.J. (2020) 'The Philosopher's Corner: Beyond Epistemology and Methodology - A Plea for a Disciplined Metatheoretical Pluralism' The DATABASE for Advances in Information Systems 51, 2 (May 2020) 101-112

Davis G.B. (1974) 'Management Information Systems: Conceptual Foundations, Structure, and Development' McGraw-Hill, 1974

FIDO (2022) 'User Authentication Specifications Overview' FIDO Alliance, 8 December 2022, at https://fidoalliance.org/specifications/

FIPS-201-3 (2022) 'Personal Identity Verification (PIV) of Federal Employees and Contractors' [US] Federal Information Processing Standards, January 2022, at https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.201-3.pdf

Fischer-Huebner S. (2001) 'IT-Security and Privacy: Design and Use of Privacy-Enhancing Security Mechanisms' LNCS Vol. 1958, Springer, 2001, at https://link.springer.com/content/pdf/10.1007/3-540-45150-1.pdf?pdf=button

Fischer-Huebner S. & Lindskog H. (2001) 'Teaching Privacy-Enhancing Technologies' Proc. IFIP WG 11.8 2nd World Conf. on Information Security Education, Perth, Australia

Hovav A. & Berger R. (2009) 'Tutorial: Identity Management Systems and Secured Access Control' Communications of the Association for Information Systems 25 (2009) 42, at https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=6aa2f2d240eb8f171456b1b79666d4c0ab80b89e

IETF (2000) 'Internet Security Glossary', Internet Engineering Task Force, RFC2828, May 2000, at http://www.ietf.org/rfc/rfc2828.txt

IETF (2007) 'Internet Security Glossary, Version 2', Internet Engineering Task Force, RFC4949, August 2007, at http://www.ietf.org/rfc/rfc4949.txt

ISO 22600-1:2014 'Health informatics - Privilege management and access control - Part 1: Overview and policy management' International Standards Organisation TC 215 Health informatics, 2014

ISO 22600-2:2014 'Health informatics - Privilege management and access control - Part 2: Formal models' International Standards Organisation TC 215 Health informatics, 2014

ISO/IEC 24760-1 (2019) 'A Framework for Identity Management - Part 1: Terminology and concepts' International Standards Organisation SC27 IT Security techniques, 2019, at https://standards.iso.org/ittf/PubliclyAvailableStandards/c077582_ISO_IEC_24760-1_2019(E).zip

ISO/IEC 24760-2 (2017) 'A Framework for Identity Management - Part 2: Reference architecture and requirements' International Standards Organisation SC27 IT Security techniques, 2017

ISO/IEC 27001 (2018) 'Information technology - Security techniques - Information security management systems - Overview and vocabulary' International Standards Organisation, 2018, at https://standards.iso.org/ittf/PubliclyAvailableStandards/c073906_ISO_IEC_27000_2018_E.zip

ISO/IEC 24760-3 (2019) 'A Framework for Identity Management - Part 3: Practice' International Standards Organisation SC27 IT Security techniques, 2019

ISO/IEC 29146 (2024) 'Information technology - Security techniques - A framework for access management' International Standards Organisation, 2024, at https://www.iso.org/obp/ui/en/#iso:std:iso-iec:29146:ed-2:v1:en

ITG (2023) 'List of Data Breaches and Cyber Attacks' IT Governance Blog, monthly, at https://www.itgovernance.co.uk/blog/category/monthly-data-breaches-and-cyber-attacks

Iyer P., Masoumzadeh A. & Narendran P. (2021) 'On the Expressive Power of Negated Conditions and Negative Authorizations in Access Control Models' Computers & Security 116 (May 2022) 102586

Josang A. (2017) 'A Consistent Definition of Authorization' Proc. Int'l Wksp on Security and Trust Management, 2017, pp.134-144

Josang A. & Pope S. (2005) 'User Centric Identity Management' Proc. Conf. AusCERT, 2005, at https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=58c591293f05bb21aa19d71990dbdda642fbf99a

Karjoth G., Schunter M. & Waidner M. (2002) 'Platform for Enterprise Privacy Practices: Privacy-enabled Management of Customer Data' Proc. 2nd Workshop on Privacy Enhancing Technologies, Lecture Notes in Computer Science. Springer Verlag, 2002, at https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=9938422ed2c8b8cb045579f616e21f18b89c8e36

Karyda M. & Mitrou L. (2016) 'Data Breach Notification: Issues and Challenges for Security Management' Proc. 10th Mediterranean Conf. on Infor. Syst., Cyprus, September 2016, at https://www.researchgate.net/profile/Maria-Karyda/publication/309414062_DATA_BREACH_NOTIFICATION_ISSUES_AND_CHALLENGES_FOR_SECURITY_MANAGEMENT/links/580f4b4608aef2ef97afc0b2/DATA-BREACH-NOTIFICATION-ISSUES-AND-CHALLENGES-FOR-SECURITY-MANAGEMENT.pdf

Li N., Mitchell J.C. & Winsborough W.H. (2002) 'Design of a Role-based Trust-management Framework' IEEE Symposium on Security and Privacy, May 2002, pp.1-17, at https://web.cs.wpi.edu/~guttman/cs564/papers/rt_li_mitchell_winsborough.pdf

Liu H., Li J. & Gu D. (2020) 'Understanding the security of app-in-the-middle IoT' Computers & Security 97 (October 2020) 102000

Lupu E. & Sloman M. (1997) 'Reconciling Role Based Management and Role Based Access Control' Proc. ACM/NIST Workshop on Role Based Access Control, 1997, pp.135-141, at https://dl.acm.org/doi/pdf/10.1145/266741.266770

Michael M.G. & Michael K. (2014) 'Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies' IGI Global, 2014, at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.643.3519&rep=rep1&type=pdf

Moore M. & Tambini D. (eds.) (2018) 'Digital Dominance: The Power of Google, Amazon, Facebook, and Apple' Oxford University Press, 2018

Myers M.D. (2018) 'The philosopher's corner: The value of philosophical debate: Paul Feyerabend and his relevance for IS research' The DATA BASE for Advances in Information Systems 49, 4 (November 2018) 11-14

NIST800-63-3 (2017) 'Digital Identity Guidelines' National Institute of Standards and Technology, 2017, at https://doi.org/10.6028/NIST.SP.800-63-3

NIST800-63-3A (2017) 'Digital Identity Guidelines: Enrollment and Identity Proofing' National Institute of Standards and Technology, 2017, at https://doi.org/10.6028/NIST.SP.800-63a

NIST800-63-3B (2017) 'Digital Identity Guidelines: Authentication and Lifecycle Management' National Institute of Standards and Technology, 2017, at https://doi.org/10.6028/NIST.SP.800-63b

NIST800-63-3C (2017) 'Digital Identity Guidelines: Federation and Assertions' National Institute of Standards and Technology, 2017, at https://doi.org/10.6028/NIST.SP.800-63c

NIST800-63-4 (2022) 'Digital Identity Guidelines: Initial Public Draft' Special Publication SP 800-63-4 ipd, US National Institute of Standards and Technology, December 2022, at https://doi.org/10.6028/NIST.SP.800-63-4.ipd

NIST800-162 (2014) 'Guide to Attribute Based Access Control (ABAC) Definition and Considerations' NIST Special Publication 800-162, National Institute of Standards and Technology, updated to February 2019, at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-162.pdf

Pernul G. (1995) 'Information Systems Security - Scope, State-of-the-art and Evaluation of Techniques' International Journal of Information Management 15,3 (1995) 165-180

Pfitzmann A. & Hansen M. (2006) 'Anonymity, Unlinkability, Unobservability, Pseudonymity, and Identity Management - A Consolidated Proposal for Terminology' May 2009, at https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=76296a3705d32a16152875708465c136c70fe109

Poehn D. & Hommel W. (2022) 'TaxIdMA: Towards a Taxonomy for Attacks related to Identities' Proc. 17th Int'l Conf. on Availability, Reliability and Security (ARES 2022), August 2022, Vienna, at https://arxiv.org/pdf/2301.00443.pdf

Pritee Z.T., Anik M.H., Alam S.B., Jim R.J., Kabir M.M. & Mridha M.F. (2024) 'Machine learning and deep learning for user authentication and authorization in cybersecurity: A state-of-the-art review' Computers & Security 140 (2024) 103747

Sandhu R.S., Coyne E.J., Feinstein H.L. & Youman C.E. (1996) 'Role-Based Access Control Models' IEEE Computer 29,2 (February 1996) 38-47, at https://csrc.nist.gov/csrc/media/projects/role-based-access-control/documents/sandhu96.pdf

Schlaeger C., Priebe T., Liewald M. & Pernul G. (2007) 'Enabling Attribute-based Access Control in Authentication and Authorisation Infrastructures' Proc. Bled eConference, June 2007, pp.814-826

Sindiren E. & Ciylan B. (2019) 'Application model for privileged account access control system in enterprise networks' Computers & Security 83 (June 2019) 52-67

Stallings W. & Brown L. (2015) 'Computer Security: Principles and Practice' Pearson, 3rd Ed., 2015

Thomas R.K. & Sandhu R.S. (1997) 'Task-based Authorization Controls (TBAC): A Family of Models for Active and Enterprise-oriented Authorization Management' Proc. IFIP WG11.3 Workshop on Database Security, Lake Tahoe Cal., August 1997, at https://profsandhu.com/confrnc/ifip/i97tbac.pdf

Zhao J. & Su Q. (2024) 'A threshold traceable delegation authorization scheme for data sharing in healthcare' Computers & Security 139 (2024) 103686

Zhong J., Bertok P., Mirchandani V. & Tari Z. (2011) 'Privacy-Aware Granular Data Access Control For Cross-Domain Data Sharing' Proc. Pacific Asia Conf. Infor. Syst. 2011, 226


Acknowledgements

This paper builds on prior publications by the author on the topic of identity authentication, including Clarke (1994b, 2004, 2009), and several specific contributions in relation to PMM and GTA, in papers cited within the article. An early version of the theory in sections 3-6 of this article was presented at ACIS'23, Wellington, in December 2023. This much-expanded article benefited considerably from the comments and questions of the formal reviewers of the previous papers, and the participants in the presentation sessions.


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professorial Fellow associated with UNSW Law & Justice, and a Visiting Professor in the School of Computing at the Australian National University.


