Trustworthy and responsible AI

08 January 2024


Customers are investing significant amounts in AI and AI readiness, but often struggle to demonstrate the trustworthiness of the emerging solutions to key stakeholders. This is the trust gap that DNV seeks to close with the launch of new recommended practices, according to Remi Eriksen, DNV Group president and CEO, on the publication of an AI assurance guide (available via www.is.gd/uwozoq), summarised here.

The term artificial intelligence (AI) is used broadly to refer to technologies that generate content, predictions, recommendations, or decisions based on learnt transformations of data or other forms of automated reasoning. AI technologies are applied in every domain of society and the economy, and hold vast potential to improve lives, advance businesses, and tackle global challenges. At the same time, AI may also contribute to harm, perpetuate biases, introduce security vulnerabilities, and raise other ethical and societal concerns. The presence of new risks and vulnerabilities creates a need for trust in AI-enabled systems and among stakeholders. Actors involved in developing, deploying, and using AI must act with prudence and manage risks so that value is created and protected for all stakeholders of AI-enabled systems.

As AI continues to evolve, there seems to be a growing consensus that some form of regulation will be necessary to address the associated risks and make sure that people can trust the AI they are using. This recommended practice describes how assurance can be used to build warranted trust in the capabilities of AI-enabled systems, but it does not provide any legal advice to ensure regulatory compliance.

The term AI-enabled system is used to denote any system that contains or relies on one or more AI components. An AI component refers to a distinct unit of software that performs a specific function or task within an AI-enabled system, and consists of a set of AI elements. These elements are the models, data and algorithms which, through implementation, create an AI component. A model can, for example, be a machine learning model, in which case the algorithms are those used for training and inference. Implementation refers to the choice of programming language, hardware or platform, and other software dependencies.
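This decomposition can be illustrated in code. The sketch below is purely illustrative; the class and field names are hypothetical assumptions chosen to mirror the terminology above, not a data model defined by DNV's recommended practice.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: hypothetical names mirroring the terminology in the text,
# not an API or data model defined by the DNV recommended practice.

@dataclass
class AIElements:
    model: str               # e.g. a machine learning model
    data: str                # the data used to train and validate the model
    algorithms: List[str]    # e.g. the training and inference algorithms
    implementation: str      # programming language, hardware/platform, dependencies

@dataclass
class AIComponent:
    function: str            # the specific task the component performs in the system
    elements: AIElements

@dataclass
class AIEnabledSystem:
    name: str
    ai_components: List[AIComponent] = field(default_factory=list)
    other_components: List[str] = field(default_factory=list)

# Hypothetical example: a web store whose recommendation engine is the AI component.
web_store = AIEnabledSystem(
    name="web store",
    ai_components=[
        AIComponent(
            function="product recommendation",
            elements=AIElements(
                model="collaborative-filtering model",
                data="customer purchase history",
                algorithms=["gradient-descent training", "top-k inference"],
                implementation="Python service running on a cloud platform",
            ),
        )
    ],
    other_components=["catalogue database", "checkout service"],
)
```

Viewing a system this way makes explicit which elements (model, data, algorithms, implementation) an assurance activity would need to examine for each AI component.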

The feature that distinguishes AI components from other units of software is their ability to mimic human-like cognitive functions such as learning, reasoning, perception, and decision-making. Some common examples of systems with AI components include:

  • customer support systems using chatbots based on natural language processing (NLP)
  • web stores using AI recommendation systems
  • driving assist systems relying on AI computer vision components.

    ASSURANCE

    When assuring an AI-enabled system, the focus is on understanding how AI components affect the rest of the system and its stakeholders. This depends on both the characteristics and the agency of the AI components.

    The characteristics of an AI component that matter to stakeholders are context-specific and depend on what the AI component is used for and how it is used together with other AI and non-AI components in an AI-enabled system. Characteristics of AI components include accuracy and robustness, how well the AI components can be interpreted by humans, and how well decisions made using AI can be explained. They also include properties affecting potential bias or issues related to privacy or security. The effect of these characteristics, that is, how they affect stakeholders, can only be understood at the system level.

    At the system level, the AI component interacts with other components and subsystems, such as users, operators, sensors, devices, or other digital services. Passive parties such as bystanders may also be affected and should be considered as part of the system’s environment. How the AI component affects other parts of the system, that is, how the AI component characteristics manifest at the system level, is also determined by how it is used and its agency.

    Aspects of an AI component’s agency include: how the AI component interacts with or is used by humans and the system of which it is part; the authority of the AI component in relation to humans and other components in the AI-enabled system; and the autonomy of the AI component, that is, the extent to which it is self-directing and able to make decisions or act solely on the basis of its perceptions, without human supervision and control.

    An AI component’s agency increases with the authority and autonomy it has and decreases with the degree of human oversight. This does not mean that less authority and autonomy and more human oversight always reduce risk. Often, AI components perform tasks that are difficult to perform by other means, or they may perform tasks better or faster than humans or other technologies can. In such cases, a higher degree of agency for AI components may reduce the overall associated risk.

    TRUSTWORTHY AND RESPONSIBLE AI

    AI technologies have vast potential to advance business, improve lives, and tackle global challenges. Realising this potential requires AI-enabled systems to be trustworthy and managed responsibly, meaning that the legitimate interests of relevant stakeholders are adequately safeguarded.

    Governance is a way of steering various actors in society to align with public interest and to promote collective action. Governance imposes accountability mechanisms to regulate the behaviour of companies, align business practices with widely held ethical principles, and enable societal trust. However, the frameworks for AI governance differ among regions and industries. Furthermore, AI-enabled systems transcend borders, and the rapid advancements in AI technology are outpacing the evolution of regulations and standards.

    Here, trustworthy AI refers both to the characteristics of the AI-enabled system and to social aspects of the assurance process, including involving stakeholders in assurance, prioritising among system characteristics, and negotiating trade-offs.

    Responsible management of AI, on the other hand, refers to the ethical and societal considerations that are taken into account during the entire lifecycle (for example, the design, development, and deployment) of AI-enabled systems. Responsible management of AI entails that the use of AI promotes and safeguards the values of society. For the purpose of this recommended practice, the values of society are represented by a set of core ethical principles including beneficence, non-maleficence, autonomy, justice, and explicability.

    BOX: AI ASSURANCE PRINCIPLES

    DNV’s overall approach to the assurance of AI-enabled systems is based on six principles, listed below.

    1. Stakeholder focus (to identify stakeholders and place focus on what matters to them)

    2. Evidence-based argumentation (to structure and present relevant knowledge about the system in scope; a sketch of such an argument follows this list)

    3. Systems approach (to uncover emergent behaviour and properties of AI-enabled systems in different contexts, which is needed to identify risks)

    4. Risk-based approach (to guide what needs to be assured based on the presence of uncertainty and knowledge gaps and potential effects on stakeholders’ interests)

    5. Modularity (to manage the complexity of AI-enabled systems, to structure assurance arguments, and to facilitate collaboration among stakeholders)

    6. Lifecycle perspective (to ensure that stakeholder interests are respected throughout the entire lifecycle of the AI-enabled system)
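The evidence-based argumentation and modularity principles can be made more tangible with a small sketch of an assurance argument, in which a system-level claim is broken into component-level sub-claims backed by evidence. The structure, class names and example content below are illustrative assumptions, not the notation used in DNV's recommended practice.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a modular, evidence-based assurance argument.
# The class names and example content are assumptions for illustration,
# not a structure defined by the DNV recommended practice.

@dataclass
class Evidence:
    description: str          # e.g. a test report, audit finding, or analysis

@dataclass
class Claim:
    statement: str
    evidence: List[Evidence] = field(default_factory=list)
    sub_claims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """Deliberately simple rule: a claim counts as supported if it has direct
        evidence or sub-claims, and all of its sub-claims are recursively supported."""
        own_support = bool(self.evidence) or bool(self.sub_claims)
        return own_support and all(c.is_supported() for c in self.sub_claims)

# Hypothetical example: a top-level claim decomposed into modular sub-claims.
top = Claim(
    statement="The driving-assist system is acceptably safe for its intended use",
    sub_claims=[
        Claim(
            statement="The computer-vision component detects pedestrians reliably",
            evidence=[Evidence("validation results on a held-out test set")],
        ),
        Claim(
            statement="Human oversight is effective when the component is uncertain",
            evidence=[Evidence("operator-in-the-loop trial report")],
        ),
    ],
)

print(top.is_supported())  # True for this illustrative example
```

Because each sub-claim is self-contained, such an argument can be assembled and reviewed module by module, which is what the modularity principle aims to support.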

    DNV’s document also gives guidance on developing a suitable probabilistic model, which can be used to evaluate how uncertainty regarding components and subsystems affects claims about the system. It also discusses knowledge strength, which is used to justify confidence in claims.
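The guide itself describes how to build such a model; as a minimal sketch of the underlying idea, suppose a system-level claim holds only if the claims about each component hold, and treat the component claims as independent so their probabilities multiply. The structure and numbers below are hypothetical assumptions, not figures or methods from the DNV document.

```python
# Minimal sketch under stated assumptions (not DNV's method): a system-level
# claim that holds only if all component-level claims hold, with the component
# claims treated as independent so their probabilities simply multiply.

component_claim_probabilities = {
    "computer-vision component meets its accuracy target": 0.95,  # hypothetical
    "sensor inputs are robust to dropout and noise": 0.90,        # hypothetical
    "human-machine interface behaves as specified": 0.99,         # hypothetical
}

p_system_claim = 1.0
for claim, p in component_claim_probabilities.items():
    p_system_claim *= p

print(f"P(system-level claim holds) = {p_system_claim:.3f}")  # 0.846
```

In practice component claims are rarely independent and the probabilities themselves are uncertain, which is why the guide pairs such a model with an assessment of knowledge strength before confidence in a claim is justified.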

    Operations Engineer
