Artificial Intelligence - What Is Explainable AI Or XAI?


Explainable AI (XAI) refers to approaches and design decisions used in automated systems so that the outputs of artificial intelligence and machine learning follow a logic that humans can understand and explain.
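One simple illustration of this idea is an inherently interpretable model whose output can be decomposed into per-feature contributions. The sketch below is a hypothetical linear scoring model (the feature names and weights are invented for illustration, not drawn from any real system):

```python
# A minimal sketch of one XAI approach: an interpretable linear model
# whose decision can be decomposed into per-feature contributions.
# WEIGHTS, BIAS, and the feature names are hypothetical.

WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
BIAS = 0.1

def score(applicant: dict) -> float:
    """Return the model's raw decision score for an applicant."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Decompose the score into human-readable per-feature contributions."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 2.0, "debt": 1.0, "years_employed": 3.0}
print(score(applicant))    # the overall decision score
print(explain(applicant))  # which features pushed the score up or down
```

Because every contribution is visible, a human reviewer can state exactly why the model scored an applicant as it did, which is the property that opaque "black box" models lack.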

The extensive use of algorithmically assisted decision-making in social contexts has raised considerable concern that unintended prejudice and bias may be encoded in the resulting choices.

Furthermore, the application of machine learning in domains that need a high degree of accountability and transparency, such as medicine or law enforcement, emphasizes the importance of outputs that are easy to understand.

The fact that a human operator is not involved in automated decision-making does not rule out the possibility of human bias being embedded in the outcomes produced by machine computation.

The absence of due process and human reasoning further erodes the already limited accountability of artificial intelligence.

The consequences of algorithmically driven processes are often so complicated that even their engineering designers are unable to understand or predict them.

This situation is often described as the "black box" of AI.

To address these shortcomings, the European Union's General Data Protection Regulation (GDPR) contains a set of provisions that give data subjects a right to explanation.

The most relevant provisions are Article 22, which addresses automated individual decision-making, and Articles 13, 14, and 15, which establish transparency rights in relation to automated decision-making and profiling.

When a decision based solely on automated processing has "legal effects" or "similarly significant" effects on a person, Article 22 of the GDPR reserves a "right not to be subject to a decision based solely on automated processing" (GDPR 2016).

It also provides three exceptions to this right: when automated processing is necessary for a contract, when an EU member state has adopted legislation establishing an exemption, or when the person has explicitly consented to algorithmic decision-making.

Even if an exemption to Article 22 applies, the data subject has the right to "request human involvement on the controller's side, to voice his or her point of view, and to challenge the decision" (GDPR 2016).

Articles 13 through 15 of the GDPR establish notification rights when personal data is collected from the data subject (Article 13) or obtained from third parties (Article 14), as well as a right to access such data at any time (Article 15), including "meaningful information about the logic involved" (GDPR 2016).

Recital 71 protects the data subject's right to "receive an explanation of the conclusion taken following such evaluation and to contest the decision" when an automated decision has legal consequences for, or a comparably significant impact on, the person (GDPR 2016).

Recital 71 is not legally binding, but it does give advice on how to interpret relevant provisions of the GDPR.

Debate is growing over whether a mathematically interpretable model is sufficient to account for an automated judgment and to provide genuine transparency in automated decision-making.

Ex-ante/ex-post auditing is an alternative technique that focuses on the processes around machine learning models rather than the models themselves, which may be incomprehensible and counterintuitive.
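The process-oriented idea behind ex-post auditing can be sketched as a thin wrapper that records every automated decision, together with its inputs, in an audit trail that reviewers can inspect after the fact. The decision function and field names below are hypothetical stand-ins, not a real system's API:

```python
# A minimal sketch of ex-post auditing: rather than opening the model's
# internals, wrap the decision function so every input and outcome is
# logged for later human review. All names here are hypothetical.

import time
from functools import wraps

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def audited(decision_fn):
    """Record every automated decision for later (ex-post) review."""
    @wraps(decision_fn)
    def wrapper(subject_id, features):
        outcome = decision_fn(subject_id, features)
        AUDIT_LOG.append({
            "timestamp": time.time(),
            "subject_id": subject_id,
            "features": features,
            "outcome": outcome,
        })
        return outcome
    return wrapper

@audited
def loan_decision(subject_id, features):
    # Stand-in for an opaque model: approve when the debt ratio is low.
    return "approved" if features.get("debt_ratio", 1.0) < 0.4 else "denied"

print(loan_decision("A-1", {"debt_ratio": 0.2}))
print(AUDIT_LOG[-1]["subject_id"])
```

The audit trail does not explain the model's internal logic, but it makes each decision traceable and contestable, which is the accountability property this auditing approach targets.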

Jai Krishna Ponnappan


See also: Algorithmic Bias and Error; Deep Learning.

Further Reading:

Brkan, Maja. 2019. “Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond.” International Journal of Law and Information Technology 27, no. 2 (Summer): 91–121.

GDPR. 2016. Regulation (EU) 2016/679 (General Data Protection Regulation). European Union.

Goodman, Bryce, and Seth Flaxman. 2017. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (Fall): 50–57.

Kaminski, Margot E. 2019. “The Right to Explanation, Explained.” Berkeley Technology Law Journal 34, no. 1: 189–218.

Karanasiou, Argyro P., and Dimitris A. Pinotsis. 2017. “A Study into the Layers of Automated Decision-Making: Emergent Normative and Legal Aspects of Deep Learning.” International Review of Law, Computers & Technology 31, no. 2: 170–87.

Selbst, Andrew D., and Solon Barocas. 2018. “The Intuitive Appeal of Explainable Machines.” Fordham Law Review 87, no. 3: 1085–1139.
