Download | View author's version: Explanations in Artificial Intelligence decision making: a user acceptance perspective (PDF, 955 KiB)
DOI | https://doi.org/10.4018/978-1-5225-9069-9.ch006
Author | Vinson, Norman G.; Molyneaux, Heather; Martin, Joel D.
Affiliation | National Research Council of Canada, Digital Technologies
Format | Text, Book Chapter
Abstract | The opacity of AI systems' decision making has led to calls to modify these systems so they can provide explanations for their decisions. This chapter discusses what these explanations should address and what form they should take to meet the concerns that have been raised and to prove satisfactory to users. More specifically, the chapter briefly reviews the forms of AI decision making currently used to make real-world decisions affecting people's lives. Drawing on concerns about AI decision making expressed in the literature and the media, the chapter then presents principles that these systems should respect, along with corresponding requirements that explanations must meet to uphold those principles. A mapping between those explanation requirements and the types of explanations generated by AI decision-making systems reveals the strengths and shortcomings of the explanations those systems produce.
Publication date | 2019-05-31
Publisher | IGI Global
Language | English
Peer reviewed | Yes
Record identifier | 496b8abe-aa36-493d-874b-4f7b321bb085
Record created | 2021-09-10
Record modified | 2021-10-29