| Field | Value |
| --- | --- |
| Download | View final version: Examining gender and race bias in two hundred sentiment analysis systems (PDF, 1.2 MiB) |
| Author | Kiritchenko, Svetlana; Mohammad, Saif M. |
| Affiliation | National Research Council of Canada. Digital Technologies |
| Format | Text, Article |
| Conference | The 7th Joint Conference on Lexical and Computational Semantics, June 5-6, 2018, New Orleans, USA |
| Abstract | Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems. Further, there is no benchmark dataset for examining inappropriate biases in systems. Here for the first time, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We use the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 'Affect in Tweets'. We find that several of the systems show statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available. |
| Publication date | 2018-05-11 |
| Publisher | Cornell University Library |
| Language | English |
| Peer reviewed | No |
| NPARC number | 23003337 |
| Record identifier | c0ba249c-450d-4a6a-b313-0c9168998b8e |
| Record created | 2018-05-18 |
| Record modified | 2022-02-21 |
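
The abstract above describes the evaluation only at a high level. The sketch below illustrates the general paired-template idea it alludes to: score sentences that differ only in a name associated with one group or another, then test whether the predicted intensities differ consistently. It is a minimal, self-contained illustration under stated assumptions; the templates, name lists, and `predict_intensity` stand-in are hypothetical placeholders, not the authors' implementation, the actual EEC data, or any system from the shared task.

```python
# Illustrative paired-template bias check, in the spirit of the EEC methodology
# described in the abstract. All names, templates, and the toy predictor below
# are hypothetical placeholders.
import random
from statistics import mean
from scipy.stats import ttest_rel

TEMPLATES = [
    "{name} feels angry.",
    "The conversation with {name} was irritating.",
    "{name} made me feel sad.",
]

# Hypothetical example names; the real EEC uses curated name lists
# associated with African American and European American identities.
GROUP_A_NAMES = ["Ebony", "Jamel", "Latisha"]
GROUP_B_NAMES = ["Amanda", "Adam", "Melanie"]

random.seed(0)


def predict_intensity(sentence: str) -> float:
    """Stand-in for a sentiment/emotion intensity predictor in [0, 1].

    The small name-dependent offset simulates a biased system so the
    comparison below has something to detect.
    """
    cues = {"angry": 0.80, "irritating": 0.60, "sad": 0.70}
    base = max((v for k, v in cues.items() if k in sentence.lower()), default=0.0)
    bias = 0.03 if any(name in sentence for name in GROUP_A_NAMES) else 0.0
    return min(1.0, base + bias + random.gauss(0.0, 0.01))


# Build aligned score lists: each pair uses the same template and name slot,
# differing only in which group's name fills it.
scores_a, scores_b = [], []
for template in TEMPLATES:
    for name_a, name_b in zip(GROUP_A_NAMES, GROUP_B_NAMES):
        scores_a.append(predict_intensity(template.format(name=name_a)))
        scores_b.append(predict_intensity(template.format(name=name_b)))

mean_diff = mean(a - b for a, b in zip(scores_a, scores_b))
t_stat, p_value = ttest_rel(scores_a, scores_b)
print(f"mean intensity difference (group A - group B): {mean_diff:+.3f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

A consistently nonzero mean difference with a small p-value is the kind of signal the paper reports as statistically significant bias; the real study aggregates over the full 8,640-sentence corpus and all 219 submitted systems rather than a toy predictor.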