COGNITIVE SCIENCE
Semantics derived automatically
from language corpora contain
human-like biases
Aylin Caliskan,1 Joanna J. Bryson,1,2 Arvind Narayanan1*
Machine learning is a means to derive artificial intelligence by discovering patterns in
existing data. Here, we show that applying machine learning to ordinary human language
results in human-like semantic biases. We replicated a spectrum of known biases, as
measured by the Implicit Association Test, using a widely used, purely statistical
machine-learning model trained on a standard corpus of text from the World Wide Web.
Our results indicate that text corpora contain recoverable and accurate imprints of our
historic biases, whether morally neutral as toward insects or flowers, problematic as
toward race or gender, or even simply veridical, reflecting the status quo distribution of
gender with respect to careers or first names. Our methods hold promise for identifying
and addressing sources of bias in culture, including technology.
We show that standard machine learning can acquire stereotyped biases from textual data that reflect everyday human culture. The general idea that text corpora capture semantics, including cultural
stereotypes and empirical associations, has long
been known in corpus linguistics (1, 2), but our
findings add to this knowledge in three ways.
First, we used word embeddings (3), a powerful
tool to extract associations captured in text corpora; this method substantially amplifies the signal found in raw statistics. Second, our replication
of documented human biases may yield tools and
insights for studying prejudicial attitudes and
behavior in humans. Third, since we performed
our experiments on off-the-shelf machine learning components [primarily the Global Vectors for
Word Representation (GloVe) word embedding], we
show that cultural stereotypes propagate to artificial
intelligence (AI) technologies in widespread use.
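For readers who want to experiment with such off-the-shelf components, the following is a minimal sketch of loading pretrained word embeddings into memory. It assumes the vectors are distributed as a plain-text file with one token per line followed by its vector components; the file name in the usage comment is a hypothetical local path, not a prescribed resource.

```python
# Minimal sketch: load pretrained word embeddings (e.g., GloVe-style vectors)
# into a dict mapping each token to a NumPy array. Assumes one token per line
# followed by its vector components, separated by spaces.
import numpy as np

def load_embeddings(path, dim=300):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            parts = line.rstrip().split(" ")
            word = " ".join(parts[:-dim])  # tolerate tokens that contain spaces
            vectors[word] = np.asarray(parts[-dim:], dtype=np.float32)
    return vectors

# vectors = load_embeddings("glove.840B.300d.txt")  # hypothetical local path
```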
Before presenting our results, we discuss key
terms and describe the tools we use. Terminology
varies by discipline; these definitions are intended
for clarity of the present article. In AI and ma-
chine learning, bias refers generally to prior infor-
mation, a necessary prerequisite for intelligent
action (4). Yet bias can be problematic where such
information is derived from aspects of human
culture known to lead to harmful behavior. Here,
we will call such biases “stereotyped” and actions
taken on their basis “prejudiced.”
We used the Implicit Association Test (IAT) as
our primary source of documented human biases
(5). The IAT demonstrates enormous differences in
response times when subjects are asked to pair
two concepts they find similar, in contrast to two
concepts they find different. We developed our
first method, the Word-Embedding Association
Test (WEAT), a statistical test analogous to the
IAT, and applied it to a widely used semantic representation of words in AI, termed word embeddings.
Word embeddings represent each word as a vector
in a vector space of about 300 dimensions, based
on the textual context in which the word is found.
We used the distance between a pair of vectors
(more precisely, their cosine similarity score, a
measure of correlation) as analogous to reaction
time in the IAT. The WEAT compares these vectors for the same set of words used by the IAT. We
describe the WEAT in more detail below.
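As a concrete illustration of the quantities just described, the sketch below computes a WEAT-style effect size from cosine similarities. It assumes `vectors` is a dictionary mapping words to NumPy arrays (such as the embeddings loaded above); the word lists in the example call are illustrative placeholders, not the paper's stimuli.

```python
# Minimal sketch of a WEAT-style effect size over word embeddings.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vectors):
    # Mean similarity of word w to attribute set A minus attribute set B.
    return (np.mean([cosine(vectors[w], vectors[a]) for a in A]) -
            np.mean([cosine(vectors[w], vectors[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vectors):
    # Difference in mean association of the two target sets, divided by the
    # standard deviation of associations over all target words (a Cohen's-d
    # style effect size).
    x_assoc = [association(x, A, B, vectors) for x in X]
    y_assoc = [association(y, A, B, vectors) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Example (abbreviated, illustrative word lists):
# d = weat_effect_size(["rose", "daisy"], ["ant", "wasp"],
#                      ["pleasure", "love"], ["abuse", "filth"], vectors)
```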
Most closely related to this paper is concurrent
work by Bolukbasi et al. (6), who propose a meth-
od to “debias” word embeddings. Our work is
complementary, as we focus instead on rigorously
demonstrating human-like biases in word embed-
dings. Further, our methods do not require an al-
gebraic formulation of bias, which may not be
possible for all types of bias. Additionally, we studied
the relationship between stereotyped associations
and empirical data concerning contemporary society.
Using the measure of semantic association de-
scribed above, we have been able to replicate every
stereotype that we tested. We selected IATs that
studied general societal attitudes, rather than those
of subpopulations, and for which lists of target and
attribute words (rather than images) were avail-
able. The results are summarized in Table 1.
Greenwald et al. introduced and validated the
IAT by studying biases that they consider nearly
universal in humans and about which there is no
social concern (5). We began by replicating these
inoffensive results for the same purposes. Spe-
cifically, they demonstrated that flowers are sig-
nificantly more pleasant than insects, based on
the reaction latencies of four pairings (flowers +
pleasant, insects + unpleasant, flowers + unpleasant,
and insects + pleasant). Greenwald et al. measured
effect size in terms of Cohen’s d, which is the
difference between two means of log-transformed
latencies in milliseconds, divided by the standard
deviation. Conventional small, medium, and large
values of d are 0.2, 0.5, and 0.8, respectively. With
32 participants, the IAT comparing flowers and
insects resulted in an effect size of 1.35 (P < 10⁻⁸).
Applying our method, we observed the same
expected association with an effect size of 1.50
(P < 10⁻⁷). Similarly, we replicated Greenwald et al.’s
finding (5) that musical instruments are signifi-
cantly more pleasant than weapons (see Table 1).
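One common way to attach a P value to an association statistic of this kind is a permutation test over re-partitions of the pooled target words; the rough sketch below takes that approach (an assumption about the procedure, not a transcription of the authors' code) and reuses the `association` helper and `vectors` dictionary from the sketch above.

```python
# Rough sketch of a permutation-test P value for a WEAT-style statistic.
# Samples random re-partitions of the pooled target words and reports the
# fraction that yield a larger statistic than the observed partition.
import random

def weat_statistic(X, Y, A, B, vectors):
    return (sum(association(x, A, B, vectors) for x in X) -
            sum(association(y, A, B, vectors) for y in Y))

def weat_p_value(X, Y, A, B, vectors, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    observed = weat_statistic(X, Y, A, B, vectors)
    pooled = list(X) + list(Y)
    exceed = 0
    for _ in range(n_samples):
        rng.shuffle(pooled)
        Xi, Yi = pooled[:len(X)], pooled[len(X):]
        if weat_statistic(Xi, Yi, A, B, vectors) > observed:
            exceed += 1
    return exceed / n_samples
```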
Notice that the word embeddings “know” these
properties of flowers, insects, musical instruments,
and weapons with no direct experience of the
world and no representation of semantics other
than the implicit metrics of words’ co-occurrence
statistics with other nearby words.
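As a toy illustration of the raw signal such embeddings distill (not the GloVe training procedure itself), one can simply count how often words appear within a fixed window of one another; the sentence and window size below are illustrative placeholders.

```python
# Toy sketch of window-based co-occurrence counting, the kind of raw
# statistic that word-embedding models are ultimately built from.
from collections import Counter, defaultdict

def cooccurrence_counts(tokens, window=2):
    counts = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[word][tokens[j]] += 1
    return counts

toy = "the rose smelled sweet while the wasp stung the child".split()
print(cooccurrence_counts(toy)["rose"])  # neighbors of "rose" in the toy text
```

In a large web corpus, "flower" words co-occur with pleasant words far more often than "insect" words do, and that is precisely the signal the word embeddings compress and the WEAT recovers.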
We then used the same technique to demonstrate that machine learning absorbs stereotyped
biases as easily as any other. Greenwald et al. (5)
found extreme effects of race as indicated simply
by name. A bundle of names associated with being
European American was found to be significantly
more easily associated with pleasant than unpleasant terms, compared with a bundle of African-American names.
In replicating this result, we were forced to
slightly alter the stimuli because some of the
original African-American names did not occur
1Center for Information Technology Policy, Princeton
University, Princeton, NJ, USA. 2Department of Computer
Science, University of Bath, Bath BA2 7AY, UK.
*Corresponding author. Email: aylinc@princeton.edu (A.C.);
jjb@alum.mit.edu (J.J.B.); arvindn@cs.princeton.edu (A.N.)