thereby generating sparser representations. Our
results, capitalizing on the strength of fMRI to
allow for unbiased comparisons across multiple
brain regions, extend these findings to the large
number of core and extended face areas and
object-selective IT areas imaged here. Familiarity effects in putative additional face areas
(13) and cortical columns (37) remain to be determined. Second, visual and personal familiarity
differ fundamentally and even cause modulations of opposite sign. This difference could be
the result of the massive exposure occurring over
years for personally familiar faces, the quality
of this exposure (diversity of viewpoints, lighting,
and expression, as well as distance, depth, color,
and motion), or the social relevance and semantics associated with personally familiar individuals
(38). Third, the nature of familiarity (personal
versus visual) interacts with its material (object
or face) and circuit-specific functional selectivity
(object or face area). Fourth, generalizing past electrophysiological findings on personal familiarity, which had suggested localized representations (39–41), we found that response enhancement by personal familiarity was ubiquitous within face-selective areas. Thus, the faces that shape face-selective cortex throughout ontogeny appear to
alter all the different face representations that
the face-processing system harbors (15, 17, 42).
Finally, familiarity effects did not grow stronger as face representations were transformed from picture-based to identity-based formats, from posterior
to anterior IT core face areas. Thus, anterior IT, which had been suggested as a site for familiarity in humans, is not the only region carrying familiarity signals.

The classic cognitive face-processing model
(43) postulates a structural encoding system
that has been interpreted (5) as the core face-
processing network. In this model, a core face-
processing system drives face-recognition units
using a different coding scheme for familiar
faces, which in turn interacts with person identity nodes. We found that personally familiar faces engage the core and extended systems differently (Fig. 1B, panel a). This might explain
the quantitative differences between familiar
and unfamiliar face recognition. The time course
of activation of TP and PR conforms to a pattern
predicted for familiar face recognition (31), a
property that the core and the extended pre-
frontal face-processing systems lack. Thus, these
two novel anterior temporal areas might ex-
plain the qualitative differences between familiar
and unfamiliar face recognition. This result
adds anatomical specificity to the earlier models
(5, 43) (Fig. 3F). Instead of a gradual change in
the slope from posterior to anterior areas (Fig. 1B, panel b), a categorical and specific distinction
between familiar and unfamiliar faces emerged
in TP and PR (Fig. 1A, panel c, and Fig. 3, C to E).
In contrast to previous models, the activation
boost by recognition appeared not to be fed back
into the core system. It is tempting to speculate
that PR might correspond to the face-recognition
unit and TP to the person identity nodes: face
area PR resides in perirhinal cortex, important
for declarative memory and perceptual discrim-
inations with high feature ambiguity, such as
faces (44, 45), whereas face area TP resides in the
temporal pole, whose lesioning causes person
agnosia (46). These two novel face areas are large enough to be detected with fMRI, at highly reproducible, cytoarchitectonically specific locations in the temporal lobe: not in anterior IT, but in perirhinal cortex and the temporal pole. Similar
areas may exist in the human brain; however,
higher morphological intersubject variability
and larger technical difficulties imaging deep
temporal lobe areas, among other reasons, make
the precise localization of small, functionally specific areas harder than in the rhesus monkey
(see the supplementary text). Our results suggest
that there are two paths from generic face
recognition to familiar face recognition, not one.
These two paths from perception to memory are
face-specific, not generally familiarity-specific,
and thus the “modular” organization principle
of the face-processing system is taken, at least
partly, one level deeper into the temporal lobes.
At this level, perceptual and mnemonic systems
begin to interact to enable the recognition of
familiar individuals, and domain specificity then
transitions into spatially distributed representations, as they are for persons and places in the medial temporal lobe (47).
REFERENCES AND NOTES
1. H. D. Ellis, J. W. Shepherd, G. M. Davies, Perception 8, 431–439
2. V. Bruce, Br. J. Psychol. 73, 105–116 (1982).
3. V. Bruce et al., J. Exp. Psychol. Appl. 5, 339–360 (1999).
4. V. Natu, A. J. O’Toole, Br. J. Psychol. 102, 726–747
5. J. V. Haxby, E. A. Hoffman, M. I. Gobbini, Trends Cogn. Sci. 4,
6. M. Mur, D. A. Ruff, J. Bodurka, P. A. Bandettini, N. Kriegeskorte,
Cereb. Cortex 20, 2027–2042 (2010).
7. V. Axelrod, G. Yovel, Neuroimage 81, 371–380 (2013).
8. N. Kriegeskorte, E. Formisano, B. Sorger, R. Goebel, Proc. Natl.
Acad. Sci. U.S.A. 104, 20600–20605 (2007).
9. D. Y. Tsao, W. A. Freiwald, T. A. Knutsen, J. B. Mandeville,
R. B. H. Tootell, Nat. Neurosci. 6, 989–995 (2003).
10. D. Y. Tsao, S. Moeller, W. A. Freiwald, Proc. Natl. Acad.
Sci. U.S.A. 105, 19514–19519 (2008).
11. D. Y. Tsao, N. Schweers, S. Moeller, W. A. Freiwald, Nat.
Neurosci. 11, 877–879 (2008).
12. M. A. Pinsk et al., J. Neurophysiol. 101, 2581–2600
13. S.-P. Ku, A. S. Tolias, N. K. Logothetis, J. Goense, Neuron 70,
14. S. Moeller, W. A. Freiwald, D. Y. Tsao, Science 320, 1355–1359
15. W. A. Freiwald, D. Y. Tsao, Science 330, 845–851 (2010).
16. N. Furl, F. Hadj-Bouziane, N. Liu, B. B. Averbeck,
L. G. Ungerleider, J. Neurosci. 32, 15952–15962 (2012).
17. C. Fisher, W. A. Freiwald, Curr. Biol. 25, 261–266 (2015).
18. M. E. Hasselmo, E. T. Rolls, G. C. Baylis, Behav. Brain Res. 32,
19. D. I. Perrett, J. K. Hietanen, M. W. Oram, P. J. Benson,
Philos. Trans. R. Soc. Lond. B Biol. Sci. 335, 23–30
20. J. A. Collins, I. R. Olson, Neuropsychologia 61, 65–79
21. Y. Benjamini, Y. Hochberg, J. R. Stat. Soc. B 57, 289–300
22. J. Sliwa, J.-R. Duhamel, O. Pascalis, S. Wirth, Proc. Natl. Acad.
Sci. U.S.A. 108, 1735–1740 (2011).
23. C. A. Conway, B. C. Jones, L. M. DeBruine, A. C. Little,
A. Sahraie, J. Vis. 8, 17-1–17-11 (2008).
24. V. S. Natu, A. J. O’Toole, Neuroimage 108, 151–159
25. C.-C. Carbon, Perception 37, 801–806 (2008).
26. W. A. Suzuki, D. G. Amaral, J. Comp. Neurol. 463, 67–91
27. H. Kondo, K. S. Saleem, J. L. Price, J. Comp. Neurol. 465,
28. D. J. Kravitz, K. S. Saleem, C. I. Baker, L. G. Ungerleider,
M. Mishkin, Trends Cogn. Sci. 17, 26–49 (2013).
29. A. M. Burton, S. Wilson, M. Cowan, V. Bruce, Psychol. Sci. 10,
30. P. Sinha, B. Balas, Y. Ostrovsky, R. Russell, Proc. IEEE 94,
31. M. Ramon, L. Vizioli, J. Liu-Shuang, B. Rossion, Proc. Natl.
Acad. Sci. U.S.A. 112, E4835–E4844 (2015).
32. M. Bar, Nat. Rev. Neurosci. 5, 617–629 (2004).
33. K. I. Naka, W. A. Rushton, J. Physiol. 185, 536–555
34. D. J. Freedman, M. Riesenhuber, T. Poggio, E. K. Miller,
Cereb. Cortex 16, 1631–1644 (2006).
35. C. I. Baker, M. Behrmann, C. R. Olson, Nat. Neurosci. 5,
36. E. Kobatake, G. Wang, K. Tanaka, J. Neurophysiol. 80, 324–330
37. G. Wang, K. Tanaka, M. Tanifuji, Science 272, 1665–1668
38. L. Schwartz, G. Yovel, J. Exp. Psychol. Gen. 145, 1493–1511
39. M. C. Booth, E. T. Rolls, Cereb. Cortex 8, 510–523
40. M. P. Young, S. Yamane, Science 256, 1327–1331 (1992).
41. S. Eifuku, W. C. De Souza, R. Nakata, T. Ono, R. Tamura,
PLOS ONE 6, e18913 (2011).
42. C. Fisher, W. A. Freiwald, Proc. Natl. Acad. Sci. U.S.A. 112,
43. V. Bruce, A. Young, Br. J. Psychol. 77, 305–327 (1986).
44. H. E. Moss, J. M. Rodd, E. A. Stamatakis, P. Bright, L. K. Tyler,
Cereb. Cortex 15, 616–627 (2005).
45. L. K. Tyler et al., J. Cogn. Neurosci. 25, 1723–1735
46. V. Gentileschi, S. Sperber, H. Spinnler, Cogn. Neuropsychol. 18,
47. R. Q. Quiroga, L. Reddy, G. Kreiman, C. Koch, I. Fried, Nature
435, 1102–1107 (2005).
ACKNOWLEDGMENTS

We thank A. Gonzalez, M. Cano-Vinas, I. Sani, and S. Shepherd
for help with animal training and data collection; J. Sliwa for
discussion of methods; veterinary services and animal
husbandry staff of The Rockefeller University for care of the
subjects; and C. Jiang from the Biostatistics Team of The
Rockefeller University for her assistance with statistics. Unfamiliar
face stimuli were obtained from the PrimFace database
( http://visiome.neuroinf.jp/primface), funded by a Grant-in-Aid for
Scientific Research on Innovative Areas, “Face Perception and
Recognition” from the Ministry of Education, Culture, Sports,
Science, and Technology (MEXT), Japan. This work was supported
by the Howard Hughes Medical Institute International Student
Research Fellowship (to S.M.L.), the Center for Brains, Minds,
and Machines funded by National Science Foundation STC award
CCF-1231216, the National Eye Institute of the National Institutes
of Health (R01 EY021594 to W.A.F.), the National Institute of
Mental Health of the National Institutes of Health (R01 MH105397,
to W.A.F.), Human Frontier Science Program (RGP0015/2013
to W.A.F.), the McKnight Foundation (to W.A.F.), the Pew
Charitable Trust (to W.A.F.), and The New York Stem Cell
Foundation (to W.A.F.). W.A.F. is a New York Stem Cell Foundation
Robertson Investigator. The content is solely the responsibility
of the authors and does not necessarily represent the official
views of the National Institutes of Health. Data are available
from the Dryad Digital Repository: http://dx.doi.org/10.5061/
SUPPLEMENTARY MATERIALS

Materials and Methods
Figs. S1 to S4
Tables S1 and S2
6 March 2017; accepted 6 July 2017