Abstract
For humans, storing facial identities in visual working memory (VWM) is crucial. Despite vast research on VWM, little is known about how face identity and physical features (e.g., the eyes) are encoded in VWM representations. Moreover, while it is widely assumed that VWM face representations efficiently encode the subtle individual differences in facial features, this assumption has been difficult to investigate directly. Finally, it is not known how facial representations are forgotten: some facial features could be more susceptible to forgetting than others, or, conversely, all features could decay randomly. Here, we use a novel application of psychophysical reverse correlation that enables us to estimate how various facial features are weighted in VWM representations, how statistically efficient these representations are, and how they decay with time. We employed a same-different task with two retention times (1 s and 4 s) and morphed face stimuli, enabling us to control the appearance of each facial feature independently. We found that only a few features, most prominently the eyes, received high weights, suggesting that face VWM representations are based on storing a few key features. A classifier that used stimulus information near-optimally showed weightings markedly similar to those of human participants (albeit weighting the eyes less and other features more), suggesting that human VWM face representations are surprisingly close to statistically optimal encoding. Weightings did not differ between retention times; instead, internal noise increased, suggesting that forgetting in face VWM operates as a random process rather than as a change in the remembered facial features.