Allocentric spatial representations dominate when switching between real and virtual worlds.

McManus, Meaghan, Franziska Seifert, Immo Schütz, and Katja Fiehler. 2025. “Allocentric Spatial Representations Dominate When Switching Between Real and Virtual Worlds.” Journal of Vision 25 (13): 7.

Abstract

After removing a virtual reality headset, people can be surprised to find that they are facing a different direction than expected. Here, we investigated whether people can maintain spatial representations of one environment while immersed in another. In the first three experiments, stationary participants were asked to point to previously seen targets in one environment, either the real world or a virtual environment, while immersed in the other. We varied the amount of misalignment between the two environments (detectable or undetectable), the virtual environment itself (lab or kitchen), and the instructions (general or egocentric priming). Pointing endpoints were based primarily on the locations of objects in the currently seen environment, suggesting a strong reliance on allocentric cues. In the fourth experiment, participants moved in virtual reality while keeping track of an unseen real-world target. This confirmed that the pointing errors were due to a reliance on the currently seen environment. It appears that people rarely keep track of object positions in a previously seen environment and instead rely primarily on currently available spatial information to plan their actions.
