Abstract
Generative artificial intelligence (AI) models unlock new ways to create images, emerging as a new medium alongside paintings, photographs, physically based renderings (PBR), etc. Generative AI images can be perceptually convincing without being physically plausible, making it possible to probe the boundaries of visual perception. This study examines whether generative AI images adhere to the medium-independent perceptual space that previous studies have converged on. Using human similarity judgments, we compared the perceptual similarity of images from three generative AI models against a PBR image dataset based on bidirectional reflectance distribution functions (BRDFs). In experiment 1, we used the text descriptions of 32 materials (e.g., blue acrylic) from the Mitsubishi Electric Research Laboratories (MERL) BRDF dataset, prompting two text-to-image models, DALL-E 2 and Midjourney v2, to generate 32 sphere-shaped stimuli per model. Perceptual spaces derived from similarity judgments revealed that both AI models yielded two-dimensional spaces, whereas the MERL space was confined to one dimension, probably owing to its lack of surface texture. These mutually unrelated perceptual spaces suggest that the AI models generated unique, dissimilar images from identical text prompts. In experiment 2, we used the text-to-image model Stable Diffusion v1.5 with ControlNet to impose additional depth-map constraints. Using the same 32 descriptions, we generated three image sets with three different depth maps. The three resulting perceptual spaces are all two-dimensional and highly similar to one another, indicating a robust, non-random structure. They also resemble the MERL space and the perceptual spaces from other material studies using photographs, PBR, and depictions, suggesting that AI-generated imagery may indeed serve as a new medium for exploring material perception.