
A recent study from the École Polytechnique Fédérale de Lausanne (EPFL) sheds light on why humans excel at recognizing objects from fragments while artificial intelligence (AI) systems struggle. The research highlights the critical role of contour integration in human vision, a capability that AI has yet to master.
Every day, humans effortlessly recognize familiar faces in a crowd or identify objects even when they are partially obscured. This remarkable ability, known as “contour integration,” allows our brains to piece together fragments into whole objects, making sense of an often chaotic world. Despite AI’s advances in image recognition, these systems still falter when tasked with generalizing from incomplete or broken visual information.
The implications of this shortcoming are significant, especially as AI becomes increasingly integral to real-world applications such as self-driving cars, prosthetics, and robotics. When objects are partially hidden, erased, or fragmented, most AI models misclassify or fail to recognize them, posing potential risks in practical scenarios.
Comparative Study on Human and AI Vision
The EPFL NeuroAI Lab, under the leadership of Martin Schrimpf, embarked on a study to systematically compare human and AI capabilities in handling visual puzzles. Ben Lönnqvist, an EDNE graduate student and the study’s lead author, collaborated with Michael Herzog’s Laboratory of Psychophysics to develop a series of recognition tests. These tests required both humans and over 1,000 artificial neural networks to identify objects with missing or fragmented outlines.
The findings, presented at the 2025 International Conference on Machine Learning (ICML), reveal that humans consistently outperform state-of-the-art AI in contour integration tasks. In a lab-based object recognition test involving fifty volunteers, participants viewed images of everyday items—such as cups, hats, and pans—whose outlines were systematically erased or broken into segments. In some cases, only 35% of an object’s contours remained visible.
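To make the manipulation concrete, the sketch below shows one simple way such stimuli could be generated: an object's outline is split into equal-length segments and only a fraction of them is kept. This is an illustrative reconstruction in Python, not the study's actual stimulus code; the function name, segment count, and keep fraction are assumptions.

```python
import numpy as np

def fragment_contour(points, keep_fraction=0.35, n_segments=20, rng=None):
    """Split a closed contour into equal-length segments and randomly
    keep only `keep_fraction` of them, erasing the rest.

    `points` is an (N, 2) array of ordered contour coordinates.
    Returns a list of the surviving segments (each an (M, 2) array).
    """
    rng = np.random.default_rng(rng)
    segments = np.array_split(points, n_segments)
    n_keep = max(1, round(keep_fraction * n_segments))
    kept = rng.choice(n_segments, size=n_keep, replace=False)
    return [segments[i] for i in sorted(kept)]

# Example: a circle outline reduced to 35% of its contour.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
fragments = fragment_contour(circle, keep_fraction=0.35, rng=0)
print(f"{len(fragments)} of 20 segments kept")
```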
AI’s Struggle with Fragmented Visuals
In parallel, the same task was given to over 1,000 AI models, including some of the most advanced systems available. The experiment covered 20 different conditions, varying the type and amount of visual information. The team measured accuracy and analyzed how both humans and machines responded to increasingly challenging visual puzzles.
Humans proved remarkably robust, often scoring 50% accuracy even when most of an object’s outline was missing. AI models, by contrast, tended to collapse to random guessing under the same circumstances.
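For context, "collapsing to random guessing" means accuracy falls to the chance floor of 1/k for k object categories. The snippet below is a hypothetical evaluation helper showing how per-condition accuracy would be compared against that floor; the category count and condition labels are assumptions, not figures from the study.

```python
import numpy as np

def accuracy_by_condition(predictions, labels, conditions):
    """Accuracy per stimulus condition for a single observer (human or model)."""
    return {cond: float(np.mean(predictions[conditions == cond]
                                == labels[conditions == cond]))
            for cond in np.unique(conditions)}

# With k = 10 object categories (a hypothetical count), chance is 1/k = 10%.
k, n_trials = 10, 2000
rng = np.random.default_rng(0)
labels = rng.integers(0, k, size=n_trials)
conditions = rng.integers(0, 20, size=n_trials)  # the study used 20 conditions
guesses = rng.integers(0, k, size=n_trials)      # an observer guessing at random
per_cond = accuracy_by_condition(guesses, labels, conditions)
print(np.mean(list(per_cond.values())))          # hovers near the 0.10 chance floor
```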
Only AI models trained on billions of images approached human-like performance, and even then, they required specific adaptation to the study’s images. This indicates a significant gap between human and AI capabilities in processing visual information.
Learning from Human Biases
Delving deeper, the researchers discovered that humans show a natural preference for recognizing objects when the fragmented parts point along a common direction, a phenomenon they termed “integration bias.” AI models trained to develop a similar bias performed better when faced with image distortions. Training systems specifically for contour integration not only boosted their accuracy but also shifted their focus toward an object’s shape rather than its surface texture.
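One way to picture the integration-bias manipulation: keep each fragment's position fixed but rotate it about its own centroid, so the pieces no longer point along a common contour. Comparing recognition on aligned versus jittered fragments is how such a bias would show up behaviorally. The sketch below is a plausible illustration of that contrast, not the authors' code; the jitter range is an assumption.

```python
import numpy as np

def jitter_fragment_orientations(fragments, max_jitter_deg=90.0, rng=None):
    """Rotate each contour fragment about its own centroid by a random angle,
    destroying the co-alignment of the pieces while leaving positions intact."""
    rng = np.random.default_rng(rng)
    jittered = []
    for frag in fragments:
        angle = np.deg2rad(rng.uniform(-max_jitter_deg, max_jitter_deg))
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])
        centroid = frag.mean(axis=0)
        jittered.append((frag - centroid) @ rot.T + centroid)
    return jittered

# Two aligned fragments of a square's outline, then the misaligned version.
frag_a = np.array([[0.0, 0.0], [1.0, 0.0]])  # bottom edge
frag_b = np.array([[1.0, 1.0], [0.0, 1.0]])  # top edge
misaligned = jitter_fragment_orientations([frag_a, frag_b], rng=0)
```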
These insights suggest that contour integration is not an innate trait but rather a skill that can be learned from experience. For industries reliant on computer vision, such as autonomous vehicles or medical imaging, developing AI that perceives the world more like humans could lead to safer and more reliable technology.
Future Directions for AI Development
The study indicates that the most promising way to bridge the gap between human and AI vision isn’t altering AI architectures but giving machines a more “human-like” visual diet: exposure to a diverse range of real-world images in which objects are often partially hidden.
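One off-the-shelf way to build occlusion into a model's training diet is random erasing, which blanks out a rectangular patch of each image. The sketch below uses torchvision's RandomErasing transform as an example of this idea; the parameter values are illustrative and this is not the study's training setup.

```python
import torch
from torchvision import transforms

# RandomErasing occludes a random rectangle of each training image,
# a standard proxy for partially hidden objects. Parameters are illustrative.
occlude = transforms.RandomErasing(p=1.0, scale=(0.1, 0.4),
                                   ratio=(0.3, 3.3), value=0.0)

img = torch.rand(3, 224, 224)                 # stand-in for a preprocessed image tensor
occluded = occlude(img)
print(float((occluded == 0).float().mean()))  # rough fraction of pixels erased
```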
As AI continues to evolve, integrating human-like visual processing capabilities could revolutionize its application across various fields. The EPFL study not only highlights current limitations but also points towards a promising path for future AI development, emphasizing the importance of learning from human visual strategies.
In conclusion, as AI systems strive to match human capabilities, understanding and replicating the intricacies of human vision remains a crucial challenge. The findings from EPFL could pave the way for more advanced and reliable AI technologies, ultimately enhancing their utility in everyday life.