Self-Awareness, the capacity of an individual to represent and understand itself as the subject of experience and action, is widely regarded as a foundation of intelligence and autonomous behavior. Recent advances in AI, particularly in large language models (LLMs), have reached human-like performance on tasks that integrate multimodal information, which has raised interest in embodying AI agents in non-human platforms such as robots.
For centuries, fields of study from philosophy to neuroscience have devoted significant effort to defining and characterizing Self-Awareness. In the present study, we analyze the capacity of an LLM to develop Self-Awareness when embedded in an autonomous mobile robot, relying solely on sensorimotor experience.
By integrating a multimodal LLM into an autonomous mobile robot, we test its capacity to achieve artificial Self-Awareness. We find that the system demonstrates robust environmental awareness, self-recognition, and predictive awareness, allowing it to infer its robotic nature and movement characteristics. Structural Equation Modeling (SEM) reveals how sensory integration influences the different dimensions of Self-Awareness and their coordination with past–present memory, as well as the hierarchical internal associations that drive self-identification. Moreover, through SEM we identify parallels between the cognitive constructs the system develops and the human brain structures responsible for Self-Awareness.
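For readers unfamiliar with SEM, the sketch below shows how such a path analysis can be set up in Python with the semopy library. The variable names, path structure, and synthetic data are illustrative assumptions for exposition only, not the model fitted in the study.

```python
# Illustrative SEM sketch: hierarchical paths from sensory inputs to
# Self-Awareness dimensions. Names and structure are hypothetical.
import numpy as np
import pandas as pd
from semopy import Model

# Synthetic per-episode scores; real data would come from scored
# transcripts of the robot's responses.
rng = np.random.default_rng(42)
n = 300
vision = rng.normal(size=n)
lidar = rng.normal(size=n)
memory = rng.normal(size=n)
env_aware = 0.6 * vision + 0.3 * lidar + rng.normal(scale=0.5, size=n)
self_recog = 0.5 * env_aware + 0.4 * memory + rng.normal(scale=0.5, size=n)
pred_aware = 0.7 * self_recog + rng.normal(scale=0.5, size=n)
df = pd.DataFrame(dict(vision=vision, lidar=lidar, memory=memory,
                       env_aware=env_aware, self_recog=self_recog,
                       pred_aware=pred_aware))

# lavaan-style syntax: '~' declares a regression path
desc = """
env_aware  ~ vision + lidar
self_recog ~ env_aware + memory
pred_aware ~ self_recog
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # estimated path coefficients and p-values
```

In such an analysis, the fitted path coefficients and their significance quantify how strongly each sensory channel feeds each awareness dimension, and the chained regressions capture the hierarchical associations referred to above.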
Ablation tests of the sensory inputs identify the critical modalities for each dimension, demonstrate compensatory interactions between sensors, and confirm the essential role of structured episodic memory in coherent reasoning. These findings show that, given adequate sensory information about the world and about itself, a multimodal LLM exhibits emergent Self-Awareness, opening the door to embodied artificial cognitive systems.
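A minimal sketch of such a leave-one-out ablation protocol appears below. The evaluate function here is a hypothetical stand-in for prompting the embodied LLM and scoring its answers; the modality names, dimension names, and stubbed scoring are assumptions made for illustration.

```python
import random

# Hypothetical modality and dimension names; the real system's sensor
# suite and scoring rubric may differ.
MODALITIES = ["camera", "lidar", "odometry", "imu", "episodic_memory"]
DIMENSIONS = ["environmental", "self_recognition", "predictive"]

def evaluate(active: list[str]) -> dict[str, float]:
    """Stand-in for the real evaluation (prompting the embodied LLM and
    scoring its responses): returns one score per awareness dimension."""
    random.seed(",".join(sorted(active)))  # deterministic stub
    return {d: round(random.uniform(0.0, 1.0), 2) for d in DIMENSIONS}

baseline = evaluate(MODALITIES)            # all sensors present
for dropped in MODALITIES:                 # remove one modality at a time
    scores = evaluate([m for m in MODALITIES if m != dropped])
    deltas = {d: round(scores[d] - baseline[d], 2) for d in DIMENSIONS}
    print(f"without {dropped}: {deltas}")  # large drops flag critical modalities
```

A large score drop for a dimension when one modality is removed marks that modality as critical for it, while small drops despite removal point to compensatory coverage by the remaining sensors.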


