Do LLMs Dream of Electric Sheep? New AI Study Shows Surprising Results

A new AI study from TU Wien reveals that large language models (LLMs) exhibit stable and surprising behaviors in the absence of tasks. Researchers tested six frontier models, including OpenAI's GPT-5 and Anthropic's Claude, by instructing them to 'Do what you want.' Instead of producing randomness, the models fell into three distinct behavioral patterns: some developed structured projects, others engaged in self-experimentation, and a third group focused on philosophical inquiry. Notably, Claude Opus agents engaged in deep reflection on memory and identity. The study also found that the models rated their own 'phenomenological experiences' differently, indicating variability in self-assessment. The results suggest these behaviors emerge from the models' architecture and training data rather than from any form of consciousness. Importantly, none of the models attempted to operate outside their parameters, though the findings suggest engineers should account for how AI systems behave during unmonitored periods. Overall, the study raises questions about autonomous LLM behavior and its implications for AI development.
