Arizona State University (Center for Artificial Intelligence Research) exposes the "chain of reasoning" illusion.

On September 7, 2025, Arizona State University's Center for Artificial Intelligence Research reported that the so-called "chain of reasoning" produced by large language models such as GPT-4 is not genuine logical inference but a reproduction of patterns present in the models' training data. The study used a controlled simulation environment called "DataAlchemy" to train models from scratch on different data distributions, making it possible to measure how the training distribution shapes the quality of responses. The results show that performance tracks how closely the test data resembles the training distribution, and that models degrade sharply on tasks outside that range.

These findings impose limits on the use of commercial models in sectors that demand precise inference, especially as companies increasingly rely on LLMs in research and financial markets. There is as yet no reliable figure quantifying how strongly these findings apply to commercially released models.
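
To make the reported methodology concrete, the sketch below illustrates the general evaluation idea described in the article: train a model from scratch on one synthetic data distribution, then measure accuracy as test items move away from that distribution. This is not the DataAlchemy code; the toy "memorizing" model, the alphabet-shift task, and all names here are illustrative assumptions, chosen only to show how an in-distribution versus out-of-distribution accuracy gap can be measured.

```python
# Illustrative sketch (NOT the DataAlchemy code): a toy model trained on a
# synthetic transformation task, evaluated in- and out-of-distribution.
import random
import string


def shift_word(word: str, k: int) -> str:
    """The synthetic 'task': apply a cyclic alphabet shift of k positions."""
    return "".join(chr((ord(c) - 97 + k) % 26 + 97) for c in word)


def make_examples(alphabet: str, n: int, length: int, k: int):
    """Generate (input, target) pairs drawn from a restricted alphabet."""
    return [
        (w, shift_word(w, k))
        for w in ("".join(random.choices(alphabet, k=length)) for _ in range(n))
    ]


class MemorizingModel:
    """Toy stand-in for a trained LLM: it reproduces per-character patterns
    seen during training rather than learning the abstract shift rule."""

    def __init__(self):
        self.char_map = {}

    def fit(self, pairs):
        for src, tgt in pairs:
            for a, b in zip(src, tgt):
                self.char_map.setdefault(a, b)

    def predict(self, word: str) -> str:
        # Characters never seen in training get a placeholder guess,
        # mimicking the sharp degradation outside the training distribution.
        return "".join(self.char_map.get(c, "?") for c in word)


def accuracy(model, pairs) -> float:
    return sum(model.predict(src) == tgt for src, tgt in pairs) / len(pairs)


if __name__ == "__main__":
    random.seed(0)
    k = 3
    train_alphabet = string.ascii_lowercase[:13]    # in-distribution characters
    shifted_alphabet = string.ascii_lowercase[13:]  # out-of-distribution characters

    model = MemorizingModel()
    model.fit(make_examples(train_alphabet, n=2000, length=5, k=k))

    in_dist = make_examples(train_alphabet, n=500, length=5, k=k)
    out_dist = make_examples(shifted_alphabet, n=500, length=5, k=k)

    print(f"in-distribution accuracy:     {accuracy(model, in_dist):.2f}")   # near 1.0
    print(f"out-of-distribution accuracy: {accuracy(model, out_dist):.2f}")  # near 0.0
```

Under these assumptions, the toy model scores near-perfect accuracy on test items drawn from the training alphabet and collapses on items outside it, the same qualitative pattern the study reports for chain-of-reasoning outputs.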