Tongyi DeepResearch: A New Era of Open-Source AI Researchers
From Chatbot to Autonomous Agent
We are proud to present Tongyi DeepResearch, the first fully open‑source Web A...
Recent research reveals that Large Reasoning Models (LRMs), despite improved performance on structured reasoning tasks, have fundamental limitations as problem complexity grows.
They excel at simple and some intermediate tasks, but both LRMs and standard language models suffer a complete collapse in accuracy on highly complex problems. LRMs often display inefficient “overthinking” on simple tasks and scale their computation inconsistently, failing to generalize or to apply explicit algorithms reliably.
These findings suggest that while LRMs can mimic detailed reasoning, they lack robust, general-purpose problem-solving abilities, highlighting the need for new architectural solutions and deeper evaluation of their internal reasoning processes.