Apple researchers find Large Language Models lack robust mathematical reasoning abilities; here's why

A team of Apple researchers has released a paper scrutinising the mathematical reasoning capabilities of large language models (LLMs), suggesting that while these models can mimic reasoning patterns, they fall short of genuine logical reasoning. The researchers observed that LLMs, including those powering today's AI tools, produce noticeably different answers when the same question is reworded only slightly, indicating a lack of true formal reasoning ability.

Their findings point to a fundamental limitation in how LLMs process and interpret mathematical problems. According to the study, titled GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models, LLMs rely on probabilistic pattern-matching rather than formal logical reasoning. This reliance on pattern recognition makes the models sensitive to minor input changes and exposes a strong token bias that undermines accuracy: small changes in wording, or even in the names and numbers a problem uses, can lead to dramatically different responses.
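To make the kind of test behind these findings concrete, here is a minimal sketch, ours rather than the paper's code, of the GSM-Symbolic idea: one fixed word problem becomes a template whose names and numbers vary while the underlying logic stays constant. The TEMPLATE string and make_variant helper are illustrative names, not artefacts of the study.

```python
import random

# A template whose surface features (names, numbers) can be varied
# without changing the logic needed to solve the problem.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

def ground_truth(x: int, y: int) -> int:
    # The underlying reasoning never changes: the answer is always x + y.
    return x + y

def make_variant(rng: random.Random) -> tuple[str, int]:
    # Instantiate the template with fresh surface details.
    name = rng.choice(["Sophie", "Liam", "Priya", "Carlos"])
    x, y = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, x=x, y=y), ground_truth(x, y)

rng = random.Random(0)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)

# A model that truly reasons should score identically on every variant;
# large accuracy swings across variants suggest pattern-matching on
# surface features rather than reasoning.
```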

The paper further explains that accuracy on tasks requiring the generation of many tokens, a critical aspect of complex problem-solving, declines roughly exponentially as the number of required tokens or steps increases. This makes LLMs less reliable in scenarios that demand detailed, multi-step reasoning, a core aspect of mathematical problem-solving.
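The exponential drop-off is easy to see with a back-of-the-envelope model; the independence assumption below is our simplification for illustration, not the paper's analysis. If each reasoning step succeeds with probability p, a chain of n steps succeeds with probability p ** n, which collapses quickly as n grows.

```python
# Illustration of why multi-step accuracy can fall off exponentially.
# Assumption (ours, for illustration): each step succeeds independently
# with probability p, so an n-step chain succeeds with probability p**n.

def chain_accuracy(p: float, n: int) -> float:
    return p ** n

for n in (1, 5, 10, 20, 50):
    print(f"p=0.95, steps={n:2d} -> {chain_accuracy(0.95, n):.3f}")
# p=0.95, steps= 1 -> 0.950
# p=0.95, steps= 5 -> 0.774
# p=0.95, steps=10 -> 0.599
# p=0.95, steps=20 -> 0.358
# p=0.95, steps=50 -> 0.077
```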

The research also addresses GSM8K, a widely used benchmark of grade-school maths word problems for assessing mathematical reasoning in AI models. Despite significant improvements in LLM performance on this benchmark in recent years, Apple’s team questions whether mathematical reasoning ability has genuinely advanced or whether the better scores simply reflect enhanced pattern-matching rather than deeper understanding.
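One way to probe the question the researchers raise is to compare a model's score on the fixed benchmark against its scores across regenerated variants of the same questions. The harness below is a hypothetical sketch with toy data; model_answer is a stand-in for a real LLM call, and the example questions are not drawn from GSM8K.

```python
from statistics import mean, stdev

def model_answer(question: str) -> int:
    # Stand-in for a real LLM call; replace with an actual model query.
    # Here it naively echoes the first number it sees, which is enough
    # to demonstrate the harness end to end.
    return int(next(tok for tok in question.split() if tok.isdigit()))

def accuracy(items):
    # items: list of (question, gold_answer) pairs
    return sum(model_answer(q) == a for q, a in items) / len(items)

# Fixed benchmark set vs. several regenerated variant sets of the
# same question (toy data standing in for GSM8K / GSM-Symbolic).
fixed_set = [("Ana has 3 pens and buys 4 more. How many pens?", 7)]
variant_sets = [
    [("Raj has 5 pens and buys 2 more. How many pens?", 7)],
    [("Mia has 6 pens and buys 6 more. How many pens?", 12)],
]

scores = [accuracy(vs) for vs in variant_sets]
print("fixed-set accuracy:", accuracy(fixed_set))
print("variant mean:", mean(scores), "spread:", stdev(scores))

# A large gap between the fixed-set score and the variant mean, or a
# wide spread across variant sets, points to memorised patterns rather
# than reasoning that transfers across surface changes.
```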

Apple’s study ultimately calls attention to the limitations of LLMs in handling complex reasoning tasks, particularly in mathematics, where reliable logic is essential. It also underscores the challenge facing developers as they refine these models and work to give them a capacity for consistent, dependable reasoning.
