Aristotle: Mastering Logical Reasoning with a Logic-Complete Decompose–Search–Resolve Framework

1 National University of Singapore, 2 University of Auckland, 3 University of Arizona, 4 University of California, Santa Barbara, 5 MBZUAI
(ACL 2025 Oral)
Aristotle teaser (placeholder)

Introduction

Current advanced reasoning methods for large language models (LLMs) have made impressive strides on a variety of reasoning tasks. On logical reasoning tasks, however, major challenges remain in both efficacy and efficiency. The root cause is that these methods fail to fully exploit the inherent structure of logical problems throughout the reasoning process, i.e., during decomposition, search, and resolution. To address this, we propose Aristotle, a logic-complete reasoning framework with three key components: a Logical Decomposer, a Logical Search Router, and a Logical Resolver. The framework integrates symbolic expressions and logical rules into every stage of reasoning, directly targeting the core bottlenecks of logical reasoning: it reduces sub-task complexity, minimizes search errors, and resolves logical contradictions.
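
To make the pipeline concrete, the sketch below instantiates the three components as a toy propositional resolution prover: the decomposer turns premises and the negated question into clauses, the router proposes untried clause pairs with complementary literals, and the resolver applies the resolution rule until the empty clause (a contradiction) appears or the clause set saturates. All class and function names here (LogicalDecomposer, LogicalSearchRouter, LogicalResolver, prove) are illustrative assumptions rather than the authors' API; Aristotle itself drives each stage with an LLM over richer first-order representations.

```python
# A minimal, runnable sketch of the decompose-search-resolve loop, instantiated
# as propositional resolution refutation. All names are hypothetical stand-ins
# for illustration, not the Aristotle implementation.

from itertools import combinations


def negate(lit: str) -> str:
    """Flip a propositional literal: 'P' <-> '~P'."""
    return lit[1:] if lit.startswith("~") else "~" + lit


class LogicalDecomposer:
    """Turn premises and the negated question into a clause set.

    Here the premises are already symbolic clauses (sets of literals); in the
    paper, an LLM translates natural-language premises into symbolic form.
    """

    def decompose(self, premise_clauses, question):
        clauses = {frozenset(c) for c in premise_clauses}
        clauses.add(frozenset({negate(question)}))  # refutation: assume the negation
        return clauses


class LogicalSearchRouter:
    """Propose clause pairs with complementary literals that were not tried yet."""

    def __init__(self):
        self.tried = set()

    def candidate_pairs(self, clauses):
        for c1, c2 in combinations(clauses, 2):
            key = frozenset({c1, c2})
            if key not in self.tried and any(negate(l) in c2 for l in c1):
                self.tried.add(key)
                yield c1, c2


class LogicalResolver:
    """Apply the resolution rule to every complementary literal of a clause pair."""

    def resolve(self, c1, c2):
        return [
            (c1 | c2) - {lit, negate(lit)}
            for lit in c1
            if negate(lit) in c2
        ]


def prove(premise_clauses, question, max_rounds=50):
    """Return True iff the premises plus the negated question yield the empty clause."""
    clauses = LogicalDecomposer().decompose(premise_clauses, question)
    router, resolver = LogicalSearchRouter(), LogicalResolver()
    for _ in range(max_rounds):
        new = set()
        for c1, c2 in router.candidate_pairs(list(clauses)):
            for resolvent in resolver.resolve(c1, c2):
                if not resolvent:
                    return True               # empty clause: contradiction found
                new.add(frozenset(resolvent))
        if new <= clauses:
            return False                      # saturated without a contradiction
        clauses |= new
    return False


# Example: from {P} and {~P or Q}, the question Q is entailed; from {P} alone it is not.
print(prove([{"P"}, {"~P", "Q"}], "Q"))  # True
print(prove([{"P"}], "Q"))               # False
```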

The experimental results on several datasets demonstrate that Aristotle consistently outperforms state-of-the-art reasoning frameworks in both accuracy and efficiency, particularly excelling in complex logical reasoning scenarios.

Aristotle Methodology

Aristotle methodology (placeholder)

Experiments

Experiments summary (placeholder)

BibTeX

@inproceedings{Aristotle25,
  author       = {Jundong Xu and
                  Hao Fei and
                  Meng Luo and
                  Qian Liu and
                  Liangming Pan and
                  William Yang Wang and
                  Preslav Nakov and
                  Mong{-}Li Lee and
                  Wynne Hsu},
  title        = {Aristotle: Mastering Logical Reasoning with {A} Logic-Complete Decompose-Search-Resolve Framework},
  booktitle    = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},
  year         = {2025},
  url          = {https://arxiv.org/abs/2412.16953}
}