Faithful Logical Reasoning via Symbolic Chain-of-Thought (SymbCoT)

National University of Singapore; University of California, Santa Barbara; University of Auckland
(ACL 2024 Main)

Introduction

While the recent Chain-of-Thought (CoT) technique enhances the reasoning ability of large language models (LLMs) with the theory of mind, it might still struggle in handling logical reasoning that relies much on symbolic expressions and rigid deducing rules. To strengthen the logical reasoning capability of LLMs, we propose a novel Symbolic Chain-of-Thought, namely SymbCoT, a fully LLM-based framework that integrates symbolic expressions and logic rules with CoT prompting. Technically, building upon an LLM, SymbCoT 1) first translates the natural language context into the symbolic format, 2) then derives a step-by-step plan to solve the problem with symbolic logical rules, and 3) finally employs a verifier to check the translation and the reasoning chain. Via thorough evaluations on 5 standard datasets with both First-Order Logic and Constraint Optimization symbolic expressions, SymbCoT shows consistent and striking improvements over the CoT method, meanwhile refreshing the current state-of-the-art performance. We further demonstrate that our system enables more faithful, flexible, and explainable logical reasoning. To our knowledge, this is the first work to combine symbolic expressions and rules into CoT for logical reasoning with LLMs.
Introductory SymbCoT example placeholder
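
To make the first step concrete, here is a toy illustration of the kind of symbolic translation SymbCoT performs; the premises below are our own example, not drawn from the paper or its benchmarks.

    Premises (NL):  All students who study logic are rigorous. Alice is a student who studies logic.
    Question (NL):  Is Alice rigorous?

    Premises (FOL): ∀x (Student(x) ∧ StudiesLogic(x) → Rigorous(x)); Student(alice) ∧ StudiesLogic(alice)
    Goal (FOL):     Rigorous(alice)

From these formulas, one application of universal instantiation followed by modus ponens derives Rigorous(alice), so the answer is "True".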

Method

SymbCoT is a fully LLM-based neuro-symbolic reasoning framework: no external solver or prover is involved. It decomposes logical reasoning into three stages: (1) a Translator maps the natural language premises and question into symbolic expressions (First-Order Logic or Constraint Optimization); (2) a plan-then-solve stage uses the LLM as a Planner to outline the symbolic inference steps and as a Solver to execute them as a symbolic chain-of-thought; and (3) a Verifier retrospectively checks both the faithfulness of the symbolic translation and the validity of each reasoning step. All modules are implemented by prompting the same backbone LLM, tightly coupling natural language, symbolic forms, and formal logical rules without relying on external tools.
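
As a rough sketch of how the three stages chain together, the following pseudo-implementation prompts a single backbone model for each role; call_llm is a hypothetical chat-completion helper, and the prompt wording is ours, not the paper's actual prompts.

    def symbcot(premises_nl, question_nl, call_llm):
        # Stage 1: Translator -- render the NL premises and question in symbolic form (FOL or CO).
        symbolic = call_llm(
            "Translate the following premises and question into first-order logic.\n"
            f"Premises: {premises_nl}\nQuestion: {question_nl}"
        )
        # Stage 2a: Planner -- outline which inference steps to take and in what order.
        plan = call_llm(
            "Given these symbolic premises and goal, draft a step-by-step inference plan.\n"
            f"{symbolic}"
        )
        # Stage 2b: Solver -- execute the plan as a symbolic chain-of-thought,
        # citing a formal logic rule (e.g., modus ponens) at every step.
        solution = call_llm(
            "Follow the plan step by step, applying formal logic rules, and state the answer.\n"
            f"Symbolic problem: {symbolic}\nPlan: {plan}"
        )
        # Stage 3: Verifier -- re-check the translation and each inference step,
        # then output the (possibly corrected) final answer.
        return call_llm(
            "Verify that the translation is faithful to the original text and that every "
            "reasoning step follows from a valid logic rule. Correct any errors and give "
            "the final answer.\n"
            f"Original premises: {premises_nl}\nTranslation: {symbolic}\nReasoning: {solution}"
        )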

SymbCoT method / framework placeholder

Experiments

The paper evaluates SymbCoT on five logical reasoning benchmarks covering both First-Order Logic and Constraint Optimization settings (PrOntoQA, ProofWriter, FOLIO, LogicalDeduction, and AR-LSAT), comparing it against vanilla CoT and strong neuro-symbolic baselines. SymbCoT yields substantial accuracy gains, especially on problems that require deeper, more structured reasoning, and produces reasoning chains that are more logically faithful and easier to inspect.
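
For a sense of how such accuracy comparisons are computed, here is a minimal evaluation loop; the dataset loader and baseline function named in the usage comments are hypothetical stand-ins, not the paper's evaluation code.

    def accuracy(method, examples):
        # Each example holds natural-language premises, a question, and a gold answer label.
        correct = 0
        for ex in examples:
            prediction = method(ex["premises"], ex["question"])
            correct += int(prediction.strip().lower() == ex["label"].lower())
        return correct / len(examples)

    # Hypothetical usage: compare vanilla CoT and SymbCoT on the same benchmark split.
    # examples = load_benchmark("FOLIO")                               # hypothetical loader
    # print("CoT    :", accuracy(cot_baseline, examples))              # hypothetical baseline
    # print("SymbCoT:", accuracy(lambda p, q: symbcot(p, q, call_llm), examples))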

SymbCoT experiments / results placeholder

For more details and analysis, please refer to the paper and the supplementary materials provided on the project page.

BibTeX

@inproceedings{xu2024faithful,
    author={Jundong Xu and Hao Fei and Liangming Pan and Qian Liu and Mong-Li Lee and Wynne Hsu},
    title={Faithful Logical Reasoning via Symbolic Chain-of-Thought},
    booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics},
    year={2024},
    url={https://arxiv.org/abs/2405.18357}
}