Training LLMs with LogicReward for Faithful and Rigorous Reasoning

1National University of Singapore, 2University College London, 3University of Manchester, 4University of Melbourne, 5University of California, Santa Barbara
(Preprint)

TL;DR

LogicReward is a reward function that evaluates unstructured natural language reasoning and provides step-level, symbolically guided rewards.

Introduction figure placeholder

Introduction

Although LLMs exhibit strong reasoning capabilities, existing training methods largely depend on outcome-based feedback, which can reward correct answers reached through flawed reasoning. Prior work introduces supervision on intermediate steps but still lacks guarantees of logical soundness, which is crucial in high-stakes scenarios. To address this, we propose LogicReward, a novel reward system that guides model training by enforcing step-level logical correctness with a theorem prover. We further introduce Auto-formalization with Soft Unification, which reduces natural language ambiguity and improves formalization quality, enabling more effective use of the theorem prover. An 8B model trained on data constructed with LogicReward surpasses GPT-4o and o4-mini by 11.6% and 2%, respectively, on natural language inference and logical reasoning tasks with a simple training procedure. Further analysis shows that LogicReward enhances reasoning faithfulness, improves generalizability to unseen tasks such as math and commonsense reasoning, and provides a reliable reward signal even without ground-truth labels.

Methodology

  • LogicReward Design (Premise Validity & Logic Validity)
    Motivation: Correct answers can come from incorrect reasoning, so the reward must evaluate the reasoning itself rather than only the final answer.
    Implementation: LogicReward scores each step on premise validity (whether its premises are grounded in the given context or in previously verified conclusions) and logic validity (whether a theorem prover confirms the conclusion is entailed by the premises); see the scoring sketch after this list.
  • Soft Unification
    Motivation: Natural language reasoning often omits implicit assumptions, making direct symbolic checking fragile.
    Implementation: Soft Unification adds missing but necessary assumptions to reduce ambiguity before formal verification (see the refinement sketch below the method overview).
  • Refinement Loop
    Motivation: One-shot formalization can fail due to ambiguity or incomplete structure.
    Implementation: The system iteratively refines the formalized representation until it becomes verifiable or is rejected, enabling reliable step-level evaluation of unstructured reasoning (sketched together with Soft Unification below).
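The scoring idea can be summarized in a short sketch. The Python snippet below is an illustrative sketch, not the released implementation: it assumes each step has already been auto-formalized into premises and a conclusion, and check_premise_grounding, prover_entails, and the equal weighting of the two validity components are simplified stand-ins and assumptions for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class FormalStep:
    premises: List[str]   # formalized premises the step relies on
    conclusion: str       # formalized conclusion the step draws

def check_premise_grounding(premise: str, known_facts: List[str]) -> bool:
    """Premise validity: is the premise grounded in the given context or in a
    previously verified conclusion? Naive stand-in; a real system might use
    NLI or retrieval instead of exact matching."""
    return premise in known_facts

def prover_entails(premises: List[str], conclusion: str) -> bool:
    """Logic validity: does a theorem prover confirm that the premises entail
    the conclusion? Naive stand-in; replace with a call to e.g. Z3 or Prover9."""
    return conclusion in premises

def logic_reward(steps: List[FormalStep], context: List[str]) -> float:
    """Average per-step score: half credit for grounded premises, half for a
    prover-verified inference. Verified conclusions become usable facts for
    later steps."""
    known_facts = list(context)
    scores = []
    for step in steps:
        premise_ok = all(check_premise_grounding(p, known_facts) for p in step.premises)
        logic_ok = premise_ok and prover_entails(step.premises, step.conclusion)
        scores.append(0.5 * premise_ok + 0.5 * logic_ok)
        if logic_ok:
            known_facts.append(step.conclusion)
    return sum(scores) / max(len(scores), 1)

The key design point reflected here is that a step only earns full credit when both checks pass, and only verified conclusions are allowed to support later steps, so credit cannot propagate through unverified reasoning.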
LogicReward method overview placeholder
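To illustrate how Soft Unification and the Refinement Loop fit together, here is a second hedged sketch. The helpers formalize, complete_assumptions, and prover_entails are simplified stand-ins for the LLM-based auto-formalizer, the assumption-completion step, and the theorem-prover call, and MAX_REFINEMENTS is an assumed budget, not a value from the paper.

from typing import List, Tuple

MAX_REFINEMENTS = 3  # assumed refinement budget

def formalize(step_text: str, context: List[str]) -> Tuple[List[str], str]:
    """Auto-formalization stand-in: a real system would prompt an LLM to map
    the natural-language step to formal premises and a conclusion."""
    return list(context), step_text  # trivial placeholder

def complete_assumptions(premises: List[str], conclusion: str) -> List[str]:
    """Soft Unification stand-in: a real system would prompt an LLM to surface
    missing but necessary assumptions (unstated definitions, background facts)."""
    return []  # placeholder: no extra assumptions proposed

def prover_entails(premises: List[str], conclusion: str) -> bool:
    """Theorem-prover stand-in: replace with a call to e.g. Z3 or Prover9."""
    return conclusion in premises  # naive placeholder check

def verify_step(step_text: str, context: List[str]) -> bool:
    """Refinement loop: formalize the step, and if the prover cannot verify it,
    add proposed implicit assumptions and retry until it is verifiable or the
    budget is exhausted, in which case the step is rejected."""
    premises, conclusion = formalize(step_text, context)
    for _ in range(MAX_REFINEMENTS):
        if prover_entails(premises, conclusion):
            return True
        extra = complete_assumptions(premises, conclusion)
        if not extra:
            break  # nothing left to add; give up on this step
        premises = premises + extra
    return False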

📈 Performance

  • 🚀 Consistent gains across benchmarks:
    • LogicReward improves LLaMA-3.1-8B by +11% and Qwen-3-8B by +3.2% on average across 8 logical reasoning and natural language inference benchmarks.
    • With only an 8B model, we outperform strong baselines such as GPT-4o and o4-mini by +11.6% and +2%, respectively.
  • 🏆 Outperforms existing reward signals: LogicReward demonstrates stronger performance than alternative reward functions, including confidence-based rewards, LLM-as-a-Judge, and Process Reward Models (PRMs).
  • 🌍 Stronger out-of-distribution generalization: Models trained with LogicReward generalize better to OOD tasks such as:
    • Commonsense reasoning (CommonsenseQA)
    • Mathematical reasoning (GSM8K)
    • Deductive reasoning (BoardGameQA)
  • 🧠 Faithful reasoning beyond accuracy: LogicReward improves not only final-task accuracy, but also the faithfulness, logical consistency, and rigor of intermediate reasoning steps.
Performance figure 1 placeholder
Performance figure 2 placeholder
Performance figure 3 placeholder

BibTeX

@article{logicreward2025,
  title   = {Training LLMs with LogicReward for Faithful and Rigorous Reasoning},
  author  = {Jundong Xu and Hao Fei and Huichi Zhou and Xin Quan and Qijun Huang and Shengqiong Wu and William Yang Wang and Mong-Li Lee and Wynne Hsu},
  journal = {arXiv preprint arXiv:TODO},
  year    = {2025}
}

Placeholder: replace with the official BibTeX from arXiv / venue.