Large language models (LLMs) and multimodal large language models (MLLMs) have unlocked impressive reasoning ability. However, when tasks demand precise logical guarantees, explicit structure, or reliable multi-step deduction, purely neural approaches often fall short. Their reasoning can be fluent yet brittle: explanations may sound convincing while quietly violating basic logical constraints.
Neuro-symbolic approaches aim to close this gap by coupling LLMs with symbolic representations and reasoning procedures. Symbolic reasoning enables compositionality, supports program-like manipulation of knowledge, and provides traceability and interpretability. Neural models, in turn, excel at learning, perception, and pattern discovery. Combining the two pairs flexible neural learning with verifiable, interpretable symbolic inference, as sketched below.
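To make the idea concrete, here is a minimal, hypothetical sketch of the general neuro-symbolic loop: a neural model proposes a symbolic formalization of a question, and a symbolic procedure verifies the conclusion. The function names (llm_formalize, symbolic_check) and the hard-coded output are invented for illustration; a real system would call an actual LLM and a richer logic solver.

```python
from itertools import product

def llm_formalize(question: str) -> dict:
    # Placeholder: a real system would prompt an LLM to translate the question
    # into a symbolic form; here the output is hard-coded for illustration.
    return {
        "premises": [("rain", "wet_ground")],  # each pair encodes "a implies b"
        "facts": {"rain": True},
        "conclusion": "wet_ground",
    }

def symbolic_check(premises, facts, conclusion) -> bool:
    # Check entailment by enumerating truth assignments for the unknown atoms.
    atoms = {a for p in premises for a in p} | set(facts) | {conclusion}
    unknown = sorted(atoms - set(facts))
    for values in product([False, True], repeat=len(unknown)):
        model = dict(facts, **dict(zip(unknown, values)))
        # Only consider models that satisfy every implication premise.
        if all((not model[a]) or model[b] for a, b in premises):
            if not model[conclusion]:
                return False  # counter-model found: the conclusion is not entailed
    return True

parsed = llm_formalize("It is raining. Is the ground wet?")
print(symbolic_check(parsed["premises"], parsed["facts"], parsed["conclusion"]))  # True
```

The division of labor is the point: the neural side handles flexible language understanding, while the symbolic side gives a checkable, interpretable verdict.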
This page collects a set of Neuro-Symbolic Large Model projects—covering both symbolic-guided LLM reasoning and multimodal symbolic reasoning—that explore these ideas in different settings and benchmarks.
These projects use symbolic structures and logic to guide LLM reasoning, yielding more faithful, controllable, and interpretable decision processes.
SymbCoT (ACL 2024)
Faithful Logical Reasoning via Symbolic Chain-of-Thought
Aristotle (ACL 2025 Oral)
Mastering Logical Reasoning with a Logic-Complete Decompose–Search–Resolve Framework
These projects extend symbolic reasoning beyond text, grounding logical rules in images and other modalities to assess and improve the multimodal reasoning ability of large models.
MuSLR (NeurIPS 2025)
Multimodal Symbolic Logical Reasoning
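As a companion to the text-only sketch above, here is a hypothetical illustration of grounding symbolic rules in an image: a placeholder perceive step stands in for a vision model that extracts symbolic predicates, and a simple forward-chaining rule derives a new fact from them. The function names, predicates, and file path are invented for illustration and do not reflect MuSLR's actual interface.

```python
def perceive(image_path: str) -> dict:
    # Stand-in for a vision model: returns symbolic predicates grounded in the image.
    return {"cat(obj1)": True, "on(obj1, mat)": True}

def apply_rule(facts: dict, body: list, head: str) -> dict:
    # Forward-chain one Horn-style rule: if every body atom holds, assert the head.
    if all(facts.get(atom, False) for atom in body):
        return dict(facts, **{head: True})
    return facts

facts = perceive("living_room.jpg")
# Rule: cat(X) AND on(X, mat) -> resting_pet(X), instantiated here for obj1.
facts = apply_rule(facts, ["cat(obj1)", "on(obj1, mat)"], "resting_pet(obj1)")
print(facts.get("resting_pet(obj1)", False))  # True
```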
If you are interested in these projects, feel free to contact Jundong Xu to discuss research ideas and potential collaborations!