Large language models (LLMs) and multimodal large language models (MLLMs) have demonstrated impressive reasoning abilities. However, when tasks demand precise logical guarantees, explicit structure, or reliable multi-step deduction, purely neural approaches often fall short. Their reasoning can be fluent yet brittle: explanations may sound convincing while quietly violating basic logical constraints.
Neuro-symbolic approaches aim to close this gap by coupling LLMs with symbolic representations and reasoning procedures. Symbolic reasoning enables compositionality, supports program-like manipulation of knowledge, and offers strong traceability and interpretability. Neural models, in turn, excel at learning, perception, and pattern discovery. Combined, the two let us build systems that learn and perceive flexibly while reasoning in ways that can be checked, traced, and explained.
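As a toy illustration of this coupling, the sketch below has a stubbed-out "LLM" translate a question into facts and rules, then runs a tiny forward-chaining engine over them, so every conclusion comes with an explicit derivation trace. All names and formats here (`llm_translate`, the fact/rule encoding) are illustrative assumptions, not code from any project on this page.

```python
"""Minimal neuro-symbolic loop: a (stubbed) LLM translates text into
facts and rules; a tiny forward-chaining engine does the deduction, so
every derived conclusion carries an explicit, checkable trace."""

from typing import Dict, List, Optional, Set, Tuple

Fact = str                    # ground atom, e.g. "human(socrates)"
Rule = Tuple[List[str], str]  # (premises, conclusion); "?x"-style variables


def llm_translate(question: str) -> Tuple[Set[Fact], List[Rule]]:
    """Hypothetical stand-in for an LLM prompted to emit symbolic form."""
    # A real system would call a model here; we hard-code the classic
    # syllogism purely for illustration.
    facts = {"human(socrates)"}
    rules = [(["human(?x)"], "mortal(?x)")]
    return facts, rules


def match(pattern: str, fact: str,
          binding: Dict[str, str]) -> Optional[Dict[str, str]]:
    """Unify a unary-predicate pattern with a ground fact under `binding`."""
    p_pred, p_arg = pattern[:-1].split("(")
    f_pred, f_arg = fact[:-1].split("(")
    if p_pred != f_pred:
        return None
    if p_arg.startswith("?"):
        if p_arg in binding and binding[p_arg] != f_arg:
            return None
        return {**binding, p_arg: f_arg}
    return binding if p_arg == f_arg else None


def apply_binding(term: str, binding: Dict[str, str]) -> str:
    """Instantiate a term's variable (if any) from the binding."""
    pred, arg = term[:-1].split("(")
    return f"{pred}({binding.get(arg, arg)})"


def forward_chain(facts: Set[Fact], rules: List[Rule]) -> List[str]:
    """Derive all consequences, recording a human-readable trace."""
    trace: List[str] = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Accumulate variable bindings that satisfy every premise.
            bindings: List[Dict[str, str]] = [{}]
            for prem in premises:
                bindings = [b2 for b in bindings for f in facts
                            if (b2 := match(prem, f, b)) is not None]
            for b in bindings:
                new_fact = apply_binding(conclusion, b)
                if new_fact not in facts:
                    facts.add(new_fact)
                    trace.append(f"{premises} + {b} => {new_fact}")
                    changed = True
    return trace


if __name__ == "__main__":
    facts, rules = llm_translate("Is Socrates mortal?")
    for step in forward_chain(facts, rules):
        print(step)  # ['human(?x)'] + {'?x': 'socrates'} => mortal(socrates)
```

The division of labor is the point: the neural side handles the open-ended translation from language to symbols, while the symbolic side guarantees that every inference step is explicit and auditable.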
This page collects a set of Neuro-Symbolic Large Model projects—covering both symbolic-guided LLM reasoning and multimodal symbolic reasoning—that explore these ideas in different settings and benchmarks.
These projects use symbolic structures and logic to guide LLM reasoning, yielding more faithful, controllable, and interpretable decision processes.
These projects extend symbolic reasoning beyond text, grounding logical rules in images and other modalities to assess and improve the multimodal reasoning ability of large models.
If you are interested in these projects, feel free to contact Jundong Xu to discuss research ideas and potential collaborations!