Neuro-Symbolic Large Models

Bridging large models with symbolic reasoning for faithful, interpretable intelligence.

Why Neuro-Symbolic in the Era of LLMs?

Large language models (LLMs) and multimodal large language models (MLLMs) have unlocked impressive reasoning ability. However, when tasks demand precise logical guarantees, explicit structure, or reliable multi-step deduction, purely neural approaches often fall short. Their reasoning can be fluent yet brittle: explanations may sound convincing while quietly violating basic logical constraints.

Neuro-symbolic approaches aim to close this gap by coupling LLMs with symbolic representations and reasoning procedures. Symbolic reasoning enables compositionality, supports program-like manipulation of knowledge, and offers strong traceability and interpretability. Neural models, in turn, contribute strong learning, perception, and pattern discovery. When combined, they allow us to (a minimal sketch of this pairing follows the list):

  • Improve faithfulness: Reasoning steps are traceable and verifiable, so conclusions follow from explicit, checkable derivations.
  • Enhance interpretability: Symbolic traces and proofs provide human-inspectable explanations.
  • Enable compositional generalization: Logic and constraints naturally capture reusable structures and rules.
  • Support robustness & safety: Formal checks help enforce invariants in high-stakes scenarios.
  • Extend beyond text: In multimodal settings, symbols provide a common reasoning layer over images, text, and other signals.
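
To make the pairing concrete, here is a minimal, self-contained Python sketch of one common pattern: an LLM translates a natural-language question into symbolic facts and rules, and a symbolic engine performs the actual deduction while recording a proof trace. The `llm_translate` function is a hypothetical stub standing in for a real LLM call; the forward-chaining reasoner is the part that provides the traceable, verifiable steps.

```python
# A minimal sketch of symbolic-guided LLM reasoning, assuming an LLM that
# can translate natural language into Horn-clause facts and rules.
# `llm_translate` is a hypothetical stand-in: here it returns a fixed
# example so the symbolic half can be run end to end.

def llm_translate(question: str) -> tuple[set[str], list[tuple[list[str], str]], str]:
    """Hypothetical LLM step: map a question to (facts, rules, goal).

    Rules are Horn clauses: ([premise, ...], conclusion).
    """
    facts = {"penguin(tweety)"}
    rules = [
        (["penguin(tweety)"], "bird(tweety)"),
        (["penguin(tweety)"], "cannot_fly(tweety)"),
    ]
    return facts, rules, "cannot_fly(tweety)"


def forward_chain(facts, rules, goal):
    """Symbolic reasoner: exhaustive forward chaining with a proof trace.

    Every derived fact is recorded together with the rule that produced
    it, so the answer comes with a human-inspectable justification.
    """
    known = set(facts)
    trace = [f"given: {f}" for f in sorted(known)]
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                trace.append(f"{' & '.join(premises)} => {conclusion}")
                changed = True
    return goal in known, trace


facts, rules, goal = llm_translate("Can Tweety fly?")
entailed, proof = forward_chain(facts, rules, goal)
print("entailed:", entailed)  # True, backed by an explicit derivation
for step in proof:
    print(" ", step)
```

Because every derived fact carries the rule that produced it, the final answer can be audited step by step, which is exactly the faithfulness and interpretability benefit listed above.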

This page collects a set of Neuro-Symbolic Large Model projects—covering both symbolic-guided LLM reasoning and multimodal symbolic reasoning—that explore these ideas in different settings and benchmarks.

Projects

Contact

If you are interested in these projects, feel free to contact Jundong Xu to discuss research ideas and potential collaborations!