Building Robust and Explainable AI with Commonsense Knowledge Graphs and Neural Models
Abstract
Commonsense reasoning is an attractive test bed for neuro-symbolic techniques, because it is a difficult challenge where pure neural and symbolic methods fall short. In this chapter, we review commonsense reasoning methods that combine large-scale knowledge resources with generalizable neural models to achieve both robustness and explainability. We discuss knowledge representation and consolidation efforts that harmonize heterogeneous knowledge. We cover representative neuro-symbolic commonsense methods that leverage this commonsense knowledge to reason over questions and stories. The range of reasoning mechanisms includes procedural reasoning, reasoning by analogy, and reasoning by imagination. We discuss different strategies to design systems with native explainability, such as engineering the knowledge dimensions used for pretraining, generating scene graphs, and learning to …
- Date: February 6, 2026
- Authors: Filip Ilievski, Kaixin Ma, Alessandro Oltramari, Peifeng Wang, Jay Pujara
- Book: Compendium of Neurosymbolic Artificial Intelligence
- Pages: 178-209
- Publisher: IOS Press