Research Interests
My research focuses on developing the next generation of AI systems that use explanation as a core mechanism for learning and reasoning in natural language, particularly in complex domains such as science, mathematics, and healthcare.
I investigate the integration of neural and symbolic AI methods to enhance the robustness, controllability, and faithfulness of AI-generated explanations and, ultimately, to uncover the principles governing explanatory reasoning in humans.
I am also interested in developing methodologies for interpreting and evaluating deep learning models, particularly Large Language Models, with a focus on generalisation in logical and mathematical reasoning.
Main research interests:
- Natural Language Processing
- Explanation-Based Learning and Reasoning
- Neurosymbolic AI
- Reasoning and Generalisation in Large Language Models
- AI for Science, Mathematics and Healthcare
- Mechanistic Interpretability