Xi Ye 
PhD Candidate
Department of Computer Science
University of Texas at Austin


Large language models (LLMs) have significantly extended the boundaries of NLP's potential applications, in part because of their improved ability to perform complex reasoning. However, LLMs have well-documented reasoning failures, such as hallucinations and an inability to generalize systematically. In this talk, I describe my work on enabling LLMs to perform textual reasoning reliably, with a particular focus on leveraging explanations. I will first introduce a framework for automatically assessing the robustness of black-box models using explanations. The framework extracts features that describe the “reasoning process” disclosed by the explanations, then uses a trained verifier to judge the reliability of predictions based on these features. I will then describe how to form effective explanations for better teaching LLMs to reason. My work uses declarative formal specifications as explanations, which makes it possible to use an SMT solver to compensate for the limited planning capabilities of LLMs. Finally, I will describe future directions for further enhancing LLMs to better aid humans in challenging real-world applications that demand deep reasoning.
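As a rough, hypothetical illustration of the last idea (not the system described in the talk), the sketch below encodes a tiny declarative specification, of the kind an LLM might be asked to produce instead of a step-by-step plan, and hands it to the Z3 SMT solver, which does the actual constraint solving. The tasks and constraints are invented for illustration; it assumes the `z3-solver` package is installed.

```python
# Toy example: a declarative spec of a scheduling problem is solved by Z3,
# offloading the search/planning step that LLMs often get wrong.
from z3 import Int, Solver, And, Distinct, sat

# Declarative specification: three tasks occupy distinct slots 1..3,
# and task_a must run before task_b.
task_a, task_b, task_c = Int("task_a"), Int("task_b"), Int("task_c")

solver = Solver()
solver.add(And([1 <= t for t in (task_a, task_b, task_c)]))
solver.add(And([t <= 3 for t in (task_a, task_b, task_c)]))
solver.add(Distinct(task_a, task_b, task_c))
solver.add(task_a < task_b)

if solver.check() == sat:
    print(solver.model())  # e.g. [task_a = 1, task_b = 2, task_c = 3]
```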

 

Talk Location: McKelvey 1020