Talk Info: Vered Shwartz
Time: July 16th, 9:30 AM SGT
Title: Commonsense Knowledge and Reasoning in Natural Language
Abstract: Natural language understanding models are trained on a sample of the real-world situations they may encounter. Commonsense and world knowledge, language, and reasoning skills can help them address unknown situations sensibly. In this talk I will discuss two lines of work, addressing knowledge and reasoning respectively. I will first present a method for discovering relevant knowledge that is unstated but may be required for solving a particular problem, through a process of asking information-seeking questions. I will then discuss nonmonotonic reasoning in natural language, a core human reasoning ability that has been studied in classical AI but mostly overlooked in modern NLP. I will cover several recent papers addressing abductive reasoning (reasoning about plausible explanations), counterfactual reasoning (what if?), and defeasible reasoning (updating beliefs given additional information). Finally, I will discuss open problems and future directions in building NLP models with commonsense reasoning abilities.
Bio: Vered Shwartz is a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington. She will join the Department of Computer Science at the University of British Columbia as an Assistant Professor in fall 2021. Vered received her PhD in Computer Science from Bar-Ilan University in 2019. Her research interests include computational semantics and pragmatics, multiword expressions, and commonsense reasoning.
Video:
Slides: