About this Event
Wenhu Chen
Ph.D. student, University of California, Santa Barbara
One of the ultimate goals of artificial intelligence is to build a knowledgeable virtual assistant (like Google Home or Amazon Alexa) that can understand natural language inputs and ground its responses in world knowledge to provide accurate information to humans. Building such a virtual assistant requires NLP models that can interact with and reason over diverse forms of world knowledge. In this talk, I will cover the following two core problems in knowledge-grounded NLP.
1) How can models reason over world knowledge in different scenarios? I will first discuss my work on building neural symbolic reasoning models over structured knowledge graphs (e.g., Freebase, YAGO). I will then discuss my work on building unified reasoning models over heterogeneous structured and unstructured web knowledge.
2) How should world knowledge be encoded in NLP models: externally (i.e., as an external memory bank the model retrieves from) or internally (i.e., memorized in the model parameters)? Existing NLP pre-training relies mostly on internalized memorization, which leads to the hallucination problem. I will detail my recent efforts to externalize explicit factual knowledge in generative pre-training, which greatly alleviates the model's hallucination problem.
Finally, I will discuss how my work relates to humanitarian concerns and propose new directions for addressing such issues when building knowledge-grounded NLP systems.
Event Details
Dial-In Information
Register in advance for this meeting:
https://wustl.az1.qualtrics.com/jfe/form/SV_bPml2kMrXPZemea
After registering, you will receive a confirmation email containing information about joining the meeting.