# A Controllable Model of Grounded Response Generation

Published as an arXiv preprint, 2020

Abstract:

Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process. This control is essential to ensure that users’ semantic intents are satisfied and to impose a degree of specificity on generated outputs. Attempts to boost informativeness alone come at the expense of factual accuracy, as attested by GPT-2’s propensity to “hallucinate” facts. While this may be mitigated by access to background knowledge, there is scant guarantee of relevance and informativeness in generated responses. We propose a framework that we call controllable grounded response generation (CGRG), in which lexical control phrases are either provided by a user or automatically extracted by a content planner from dialogue context and grounding knowledge. Quantitative and qualitative results show that, using this framework, a GPT-2 based model trained on a conversation-like Reddit dataset outperforms strong generation baselines.
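To make the idea concrete, the sketch below shows one way the decoder's input might be assembled from dialogue context, grounding text, and control phrases. The paper's actual input format, separator tokens, and function names are not given here, so everything in this snippet (`build_cgrg_input`, the `<|sep|>` token, the ordering of segments) is an illustrative assumption, not the authors' implementation.

```python
def build_cgrg_input(dialogue_context, grounding, control_phrases,
                     sep="<|sep|>"):
    """Assemble a single input string for a GPT-2 style decoder.

    Hypothetical layout: dialogue turns, then a grounding snippet,
    then the lexical control phrases, all joined by a separator
    token. The real CGRG input encoding may differ.
    """
    parts = list(dialogue_context) + [grounding, " ".join(control_phrases)]
    return f" {sep} ".join(parts)

# Example: control phrases constrain the response toward specific facts.
prompt = build_cgrg_input(
    dialogue_context=["Who directed Inception?"],
    grounding="Inception is a 2010 film directed by Christopher Nolan.",
    control_phrases=["Christopher Nolan", "2010"],
)
print(prompt)
```

In the full framework, this prompt would be fed to the fine-tuned GPT-2 model, which is trained to produce responses that include the control phrases while staying consistent with the grounding.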

PDF

Bibtex:

@misc{wu2020controllable,
  title={A Controllable Model of Grounded Response Generation},
  author={Zeqiu Wu and Michel Galley and Chris Brockett and Yizhe Zhang and Xiang Gao and Chris Quirk and Rik Koncel-Kedziorski and Jianfeng Gao and Hannaneh Hajishirzi and Mari Ostendorf and Bill Dolan},
  year={2020},
  eprint={2005.00613},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}