Referring expression generation in context
Author: Fahime Same
Publisher: Language Science Press
Total Pages: 276
Release: 2024-06-03
ISBN-10: 3961104719
ISBN-13: 9783961104710
Available in PDF, EPUB and Kindle. Book excerpt: Reference production, often termed Referring Expression Generation (REG) in computational linguistics, encompasses two distinct tasks: (1) one-shot REG and (2) REG-in-context. One-shot REG asks which properties of a referent uniquely describe it; REG-in-context, in contrast, asks which (anaphoric) referring expressions are optimal at various points in a discourse. This book offers a series of in-depth studies of the REG-in-context task, examining corpus selection, computational methods, feature analysis, and evaluation techniques.

A comparative study of different corpora highlights the pivotal role of corpus choice in REG-in-context research and its influence on all subsequent steps of model development. An experimental analysis of feature-based machine learning models shows that models with a concise set of linguistically informed features can rival models with many more features. The work also highlights the importance of paragraph-related concepts, an area underexplored in Natural Language Generation (NLG). The book then evaluates rule-based, feature-based, and neural end-to-end approaches to REG-in-context and demonstrates that well-crafted non-neural models can match or surpass the performance of neural models. In addition, it reports post-hoc experiments aimed at improving the explainability of both neural and classical REG-in-context models, and it addresses further critical topics such as the limitations of accuracy-based evaluation metrics and the essential role of human evaluation in NLG research.

Collectively, these studies advance our understanding of REG-in-context: they underline the importance of selecting appropriate corpora and targeted features, the need for context-aware modeling, and the value of a comprehensive approach to model evaluation and interpretation. This detailed analysis paves the way for more sophisticated, linguistically informed, and contextually appropriate NLG systems.