Natural Language Generation in Interactive Systems, 1st edition, by Amanda Stent and Srinivas Bangalore
Product details:
ISBN-10: 1107010020
ISBN-13: 978-1107010024
Editors: Amanda Stent, Srinivas Bangalore
An informative and comprehensive overview of the state of the art in natural language generation (NLG) for interactive systems, this guide introduces graduate students and new researchers to the field of natural language processing and artificial intelligence, while inspiring them with ideas for future research. Detailing the techniques and challenges of NLG for interactive applications, it focuses on research into systems that model collaborativity and uncertainty, can be scaled incrementally, and can engage effectively with the user. A range of real-world case studies is also included. The book and its accompanying website feature a comprehensive bibliography and refer the reader to corpora, data, software, and other resources for pursuing research on natural language generation and interactive systems, including dialogue systems, multimodal interfaces, and assistive technologies. It is an ideal resource for students and researchers in computational linguistics, natural language processing, and related fields.
Natural Language Generation in Interactive Systems, 1st edition: Table of contents
Part I Joint construction
2 Communicative intentions and natural language generation
2.1 Introduction
2.2 What are communicative intentions?
2.3 Communicative intentions in interactive systems
2.3.1 Fixed-task models
2.3.2 Plan-based models
2.3.3 Conversation Acts Theory
2.3.4 Rational behavior models
2.4 Modeling communicative intentions with problem solving
2.4.1 Collaborative problem solving
2.4.2 Collaborative problem solving state
2.4.3 Grounding
2.4.4 Communicative intentions
2.5 Implications of collaborative problem solving for NLG
2.6 Conclusions and future work
References
3 Pursuing and demonstrating understanding in dialogue
3.1 Introduction
3.2 Background
3.2.1 Grounding behaviors
3.2.2 Grounding as a collaborative process
3.2.3 Grounding as problem solving
3.3 An NLG model for flexible grounding
3.3.1 Utterances and contributions
3.3.2 Modeling uncertainty in interpretation
3.3.3 Generating under uncertainty
3.3.4 Examples
3.4 Alternative approaches
3.4.1 Incremental common ground
3.4.2 Probabilistic inference
3.4.3 Correlating conversational success with grounding features
3.5 Future challenges
3.5.1 Explicit multimodal grounding
3.5.2 Implicit multimodal grounding
3.5.3 Grounding through task action
3.6 Conclusions
References
4 Dialogue and compound contributions
4.1 Introduction
4.2 Compound contributions
4.2.1 Introduction
4.2.2 Data
4.2.3 Incremental interpretation vs. incremental representation
4.2.4 CCs and intentions
4.2.5 CCs and coordination
4.2.6 Implications for NLG
4.3 Previous work
4.3.1 Psycholinguistic research
4.3.2 Incrementality in NLG
4.3.3 Interleaving parsing and generation
4.3.4 Incremental NLG for dialogue
4.3.5 Computational and formal approaches
4.3.6 Summary
4.4 Dynamic Syntax (DS) and Type Theory with Records (TTR)
4.4.1 Dynamic Syntax
4.4.2 Meeting the criteria
4.5 Generating compound contributions
4.5.1 The DyLan dialogue system
4.5.2 Parsing and generation co-constructing a shared data structure
4.5.3 Speaker transition points
4.6 Conclusions and implications for NLG systems
References
Part II Reference
5 Referability
5.1 Introduction
5.2 An algorithm for generating boolean referring expressions
5.3 Adding proper names to REG
5.4 Knowledge representation
5.4.1 Relational descriptions
5.4.2 Knowledge representation and REG
5.4.3 Description Logic for REG
5.5 Referability
5.6 Why study highly expressive REG algorithms?
5.6.1 Sometimes the referent could not be identified before
5.6.2 Sometimes they generate simpler referring expressions
5.6.3 Simplicity is not everything
5.6.4 Complex content does not always require a complex form
5.6.5 Characterizing linguistic competence
5.7 Whither REG?
References
6 Referring expression generation in interaction: A graph-based perspective
6.1 Introduction
6.1.1 Referring expression generation
6.1.2 Preferences versus adaptation in reference
6.2 Graph-based referring expression generation
6.2.1 Scene graphs
6.2.2 Referring graphs
6.2.3 Formalizing reference in terms of subgraph isomorphism
6.2.4 Cost functions
6.2.5 Algorithm
6.2.6 Discussion
6.3 Determining preferences and computing costs
6.4 Adaptation and interaction
6.4.1 Experiment I: adaptation and attribute selection
6.4.2 Experiment II: adaptation and overspecification
6.5 General discussion
6.6 Conclusion
References
Part III Handling uncertainty
7 Reinforcement learning approaches to natural language generation in interactive systems
7.1 Motivation
7.1.1 Background: Reinforcement learning approaches to NLG
7.1.2 Previous work in adaptive NLG
7.2 Adaptive information presentation
7.2.1 Corpus
7.2.2 User simulations for training NLG
7.2.3 Data-driven reward function
7.2.4 Reinforcement learning experiments
7.2.5 Results: Simulated users
7.2.6 Results: Real users
7.3 Adapting to unknown users in referring expression generation
7.3.1 Corpus
7.3.2 Dialogue manager and generation modules
7.3.3 Referring expression generation module
7.3.4 User simulations
7.3.5 Training the referring expression generation module
7.3.6 Evaluation with real users
7.4 Adaptive temporal referring expressions
7.4.1 Corpus
7.4.2 User simulation
7.4.3 Evaluation with real users
7.5 Research directions
7.6 Conclusions
References
8 A joint learning approach for situated language generation
8.1 Introduction
8.2 GIVE
8.2.1 The GIVE-2 corpus
8.2.2 Natural language generation for GIVE
8.2.3 Data annotation and baseline NLG system
8.3 Hierarchical reinforcement learning for NLG
8.3.1 An example
8.3.2 Reinforcement learning with a flat state–action space
8.3.3 Reinforcement learning with a hierarchical state–action space
8.4 Hierarchical reinforcement learning for GIVE
8.4.1 Experimental setting
8.4.2 Experimental results
8.5 Hierarchical reinforcement learning and HMMs for GIVE
8.5.1 Hidden Markov models for surface realization
8.5.2 Retraining the learning agent
8.5.3 Results
8.6 Discussion
8.7 Conclusions and future work
References
Part IV Engagement
9 Data-driven methods for linguistic style control
9.1 Introduction
9.2 PERSONAGE: personality-dependent linguistic control
9.3 Learning to control a handcrafted generator from data
9.3.1 Overgenerate and rank
9.3.2 Parameter estimation models
9.4 Learning a generator from data using factored language models
9.5 Discussion and future challenges
References
10 Integration of cultural factors into the behavioral models of virtual characters
10.1 Introduction
10.2 Culture and communicative behaviors
10.2.1 Levels of culture
10.2.2 Cultural dichotomies
10.2.3 Hofstede’s dimensional model and synthetic cultures
10.3 Levels of cultural adaptation
10.3.1 Culture-specific adaptation of context
10.3.2 Culture-specific adaptation of form
10.3.3 Culture-specific communication management
10.4 Approaches to culture-specific modeling for embodied virtual agents
10.4.1 Top-down approaches
10.4.2 Bottom-up approaches
10.5 A hybrid approach to integrating culture-specific behaviors into virtual agents
10.5.1 Cultural profiles for Germany and Japan
10.5.2 Behavioral expectations for Germany and Japan
10.5.3 Formalization of culture-specific behavioral differences
10.5.4 Computational models for culture-specific conversational behaviors
10.5.5 Simulation
10.5.6 Evaluation
10.6 Conclusions
References
11 Natural language generation for augmentative and assistive technologies
11.1 Introduction
11.2 Background on augmentative and alternative communication
11.2.1 State of the art
11.2.2 Related research
11.2.3 Diversity in users of AAC
11.2.4 Other AAC challenges
11.3 Application areas of NLG in AAC
11.3.1 Helping AAC users communicate
11.3.2 Teaching communication skills to AAC users
11.3.3 Accessibility: Helping people with visual impairments access information
11.3.4 Summary
11.4 Example project: “How was School Today…?”
11.4.1 Use case
11.4.2 Example interaction
11.4.3 NLG in “How was School Today…?”
11.4.4 Current work on “How was School Today…?”
11.5 Challenges for NLG and AAC
11.5.1 Supporting social interaction
11.5.2 Narrative
11.5.3 User personalization
11.5.4 System evaluation
11.5.5 Interaction and dialogue
11.6 Conclusions
References
Part V Evaluation and shared tasks
12 Eye tracking for the online evaluation of prosody in speech synthesis
12.1 Introduction
12.2 Experiment
12.2.1 Design and materials
12.2.2 Participants and eye-tracking procedure
12.3 Results
12.4 Interim discussion
12.5 Offline ratings
12.5.1 Design and materials
12.5.2 Results
12.6 Acoustic analysis using Generalized Linear Mixed Models (GLMMs)
12.6.1 Acoustic factors and looks to the area of interest
12.6.2 Relationship between ratings and looks
12.6.3 Correlation between rating and acoustic factors
12.7 Discussion
12.8 Conclusions
References
13 Comparative evaluation and shared tasks for NLG in interactive systems
13.1 Introduction
13.2 A categorization framework for evaluations of automatically generated language
13.2.1 Evaluation measures
13.2.2 Higher-level quality criteria
13.2.3 Evaluation frameworks
13.2.4 Concluding comments
13.3 An overview of evaluation and shared tasks in NLG
13.3.1 Component evaluation: Referring Expression Generation
13.3.2 Component evaluation: Surface Realization
13.3.3 End-to-end NLG systems: data-to-text generation
13.3.4 End-to-end NLG systems: text-to-text generation
13.3.5 Embedded NLG components
13.3.6 Embedded NLG components: the GIVE shared task
13.3.7 Concluding comments
13.4 An overview of evaluation for spoken dialogue systems
13.4.1 Introduction
13.4.2 Realism and control
13.4.3 Evaluation frameworks
13.4.4 Shared tasks
13.4.5 Discussion
13.4.6 Concluding comments
13.5 A methodology for comparative evaluation of NLG components in interactive systems
13.5.1 Evaluation model design
13.5.2 An evaluation model for comparative evaluation of NLG modules in interactive systems
13.5.3 Context-independent intrinsic output quality
13.5.4 Context-dependent intrinsic output quality
13.5.5 User satisfaction
13.5.6 Task effectiveness and efficiency
13.5.7 System purpose success
13.5.8 A proposal for a shared task on referring expression generation in dialogue context
13.5.9 GRUVE: A shared task on instruction giving in pedestrian navigation
13.5.10 Concluding comments
13.6 Conclusion