This monograph is the first survey of neural approaches to conversational AI aimed at Natural Language Processing and Information Retrieval audiences. It comprehensively covers the neural approaches developed in recent years, spanning question answering (QA), task-oriented, and social bots, under a unified view of optimal decision making.

The authors draw connections between modern neural approaches and traditional approaches, helping readers understand why and how the research has evolved and where it can go next. They also present state-of-the-art approaches to training dialogue agents using both supervised and reinforcement learning. Finally, the authors sketch the landscape of conversational systems developed in the research community and released in industry, demonstrating via case studies the progress that has been made and the challenges that remain.
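
To make the reinforcement learning side of that training concrete, the sketch below shows a bare-bones REINFORCE-style policy-gradient update for a tabular dialogue policy. Everything here (the state and action counts, the environment interface with reset/step, the reward signal) is a hypothetical placeholder for illustration, not the authors' implementation; a real agent would use neural encoders over the dialogue history.

```python
import numpy as np

# Minimal REINFORCE sketch for a tabular dialogue policy.
# States, actions, and rewards are hypothetical placeholders.
N_STATES, N_ACTIONS = 10, 4
theta = np.zeros((N_STATES, N_ACTIONS))  # policy parameters (logits per state)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def run_episode(env, theta, max_turns=20):
    """Roll out one dialogue, recording (state, action, reward) per turn.
    `env` is an assumed environment with reset() -> state and
    step(action) -> (next_state, reward, done)."""
    trajectory, state = [], env.reset()
    for _ in range(max_turns):
        probs = softmax(theta[state])
        action = np.random.choice(N_ACTIONS, p=probs)
        state_next, reward, done = env.step(action)
        trajectory.append((state, action, reward))
        state = state_next
        if done:  # e.g., task completed or user ended the dialogue
            break
    return trajectory

def reinforce_update(theta, trajectory, lr=0.01, gamma=0.99):
    """Policy gradient: raise log-prob of taken actions, weighted by return."""
    G = 0.0
    for state, action, reward in reversed(trajectory):
        G = reward + gamma * G       # discounted return from this turn onward
        probs = softmax(theta[state])
        grad = -probs                # d log pi(a|s) / d logits = onehot(a) - probs
        grad[action] += 1.0
        theta[state] += lr * G * grad
    return theta
```

In practice, supervised pretraining on human-human or human-machine dialogues typically precedes such policy-gradient updates, since exploring from scratch is sample-inefficient.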

Neural Approaches to Conversational AI is a valuable resource for students, researchers, and software developers. It provides a unified view, as well as a detailed presentation of the important ideas and insights needed to understand and create modern dialogue agents, which will be instrumental in making world knowledge and services accessible to millions of users in ways that feel natural and intuitive.

Online evaluation is one of the most common approaches to measuring the effectiveness of an information retrieval system. It involves fielding the system to real users and observing their interactions in situ as they engage with it, allowing actual users with real-world information needs to play an important part in assessing retrieval quality.
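
As a minimal illustration of what such in-situ observation can yield, the sketch below computes click-through rate per system variant from an A/B-style interaction log. The log schema and field names are assumptions made up for this example, not a standard from the monograph.

```python
from collections import defaultdict

# Hypothetical interaction log: one record per search impression,
# tagged with the system variant the user was randomly assigned to.
log = [
    {"variant": "A", "clicked": True},
    {"variant": "A", "clicked": False},
    {"variant": "B", "clicked": True},
    {"variant": "B", "clicked": True},
]

def ctr_by_variant(log):
    """Click-through rate per variant: a simple absolute online metric."""
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for record in log:
        impressions[record["variant"]] += 1
        clicks[record["variant"]] += record["clicked"]
    return {v: clicks[v] / impressions[v] for v in impressions}

print(ctr_by_variant(log))  # e.g., {'A': 0.5, 'B': 1.0}
```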

Online Evaluation for Information Retrieval provides the reader with a comprehensive overview of the topic. It shows how online evaluation is used for controlled experiments, segmenting them into experiment designs that allow absolute or relative quality assessments. The presentation of different metrics further partitions online evaluation by the different-sized experimental units commonly of interest: documents, lists, and sessions. It also includes an extensive discussion of recent work on data reuse and experiment estimation based on historical data.
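
One of the relative experiment designs discussed in this literature is interleaving, in which results from two rankers are merged into a single list and user clicks decide the winner. The sketch below implements a team-draft-style variant; it is a simplified illustration under assumed inputs (two document rankings and a list of clicked documents), not a reference implementation from the book.

```python
import random

def team_draft_interleave(ranking_a, ranking_b):
    """Merge two rankings into one list, remembering which system
    (team) contributed each document (team-draft interleaving)."""
    interleaved, teams = [], {}
    while True:
        added = False
        order = [("A", ranking_a), ("B", ranking_b)]
        random.shuffle(order)  # coin flip: which team drafts first this round
        for team, ranking in order:
            # Each team contributes its highest-ranked document not yet used.
            for doc in ranking:
                if doc not in teams:
                    interleaved.append(doc)
                    teams[doc] = team
                    added = True
                    break
        if not added:  # both rankings exhausted
            break
    return interleaved, teams

def credit_clicks(clicked_docs, teams):
    """Attribute clicks to the contributing team; the team with more
    credited clicks is inferred to be the better ranker."""
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in teams:
            wins[teams[doc]] += 1
    return wins

# Example: show the interleaved list to a user, then score their clicks.
docs, teams = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d4", "d1"])
print(credit_clicks(["d2", "d4"], teams))
```

Because every impression yields a within-user comparison, interleaved designs are known to detect ranker differences with far less traffic than A/B tests that compare absolute metrics across user groups.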

This book pays particular attention to practical issues: how to run evaluations in practice, how to select experimental parameters, how to account for the ethical considerations inherent in online evaluation, and which limitations experimenters should be aware of. While most published work on online experimentation today concerns large systems with millions of users, this monograph also emphasizes that the same techniques can be applied at a small scale. To this end, it highlights recent work that makes online evaluation easier to use at smaller scales and encourages studying real-world information seeking in a wide range of scenarios.

The monograph concludes with a summary of the most recent work in the area, outlines some open problems, and postulates future directions.