Diverse Beam Search for Increased Novelty in Abstractive Summarization

Abstract

Text summarization condenses a text into a shorter version while retaining the important information. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences from the original text. Recently, neural sequence-to-sequence models have achieved good results in abstractive summarization, opening new possibilities and applications for industrial purposes. However, most practitioners observe that these models still copy large parts of the original text into the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric that measures how much of a summary is extracted from the input text. Second, we present a novel method, relying on a diversity factor in computing the neural network loss, that improves the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall ROUGE score of state-of-the-art methods by at least 2 points.
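To give a flavour of the underlying idea, the sketch below shows one common way a diversity term can be folded into beam-search decoding: at each step, a candidate token's score is penalised if earlier-ranked beams have already selected it, pushing the beams apart. This is a minimal illustration of the general diversity-penalty concept, not the paper's exact formulation (the paper applies the diversity factor in the loss); the function names, the `step_fn` interface, and the toy scoring are all assumptions for illustration.

```python
import math


def diverse_beam_step(beams, step_fn, diversity_strength):
    """One decoding step of a diversity-penalised beam search (sketch).

    beams:              list of (token_list, cumulative_log_prob) pairs,
                        ordered from highest- to lowest-ranked beam.
    step_fn:            hypothetical model interface; maps a token prefix
                        to a dict {token: log_prob} for the next token.
    diversity_strength: weight of the penalty for re-using a token that an
                        earlier-ranked beam already chose at this step.
    """
    new_beams = []
    chosen_tokens = []  # tokens already picked at this step by higher beams
    for tokens, score in beams:
        log_probs = step_fn(tokens)
        # Penalise candidates proportionally to how many earlier beams
        # selected the same token at this step.
        adjusted = {
            tok: lp - diversity_strength * chosen_tokens.count(tok)
            for tok, lp in log_probs.items()
        }
        best_tok = max(adjusted, key=adjusted.get)
        chosen_tokens.append(best_tok)
        # The beam's running score keeps the true (unpenalised) log-prob.
        new_beams.append((tokens + [best_tok], score + log_probs[best_tok]))
    return new_beams
```

With a sufficiently large `diversity_strength`, two beams fed the same distribution will diverge: the first takes the most likely token, while the second is pushed toward the runner-up, which is the behaviour that makes the generated summaries less repetitive.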

Publication info

Date
February 2018
Author(s)
Andreea Hossmann
Principal Product Manager

Michael Baeriswyl
Executive Vice President of Data, Analytics & AI

Andre Cibils
ex-Master Student

Claudiu Musat
Director of Research for Data, Analytics & AI

Conference