
Automatically generating text from structured data


Data-to-text generation converts information from a structured format such as a table into natural language. This allows structured information to be read or listened to, as when a device displays a weather forecast or a voice assistant answers a question.

Language models trained on billions of sentences learn common linguistic patterns and can generate natural-sounding sentences by predicting likely sequences of words. In data-to-text generation, however, we want language that is not only fluent but also conveys the content accurately.

Some approaches to data-to-text generation use a pipeline of machine learning models to turn the data into text, but such pipelines can be labor-intensive to create, and errors in one step risk compounding in later steps.

In the Alexa AI organization, we’ve developed a neural, end-to-end, data-to-text generation system called DataTuner, which can be used for a variety of data types and topics to generate fluent and accurate texts. We’ve released the DataTuner code on GitHub under a noncommercial license.

Alexa AI’s new DataTuner software can convert structured information, such as the relationships encoded by knowledge graphs, into texts that are both semantically faithful and fluent.

Credit: Glynis Condon

At last year’s International Conference on Computational Linguistics (COLING), we presented a paper in which we compared our approach to its best-performing predecessors, using four data-to-text data sets. On automated metrics, DataTuner pushes the state of the art by significant margins, from 1.2 to 5.9 points according to the BLEU algorithm for evaluating text quality.

Human annotators also graded our responses as both more natural-sounding and more accurate. In fact, on two of the four data sets, our texts were judged to be more natural-sounding, on average, than human-written texts.

Annotator evaluations showed that DataTuner improved the semantic accuracy of generated texts, with margins ranging from 5.3% to 40%. Our paper also introduces a model-based approach for measuring the accuracy of generated texts, an approach that is 4.2% to 14.2% more accurate at detecting errors than previous hand-crafted approaches. 

Semantic fidelity vs. fluency

To get a sense of the problem we address, consider an example in which we have some structured information about Michelle Obama that we want to convey to our readers or listeners. That information is organized in the entity-relation-entity format typical of knowledge graphs.

Michelle Obama | author of | Becoming 
Michelle Obama | birthplace | Chicago, Illinois, USA
Princeton University | alma mater of | Michelle Obama
Harvard University | alma mater of | Michelle Obama

We could imagine a text that conveys the meaning accurately but doesn’t sound very natural:

Michelle Obama is the author of Becoming. Michelle Obama was born in Chicago, Illinois, USA. Michelle Obama was educated at Princeton University. Michelle Obama was educated at Harvard University.

This text has high semantic fidelity but low fluency.

Alternatively, we could imagine a text that sounds very fluent but doesn’t accurately convey the information: 

Born in Chicago, Illinois, and educated at Harvard, Michelle Obama is the author of A Promised Land.

This text adds some information and omits some, so it has low semantic fidelity even though it is highly fluent.

Pipeline-based approaches to data-to-text generation typically consist of steps such as (1) ordering the content; (2) dividing the content into sentences; (3) finding the right words and phrases to express the data (lexicalization and referring-expression generation); and (4) joining it all together to produce the final text (realization). These approaches usually generalize well to new concepts because of the separate lexicalization step, but they can be difficult to maintain and require training data for each step that can be labor-intensive to acquire.
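To make the pipeline idea concrete, here is a toy sketch in Python. The stage names and the trivial templates are illustrative placeholders, not any particular system's implementation; in practice each stage is its own trained model or rule set, which is why errors can propagate from one stage to the next.

```python
# Toy sketch of a classical data-to-text pipeline (illustrative only).
# Each stage would normally be a separate trained model or rule set.

def order_content(triples):
    return list(triples)                                   # (1) content ordering (here: as given)

def segment(triples):
    return [[t] for t in triples]                          # (2) sentence segmentation (one fact per sentence)

def lexicalize(plan):
    return "; ".join(f"{s} {p} {o}" for s, p, o in plan)   # (3) lexicalization via a fixed template

def realize(sentences):
    return ". ".join(sentences) + "."                      # (4) surface realization

triples = [("Michelle Obama", "is the author of", "Becoming"),
           ("Michelle Obama", "was born in", "Chicago, Illinois, USA")]
sentences = [lexicalize(plan) for plan in segment(order_content(triples))]
print(realize(sentences))
# -> Michelle Obama is the author of Becoming. Michelle Obama was born in Chicago, Illinois, USA.
```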

End-to-end approaches are trained on [data, text] pairs that can be gathered more easily, but it’s difficult to guarantee the semantic fidelity of the results. This is the problem we address with DataTuner.

The DataTuner model

DataTuner’s approach has two steps, generation and reranking. 

First, our language model generates texts from data. In our experiments, we started with a pretrained language model that could generate text, the GPT-2 model. To adapt it to the data-to-text task, we trained it on concatenated data and text, using the special tokens <data> and <text> to indicate which was which. When we use the trained model to generate text, the only input is the data.
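As a rough illustration of this setup, here is a minimal sketch using the Hugging Face transformers library. The library choice and the toy training pair are assumptions for illustration; the released DataTuner code is the reference implementation, and it also adds the fine-grained state embeddings described below, which this sketch omits.

```python
# Sketch: fine-tuning GPT-2 on concatenated "<data> ... <text> ..." sequences.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<data>", "<text>"]})

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))  # make room for the new special tokens

# One hypothetical training pair: linearized data followed by its reference text.
example = ("<data> Michelle Obama | author of | Becoming "
           "<text> Michelle Obama wrote Becoming.")
inputs = tokenizer(example, return_tensors="pt")

# Standard language-modeling objective: learn to continue the <data> prefix
# with the <text> continuation.
loss = model(**inputs, labels=inputs["input_ids"]).loss
loss.backward()  # one gradient step inside an ordinary training loop
```

At inference time, only the <data> prefix is supplied, and the model generates the text continuation.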

During training, the inputs to DataTuner’s data-to-text model are data and text, separated by the special tokens <data> and <text>. At runtime, the only input is the data.

Credit: Hamza Harkous

Inside the model, we concatenate several types of embeddings, or vector representations whose spatial relationships reflect relationships between data (see figure above). The first type is token embeddings, which encode semantic information about individual input words. The second is positional embeddings, which represent words’ positions in the text.

We also introduce what we call fine-grained state embeddings. To produce these, we use special tokens that indicate structural relationships between data items.

For example, we would convert the data triple Michelle Obama | author of | Becoming into the string <subject> Michelle Obama <predicate> author of <object> Becoming, with <subject>, <object>, and <predicate> as special tokens. The state embedding for any token is that of the special token that most recently precedes it; for example, the token Becoming will get the state embedding of <object>. 
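The assignment of states can be sketched as follows. This is a simplified illustration, with a plain whitespace split standing in for the real tokenizer; in the full model, each state indexes an embedding table whose vectors are combined with the token and positional embeddings described above.

```python
# Sketch: each token receives the state of the most recent structural special token.
STATE_TOKENS = {"<subject>", "<predicate>", "<object>"}

def state_sequence(tokens):
    """For every token, record the structural token that most recently preceded it."""
    states, current = [], None
    for tok in tokens:
        if tok in STATE_TOKENS:
            current = tok
        states.append(current)
    return states

tokens = "<subject> Michelle Obama <predicate> author of <object> Becoming".split()
print(list(zip(tokens, state_sequence(tokens))))
# ... ('Becoming', '<object>'), so "Becoming" gets the <object> state embedding.
```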

Second, we train a semantic-fidelity classifier. This takes the input data and a generated text and identifies whether the text accurately conveys the data or whether it adds, repeats, omits, or changes any of the content. We use this classifier to rerank the generated texts according to accuracy.
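The reranking step itself is simple; here is a hedged sketch in which generate_candidates and fidelity_score are hypothetical placeholders for the fine-tuned language model and the classifier described below.

```python
# Sketch: rerank candidate texts by the semantic-fidelity classifier's score.
def rerank(data, generate_candidates, fidelity_score, num_candidates=5):
    candidates = generate_candidates(data, n=num_candidates)  # beams or samples
    # Keep the candidate the classifier judges most likely to be faithful.
    return max(candidates, key=lambda text: fidelity_score(data, text))
```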

The classifier is trained using the same data we used to train our language model. Our original [data, text] pairs give us the examples that are to be classified as accurate. To get inaccurate examples, we use rule-based corruptions of the accurate [data, text] pairs. For example, we could take the training pair (Michelle Obama | author of | Becoming) and “Michelle Obama wrote Becoming” and swap the entities to create the inaccurate [data, text] pair (Michelle Obama | author of | The Gruffalo) and “Michelle Obama wrote Becoming”.
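One simple entity-swap corruption might look like the sketch below. The function and the entity pool are illustrative assumptions; the paper's corruption rules cover more error types than this.

```python
import random

# Sketch: build an "inaccurate" training example by swapping the object entity
# in a triple for another entity seen in the training data.
def corrupt_triple(triple, entity_pool):
    subject, predicate, obj = triple
    replacement = random.choice([e for e in entity_pool if e != obj])
    return (subject, predicate, replacement)

triple = ("Michelle Obama", "author of", "Becoming")
pool = ["Becoming", "The Gruffalo", "Chicago, Illinois, USA"]
corrupted = corrupt_triple(triple, pool)
# Pairing `corrupted` with the unchanged text "Michelle Obama wrote Becoming."
# yields a [data, text] example labeled as inaccurate.
```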

For this classifier we use the RoBERTa language model with an additional classification layer, an approach that has been successful in other tasks, such as natural-language inference. For each input token (whether from the data or the text), we take the token embedding, the positional embedding, and a segment embedding (which indicates whether the token belongs to the data or the text) and sum these element-wise to provide the input to RoBERTa’s first layer. A final single-layer neural network produces a classification label.
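A minimal sketch of such a classifier with the Hugging Face transformers library follows. The label set and the sentence-pair input formatting are assumptions for illustration, and RobertaForSequenceClassification supplies the extra classification layer; in this simplified version the data/text distinction comes from the pair encoding rather than explicit segment embeddings, and the classifier still has to be fine-tuned on the accurate and corrupted pairs before its predictions mean anything.

```python
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

# Assumed label set: faithful text vs. the error types described above.
LABELS = ["accurate", "addition", "repetition", "omission", "value_error"]

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

data = "<subject> Michelle Obama <predicate> author of <object> Becoming"
text = "Michelle Obama wrote Becoming."

# Encode the (data, text) pair; after fine-tuning, the classifier scores how
# faithfully the text conveys the data.
inputs = tokenizer(data, text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])  # meaningful only after fine-tuning
```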

Evaluation

We experimented with four different data sets in different formats, including news texts, restaurant reviews, and chats about video games. We evaluated the texts we generated both with automated metrics and by asking human annotators to rate fluency and accuracy via Amazon Mechanical Turk. 

In our experiments, we saw that a model trained without the fine-grained state embeddings is less accurate than a model with them and that adding the semantic-fidelity classifier boosts accuracy further.

We also examined the cases in which our generated texts were assessed as better than human-written texts, and we suspect that the reason is that our model learned to produce standard formulations, whereas humans sometimes write in non-standard or informal ways that other people might find less fluent.

We also investigated the use of our semantic-fidelity classifier as a method for automatically evaluating the accuracy of texts generated by different models and found that, for two datasets, it was a significantly better predictor of annotators’ evaluations than existing heuristic approaches.


