3 questions with Kathleen McKeown: Controlling model hallucinations in natural language generation

The first Amazon Web Services (AWS) Machine Learning Summit on June 2 will bring together customers, developers, and the science community to learn about advances in the practice of machine learning (ML). The event, which is free to attend, will feature four audience-focused tracks, including Science of Machine Learning.

The science track is focused on the data science and advanced practitioner audience and will highlight the work AWS and Amazon scientists are doing to advance machine learning. The track will comprise six sessions, each lasting 30 minutes, and a 45-minute fireside chat.

Amazon Science is featuring interviews with speakers from the Science of Machine Learning track. For the fifth edition of the series, we spoke to Kathleen McKeown, Henry and Gertrude Rothschild professor of computer science at Columbia University, and founding director of the school’s Data Science Institute. Her research has been focused on text summarization, natural language generation, multi-media explanation, question-answering, and multilingual applications.

McKeown is also an Amazon Scholar, an expanding group of academics who work on large-scale technical challenges for Amazon while continuing to teach and conduct research at their universities.

Q. What is the subject of your talk going to be at the ML Summit?

Over the last few years, my research has focused on, among other areas, the field of natural language generation. At the AWS ML Summit, I will be talking about the need to control the choices a model makes when generating language.

A program that generates language must make choices about how to express information. For example, algorithms have to determine what content to convey, what words to use, and how to order the words to form a sentence. Deep learning can generate remarkably fluent text. However, it is often not possible to control how these choices are made based on the input goals specified.

A primary concern for controllable language generation is that the output should be faithful to the input. Neural language generation approaches are known to hallucinate content, resulting in generated text that conveys information that did not appear in the input. Factual inconsistency resulting from model hallucinations can occur at either the entity or the relation level.

In the former case, a model-generated summary may contain entities that are completely absent from the source document. There are also other kinds of hallucinations that are more difficult to spot: relational inconsistencies, where the entities exist in the source document, but the relations expressed between them do not.
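
To make the entity-level case concrete, here is a rough illustrative check, not a technique from the talk, that flags candidate entity-level hallucinations by testing whether the named entities in a generated summary also appear in the source document. It assumes spaCy and its small English model are installed, and the example texts are invented.

```python
# Illustrative sketch: flag entities that a summary mentions but the source
# does not. Assumes spaCy with the "en_core_web_sm" model installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def unsupported_entities(source: str, summary: str) -> list:
    """Return entities mentioned in the summary but absent from the source."""
    source_text = source.lower()
    return [ent.text for ent in nlp(summary).ents
            if ent.text.lower() not in source_text]

source = "Café Aramis, a coffee shop in Riverside, opened in 2019."
summary = "Café Aramis opened in Riverside in 2015."
print(unsupported_entities(source, summary))  # likely ['2015'], a date the source never states
```

A surface check like this says nothing about the relational inconsistencies described above, where every entity is present in the source but the relation asserted between them is not.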

Current language generation techniques that rely on large-scale language models can generate fluent text. However, it is difficult to control their word choice or word order in ways that guarantee that the speaker intent comes across accurately. During my talk, I will detail techniques that allow models to render speaker intent faithfully.

Q. Why is this topic especially relevant within the science community today?

There are many different use cases for natural language generation. These include generating language from data, summarizing text, and generating accurate responses in dialogue and machine translation.

In all of these use cases, hallucination has been a problem. A system that generates inaccurate text is far worse than one that generates less fluent text.

Q. Can you elaborate on some of the techniques for faithful generation from data, faithful summarization of input text, and methods for controlling ordering of content?

One of the ways hallucination can occur is when the input contains data or phrases that never appeared in the training data. Approaches to faithful generation and summarization primarily rely on techniques involving data augmentation.

For language generation from data, we use a self-training method to augment the original training data with completely novel instances, each consisting of structured data paired with text that conveys the same information.
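
The following is a minimal sketch of the general self-training idea, under the assumption that augmentation works by having a base model label novel structured inputs and keeping only outputs that pass a faithfulness filter. It is not the exact pipeline from the research, and the callables train, generate, and is_faithful are hypothetical placeholders supplied by the caller.

```python
# Minimal self-training sketch (assumptions noted above, not the exact method):
#   train(data)           -> returns a model trained on (structured data, text) pairs
#   generate(model, mr)   -> returns text the model produces for a structured input
#   is_faithful(mr, text) -> True only if the text conveys exactly the input content
def self_train(seed_pairs, novel_inputs, train, generate, is_faithful, rounds=2):
    data = list(seed_pairs)              # original (structured data, text) pairs
    model = train(data)                  # base model trained on the seed data
    for _ in range(rounds):
        for mr in novel_inputs:          # structured inputs not covered by the seed data
            text = generate(model, mr)   # the model labels its own novel input
            if is_faithful(mr, text):    # keep only outputs faithful to the input
                data.append((mr, text))  # add the novel pair to the training data
        model = train(data)              # retrain on the augmented data
    return model, data
```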

Remarkably, after training on the augmented data, even simple encoder-decoder models with greedy decoding can generate semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.
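
For reference, greedy decoding simply takes the single most probable token at each step. With the Hugging Face transformers library, that corresponds to calling generate() with sampling and beam search turned off; the checkpoint and the linearized input below are illustrative stand-ins rather than the models or data from the research, so the untuned model's output will not be meaningful.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "t5-small" is only a stand-in encoder-decoder checkpoint.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# An illustrative linearized meaning representation used as input.
mr = "name[Café Aramis] food[coffee] area[riverside]"
inputs = tokenizer(mr, return_tensors="pt")

# num_beams=1 with do_sample=False gives greedy decoding: the single
# highest-probability token is chosen at every step.
output_ids = model.generate(**inputs, num_beams=1, do_sample=False, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```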

We also studied the degree to which neural sequence-to-sequence models exhibit fine-grained controllability when performing natural language generation from a meaning representation. More specifically, we looked at the effects of controlling the word order.

Suppose we have a meaning representation that gives the name of a restaurant, its location, and the type of food it serves. The sentence could order the information so that the restaurant name appears first. Or it could mention the food before the restaurant name and the location. Or it could begin by providing the location.

We can imagine scenarios in which the different orderings are more appropriate. For example, suppose that someone asks a question, “Where can I find a good place for coffee?” Here, the focus in the response might be on the location. However, for a question like “Where is Café Aramis located and what does it serve?”, we would expect the response to start with “Café Aramis.”

We systematically compared the effect of four input linearization strategies on controllability and faithfulness. Linearization refers to the task of finding the correct order for a given set of words or input elements. We also used data augmentation to improve performance. We found that properly aligning input sequences during training leads to highly controllable generation, both when training from scratch and when fine-tuning a larger pre-trained model.
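
As a sketch of the intuition behind order alignment, the snippet below serializes the slots of a meaning representation in the order we want them realized, so that a model trained on such order-aligned pairs can learn to follow the input order. The slot names and bracket format are illustrative assumptions, not the actual linearization strategies compared in the work.

```python
def linearize(mr: dict, slot_order: list) -> str:
    """Serialize a meaning representation with its slots in a chosen order."""
    return " ".join(f"{slot}[{mr[slot]}]" for slot in slot_order)

mr = {"name": "Café Aramis", "food": "coffee", "area": "riverside"}

# "Where can I find a good place for coffee?" -> lead with the location.
print(linearize(mr, ["area", "food", "name"]))   # area[riverside] food[coffee] name[Café Aramis]

# "Where is Café Aramis located and what does it serve?" -> lead with the name.
print(linearize(mr, ["name", "area", "food"]))   # name[Café Aramis] area[riverside] food[coffee]
```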

You can learn more about McKeown’s research here, and watch her free talk at the virtual AWS Machine Learning Summit on June 2 by registering for the event.


