At NeurIPS, what’s old is new again

The current excitement around large language models is just the latest aftershock of the deep-learning revolution that started in 2012 (or maybe 2010), but Columbia professor and Amazon Scholar Richard Zemel was there before the beginning. As a PhD student at the University of Toronto in the late ’80s and early ’90s, Zemel wrote his dissertation on representation learning in unsupervised machine learning systems under Geoffrey Hinton, one of the three “godfathers of deep learning.”

Richard Zemel, a professor of computer science at Columbia University, an Amazon Scholar, and a member of the advisory board of the Conference on Neural Information Processing Systems (NeurIPS).

Zemel is also on the advisory board of the main conference in the field of deep learning, the Conference on Neural Information Processing Systems (NeurIPS), which takes place this week. His breadth of experience gives him a rare perspective on the field of deep learning — both how far it’s come and where it’s going.

“It’s come a very long way in some sense, in terms of the scope of problems that are relevant and the whole real-world applicability of it,” Zemel says. “But a lot of the same problems still exist. There are just many more facets than there used to be.”

For example, Zemel says, take the concept of robustness, the ability of a machine learning model to maintain performance when the data it sees at inference time differs from the data it was trained on, because of noise, drift in the data distribution, or the like.

“One of the original neural-net applications was ALVINN, the autonomous land vehicle in a neural network, in the late ’80s,” Zemel says. “It was a neural net that had 29 hidden units, and it was an answer to DARPA’s self-driving challenge. It was a big success for neural nets at the time.

“Robustness came up there because they were worried about the car going off the road, and they didn’t have any training examples of that. They worked out how to augment the data with those kinds of training examples. So thirty years ago, robustness was seen as an important question, and some ideas came up.”

Today, data augmentation remains one of the main ways to ensure robustness. But as Zemel says, the problem of robustness has new facets.
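To make the technique concrete, here is a minimal, illustrative PyTorch sketch of augmentation-for-robustness: the training batch is padded with perturbed copies so the model also sees conditions it would otherwise meet only at inference time. The specific perturbations and noise level are assumptions for illustration, not a reconstruction of ALVINN's scheme:

```python
import torch

def augment_batch(images: torch.Tensor) -> torch.Tensor:
    """Pad a training batch with perturbed copies so the model also
    trains on inputs it would otherwise see only at inference time."""
    noisy = images + 0.05 * torch.randn_like(images)   # simulated sensor noise
    shifted = torch.roll(images, shifts=4, dims=-1)    # small spatial drift
    return torch.cat([images, noisy, shifted], dim=0)  # 3x the original batch
```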


“For instance, we can consider algorithmic fairness as a form of robustness,” he says. “It’s robustness with respect to particular groups. A lot of the methods that are used for that are methods that have also been developed for robustness, and vice versa. For example, they’re formulated as trying to develop a prediction that has some invariance properties. And it could be that you’re not just developing a prediction: in the deep-learning world, you’re trying to develop a representation that has these properties. The final layer of representation should be invariant. Think of multiclass object recognition: anything that has a label of class K should have a very similar kind of distribution over representations, no matter what environment it comes from.”
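A toy PyTorch regularizer in that spirit: for each class, it pulls the mean final-layer representation in every environment (or demographic group) toward agreement. This simple moment-matching penalty is an illustrative sketch, not a specific method Zemel describes:

```python
import torch

def invariance_penalty(reps, labels, envs):
    """For each class k, penalize gaps between the mean final-layer
    representation computed in each environment, so that class-k inputs
    look alike no matter which environment they come from."""
    penalty = reps.new_zeros(())
    for k in labels.unique():
        means = [reps[(labels == k) & (envs == e)].mean(dim=0)
                 for e in envs.unique()
                 if ((labels == k) & (envs == e)).any()]
        for m in means[1:]:
            penalty = penalty + (m - means[0]).pow(2).sum()
    return penalty  # add to the task loss with a small weight
```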

With generative-AI models, Zemel says, evaluating robustness becomes even more difficult. In practice, the most common machine learning model has, until recently, been the classifier, which outputs the probabilities that a given input belongs to each of several classes. One way to gauge a classifier’s robustness is to determine whether its predicted probabilities — its confidence in its classifications — accurately reflect its performance on data. If the model is overconfident, it probably won’t generalize well to new settings.
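A standard way to run that check is a calibration measure such as expected calibration error (ECE). A short NumPy sketch, using a common ten-bin scheme (the binning is a conventional default, not something specified here):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Bin predictions by confidence, then compare each bin's average
    confidence with its empirical accuracy; a large weighted gap means
    the model's probabilities misstate how often it is actually right."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece
```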

But with generative-AI models, there’s no such confidence metric to appeal to.

“If now the system is busy writing sentences, what does the uncertainty mean?” Zemel asks. “How do you talk about uncertainty? The whole question about building robust, properly confident, responsible systems becomes that much harder in the era where generative models are actually working well.”

The neural analogy

NeurIPS was first held in 1987, and in the early years, the conference was as much about neuroscientists using computational tools to model the brain as about computer scientists using brain-like models to do computation.

“The neural part of it has been drowned out by the engineering side of things,” Zemel says, “but there’s always been a lively interest in it. And there’s been some loose — and not so loose — inspiration that has gone that way.”

Today’s generative-AI models, for instance, are usually transformer models, whose signature component is the attention mechanism that decides which aspects of the input to focus on when generating outputs.
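The mechanism itself is compact. Here is a minimal single-head, scaled dot-product attention in PyTorch, following the standard transformer formulation:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """Each query scores every key, softmax turns the scores into a
    distribution over the inputs, and the output is the weighted sum
    of the values -- a learned choice of what to focus on."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    weights = F.softmax(scores, dim=-1)  # where the model is "attending"
    return weights @ v, weights
```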


“Some of that work actually has its roots in cognitive science and to some extent in neuroscience,” Zemel says. “Neuroscience and cognitive science have studied attention for a long time now, particularly spatial attention: what do you focus on when viewing a scene? We have also been considering spatial attention in our models. About a decade ago, we were working on image captioning, and the idea was that when the system was generating the text for the caption, you could see what part of the image it was attending to. When it was generating the next word, it was focusing on some part of the image.

“It’s a little different from the attention in the transformers, where they took it a step further, as one layer can attend to activities in another layer of a network. It’s a similar idea, but it was a natural deep-learning version — learning applied to that same idea.”

Recently, Zemel says, computer scientists seem to be showing a renewed interest in what neuroscience and cognitive science have to teach them.

“I think it’s coming back as people try to scale up the systems and make them work with less data, or as the models become bigger and bigger, and it’s very inefficient and sometimes impossible to back-propagate through the whole system,” he says. “Brains have interesting structure at different scales. There are different kinds of neurons that have different functions, and we don’t really have that in our neural nets. And there’s no clear place for the short-term memory and long-term memory that are thought to be important parts of the brain. Maybe there are ways of getting that kind of architectural scaffolding structure that could be useful in improving neural nets and improving machine learning.”

New frontiers

As Zemel considers the future of deep learning, two areas of research strike him as particularly intriguing.


“One of them is this area called mechanistic interpretability,” he says. “Can you both understand and affect what’s going on inside these systems? One way of demonstrating that you understand what’s going on is to make some change and predict what that change will do. I’m not talking about understanding what a particular unit or a particular neuron does. It’s more like, we’d like to be able to make this change to the generative model; how do we achieve that without adding new data or post hoc processing? Can you actually go in and change how the network behaves?
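One concrete form such an intervention often takes is activation patching. The following PyTorch sketch is illustrative only; the toy network and the zeroed-out activation are assumptions, not an example from Zemel:

```python
import torch
import torch.nn as nn

def patch_activation(layer: nn.Module, replacement: torch.Tensor):
    """Forward hook that swaps one layer's activation for a chosen
    tensor, so we can check whether the model's output changes the
    way our mechanistic hypothesis predicts."""
    def hook(module, inputs, output):
        return replacement  # must match the original output's shape
    return layer.register_forward_hook(hook)

# Hypothetical usage on a toy network:
mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
handle = patch_activation(mlp[0], torch.zeros(1, 16))
print(mlp(torch.randn(1, 8)))  # output computed from the patched activation
handle.remove()                # undo the intervention
```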

“The other one is this idea that we talked about: can we add inductive biases, add structure to the system, add some sort of knowledge — it could be a logic, it could be a probability — to enable these systems to become much more efficient, to learn with less data, with less energy? There are just so many problems that are now open and unsolved that I think it’s a great time to be doing research in the area.”


