
A quick guide to Amazon’s papers at ICML


At this year’s International Conference on Machine Learning (ICML), Amazon researchers have several papers on bandit problems and differential privacy, two topics of perennial interest. But they also explore a variety of other subjects, with a mix of theoretical analysis and practical application.

Adaptive neural computation

Adaptive neural computation means tailoring the amount of computation a neural model performs to each input, on the fly, so that easy inputs consume less work than hard ones. At ICML, Amazon researchers apply this approach to automatic speech recognition.
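As a rough illustration of the general idea (not the method of the paper below, which adapts lookahead in streaming transducers), here is a minimal early-exit sketch in NumPy: each layer has its own classifier head, and inference stops as soon as a head is sufficiently confident. All weights and the threshold are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six random "layers", each paired with a classifier head over 4 classes.
layers = [rng.standard_normal((16, 16)) * 0.3 for _ in range(6)]
heads = [rng.standard_normal((16, 4)) for _ in range(6)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_forward(x, threshold=0.9):
    h = x
    for depth, (W, head) in enumerate(zip(layers, heads), start=1):
        h = np.tanh(h @ W)
        probs = softmax(h @ head)
        if probs.max() >= threshold:        # confident enough: exit early
            return probs.argmax(), depth
    return probs.argmax(), depth            # fall through: full depth used

label, depth_used = adaptive_forward(rng.standard_normal(16))
print(f"predicted class {label} after {depth_used} of {len(layers)} layers")
```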

Lookahead when it matters: Adaptive non-causal transformers for streaming neural transducers
Grant Strimel, Yi Xie, Brian King, Martin Radfar, Ariya Rastrow, Athanasios Mouchtaris

Bandit problems

Bandit problems — which take their name from slot machines, or one-armed bandits — involve the explore-exploit dilemma: an agent interacting with an environment must exploit what it already knows to accumulate reward while still exploring to learn how to earn more. They commonly arise in reinforcement learning.
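A minimal epsilon-greedy sketch of the explore-exploit trade-off (a textbook baseline, not any of the methods below); the arm reward probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.5, 0.7])     # hidden reward probability per arm
counts = np.zeros(3)
values = np.zeros(3)                        # running mean reward per arm
eps, total = 0.1, 0.0

for t in range(10_000):
    # Explore with probability eps, otherwise exploit the best empirical arm.
    arm = rng.integers(3) if rng.random() < eps else int(values.argmax())
    reward = float(rng.random() < true_means[arm])       # Bernoulli reward
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    total += reward

print("empirical means:", values.round(3), "| total reward:", total)
```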

Delay-adapted policy optimization and improved regret for adversarial MDP with delayed bandit feedback
Tal Lancewicki, Aviv Rosenberg, Dmitry Sotnikov

Incentivizing exploration with linear contexts and combinatorial actions
Mark Sellke

Multi-task off-policy learning from bandit feedback
Joey Hong, Branislav Kveton, Manzil Zaheer, Sumeet Katariya, Mohammad Ghavamzadeh

Thompson sampling with diffusion generative prior
Yu-Guan Hsieh, Shiva Kasiviswanathan, Branislav Kveton, Patrick Bloebaum

Differential privacy

Differential privacy is a statistical guarantee of privacy that bounds the probability that an attacker can determine whether a given data item is or is not included in a particular dataset.
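The classic Laplace mechanism illustrates the idea (a standard construction, not a contribution of the papers below): adding noise calibrated to a query's sensitivity and the privacy budget epsilon makes a count query epsilon-differentially private.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(data, predicate, epsilon):
    true_count = sum(predicate(x) for x in data)
    sensitivity = 1.0   # adding/removing one person changes a count by at most 1
    return true_count + rng.laplace(scale=sensitivity / epsilon)

ages = [23, 35, 41, 29, 62, 57, 33]
noisy = private_count(ages, lambda a: a > 30, epsilon=0.5)
print("noisy count of ages > 30:", round(noisy, 2))
```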

Differentially private optimization on large model at small cost
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

Fast private kernel density estimation via locality sensitive quantization
Tal Wagner, Yonatan Naamad, Nina Mishra

Distribution shift

Distribution shift is the problem that real-world data may turn out to have a different distribution from that of the data on which a machine learning model was trained. At ICML, Amazon researchers present a new benchmark suite that can help combat this problem.

RLSbench: Domain adaptation under relaxed label shift
Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, Zachary Lipton

Figure: Overview of the setup considered in “RLSbench: Domain adaptation under relaxed label shift”. In existing benchmarks, the label marginal p(y) doesn’t shift, but with RLSbench, p(y) can shift arbitrarily, while the class conditionals p(x|y) shift in seemingly natural ways, following popular benchmarks.
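To see what a shift in p(y) means in practice, here is a minimal importance-weighting sketch for label shift; the target label marginal is assumed known here, whereas in practice (and in the relaxed setting RLSbench studies) it must be estimated.

```python
import numpy as np

source_py = np.array([0.7, 0.2, 0.1])      # label marginal in the training data
target_py = np.array([0.2, 0.3, 0.5])      # shifted marginal at deployment
weights = target_py / source_py            # per-class importance weights

y_train = np.random.default_rng(0).choice(3, size=10, p=source_py)
sample_weights = weights[y_train]          # weight each example by its class
print("per-example weights:", sample_weights.round(2))
# These can be passed to any loss that accepts sample weights, e.g. the
# sample_weight argument of scikit-learn estimators' fit().
```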

Ensemble methods

Ensemble methods combine the outputs of several different models to arrive at a final prediction. At ICML, Amazon researchers prove new theoretical results about stacked-generalization ensembles, in which a higher-level model combines the outputs of lower-level models.
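A minimal stacked-generalization sketch using scikit-learn (generic usage, not the paper's strategy-learning method): a ridge meta-model is trained on the out-of-fold predictions of two base models.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("ridge", Ridge())],
    final_estimator=Ridge(),               # the higher-level combiner
)
stack.fit(X_tr, y_tr)
print("held-out R^2:", round(stack.score(X_te, y_te), 3))
```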

Theoretical guarantees of learning ensembling strategies with applications to time series forecasting
Hilaf Hasson, Danielle Maddix Robinson, Bernie Wang, Youngsuk Park, Gaurav Gupta

Explainable AI

Because neural networks learn their own internal representations during training, even their developers may have little insight into the computations they perform. At ICML, Amazon researchers explore an explainable-AI method called sample-based explanation, which seeks to identify the training examples most responsible for a given model output.
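The representer-point idea is easiest to see in kernel ridge regression, a textbook special case (the paper below extends it to regularized high-dimensional models): a prediction decomposes into one term per training example, and the largest terms flag the most responsible examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.standard_normal((50, 3))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(50)

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(50), y)   # dual coefficients

x_test = rng.standard_normal((1, 3))
contributions = alpha * rbf(X, x_test)[:, 0]        # one term per train point
print("prediction:", contributions.sum().round(3))
print("most influential training points:", np.argsort(-np.abs(contributions))[:3])
```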

Representer point selection for explaining regularized high-dimensional models
Che-Ping Tsai, Jiong Zhang, Hsiang-Fu Yu, Jyun-Yu Jiang, Eli Chien, Cho-Jui Hsieh, Pradeep Ravikumar

Extreme multilabel classification

Extreme multilabel classification is the problem of classifying data when the space of labels (classification categories) is enormous. At ICML, Amazon researchers explore the use of side information, such as label metadata or instance correlation signals, to improve classifiers’ performance.
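As a toy illustration of operating at extreme label scale (random embeddings stand in for learned ones, and PINA's actual neighborhood-aggregation mechanism is more involved), one can score an instance against label-metadata embeddings and keep only the top few of a huge label space:

```python
import numpy as np

rng = np.random.default_rng(0)

n_labels, dim, k = 100_000, 64, 5
label_embeddings = rng.standard_normal((n_labels, dim)).astype(np.float32)
label_embeddings /= np.linalg.norm(label_embeddings, axis=1, keepdims=True)

instance = rng.standard_normal(dim).astype(np.float32)
instance /= np.linalg.norm(instance)

scores = label_embeddings @ instance               # cosine similarity per label
top_k = np.argpartition(-scores, k)[:k]            # avoid sorting all 10^5 labels
print("top labels:", top_k[np.argsort(-scores[top_k])])
```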

PINA: Leveraging side information in eXtreme multi-label classification via predicted instance neighborhood aggregation
Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, Hsiang-Fu Yu

Graph neural networks

Graph neural networks produce vector representations of graph nodes that factor in information about the nodes themselves and their neighbors in the graph. At ICML, Amazon researchers investigate techniques for better initializing such networks.
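For context, here is one graph-convolution layer in NumPy with Glorot (Xavier) initialization, the kind of off-the-shelf initialization scheme whose suitability for graph neural networks the paper below examines:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],                # adjacency of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                      # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization

fan_in, fan_out = 8, 16
limit = np.sqrt(6.0 / (fan_in + fan_out))  # Glorot uniform bound
W = rng.uniform(-limit, limit, size=(fan_in, fan_out))

X = rng.standard_normal((4, fan_in))       # node features
H = np.maximum(A_norm @ X @ W, 0.0)        # aggregate neighbors, transform, ReLU
print("output node embeddings:", H.shape)
```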

On the initialization of graph neural networks
Jiahang Li, Yakun Song, Xiang Song, David Paul Wipf

Hypergraphs

A hypergraph is a generalization of the graph structure: where an edge in an ordinary graph links exactly two nodes, an edge in a hypergraph can link any number of nodes. At ICML, Amazon researchers present a novel approach to constructing hypergraph neural networks.
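A generic two-stage message-passing sketch over a hypergraph's incidence matrix (a standard scheme, not the energy-function construction of the paper below): node features are averaged into hyperedges, then hyperedge features are averaged back into nodes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incidence matrix: rows = 5 nodes, columns = 3 hyperedges; 1 means membership.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
X = rng.standard_normal((5, 4))            # node features

edge_feats = (H.T @ X) / H.sum(0, keepdims=True).T       # node -> hyperedge mean
node_feats = (H @ edge_feats) / H.sum(1, keepdims=True)  # hyperedge -> node mean
print("updated node features:", node_feats.shape)
```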

From hypergraph energy functions to hypergraph neural networks
Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, David Paul Wipf

Hyperparameter optimization

Hyperparameters are attributes of neural-network models such as the learning rate and the number and width of network layers, and optimizing them is a standard step in model training. At ICML, Amazon researchers propose conformal quantile regression as an alternative to the standard Gaussian process for modeling the objective function during hyperparameter optimization.
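A sketch of the underlying idea of a quantile-regression surrogate, with the conformal calibration that is the paper's actual contribution omitted: fit a lower-quantile model to observed (configuration, error) pairs and evaluate the most optimistic candidate next. The toy objective is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

configs = rng.uniform(1e-4, 1e-1, size=(30, 1))          # e.g. learning rates
errors = np.sin(50 * configs[:, 0]) + 0.1 * rng.standard_normal(30)

# 10th-percentile surrogate: an optimistic estimate of achievable error.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(configs, errors)

candidates = rng.uniform(1e-4, 1e-1, size=(200, 1))
best = candidates[lower.predict(candidates).argmin()]
print("next configuration to evaluate:", best)
```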

Optimizing hyperparameters with conformal quantile regression
David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cédric Archambeau

Independence testing

Independence testing, a crucial step in many statistical analyses, aims to determine whether two variables are statistically independent. At ICML, Amazon researchers present an approach to independence testing that adapts the number of samples collected to the difficulty of determining independence.
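For contrast with the sequential method below, here is a standard batch kernelized independence test: the Hilbert-Schmidt independence criterion (HSIC) with a permutation null, applied to data that are dependent but uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_gram(x, gamma=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d2)

def hsic(x, y):
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    K, L = rbf_gram(x), rbf_gram(y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

x = rng.standard_normal(100)
y = x ** 2 + 0.25 * rng.standard_normal(100)   # dependent, yet uncorrelated

stat = hsic(x, y)
null = np.array([hsic(x, rng.permutation(y)) for _ in range(200)])
print("p-value:", (np.sum(null >= stat) + 1) / 201)
```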

Sequential kernelized independence testing
Sasha Podkopaev, Patrick Bloebaum, Shiva Kasiviswanathan, Aaditya Ramdas

Model selection

Model selection, which determines the particular model architecture and hyperparameter settings for a given task, typically involves a validation set split off from the model’s training data. At ICML, Amazon researchers propose using synthetic data for validation in situations where training data is scarce.
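A toy sketch of the workflow: a single make_classification draw stands in for both the scarce real sample and the output of a generative model fitted to it, and candidate models are compared on the synthetic portion.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5_040, n_features=10, random_state=0)
X_real, y_real = X[:40], y[:40]            # all the "real" data we have
X_synth, y_synth = X[40:], y[40:]          # stand-in for generated data

candidates = {"logreg": LogisticRegression(max_iter=1000),
              "tree": DecisionTreeClassifier(random_state=0)}
scores = {name: model.fit(X_real, y_real).score(X_synth, y_synth)
          for name, model in candidates.items()}
print("selected:", max(scores, key=scores.get), "| scores:", scores)
```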

Synthetic data for model selection
Alon Shoshan, Nadav Bhonker, Igor Kviatkovsky, Matan Fintz, Gérard Medioni

Physical models

Deep-learning methods have shown promise for scientific computing, where they can be used to predict solutions to partial differential equations (PDEs). At ICML, Amazon researchers investigate the problem of adding known physics constraints — such as adherence to conservation laws — to the predictive outputs of machine learning models when computing the solutions to PDEs.
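One simple way to impose such a constraint, shown below, is to project a model's prediction onto the set of states that conserve a quantity exactly; this uniform-correction projection is a generic illustration, not the paper's probabilistic method.

```python
import numpy as np

def project_onto_conservation(u_pred, mass0, dx):
    """Closest (in L2) state to u_pred whose integral equals mass0."""
    deficit = mass0 - u_pred.sum() * dx
    return u_pred + deficit / (len(u_pred) * dx)   # spread correction uniformly

dx = 0.01
u0 = np.exp(-((np.arange(0, 1, dx) - 0.5) ** 2) / 0.01)  # initial condition
mass0 = u0.sum() * dx                                    # conserved quantity

u_pred = u0 * 0.97                  # an ML prediction that lost 3% of its mass
u_fixed = project_onto_conservation(u_pred, mass0, dx)
print("mass error before:", abs(u_pred.sum() * dx - mass0))
print("mass error after :", abs(u_fixed.sum() * dx - mass0))
```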

Learning physical models that can respect conservation laws
Derek Hansen, Danielle Maddix Robinson, Shima Alizadeh, Gaurav Gupta, Michael Mahoney

Tabular data

Extending the power of Transformer models to tabular data has been a burgeoning area of research in recent years. At ICML, Amazon researchers show how to improve the generalizability of models trained using tabular data.
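A hypothetical PyTorch sketch of the cross-table pretraining idea (shapes, round-robin schedule, and random stand-in batches are all invented for illustration): each table gets its own featurizer into a shared embedding space, while one shared backbone is trained on batches from every table.

```python
import torch
import torch.nn as nn

shared_dim, table_widths = 32, (5, 8, 12)    # three tables, different columns
backbone = nn.Sequential(nn.Linear(shared_dim, 64), nn.ReLU(), nn.Linear(64, 1))
featurizers = [nn.Linear(n_cols, shared_dim) for n_cols in table_widths]
params = list(backbone.parameters()) + [p for f in featurizers for p in f.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    t = step % len(table_widths)             # round-robin over tables
    X = torch.randn(64, table_widths[t])     # stand-in batch for table t
    y = torch.randn(64, 1)                   # stand-in regression targets
    loss = ((backbone(featurizers[t](X)) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After pretraining, the shared backbone can be fine-tuned on a new table
# with a freshly initialized featurizer.
```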

XTab: Cross-table pretraining for tabular transformers
Bingzhao Zhu, Xingjian Shi, Nick Erickson, Mu Li, George Karypis, Mahsa Shoaran


