Fostering a culture of innovation

Editor’s Note: Andrew Borthwick is a principal scientist at Amazon; he leads a team focusing on challenges of automated machine learning over Amazon’s expansive product catalog. In this article, he describes his experience in helping organize a Challenge within the company’s annual internal machine-learning conference, which brings together thousands of scientists and engineers from across Amazon to showcase their work, network with peers, and raise the quality of science at the company.

More than 4,000 scientists and engineers attended last fall’s virtual event, with the opportunity to view keynote, oral paper, and poster presentations, along with workshops, training sessions, and other activities.

In this article, Borthwick shares his experience in helping organize one of the conference’s Challenge events, and provides insight into how, despite Amazon’s highly decentralized approach to science and engineering, the company fosters collaboration and a sense of community among its scientists.

There is a huge amount of innovation in machine learning at Amazon. So much, in fact, that it can be difficult to keep track of all of the cool ideas percolating among teams. To help Amazonians push the state of the art forward, we hold an annual internal Amazon Machine Learning Conference (AMLC). The conference is structured like well-known academic conferences, with peer-reviewed papers and a high bar for acceptance.

I’ve been working in machine learning at Amazon for six years now and have served as a reviewer and meta-reviewer of papers for AMLC many times. Although reviewing papers has been stimulating, allowing me to see the great diversity of machine learning research here at Amazon, I sometimes found myself stymied when deciding on the merits of an idea.

Amazon is well known for its culture of “two-pizza teams”. We break Amazon’s very large scale into chunks of work that can be attacked by a team small enough to be fed with two pizzas (in practice, these teams typically have five to eight people, so the pizzas should definitely be large). Each team can then be customer obsessed in focusing on the opportunity it is targeting. In machine learning, this has a major advantage in keeping us agile: teams don’t spend much time coordinating with one another, so they are free to experiment with approaches. The downside is that it can lead to duplicated effort, and to an inability to identify the best scientific approach.

I have frequently reviewed papers presenting data showing that a team had greatly increased the accuracy of its machine learning algorithm relative to its previous approach and had delivered significant customer value. That sounds good, but one of the Amazon Leadership Principles is that we should “Insist on the Highest Standards”. I would ask myself, “Yes, what this paper is describing is great, but is this the best that could be done here?”

The problem was most acute when separate two-pizza teams were working on very similar challenges. One of my areas of expertise is linking records in databases, which led to my work on AWS Lake Formation FindMatches. We’re doing some really interesting science in this area: one team is working on finding duplicate items in Amazon’s product catalog, while another is working on identifying sets of products that are variants of one another (when buying Amazon Essentials Crewneck t-shirts, for instance, you will see all the different colors and sizes on the same page). These problems are similar in that a customer might want to see whether two products “match”, but in one case they are looking for an “exact match”, while in the other they want to find “products that match if you ignore color and size differences”.
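To make the distinction concrete, here is a minimal sketch (with a hypothetical record layout, not any team’s actual system) of how “exact match” and “variant match” can differ only in which attributes are compared:

# Hypothetical sketch: two notions of "match" over product records.
# Record layout and attribute names are illustrative, not Amazon's schema.

EXACT_ATTRS = {"brand", "title", "color", "size"}
VARIANT_ATTRS = EXACT_ATTRS - {"color", "size"}  # ignore the variation axes

def normalize(value) -> str:
    """Lowercase and collapse whitespace so trivial differences don't matter."""
    return " ".join(str(value).lower().split())

def records_match(a: dict, b: dict, attrs: set) -> bool:
    """Two records match if all compared attributes agree after normalization."""
    return all(normalize(a.get(k, "")) == normalize(b.get(k, "")) for k in attrs)

shirt_red = {"brand": "Amazon Essentials", "title": "Crewneck T-Shirt",
             "color": "Red", "size": "M"}
shirt_blue = {"brand": "Amazon Essentials", "title": "Crewneck T-Shirt",
              "color": "Blue", "size": "L"}

print(records_match(shirt_red, shirt_blue, EXACT_ATTRS))    # False: color/size differ
print(records_match(shirt_red, shirt_blue, VARIANT_ATTRS))  # True: same product family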

We had a similar issue with machine learning classification problems.

One two-pizza team was working on classifying Amazon products into the customer-facing product types they belong to (such as “women’s sneakers”). Meanwhile, another team was classifying items into categories that sometimes receive special treatment for sales tax purposes (for instance, “alcoholic beverage”, “children’s clothing”, “food”, or “medicine”). Amazon Music has a similar problem in classifying music tracks by genre (is it “holiday music”, “instrumental jazz”, or “string quartet”?).

Each of these teams was working on classifying items into a fairly large but fixed number of classes, a problem known in machine learning as “k-way classification”. The items being classified (either products or music tracks) had many attributes of different data types, such as text (product_description, music_track_title), numeric (shipping_weight), categorical (color, size), and image (the picture of the product or the album cover), so we said that this was “k-way classification of multimodal tabular data”. Finally, each of these teams had a substantial number of labeled records for which an Amazon employee had determined the correct category. We dubbed this challenge “supervised k-way classification of multimodal tabular data”, a very important but understudied problem in ML.
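As an illustration of the problem setting, and not any team’s production system, here is a minimal scikit-learn sketch of supervised k-way classification over multimodal tabular data; the column names and labels are hypothetical, and the image modality is omitted, since it typically requires a separate embedding model:

# Minimal sketch: supervised k-way classification on multimodal tabular data.
# Column names, rows, and labels are hypothetical; images omitted for brevity.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

train = pd.DataFrame({
    "product_description": ["lightweight running shoe for women",
                            "cabernet sauvignon red wine 750ml",
                            "kids cotton hoodie, machine washable"],
    "color": ["white", "red", "blue"],
    "shipping_weight": [0.8, 1.4, 0.3],
    "label": ["womens_sneakers", "alcoholic_beverage", "childrens_clothing"],
})

# One transformer per modality: text, categorical, numeric.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "product_description"),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["color"]),
    ("num", StandardScaler(), ["shipping_weight"]),
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(train.drop(columns=["label"]), train["label"])

# Likely "alcoholic_beverage", given the overlapping wine terms.
print(model.predict(pd.DataFrame({
    "product_description": ["merlot red wine 750ml"],
    "color": ["red"], "shipping_weight": [1.3]})))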

The problem came when each of these teams submitted a paper describing its results to the Amazon Machine Learning Conference. The questions I had to resolve as a reviewer were: “Who has the better algorithm?” and “This other two-pizza team is working on a very similar problem. What would happen if they used the other team’s algorithm on their data?”
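One way to answer that second question empirically is to run every candidate algorithm on every team’s dataset and compare on a common metric, which is essentially what a shared benchmark does. A minimal sketch of such a cross-evaluation harness, with stand-in models and public datasets in place of the real ones, might look like this:

# Hypothetical cross-evaluation harness: every algorithm on every dataset.
# The models and datasets here are public stand-ins, not the teams' systems.
from sklearn.datasets import load_iris, load_wine
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

datasets = {"team_a_data": load_iris(return_X_y=True),
            "team_b_data": load_wine(return_X_y=True)}
algorithms = {"team_a_algo": LogisticRegression(max_iter=1000),
              "team_b_algo": DummyClassifier(strategy="most_frequent")}

for data_name, (X, y) in datasets.items():
    for algo_name, model in algorithms.items():
        # Mean cross-validated accuracy as the common metric.
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
        print(f"{algo_name} on {data_name}: {acc:.3f}")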

[Caption: The MultiModal Tabular Data Challenge Workshop included a question-and-answer session with competition finalists and scientists from the competition’s organizing committee.]

These kinds of questions led some of my machine learning colleagues and me to organize an internal “Grand Challenge in MultiModal Tabular Data”. Organizing a competition like this is a big task, but there are similar examples in the global ML community. Our first project was to gather and organize k-way classification and matching datasets from two-pizza teams across Amazon.

Next, we held a kick-off meeting where we announced the competition and the prizes ($1,000 in Amazon gift cards for the best average performance on the matching tasks and for the best average performance on the classification tasks).

The contest itself lasted four months, with more than 50 teams submitting results, and culminated in a workshop at AMLC last October, where the top three teams in the Matching and K-Way Classification challenges described their systems.

In reflecting on the Challenge, we found a number of positive effects:

  • The competition was a fun activity, with more than 50 teams and over 100 participants. Many participants enthusiastically made dozens of attempts at the different competitions.
  • Because a reverence for rank and titles is not one of Amazon’s Leadership Principles, the Challenge placed participants of all levels, locations, and job titles on equal footing.
  • One of the key challenges for the organizing committee was the need to standardize all of the data for the different tasks according to the same conventions (for instance, we made all of the data available with similar schemas in two popular formats, .csv and .parquet). This data is now available for future Amazon research projects, and thus for future papers submitted to the conference.
  • Two of the top six solutions, including one of the Grand Prize winners, made heavy use of AWS’s new open-source automated machine learning toolkit, AutoGluon (a minimal usage sketch appears after this list). Ideas from these Challenge entrants also made their way back into the AutoGluon toolkit, particularly around improving AutoGluon’s ability to handle textual columns in a tabular dataset.
  • Researchers benefited because these datasets are more complex and representative of real-world problems than most datasets in the public domain. In particular, it is difficult for researchers to get their hands on datasets where the correct decision hinges on signals derived from a combination of complex text, image, numeric, and categorical attributes.
  • More generally, the Challenge has helped to encourage closer teamwork among different two-pizza teams working on similar problems. I’ve been in a number of meetings with teams working on a task that was in the Challenge, or on problems similar to one of those tasks, where we have discussed ideas for leveraging the learnings of the winning teams.
  • Finally, for me, the Challenge led me to join the Amazon Selection and Catalog Systems team, which was one of the main contributors of data to the project. One of the great things about working here is the opportunity to switch to a team that you are passionate about.
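
For reference, here is a minimal sketch of the kind of AutoGluon usage described above: fitting a TabularPredictor on one of the standardized .parquet files for a k-way classification task. The file names, label column, and feature columns are hypothetical placeholders, not the actual Challenge data.

# Minimal AutoGluon sketch: k-way classification on standardized tabular data.
# File names and column names are hypothetical placeholders.
import pandas as pd
from autogluon.tabular import TabularPredictor

# Standardized data as described above; columns might include text
# (product_description), categorical (color, size), and numeric
# (shipping_weight). The test file keeps the label column for scoring.
train = pd.read_parquet("train.parquet")
test = pd.read_parquet("test.parquet")

# AutoGluon infers each column's type (text, categorical, numeric) and
# trains an ensemble of models for the k-way classification task.
predictor = TabularPredictor(label="product_type").fit(train)

predictions = predictor.predict(test)
print(predictor.leaderboard(test))

The appeal of a toolkit like this in the Challenge setting is that column-type inference and model selection are automatic, which matters when the data mixes text, categorical, and numeric attributes.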


