Results from the NeurIPS Black Box Optimization Competition

Michael McCourt
Advanced Optimization Techniques, Hyperparameter Optimization, SigOpt Company News

Every year in December, SigOpt looks forward to NeurIPS, an increasingly large gathering of the machine learning community. As machine learning has attracted more researchers, and its applications have reached increasingly broad fields, this meeting regularly features exciting research and fun interactions with colleagues (you can see some of our previous exploits here). The digital nature of this year’s conference unfortunately stymied some of those interactions, but there were still a great number of presentations from researchers around the world.

As part of this year’s conference, SigOpt co-sponsored the Black Box Optimization competition along with Facebook, Twitter, ChaLearn and 4Paradigm. In this competition, participants submitted black box optimization programs, designed to operate with batch parallelism, which were evaluated on hidden functions. The results of their programs were posted online, which gave the participants an opportunity for meta-learning: they could alter their submissions and try to improve their performance. On October 15, the final submissions were judged on a hidden test set of functions (distinct from, though closely related to, the functions used during the earlier feedback phase).
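To give a sense of what a submission looked like, below is a minimal sketch of a batch “suggest/observe” optimizer loop in Python. The search space, parameter names, evaluation budget, and the plain random-search logic are illustrative assumptions, not the competition’s actual harness; entrants replaced the suggestion logic with far more sophisticated strategies.

```python
import math
import random

# Minimal sketch of a batch "suggest/observe" optimizer loop of the kind the
# competition evaluated. The search space, parameter names, budget, and the
# plain random-search logic here are illustrative assumptions, not the actual
# harness or any participant's method.

SPACE = {
    "learning_rate": (1e-5, 1e-1),   # sampled on a log scale
    "max_depth": (2, 12),            # integer-valued
}

def suggest(n_suggestions):
    """Propose a batch of candidate configurations to evaluate in parallel."""
    batch = []
    for _ in range(n_suggestions):
        lo, hi = SPACE["learning_rate"]
        lr = 10 ** random.uniform(math.log10(lo), math.log10(hi))
        depth = random.randint(*SPACE["max_depth"])
        batch.append({"learning_rate": lr, "max_depth": depth})
    return batch

def observe(history, batch, losses):
    """Record black-box evaluations so later suggestions can use them."""
    history.extend(zip(batch, losses))

def black_box(config):
    # Stand-in objective; the competition's functions were hidden from participants.
    return (math.log10(config["learning_rate"]) + 3) ** 2 + abs(config["max_depth"] - 6)

history = []
for _ in range(16):                  # a few rounds of batch-parallel suggestions
    batch = suggest(8)
    losses = [black_box(cfg) for cfg in batch]
    observe(history, batch, losses)

best_config, best_loss = min(history, key=lambda item: item[1])
print(best_config, best_loss)
```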

Just before NeurIPS, the winners were announced! On the Friday of the conference, all the NeurIPS competitions presented awards to their winners, and we heard from many of the participants regarding how their strategies evolved over the competition and what lessons they learned. Many of the participants provided articles explaining their strategies (see here for these articles, as well as some code). The top three participants also provided videos: Alexander Cowen-Rivers from Huawei Noah’s Ark Lab, Jiwei Liu from NVIDIA RAPIDS.AI, and Mikita Sazanovich from JetBrains Research. This was a great collection of energetic and outstanding researchers sharing their hard-earned insights with each other and the rest of the ML optimization community.


Figure 1: A depiction of an element of the KAIST submission, where some discrete parameters (categorical or boolean parameters) were managed using a multi-armed bandit methodology.

The figure above depicts the KAIST team’s strategy of dealing with discrete parameters through a multi-armed bandit (MAB) methodology. When asked why her team chose to participate in the competition, graduate student Jaeyeon Ahn responded “All of our team members had no experience in black-box optimization competition so we thought it could be a great chance for our own ‘exploration’. So we went through some survey on existing works and one of the things that we’ve found out was that MAB often works well as a pair with Bayesian optimization.” In dealing with the COVID situation, she explained “Although it was hard to work together during the COVID situation, we struggled to communicate actively as a team through teamwork tools such as slack and notion. It was truly a great opportunity to learn how the experts all around the world have worked on for the competition!”
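As a rough illustration of that pairing, the sketch below uses a UCB1 bandit to choose the value of a categorical parameter while a continuous parameter is sampled at random as a stand-in for the Bayesian optimization component. The parameter names, the bandit variant, and the toy objective are hypothetical and are not taken from the KAIST submission.

```python
import math
import random

# Illustrative sketch only: a UCB1-style bandit selects the value of a single
# categorical parameter, while a continuous parameter is sampled at random as a
# stand-in for the Bayesian optimization component. The parameter names, the
# bandit variant, and the toy objective are assumptions, not the KAIST code.

ARMS = ["relu", "tanh", "sigmoid"]            # hypothetical categorical choices
counts = {a: 0 for a in ARMS}
total_reward = {a: 0.0 for a in ARMS}

def pick_arm(t):
    """UCB1: try every arm once, then trade off average reward and uncertainty."""
    for a in ARMS:
        if counts[a] == 0:
            return a
    return max(ARMS, key=lambda a: total_reward[a] / counts[a]
               + math.sqrt(2.0 * math.log(t) / counts[a]))

def black_box(activation, learning_rate):
    # Toy objective (higher is better); the competition's functions were hidden.
    base = {"relu": 1.0, "tanh": 0.8, "sigmoid": 0.5}[activation]
    return base - (math.log10(learning_rate) + 2) ** 2 + random.gauss(0, 0.05)

for t in range(1, 65):
    arm = pick_arm(t)
    lr = 10 ** random.uniform(-4, 0)           # continuous parameter, random stand-in
    reward = black_box(arm, lr)
    counts[arm] += 1
    total_reward[arm] += reward

best_arm = max(ARMS, key=lambda a: total_reward[a] / counts[a])
print("best categorical choice:", best_arm)
```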

Shotaro Sano, who competed as part of a joint team between his company Preferred Networks and CyberAgent, finished in fifth place. When asked about how his strategy performed so well, he responded “We start from TuRBO, which was a good paper, and account for multiple kernels, the discrete domain, and how flat metric values led to stagnation. These flat metric values can be common for ML classification problems. This competition was good to collaborate with colleagues at CyberAgent.”

In addition to the main leaderboard, we also created a “warm-starting” leaderboard. In this leaderboard, the participants were made aware of what parameter names were used (such as learning rate); in contrast, the original leaderboard did not provide participants any hint of the role that the parameters might play in the black box functions. Our original competition rules hid this information from the final test, but several teams realized that they could use such information to improve performance. The AutoML team from the University of Freiburg took advantage of this extra knowledge to top this alternate leaderboard. (The 4Paradigm sponsors were so impressed that they provided an extra $3000 prize.)

As sponsors and organizers, we are in the privileged position to be able to learn from the participants. David Eriksson of Facebook Core Data Science commented, “The field of Bayesian optimization has grown immensely over the past few years and many great papers and software packages have been written. I saw this competition as an exciting opportunity to investigate how different Bayesian optimization methods and packages compare when applied to a large number of machine learning hyperparameter optimization problems. My hope is that the learnings from this competition will offer guidance for practitioners using Bayesian optimization on what methods to use for this type of problems as well as encourage interesting new research directions.”

The main competition organizer, Ryan Turner of Twitter Cortex, saw this competition as a great opportunity to solidify the position of Bayesian optimization relative to its simpler alternative, random search. “A lot of talk at conferences has been around random search being really effective—just as good for hyperparameter tuning. That there’s no benefit to doing anything other than random search. We already had this Bayesmark package which was showing, to us, that this was not the case. This competition was a great opportunity to show, without a doubt, that random search can be improved upon. That was a big motivator for me.”

Valohai is a provider of MLOps software, and they contributed their platform to participants during the competition. Senior software developer Juha Kiili saw this as a chance for Valohai to test their infrastructure in a setting with very demanding users. “We ran the biggest pipeline ever ran with Valohai — we pushed the envelope. The full pipeline, associated with the initial testing process, had 2000 nodes running simultaneously. Optimizations were required in the computational process to satisfy certain initial timing restrictions. We really loved being able to push our system and were excited to do so as part of the NeurIPS competition.” Building on momentum from this competition, SigOpt recently collaborated with Valohai and Tecton.ai on an MLOps book.


Figure 2: Twitter, SigOpt, Valohai, Facebook, ChaLearn, 4Paradigm join forces at the NeurIPS conference to sponsor and organize the black box optimization competition.

Each year, NeurIPS brings the newest ideas from the ML community to the masses. This year, SigOpt was proud to support the ML community in the ideation and presentation of some of these ideas in black box optimization. Congratulations to all the participants (over 70 teams competed!), and our sincere thanks go out to the NeurIPS organizers for approving our competition for placement at the conference. We look forward to NeurIPS 2021!


Michael McCourt, Research Engineer