SigOpt research is excited to attend the 2019 INFORMS annual meeting in Seattle, October 20-23. There, the INFORMS community of experts in operations research and the management sciences gathers to discuss recent advances in the field and set goals for the coming year.
SigOpt research engineers Harvey Cheng and Gustavo Malkomes will represent SigOpt at the career fair. Many outstanding students will be attending and presenting at this meeting, and we encourage anyone who wants to learn more about SigOpt research and our goals to stop by our career fair booth on October 20 from noon to 5pm. If you cannot make this time, please reach out to Harvey or Gustavo to set up a separate meeting.
There are a number of great sessions that we recommend and will be attending.
- SB34 (October 20, 2019, 11:00-12:30pm, CC – Room 603) features work from various subfields of derivative-free optimization. Soojung Baek and his advisor Kristofer Reyes will be presenting on optimizing materials design by posing the optimization problem as a Markov decision process. (In our blog on KG, we have also discussed how BO can be phrased as an MDP.) Peter Frazier will be recapping his work on applying BO to designing peptides for studying enzymes.
- MC43 (October 21, 2019, 1:30-4:30pm, CC – Room 612) features some exciting new results in Bayesian optimization. Matthias Poloczek will be representing SigOpt’s neighbor, Uber AI Labs, and speaking on BO in discrete domains. Raul Astudillo will speak on BO of composite functions (Raul has previously blogged for us on this topic). Warren Powell will be speaking on new improvements to the knowledge gradient (Harvey, one of Warren’s students, has previously blogged on KG).
- TE37 (October 22, 2019, 4:35-6:05pm, CC – Room 606) features work in both Bayesian optimization and active learning/search. Roman Garnett (Gustavo’s PhD advisor and recent NSF CAREER award recipient) will present work on how active learning helps build scientific intuition. Yijia Wang (former SigOpt research intern and current PhD student at University of Pittsburgh) will present her work on improving RL agent exploration behavior. Eric Lee (former SigOpt research intern and current PhD student at Cornell University) will present his work on nonmyopic BO from his time at SigOpt. Matthias will present a lecture on high dimensional BO.
As stated above, SigOpt employees and alumni will give presentations in the TE37 session. Details of these talks are provided below.
Gustavo Malkomes – Active Machine Learning for Automating Scientific Discovery
Analyzing data to determine properties of interest can be very expensive, requiring human intervention or costly experiments. In such settings, it is critical that we allocate limited resources effectively. In active machine learning, we consider how to collect the most useful data to achieve our experimental goals. We will discuss a particular setting modeling scientific discovery: “active search,” where we seek to discover rare, valuable points from a large set of alternatives. We will discuss the surprising difficulty of this problem and introduce efficient, nonmyopic policies. We will also outline a vision for automating scientific discovery by incorporating automated model construction.
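The myopic/nonmyopic distinction in active search can be illustrated with a tiny sketch. All of the candidate names, probabilities, and the belief-update rule below are invented for illustration; this is not the model or algorithm from the talk. A greedy policy queries whichever candidate is most likely to be valuable right now, while a two-step policy also credits a query for what it reveals about the remaining candidates.

```python
# Toy illustration of myopic vs. nonmyopic active search.
# All probabilities and the update rule are invented for this sketch.

# Prior probability that each candidate is "valuable".
prior = {"A": 0.45, "B": 0.40, "C": 0.40}

# B and C are correlated: observing one updates our belief about the
# other; A is independent of both.
def posterior(probs, queried, was_valuable):
    """Return updated probabilities for the remaining candidates."""
    post = {k: v for k, v in probs.items() if k != queried}
    if queried in ("B", "C"):
        other = "C" if queried == "B" else "B"
        post[other] = 0.9 if was_valuable else 0.1
    return post

def greedy_choice(probs):
    """Myopic policy: query the point most likely to be valuable now."""
    return max(probs, key=probs.get)

def two_step_value(probs, first):
    """Expected discoveries from querying `first`, then acting greedily."""
    p = probs[first]
    up = posterior(probs, first, True)
    down = posterior(probs, first, False)
    return p * (1 + max(up.values())) + (1 - p) * max(down.values())

def two_step_choice(probs):
    """Nonmyopic policy: maximize expected discoveries over two queries."""
    return max(probs, key=lambda x: two_step_value(probs, x))

print(greedy_choice(prior))    # A: highest immediate probability
print(two_step_choice(prior))  # B: its outcome also informs C
```

With two queries remaining, the nonmyopic policy forgoes the slightly better immediate bet (A) because querying B is also informative about C, which is exactly the kind of behavior a greedy policy cannot exhibit.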
Eric Lee – Non-myopic Bayesian Optimization as a Markov Decision Process [slides]
We present Bayesian optimization (BO) as a Markov decision process (MDP), in which global optimization is equivalent to maximization of a reward over a finite horizon. In this MDP setting, myopic acquisition functions are greedy policies maximizing immediate reward, whereas non-myopic acquisition functions are long-horizon policies maximizing a combination of immediate and future reward(s). We discuss qualitative behavior of these non-myopic acquisition functions with a few examples. We then present experimental results demonstrating the benefits of non-myopic BO. Finally, we discuss when non-myopic BO is appropriate by examining trade-offs between model accuracy and performance.
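The MDP framing can be made concrete with a toy example (the states, rewards, and transitions below are invented for this sketch; this is not the talk's BO model). The greedy policy is the horizon-1 case of the Bellman recursion, and extending the horizon can change which action is optimal.

```python
# Minimal finite-horizon MDP sketch with invented numbers: the greedy
# policy maximizes immediate reward, while the optimal policy maximizes
# the Bellman value over the remaining horizon.

# Deterministic toy MDP: transitions[state][action] = (reward, next_state)
transitions = {
    "s0": {"a": (1.0, "s1"), "b": (0.0, "s2")},
    "s1": {"a": (0.0, "s1"), "b": (0.0, "s1")},
    "s2": {"a": (5.0, "s2"), "b": (0.0, "s2")},
}

def value(state, horizon):
    """Optimal total reward achievable from `state` in `horizon` steps."""
    if horizon == 0:
        return 0.0
    return max(r + value(s_next, horizon - 1)
               for r, s_next in transitions[state].values())

def best_action(state, horizon):
    """Action maximizing immediate reward plus optimal future value."""
    return max(transitions[state],
               key=lambda a: transitions[state][a][0]
                             + value(transitions[state][a][1], horizon - 1))

print(best_action("s0", 1))  # 'a': greedy, takes the immediate reward
print(best_action("s0", 2))  # 'b': forgoes reward now to reach a better state
```

The horizon-1 policy grabs the immediate reward, while the horizon-2 policy accepts zero immediate reward to reach the state with the larger payoff; this mirrors how a nonmyopic acquisition function can prefer a query with low immediate value but high future value.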
Yijia Wang – Exploration Via Sample-efficient Subgoal Design
The problem of exploration in unknown environments continues to pose a challenge for reinforcement learning algorithms. In this paper, we consider a new problem domain where an agent faces an unknown task in the future, assumed to be drawn from an unknown distribution of Markov decision processes, that it must learn within a small number of samples. Prior to this, the agent has opportunities to “practice” in related tasks from the same distribution. We propose a sample-efficient Bayesian approach for subgoal design to maximize the expected performance over a distribution of tasks, given a limited number of interactions with the environment.
After the conference
Thank you to everyone who stopped by the career fair; anyone who missed us can reach out to us directly. Thank you as well to everyone who attended our lectures; if you would like to discuss further, please contact us. Below are some pictures from the event.