Advanced Algorithmic Features

Advanced Features that Power our Optimization Engine

SigOpt’s mission is to accelerate and amplify the impact of modelers everywhere. To support this mission, SigOpt has designed a software solution that supports, automates and augments the experimentation process during model development.

This solution includes three elements: Experiment Management, the Optimization Engine and the Enterprise Platform. Experiment Management is a single system of record for all runs during the model development process. The Optimization Engine combines global and Bayesian optimization algorithms with advanced algorithmic techniques to automate model selection and the optimization of any parameter configuration or set of hyperparameters. And the Enterprise Platform supports both of these solutions with a fully agnostic design, so they work with any infrastructure, modeling stack or model type.

The focus of this set of insights is the Optimization Engine and, in particular, the combination of advanced algorithmic features that differentiate it from the open source optimization algorithms available to any user today.

Bayesian Optimization

Bayesian optimization is one of the core algorithmic strategies we employ behind our API to intelligently explore and exploit any parameter space and, in the process, uncover the configuration of parameters that optimizes an objective metric. Bayesian optimization is well studied in the operations research and hyperparameter optimization communities, among others. Our proprietary Bayesian optimization algorithms are designed to be easy to use, reliable, scalable and performant for any modeling problem and user, whether a Fortune 100 enterprise or an AI lab at a leading university. Learn more about some of the research underpinning our optimization algorithms:

Intuition Behind Bayesian Optimization
Intuition Behind Asynchronous Parallelization
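
To make the explore/exploit loop concrete, here is a minimal sketch of Bayesian optimization on a toy one-dimensional problem. It assumes a Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition function; the objective, kernel length scale and grid are illustrative, and this is not SigOpt's proprietary implementation.

```python
# Minimal Bayesian optimization sketch: fit a GP surrogate to the
# observations, pick the point with maximum expected improvement (EI),
# evaluate it, and repeat.
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(a, b, length=0.3):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-5):
    # Standard GP regression equations with a small jitter for stability.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    var = np.diag(Kss - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: trades off low predicted mean (exploit)
    # against high predictive uncertainty (explore).
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * cdf + sigma * pdf

def objective(x):
    return np.sin(3 * x) + 0.5 * x  # toy function to minimize on [0, 2]

grid = np.linspace(0.0, 2.0, 200)
x_obs = np.array([0.1, 1.0, 1.9])
y_obs = objective(x_obs)
for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    ei = expected_improvement(mu, sigma, y_obs.min())
    x_next = grid[np.argmax(ei)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best_x = float(x_obs[np.argmin(y_obs)])  # converges near the true minimum
```

In practice a production engine would also tune the kernel hyperparameters and optimize the acquisition function continuously rather than on a fixed grid.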

Asynchronous Parallel Optimization

To meet user needs, we have invested in a system that makes our Bayesian and global optimization algorithms easy to use in any modeling context and easy to scale to take advantage of parallel compute when it is available. This diagram explains how our patented system for serving new suggestions works for any given optimization run. We designed this system to asynchronously serve suggestions in parallel for any given optimization job, meeting the needs of our customers with broader access to compute. The result is a faster approach to parameter and hyperparameter optimization that helps scale the model development process. Learn more about some of the engineering challenges we addressed in designing this patented system:
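
The key property described above is that workers never block on each other: each one requests a suggestion, evaluates it, and reports back independently. The sketch below illustrates this with a thread-based worker pool; random search stands in for the Bayesian engine, and all names are illustrative (the actual SigOpt service is a distributed system behind an API).

```python
# Sketch of asynchronous parallel suggestion serving: four workers draw
# from a shared evaluation budget and report observations as they finish,
# without waiting on one another.
import random
import threading
from concurrent.futures import ThreadPoolExecutor

class SuggestionService:
    def __init__(self, budget):
        self.budget = budget
        self.observations = []
        self.lock = threading.Lock()

    def next_suggestion(self):
        with self.lock:
            if self.budget <= 0:
                return None  # budget exhausted; worker shuts down
            self.budget -= 1
        # In a real engine the optimizer would propose this point.
        return {"x": random.uniform(-5, 5)}

    def report(self, suggestion, value):
        with self.lock:
            self.observations.append((suggestion, value))

def worker(service):
    # Each worker loops independently: get suggestion, evaluate, report.
    while (s := service.next_suggestion()) is not None:
        value = (s["x"] - 1.0) ** 2  # stand-in objective evaluation
        service.report(s, value)

service = SuggestionService(budget=20)
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(4):
        pool.submit(worker, service)

print(len(service.observations))  # 20
```

Because the budget and observation list are the only shared state, a slow evaluation on one worker never delays suggestions for the others, which is the property that lets tuning scale with available compute.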

Multimetric Optimization

Our customers often consider multiple metrics when building a model, yet rarely have the tools they need to weigh the tradeoffs between them. To address this challenge, we integrated multimetric optimization into our solution so that teams can optimize across a variety of metrics at the same time. The output is a Pareto frontier of configurations that optimally balance these metrics, each of which can be explored more deeply. This helps our customers explore tradeoffs, define their metrics and ultimately select the right metrics for any given modeling process. Here is more insight into the evolution of this product over time:

A parallel axes plot with multimetric feasible region.
Intuition Behind Multitask Optimization
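
A Pareto frontier, as used above, is the set of configurations that no other configuration beats on every metric at once. Here is a small sketch of extracting that frontier from a batch of results, assuming both metrics are maximized; the (accuracy, throughput) pairs are illustrative.

```python
# Extract the Pareto frontier: keep every point that no other point
# weakly dominates on both metrics.
def pareto_frontier(points):
    frontier = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# e.g. (accuracy, throughput) for candidate model configurations
results = [(0.90, 120), (0.92, 80), (0.88, 150), (0.91, 90), (0.85, 100)]
print(sorted(pareto_frontier(results)))
# [(0.88, 150), (0.9, 120), (0.91, 90), (0.92, 80)]
```

Note that (0.85, 100) is dropped because (0.90, 120) is better on both metrics; every remaining point represents a genuine tradeoff a team might choose.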

Multitask Optimization

Our users are often developing models in a resource-constrained environment. To accommodate this need, we have designed algorithmic solutions like multitask optimization that make the hyperparameter optimization process more efficient. With multitask optimization, we use partial-cost tasks early in the tuning process to cheaply explore the parameter space, and then full-cost tasks later in the tuning process to more expensively exploit the most productive areas within it. This approach reduces the time required to optimize more expensive deep learning and simulation models. Learn more about multitask optimization:
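
The two-phase schedule described above can be sketched as follows. The cost fractions, the stand-in objective, and the noisier score for cheaper tasks are all illustrative assumptions, not SigOpt's actual multitask algorithm (which interleaves task fidelities within a single Bayesian model).

```python
# Multitask schedule sketch: many cheap partial-cost evaluations explore
# the space, then a few full-cost evaluations exploit the best candidates.
import random

random.seed(0)

def train_and_score(learning_rate, cost_fraction):
    # Stand-in for training: a cheaper task (e.g. fewer epochs or less
    # data) returns a noisier estimate of the true score.
    true_score = -(learning_rate - 0.1) ** 2  # best at lr = 0.1
    noise = random.gauss(0, 0.01 * (1 - cost_fraction))
    return true_score + noise

# Phase 1: thirty partial-cost tasks at 10% of full cost.
candidates = [random.uniform(0.001, 0.5) for _ in range(30)]
cheap = [(lr, train_and_score(lr, cost_fraction=0.1)) for lr in candidates]

# Phase 2: three full-cost tasks on the most promising candidates.
shortlist = sorted(cheap, key=lambda c: c[1], reverse=True)[:3]
full = [(lr, train_and_score(lr, cost_fraction=1.0)) for lr, _ in shortlist]

best_lr, best_score = max(full, key=lambda f: f[1])
```

Here 30 cheap runs plus 3 full runs cost the equivalent of 6 full-cost evaluations, which is the kind of saving that matters for expensive deep learning and simulation models.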

User-Informed Optimization

Some of our customers prefer to keep all information related to the models we optimize private. For these users, we designed an experience that allows them to benefit from our optimization algorithms while masking even the names of parameters and, to a certain extent, the metric they are maximizing in a black-box optimization process. Others prefer to collaborate with us and share proprietary information about their modeling tasks that we can use to improve the way the optimizer works for their particular use case. Examples of this type of information include training run checkpoints for early stopping, metric thresholds to inform the comparison of model configurations, and conditionality between parameters to inform the optimization process itself. This image presents an example of this type of collaboration with the University of Pittsburgh on a materials research process to design innovative solar panel glass. Here are examples of some of these features in greater detail:

Intuition Behind User-Informed Optimization
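
As one concrete example of the user-provided signals mentioned above, training checkpoints can drive early stopping: if the metric has plateaued, the run is cut short and the budget goes to a different configuration. The stopping rule below (no improvement over a patience window) and all names are illustrative, not SigOpt's actual criterion.

```python
# Early stopping from checkpoint values: stop when the last `patience`
# checkpoints fail to improve on the earlier best by at least min_delta.
def should_stop(checkpoint_values, patience=3, min_delta=1e-3):
    if len(checkpoint_values) <= patience:
        return False  # not enough history to judge a plateau
    best_before = max(checkpoint_values[:-patience])
    recent_best = max(checkpoint_values[-patience:])
    return recent_best < best_before + min_delta

# A run whose validation metric has flattened out:
run = [0.61, 0.70, 0.74, 0.75, 0.7501, 0.7502, 0.7503]
print(should_stop(run))  # True

# A run still improving keeps training:
print(should_stop([0.5, 0.6, 0.7, 0.8, 0.9]))  # False
```

Feeding this kind of per-checkpoint signal to the optimizer lets it abandon unpromising configurations early and spend the saved budget on new suggestions.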