A Better Approach to Experimentation

Steven Stein
Advanced Optimization Techniques, Applied AI Insights, Experiment Management, Hyperparameter Optimization, SigOpt 101, SigOpt Company News

Artificial intelligence is beginning to provide value in a wide variety of business use cases, but successfully training and deploying a machine learning model is an experimental process that is tough to get right. For example, tuning and optimizing a machine learning model typically involves hours of iteration with no guarantee of success. In a technical talk hosted by MLconf, SigOpt Co-Founder Scott Clark discussed this challenge and how the right combination of tools and techniques can overcome it. Watch the talk or read on to learn more.

Defining Intelligent Experimentation

Scott started his talk by explaining that tools that enable traditional experimentation – such as experiment tracking – are often good at telling you what you’ve done and, sometimes, what is working. But they aren’t very good at telling you what to do next. Intelligent experimentation tooling is designed to add this component to your workflow. 

Intelligent experimentation enables SigOpt customers like PayPal, Two Sigma, OpenAI, and many more to design experiments that ask the right questions, explore their modeling problems, and optimize their models, so they can develop the best models as efficiently as possible.

He then walked through how SigOpt automates the painful tasks in this modeling workflow so modelers can spend more time focusing on their problem and applying their domain expertise. As a result, they have more time and resources to develop the best possible model for their unique circumstances.

Design, Explore, Optimize

Scott then gave a deep dive into SigOpt, which enables more rapid, insightful experimentation with just a few lines of code. The SigOpt Intelligent Experimentation Platform is designed to be entirely agnostic to modeling framework, task, library, or problem. SigOpt offers a hosted platform that is easily integrated into any workload, whether cloud or on-premise, through an easy-to-use API.
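To give a sense of what that integration can look like, here is a minimal sketch of SigOpt's suggest-and-observe loop using the Python client. The parameter names, metric name, and the evaluate_model function are illustrative placeholders for your own training code, not part of SigOpt's API.

```python
# A minimal sketch of SigOpt's suggest-and-observe loop (Python client).
# Parameter names, metric name, and evaluate_model are illustrative.
from sigopt import Connection

def evaluate_model(assignments):
    # Placeholder: train your model with the suggested hyperparameters
    # and return the metric value you want to optimize.
    ...

conn = Connection(client_token="YOUR_API_TOKEN")

# Define what to tune and what to measure.
experiment = conn.experiments().create(
    name="My model tuning",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-5, max=1e-1)),
        dict(name="batch_size", type="int", bounds=dict(min=16, max=256)),
    ],
    metrics=[dict(name="validation_accuracy", objective="maximize")],
    observation_budget=60,
)

# Ask SigOpt for hyperparameters, train, and report the result back.
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    accuracy = evaluate_model(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=accuracy,
    )
```

Because the training code lives entirely inside your own function, the same loop works regardless of framework, library, or where the workload runs.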

He then explained how this combination of tools and techniques is enabled by three steps in the Intelligent Experimentation framework:

(1)  Design

First, Scott provided an overview of techniques that are critical in the workflow, such as defining and selecting a wide variety of metrics for a more complete perspective on model performance, or selecting the right data for each step in the training process (training data versus validation data, for example).

When designing your experiments, you need to choose the right metrics, data, and architecture to ask and answer the right questions. But design is also about having tools that let you phrase your questions as insightful experiments. SigOpt enables you to set metric constraints, parameter constraints, prior knowledge, and metric strategy in a way that allows for more precise experiment design, so you can learn more, faster.
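As an illustration of those design features, here is a minimal sketch of an experiment definition against SigOpt's Core API that combines a parameter prior, a linear parameter constraint, and a metric strategy that optimizes one metric, constrains another, and stores a third. The metric and parameter names, bounds, and thresholds are hypothetical.

```python
# A sketch of experiment design features; metric and parameter names,
# bounds, and thresholds are hypothetical.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Designed experiment",
    parameters=[
        # Prior knowledge: tell the optimizer where good values likely live.
        dict(
            name="log_learning_rate",
            type="double",
            bounds=dict(min=-10, max=0),
            prior=dict(name="normal", mean=-6, scale=2),
        ),
        dict(name="dropout_1", type="double", bounds=dict(min=0.0, max=0.9)),
        dict(name="dropout_2", type="double", bounds=dict(min=0.0, max=0.9)),
    ],
    # Parameter constraint: keep the two dropout rates below a joint budget.
    linear_constraints=[
        dict(
            type="less_than",
            threshold=1.2,
            terms=[
                dict(name="dropout_1", weight=1),
                dict(name="dropout_2", weight=1),
            ],
        ),
    ],
    # Metric strategy: optimize accuracy, constrain latency, store loss.
    metrics=[
        dict(name="validation_accuracy", objective="maximize",
             strategy="optimize"),
        dict(name="inference_time_ms", objective="minimize",
             strategy="constraint", threshold=50),
        dict(name="training_loss", strategy="store"),
    ],
    observation_budget=100,
)
```

With a definition like this, each observation reports all three metric values, and the optimizer focuses its search on configurations that satisfy the constraints.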

(2)  Explore

Having confidence in a model requires understanding it broadly and deeply. You should explore many metrics and a variety of architectures in combination to get a better grasp on your modeling problem. SigOpt, and especially the SigOpt Dashboard, is unique in that it is designed to give you better insights the more training runs and hyperparameter optimization experiments you manage with the SigOpt API. Take metrics as an example: SigOpt enables you to track up to 50 metrics, phrase certain metrics as constraints, and optimize multiple metrics at once. In his talk, Scott discussed a variety of metrics that you may need to include in your experimentation, including training metrics, validation metrics, guardrail metrics, and production metrics.
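For instance, a single training run can record several of these metric types at once. The sketch below assumes the runs interface of the sigopt Python package (version 8+); the run name, metric names, and logged values are illustrative.

```python
# A sketch of multi-metric run tracking with the sigopt runs interface
# (sigopt Python package, version 8+). Assumes your API token and project
# are already configured. Names and values are illustrative.
import sigopt

with sigopt.create_run(name="baseline-model") as run:
    run.log_parameter("learning_rate", 3e-4)

    # ... your training code runs here ...

    # Track training, validation, and guardrail metrics on the same run,
    # so the dashboard can compare them across runs.
    run.log_metric("training_loss", 0.31)        # illustrative values
    run.log_metric("validation_accuracy", 0.942)
    run.log_metric("inference_time_ms", 41.5)    # guardrail metric
```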

(3)  Optimize

Finally, optimizing your models ensures they are tailored to your particular modeling problem and gives you the confidence to put them in production or publish them as part of your research. In terms of techniques, Scott explained that it is important to consider whether you optimize a single metric or balance multiple metrics in the optimization process. It is also important to consider which optimization method you select for your model: Bayesian optimization is a good choice for its sample efficiency, while grid search is a good choice for the certainty it brings to models with few parameters to optimize. Similarly, the parallel bandwidth you want to run at is an important consideration. SigOpt enables you to manage this entire optimization process with a few lines of code. You can use any optimization method, including SigOpt's proprietary optimizer, and automatically schedule your jobs across your compute resources. There are also more than a dozen advanced optimization features that let you fit the optimization process to your specific modeling needs.
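As a rough sketch of how the parallelism piece fits in: the Core API accepts a parallel_bandwidth setting when creating an experiment, and each worker then runs the same suggest-and-observe loop against the shared experiment. The evaluate_model function, names, and budgets below are illustrative placeholders.

```python
# A sketch of parallel optimization with the Core API. evaluate_model is
# a placeholder for your training code; names and budgets are illustrative.
from sigopt import Connection

def evaluate_model(assignments):
    # Placeholder: train with the suggested hyperparameters and return
    # the metric value to optimize.
    ...

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Parallel tuning",
    parameters=[
        dict(name="learning_rate", type="double",
             bounds=dict(min=1e-5, max=1e-1)),
    ],
    metrics=[dict(name="validation_accuracy", objective="maximize")],
    observation_budget=80,
    # Tell the optimizer that 4 workers will evaluate suggestions at once.
    parallel_bandwidth=4,
)

# Each of the 4 workers runs this same loop against the shared experiment.
def worker_loop(experiment_id):
    while True:
        experiment = conn.experiments(experiment_id).fetch()
        if experiment.progress.observation_count >= experiment.observation_budget:
            break
        suggestion = conn.experiments(experiment_id).suggestions().create()
        accuracy = evaluate_model(suggestion.assignments)
        conn.experiments(experiment_id).observations().create(
            suggestion=suggestion.id,
            value=accuracy,
        )
```

Declaring the bandwidth up front lets the optimizer account for the suggestions that are still in flight when it proposes the next batch.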

Take Action

To try SigOpt today, you can access it for free at sigopt.com/signup. If you’d rather hear from SigOpt customers first, attend the SigOpt Summit, a free virtual event on November 16th. Finally, watch the video below to learn more about tools and techniques to boost your experimentation workflow.

Steven Stein, Product Marketing Lead

Want more content from SigOpt? Sign up now.