SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021. It will showcase the great work of our customers from a variety of industries and a diverse set of use cases. It’s virtual and free to attend, so sign up today at sigopt.com/summit. To give you a sense of the Summit, we are publishing a series of blog posts in advance of the event. The first post gave an overview of the upcoming SigOpt Summit, and the previous post covered the design phase of the Design, Explore, Optimize journey developers use to better understand and tune their models. This post focuses on the best ways to explore experiments with SigOpt.
In the explore phase, the SigOpt Intelligent Experimentation Platform empowers users to define metrics and explore their results to find which experiments and runs best meet those metrics. Because models must be heavily customized to each organization’s needs, developers need a defined space in which to freely explore which models are giving them the right results. This exploration typically happens before you optimize and tune your models, so you have baseline results to compare and contrast against.
Lifecycle of a Metric
Having confidence in a model requires understanding it both broadly and deeply. You should explore many metrics and a variety of architectures in combination to get a better grasp on your modeling problem. SigOpt, and especially the SigOpt Dashboard, is unique in that it is designed to give you better insights the more training runs and hyperparameter optimization experiments you manage with the SigOpt API. Take metrics as an example. SigOpt enables you to track up to 50 metrics, phrase certain metrics as constraints, and optimize multiple metrics at once. These span the variety of metrics you may need across your experimentation: training metrics, validation metrics, guardrail metrics, and production metrics.
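As a rough sketch of this pattern, the snippet below tracks several categories of metrics for a single training run and checks a guardrail metric as a constraint. Note that `RunLog` here is a hypothetical stand-in, not the real SigOpt client, so the example is self-contained; with SigOpt, these values would be logged to the hosted Dashboard instead of a local dict.

```python
# Hypothetical stand-in for a run-tracking client; the real SigOpt
# client would send these values to the SigOpt Dashboard instead.
class RunLog:
    def __init__(self, name):
        self.name = name
        self.metrics = {}

    def log_metric(self, key, value):
        # Mirror SigOpt's documented cap of 50 tracked metrics per run.
        assert len(self.metrics) < 50 or key in self.metrics
        self.metrics[key] = value


run = RunLog("resnet-baseline")

# Training and validation metrics
run.log_metric("train_loss", 0.41)
run.log_metric("val_accuracy", 0.87)

# Guardrail metric: evaluated as a constraint rather than optimized
run.log_metric("inference_latency_ms", 12.3)
latency_ok = run.metrics["inference_latency_ms"] <= 20.0

print(latency_ok)  # constraint satisfied
```

The constraint check is the key idea: a guardrail metric like latency bounds the search without being the quantity you maximize.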
SigOpt Dashboard for Tracking Experiment Results
As the team iterates through different model configurations and designs the experiment, all relevant information, such as metrics, parameters, metadata, and artifacts, is automatically stored on the SigOpt Dashboard. Modelers never share any critical data with SigOpt: all model training happens on customer hardware, so all models and data stay private in the customer’s environment. Transitioning from tracking runs to at-scale hyperparameter optimization requires only a single line of code, easing the operational burden on developers and making it easy to run robust experimentation to understand and validate models. Modelers can use any optimizer they prefer with the SigOpt Platform, or they can use SigOpt’s proprietary optimizer, which is designed to find the optimal configuration of hyperparameters in as few training runs as possible. Throughout this process, SigOpt tracks as many metrics as the user desires so they have a complete view of the model’s behavior relative to their problem.
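To make the runs-to-optimization transition concrete, here is a schematic of that shift in plain Python. The `train` function is a hypothetical stand-in for a real training job (which would run on your own hardware), and random search stands in for SigOpt’s proprietary optimizer; with the real platform, the upgrade amounts to asking the optimizer for the next suggested parameters instead of picking them by hand.

```python
import random

random.seed(0)

def train(learning_rate):
    # Hypothetical objective standing in for a real training job.
    # Peaks at learning_rate == 0.01.
    return 1.0 - abs(learning_rate - 0.01) * 40

# Phase 1: a handful of hand-picked, tracked runs
tracked_runs = [
    {"learning_rate": lr, "accuracy": train(lr)} for lr in (0.1, 0.01)
]

# Phase 2: the same loop, upgraded to an optimization experiment.
# With SigOpt this upgrade is roughly one line (request the next
# suggested parameters); random search stands in for it here.
budget = 20
for _ in range(budget):
    lr = random.uniform(0.001, 0.1)
    tracked_runs.append({"learning_rate": lr, "accuracy": train(lr)})

best = max(tracked_runs, key=lambda r: r["accuracy"])
print(best["learning_rate"])
```

Because every run, hand-picked or suggested, lands in the same tracked list, the exploratory baselines remain directly comparable to the optimized configurations.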
SigOpt automates the painful tasks in this modeling workflow so modelers can spend more time focused on their problem and applying their domain expertise. As a result, they have more time and resources to develop the best possible model for their unique circumstances.
If you want a better sense of how SigOpt could impact your workflow beyond simply reading about use cases, sign up in seconds at sigopt.com/signup. If you want to learn from our customers, sign up for the SigOpt Summit for free at sigopt.com/summit. Attendees will be able to join the talks and panels, meet with speakers in breakout rooms for deeper discussions, and network with each other. Look for future posts that dig more deeply into the themes that cut across the talks and panels at the Summit. We look forward to seeing you there!