How Teams Use SigOpt to Build Differentiated Models

Scott Clark and Nick Payton

This post is part of a five-part series.


Model development is messy, and for many teams it is still the wild west. They use open source, bootstrap their own tools, or code without tooling altogether, cobbling together capabilities as needed. These environments are becoming scarcer, but even within them, adoption of software to support the modeling process is growing at a fast clip. We’ll discuss some of the reasons why in the context of SigOpt’s software – and our experience delivering it to our customers.

This is the fifth and final post in this series. The first explained why Enterprise AI is three distinct markets. The second post made the case for differentiated modeling as the future of AI in the enterprise. The third weighed the difference in technology needs for teams building more basic versus more differentiated models. And the fourth post laid out the modern machine learning stack that supports the differentiated modeling process. 

In this post, we cover a few ways our customers have implemented our solutions to gain the benefits of automation without sacrificing the freedom of modeling’s wild west.

Explore New Use Cases

Even the most sophisticated modeling firms need to explore new use cases. De-prioritizing modeling projects can be just as important as prioritizing potential breakthroughs. Working through this process iteratively – and comprehensively – is critical so teams do not make decisions to stop or continue based on false negatives or positives. Three challenges often slow the velocity of this exploration. 

First, teams too often reinvent the wheel. They lack access to past experiments or a way to efficiently search through them. Even when they discover similar examples, they have no easy way to incorporate those insights into their own project. Second, model training is an uninformed, messy endeavor. Modelers often fly blind on their training jobs, which makes it tough to troubleshoot when things go wrong – and they often do. Finally, modeling is a solo act that is tough to repeat. Making a project reproducible is often significant extra work, so modelers avoid it.

Experiment management is designed to address these challenges, making new use case exploration more iterative and boosting the team’s overall rate of learning. It includes a full history of all training runs, tuning experiments and the underlying model and data artifacts associated with each. This history is searchable, filterable and analyzable in a simple dashboard. Each individual run or experiment includes a full set of visualizations, click-through analytics and insights like parameter importance. These are coupled with training support during a run that includes convergence monitoring and automated early stopping for more efficient model training. And the system is designed to enable organization of and collaboration across modeling projects so teams can learn from each other. 
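For illustration, recording a training run with the runs interface from SigOpt’s Python client looks roughly like the sketch below. The model, dataset, parameter and metric names are placeholders, and exact method names may vary by client version:

```python
import sigopt

def train_one_epoch():
    """Placeholder: run one epoch and return a validation metric."""
    return 0.0

# A minimal sketch: wrap one training job in a run so its parameters,
# metrics and artifacts land in the shared, searchable history.
with sigopt.create_run(name="baseline-classifier") as run:
    run.log_model("ResNet-18")                # placeholder model type
    run.log_dataset("images-v2")              # placeholder dataset name
    run.log_parameter("learning_rate", 0.01)
    run.log_parameter("batch_size", 64)
    for epoch in range(10):
        val_accuracy = train_one_epoch()
        # Checkpoints feed convergence monitoring and early stopping.
        run.log_checkpoint({"val_accuracy": val_accuracy})
    run.log_metric("val_accuracy", val_accuracy)
```

Because each run is logged the same way, it shows up alongside every other run in the dashboard, where it can be searched, filtered and compared.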

Using SigOpt Experiment Management as a Modeling System of Record 

Custom views of your training data that you can save and share.

Advance Models to Production

Individual modelers and enterprises are aligned on the same objective: productionalize high-performing models as quickly as possible. But most models never make it into production, and those that do often take half a year or longer to get there. As projects stretch on, it gets harder to realize a return on modeling investment. As this problem compounds, it threatens AI initiatives themselves.

This problem starts with the model development process itself. Modelers often lack an easy way to gain useful information on model behavior, whether it is a basic history of experiments or more advanced comparisons of metrics. Any time they spend cobbling together their own version of these insights limits the time they can spend experimenting, engineering or building intuition on the modeling problem itself. And potentially useful techniques like hyperparameter tuning are too time consuming and resource intensive to implement, so modelers use them sparingly if at all. Each of these small issues adds up to a big problem for model productionalization. 

SigOpt was founded to take this busy work out of modeling, starting with the most complete hyperparameter optimization solution in the market. Our API makes it trivial to tune any model and our algorithms are designed to efficiently run any tuning job. This combination encourages much greater use of tuning, which boosts the likelihood that modelers build high-performing, production-worthy models.
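As a concrete sketch of that loop, the snippet below creates an experiment against SigOpt’s core Python API and cycles through its suggest-evaluate-report pattern. Here train_and_evaluate is a placeholder for your own training code, and field names may differ slightly across client versions:

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")  # placeholder token

def train_and_evaluate(learning_rate, max_depth):
    """Placeholder: train your model and return a validation metric."""
    return 0.0

# Declare the search space and objective once; the optimizer handles the rest.
experiment = conn.experiments().create(
    name="Classifier tuning",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-5, max=1e-1)),
        dict(name="max_depth", type="int", bounds=dict(min=2, max=12)),
    ],
    metrics=[dict(name="accuracy", objective="maximize")],
    observation_budget=40,
)

for _ in range(experiment.observation_budget):
    # Ask for the next configuration, evaluate it, and report the result
    # so the optimizer can refine its search.
    suggestion = conn.experiments(experiment.id).suggestions().create()
    accuracy = train_and_evaluate(
        learning_rate=suggestion.assignments["learning_rate"],
        max_depth=suggestion.assignments["max_depth"],
    )
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[dict(name="accuracy", value=accuracy)],
    )
```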

This hyperparameter optimization solution also includes an advanced set of features that can be used to develop even deeper insights on model behavior and performance. Multitask optimization tunes a model with partial tasks – like dataset segments – to understand how performance varies with inputs through a tuning run. Multimetric optimization generates a Pareto frontier of optimal model configurations to evaluate metric tradeoffs. And conditional parameters enable more intelligent and nuanced parameter search, whether this involves neural architecture, SGD selection or any other parameter configuration. 
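For example, declaring a second metric in the experiment definition is enough to turn on multimetric optimization. The sketch below is illustrative; the connection setup mirrors the previous snippet, and the metric names and bounds are placeholders:

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")  # placeholder token

# Sketch: trade accuracy off against inference latency in one experiment.
# SigOpt returns a Pareto frontier of configurations instead of one winner.
experiment = conn.experiments().create(
    name="Accuracy vs. latency",
    parameters=[
        dict(name="num_layers", type="int", bounds=dict(min=2, max=8)),
        dict(name="hidden_size", type="int", bounds=dict(min=64, max=1024)),
    ],
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="latency_ms", objective="minimize"),
    ],
    observation_budget=60,
)
```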

Example of a Multimetric Optimization Experiment in the SigOpt Dashboard

Of equal importance, however, our solution automatically generates insights in the form of parameter importance, metric comparisons, parallel coordinates and more basic visualizations or tables in a dashboard. As we enable teams to cycle through runs and experiments more quickly, they collaboratively explore these insights along the way. Together, this allows teams to increase their overall rate of learning, deepen their domain expertise and more confidently select the best models for production.

Enable Modeling Scale

Most companies have a wide variety of problems to which they can apply models, and an equally wide variety of modeling techniques applicable to those problems. As modeling teams grow to tackle these problems with these techniques, the scale and complexity of their modeling workflow grows as well. If teams invest in solutions capable of enabling scale, scale becomes a huge driving force of success in AI. If not, it quickly becomes the biggest bottleneck.

Scale is used so often and in so many contexts that it is important to be more specific. When it comes to model development, we have worked with our customers to identify six attributes of scale that are most important to address when training and tuning models:

  • Organizational Scale — Developing flexible, generalized solutions that work across projects and teams with potentially different modeling workloads and needs.
  • System Scale — Engineering systems designed to return new suggestions within milliseconds, reducing latency in the tuning process.
  • Dimensional Model Scale — Crafting bespoke algorithmic strategies to tune models with more than 25 parameters, including categorical, integer and continuous parameter types.
  • Evaluation Scale — Supporting problems that require thousands of observations to get precise, robust measurements of model performance.
  • Parallel Compute Scale — Asynchronously processing and suggesting parameter configurations to reduce wall-clock time and maximize computing resources (see the sketch after this list).
  • Experiment Throughput Scale — Bursting Bayesian optimization to thousands of experiments in any given hour for a single user or set of users.
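
To make the parallel compute point concrete, here is a minimal sketch of asynchronous tuning, assuming the documented parallel_bandwidth experiment field. Each worker pulls its own suggestion so several configurations can train at once; train_and_evaluate is again a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")  # placeholder token

def train_and_evaluate(learning_rate):
    """Placeholder: train your model and return a validation metric."""
    return 0.0

# parallel_bandwidth tells the optimizer how many suggestions will be
# outstanding at once, so it can plan around unfinished evaluations.
experiment = conn.experiments().create(
    name="Parallel tuning",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-5, max=1e-1)),
    ],
    metrics=[dict(name="accuracy", objective="maximize")],
    observation_budget=60,
    parallel_bandwidth=4,
)

def worker():
    # Each worker runs its own suggest-evaluate-report loop.
    suggestion = conn.experiments(experiment.id).suggestions().create()
    accuracy = train_and_evaluate(suggestion.assignments["learning_rate"])
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[dict(name="accuracy", value=accuracy)],
    )

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(experiment.observation_budget):
        pool.submit(worker)
```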

SigOpt System Diagram: Example of the Process SigOpt Has Built for Scaling the Backend to Enable Training & Tuning

SigOpt has implemented backend solutions to each of these scale problems. Most of them are critical for model training or hyperparameter optimization, where teams evaluate hundreds or thousands of models to find the best one for their particular problem. Getting this right can be the difference between a training and tuning solution that is valuable for a single modeler and one that efficiently enables tuning across all modelers within an organization.

Thank you for following our series. As always, we are happy to discuss in more detail with anyone interested in a conversation: email [email protected]. Separately, you can try our solution, sign up for blog updates, or join our beta program for new functionality. We look forward to this Enterprise AI future that will be powered by differentiated models. 

Use SigOpt free. Sign up today.

Scott Clark, Ph.D., Co-Founder & Chief Executive Officer
Nick Payton, Head of Marketing & Partnerships
