Improve ML models 100x faster
SigOpt’s API tunes your model’s parameters through state-of-the-art Bayesian optimization.
- Exponentially faster and more accurate than grid search; faster, more stable, and easier to use than open-source solutions.
- Captures the additional revenue and performance left on the table by conventional tuning.
What is SigOpt?
SigOpt automates the tuning of your model’s hyperparameters, feature parameters, and architecture parameters. If you’re not optimizing them, you’re forsaking significant performance and revenue gains.
Modelers often overlook these optimizations because traditional approaches like manual, grid, and random search are time-consuming and produce subpar results.
Let SigOpt modernize your workflow so you can focus on what you’re best at: designing your model and understanding your data.
ML, Trading, and Banking
Unlock previously unattainable performance in mature industries where incremental gains have enormous impact.
“SigOpt optimized our fund’s financial models 15 times faster than grid search. Our models now perform better, and our engineers are free to focus on model design instead of tuning.”
How SigOpt Works
1 Provide parameters
2 Use our values
3 Send model output
4 Repeat until optimized
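The four steps above form a suggest/observe loop. Here is a minimal, self-contained sketch of that loop in Python. The class and method names are illustrative, not SigOpt’s actual client API, and a random-search stand-in takes the place of SigOpt’s hosted Bayesian optimizer:

```python
import random

class StandInOptimizer:
    """Illustrative stand-in for SigOpt's service: it runs the same
    suggest/observe loop, but proposes values by random search rather
    than Bayesian optimization."""

    def __init__(self, bounds):
        # 1. Provide parameters: name -> (min, max)
        self.bounds = bounds
        self.observations = []

    def suggest(self):
        # 2. Use our values: propose an assignment for each parameter
        return {name: random.uniform(lo, hi)
                for name, (lo, hi) in self.bounds.items()}

    def observe(self, assignments, value):
        # 3. Send model output: record the metric for these assignments
        self.observations.append((assignments, value))

    def best(self):
        return max(self.observations, key=lambda obs: obs[1])

def train_and_evaluate(params):
    # Placeholder objective; in practice this trains your model and
    # returns a validation metric to maximize.
    return -(params["learning_rate"] - 0.1) ** 2

opt = StandInOptimizer({"learning_rate": (0.001, 1.0)})
for _ in range(30):  # 4. Repeat until optimized
    assignments = opt.suggest()
    opt.observe(assignments, train_and_evaluate(assignments))

best_params, best_value = opt.best()
```

In production, only `train_and_evaluate` is yours to write; the suggestion side runs on SigOpt’s servers, so no optimization code lives in your stack.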
Works with every model
Integrates with every platform
The world’s most efficient Bayesian optimization
We outperform traditional techniques and alternative Bayesian optimization tools, including MOE, Spearmint, SMAC, and Hyperopt, on a collection of benchmarks and real-world problems.
Those tools usually represent a single optimization approach and are often too brittle for production. SigOpt is an ensemble of state-of-the-art, proprietary optimization strategies.
It’s why Huawei, Prudential, and MIT rely on us. Read our peer-reviewed comparison from ICML 2016 →
“SigOpt typically discovers a higher global maximum 10x faster than tuned grid-search.”
Scott Clark, PhD
Scott Clark is an industry leader in the Bayesian optimization of machine learning models. He led the academic research behind Yelp’s Ad Targeting team. At Yelp, he was also responsible for developing the Dataset Challenge and open-sourcing the breakthrough MOE optimization library.
Scott holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell. He also holds BS degrees in Mathematics, Physics, and Computational Physics from OSU.