Frequently Asked Questions
Looking for something?
Check out these resources.
And if there's anything you don't see here, you can always reach out to us.
SigOpt is a SaaS optimization platform that amplifies your research. SigOpt takes any research pipeline and tunes it in place, boosting your business objectives across machine learning, data science, manufacturing, and process engineering. SigOpt’s product provides a feedback loop to maximize the output of a process with multiple interacting parameters:
1. Define an objective to maximize, and identify the input parameters.
2. Receive a suggestion from SigOpt for a parameter configuration to test.
3. Evaluate the process with SigOpt’s suggested configuration, and report the objective result back to SigOpt.
4. SigOpt suggests the best possible configuration to try next, based on all data received so far.
5. Repeat steps 2-4 until the maximum is found.
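The loop above can be sketched in code. This is a minimal illustration only, not the SigOpt client API: a stand-in `suggest`/`report` pair backed by random search plays the role of the service.

```python
import random

class ToyOptimizer:
    """Stand-in for the optimization service; the method names are illustrative."""

    def __init__(self, bounds):
        self.bounds = bounds   # {"name": (low, high)}
        self.history = []      # (params, objective) pairs reported so far

    def suggest(self):
        # A real optimizer would model self.history; here we sample uniformly.
        return {name: random.uniform(lo, hi) for name, (lo, hi) in self.bounds.items()}

    def report(self, params, value):
        self.history.append((params, value))

    def best(self):
        return max(self.history, key=lambda pair: pair[1])

# Step 1: define the objective to maximize and its input parameters.
def objective(p):
    return -(p["x"] - 2) ** 2   # peaks at x = 2

opt = ToyOptimizer({"x": (0.0, 5.0)})

# Steps 2-4, repeated: receive a suggestion, evaluate, report the result.
for _ in range(50):
    params = opt.suggest()
    opt.report(params, objective(params))

best_params, best_value = opt.best()
```

The structure is the point: the evaluation step stays entirely on your side, and the optimizer only ever sees parameter configurations and objective values.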
In machine learning and simulation, SigOpt increases model accuracy and accelerates model tuning, using an ensemble of methods from current optimization research. SigOpt outperforms both traditional and other Bayesian techniques on a collection of benchmarks and real world problems.
In manufacturing and process engineering, SigOpt outperforms traditional Design of Experiments (DoE) techniques. Instead of exhaustively testing every option, SigOpt adaptively suggests experimental configurations so you can find the best version of your product with dramatically fewer trials.
SigOpt can accelerate modeling and simulation in a wide variety of industries. Examples include:
- Finance: fraud detection, risk, algorithmic trading. Read our blog post with Quantopian.
- Manufacturing: cosmetics, food science, industrial processes.
- Marketing and Advertising: CTR modeling, propensity scoring, lead scoring.
- Energy: Enhanced Oil Recovery (EOR).
- Mechanical and Fluid Engineering: CFD, aerospace, physics-based simulations. Read our blog post with Rescale.
Check out our pricing page for more information.
If you regularly train machine learning models, run simulations, or perform process trials, SigOpt is for you. SigOpt reduces the R&D time and resources invested in tuning models, simulations, and processes by up to 100x, and extracts revenue and accuracy that are left on the table by conventional techniques.
You only need to send SigOpt the parameters that define the configuration of your specific model, process, or simulation (e.g. the learning rate of a neural net, the ingredients in a manufacturing process, or the dimensions of a simulation) and the objective that is being maximized (e.g. AUC of a machine learning model or the viability of a process). The core experimental or training data stays with you.
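As an illustration, the only information crossing the wire in each round might look like the following. The field names here are hypothetical, not the actual API schema:

```python
# What the service sees: parameter definitions and objective values only.
parameter_definitions = [
    {"name": "learning_rate", "type": "double", "bounds": {"min": 1e-5, "max": 1e-1}},
    {"name": "num_trees",     "type": "int",    "bounds": {"min": 10,   "max": 500}},
]

# One round of the feedback loop, as data:
suggestion = {"learning_rate": 0.003, "num_trees": 120}   # received from the service
observation = {"value": 0.91}                             # AUC reported back

# The training data itself is never part of either payload.
```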
Machine Learning Questions
No. SigOpt connects to your existing model training stack to optimize the hyperparameters for each new model that you train. SigOpt will suggest a hyperparameter configuration, then you train the model, report the result to SigOpt, and SigOpt will suggest a new hyperparameter configuration for the next model to train, optimizing for accuracy, AUC, or another objective.
SigOpt optimizes an objective you define, like accuracy or AUC ROC for a machine learning model, or an online metric such as CTR resulting from a live experiment with a model in production.
Which parameters you send to SigOpt are up to you, but usually include hyperparameters of a machine learning model such as the number of trees in a random forest, or feature parameters such as the size of the trailing window of a historical average. Learn more in our blog post, “Tuning Machine Learning Models”.
Once you’ve run your model or simulation, evaluate its performance using your objective function. Then, report the value to SigOpt using our web interface or API.
SigOpt outperforms standard model tuning methods in both cost and performance/accuracy. Learn More
By integrating an ensemble of the latest in academic research, SigOpt outperforms other Bayesian techniques. Learn More
Manufacturing and Process Engineering Questions
SigOpt outperforms traditional Design of Experiments (DoE) techniques. Instead of exhaustively testing every option, SigOpt adaptively suggests experimental configurations so you can find the best version of your product with dramatically fewer trials.
SigOpt will attempt to maximize the metric value you give us. To minimize, just multiply the value you want to minimize by -1.
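A quick sketch of the sign-flip trick, with a made-up cost function for illustration:

```python
def cost(x):
    """Quantity we actually want to minimize (illustrative)."""
    return (x - 3) ** 2

def reported_metric(x):
    """What we hand to the optimizer, which maximizes."""
    return -cost(x)

# Maximizing the negated metric is equivalent to minimizing the cost:
candidates = [0, 1, 2, 3, 4, 5]
best = max(candidates, key=reported_metric)
# best == 3, the minimizer of cost
```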
SigOpt provides an ensemble of the latest techniques in academic optimization research, built on top of advanced open source optimization systems that the founding team has built over many years in the field. First, we suggest the next best variation of your product to try, given what you have observed. Then, we create a feedback loop in which we provide optimal suggestions, and you report how they perform. We quickly iterate to the best possible variation by trading off exploration (learning about the parameters that we are tuning) and exploitation (using the information we have to get the highest possible return).
We build off of research in Design of Experiments in general and Optimal Learning in particular. Our algorithms attempt to make the tradeoff between exploration (learning more about the space we are optimizing in) and exploitation (using the information we have to achieve the best values) to find optimal parameter configurations for experiments as quickly and efficiently as possible. Learn More
SigOpt can optimize numeric parameters (integer and decimal) or categorical parameters (picking from a list of options).
We tend to see the best results when optimizing up to 20 parameters.
Categorical problems require special care and, consequently, there is a limit of 10 total categories that are allowed in standard experiments. This could consist of 2 categorical variables which each have 5 categorical values, or 3 categorical variables with 2, 4, and 4 categorical values, respectively.
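The limit counts categorical values summed across all categorical variables, which is easy to check. A small sketch (parameter names are made up):

```python
def total_categories(categorical_params):
    """Total categorical values summed across all categorical variables."""
    return sum(len(values) for values in categorical_params.values())

# Both layouts below sit exactly at the limit of 10 total categories:
layout_a = {"kernel": ["a", "b", "c", "d", "e"],
            "solver": ["v", "w", "x", "y", "z"]}          # 5 + 5
layout_b = {"p1": ["a", "b"],
            "p2": ["a", "b", "c", "d"],
            "p3": ["w", "x", "y", "z"]}                   # 2 + 4 + 4
```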
SigOpt normally runs in the cloud; this works well for the vast majority of our customers because only model parameter metadata is passed to SigOpt. To learn more about our on-premises solution, please contact us.
SigOpt will always have a point ready for you. Internally, SigOpt constantly performs the trade-off between exploration (gathering more information about the space) and exploitation (exploiting the data already gathered).
You provide a single metric (an Overall Evaluation Criterion) to SigOpt; this can be a combination of many sub-objectives. Choosing it is a very important part of the process, and we are happy to help you brainstorm the best objective for your problem. We wrote a blog post detailing the pitfalls of choosing a poor metric and what to strive for instead.
We don’t make any assumptions about the underlying problem. SigOpt can optimize non-continuous, non-differentiable, non-convex, and non-deterministic functions.
SigOpt works best at optimizing systems where evaluating a single outcome is time-consuming and expensive. SigOpt can effectively optimize machine learning systems, complex simulations, and in-depth manufacturing and industrial processes.
If evaluating a single system configuration is cheap enough to do 100,000s or millions of times, then SigOpt may be overly robust for the problem; in these cases, algorithms like traditional Design of Experiments, simulated annealing, particle swarm, or genetic algorithms may be more suitable.
SigOpt has the largest impact on problems with 2-20 parameters and expensive measurement processes. If your function has hundreds or thousands of parameters, we recommend employing dimensionality reduction techniques and then using SigOpt, or using one of the alternative techniques suggested above if you can perform millions of evaluations to find an optimum.
Generally speaking, our customers discover optimal results when running their experiments until they’ve tested 10-20x configurations, where x is the number of parameters (or dimensions) in the experiment.
Depending on your constraints, you may also wish to stop optimizing after a fixed budget of evaluations, or when subsequent suggestions cease to yield material improvement.
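The 10-20x rule of thumb translates directly into an observation budget. A small sketch (the function name is ours, not part of any API):

```python
def observation_budget(num_parameters, multiplier_low=10, multiplier_high=20):
    """Rule-of-thumb observation range: 10-20x the number of parameters."""
    return (multiplier_low * num_parameters, multiplier_high * num_parameters)

# Tuning 4 hyperparameters suggests roughly 40-80 observations:
low, high = observation_budget(4)
```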
Sometimes, a parameter configuration will fail altogether (rather than just produce a lousy output). You can tell SigOpt about these failures using our API, and SigOpt will avoid failure regions in future suggestions. Learn More
Yes, you can report any observed values. The more historical values SigOpt has to build the model of your system, the more accurate it will be.
SigOpt can give suggestions to be run in parallel by conditioning on outstanding points. More information can be found here.
You also don’t need to choose between sequential or parallel. You can also use a hybrid approach that starts with many suggestions in parallel and becomes more sequential as you have reported more data.