Advanced Experimentation: Boost Model Optimization

Nick Payton and Tobias Andreasen
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning

One of the primary benefits of using SigOpt is its feature-complete approach to hyperparameter optimization. With SigOpt, you can transform tuning jobs into advanced experimentation as part of your model development process. Here are a few of these advanced features that you can use today:

  • Multimetric: Optimize across two metrics at once to balance competing objectives
  • Prior Beliefs: Apply your domain expertise to warm start a tuning job
  • Multitask: Train partial-cost models to explore, full-cost models to exploit
  • Training Monitor: Monitor convergence and apply early stopping criteria
  • Multisolution: Run optimization to find multiple equally useful solutions to your problem
  • Parallelism: Scale optimization with up to 100x asynchronous parallelism
  • Conditionals: Take conditionality of parameters into account when optimizing
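As a sketch, here is what an experiment configuration combining two of the features above (Multimetric and Parallelism) might look like. The metric and parameter names are hypothetical, and the payload shape (parameters, metrics, observation_budget, parallel_bandwidth) is our reading of SigOpt's experiment-create format; consult the documentation for the authoritative schema.

```python
# Hypothetical SigOpt experiment configuration (names are illustrative).
# Multimetric: two competing objectives are optimized at once.
# Parallelism: parallel_bandwidth sets how many workers request
# suggestions concurrently.
experiment_config = {
    "name": "Multimetric tuning sketch",
    "metrics": [
        {"name": "accuracy", "objective": "maximize"},
        {"name": "inference_time", "objective": "minimize"},
    ],
    "parameters": [
        {"name": "learning_rate", "type": "double",
         "bounds": {"min": 1e-4, "max": 1e-1}},
        {"name": "num_layers", "type": "int",
         "bounds": {"min": 1, "max": 8}},
    ],
    "parallel_bandwidth": 4,       # asynchronous parallel workers
    "observation_budget": 60,      # total tuning runs to evaluate
}
```

A configuration like this would be passed to the SigOpt API or client when creating the experiment; each worker then loops over suggestions, trains with the suggested parameters, and reports both metric values back.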

Learn about our full collection of advanced experimentation features in our documentation under the “Advanced Features” sidebar heading. Here is an example of a multitask experiment in our Dashboard:


If you want to try out the product, sign up for free access, execute a run to track your training, and launch an experiment to automate hyperparameter optimization. If you want to learn more about our products, track industry news, and hear from our research team, follow our blog or subscribe to our YouTube channel.

Nick Payton, Head of Marketing & Partnerships
Tobias Andreasen, Machine Learning Specialist

Want more content from SigOpt? Sign up now.