Insights for Simplifying Hyperparameter Optimization

Nick Payton
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning

SigOpt partnered with MLconf on a webinar about practical best practices for metrics, model training, and hyperparameter optimization. During the discussion, our Head of Engineering Jim Blomo shared his recommendations on each of these topics. In this post, we build on his thoughts with a few practical recommendations for simplifying hyperparameter optimization.

Be rigorous about metrics and tracking training runs

Hyperparameter optimization is much easier to spin up if you have already defined your metrics and tracked all of your training runs before you reach the point where you are ready to optimize. Doing this work upfront creates a better machine learning process overall. And, when you are ready, it also makes it much easier to automate hyperparameter optimization with hosted solutions like SigOpt.
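As an illustration, here is a minimal, framework-agnostic sketch of run tracking, assuming a simple local JSON log. The function name, file name, and parameter values are hypothetical, not part of any library; hosted tooling like SigOpt Runs plays this role without the manual bookkeeping.

```python
import json
import time
from pathlib import Path

RUNS_FILE = Path("runs.jsonl")  # hypothetical local store for run records

def log_run(params: dict, metrics: dict) -> None:
    """Append one training run's hyperparameters and metrics as a JSON line."""
    record = {"timestamp": time.time(), "params": params, "metrics": metrics}
    with RUNS_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage after a training run finishes:
log_run(
    params={"learning_rate": 1e-3, "batch_size": 64, "dropout": 0.2},
    metrics={"val_accuracy": 0.91, "val_loss": 0.27},
)
```

Recording every run in a consistent format, rather than only the promising ones, is what makes later optimization easy to bootstrap from your existing history.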

Learn how Jim thinks about the importance of tracking in the workflow.

Transform parameters to search over log space

Whenever applicable, transform your parameters so the hyperparameter optimization algorithm searches over log space; this makes the process more efficient and more likely to produce good results. For example, it is best practice to transform a learning rate so that you search its 0 to 1 range in log space rather than linearly, which makes the search as efficient and effective as possible for that particular parameter. Packages like SigOpt Experiments enable this automatically.
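To see why the transformation matters, here is a minimal sketch comparing linear and log-space sampling of a learning rate in a plain random-search setting. The bounds and function names are illustrative assumptions; SigOpt Experiments applies an equivalent transformation automatically.

```python
import math
import random

def sample_learning_rate_linear(low=1e-5, high=1.0):
    # Linear sampling: roughly 90% of draws land above 0.1,
    # so small learning rates are almost never explored.
    return random.uniform(low, high)

def sample_learning_rate_log(low=1e-5, high=1.0):
    # Log-space sampling: draw the exponent uniformly, then exponentiate,
    # so values near 1e-4 are as likely as values near 0.1.
    log_low, log_high = math.log10(low), math.log10(high)
    return 10 ** random.uniform(log_low, log_high)

print(sample_learning_rate_linear())
print(sample_learning_rate_log())
```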

See how Jim sets hyperparameter optimization up for success.

Include as many parameters as possible

Provided you have enough compute capacity, search over as many parameters as possible rather than fixing them at the start of the process. Constraining your parameter space is typically essential if you use a rudimentary approach like grid search. But Bayesian optimization and other intelligent algorithms, which are included in SigOpt, are designed to be sample efficient, so they can handle much larger parameter spaces without requiring millions of evaluations to uncover a high performing model. When using these intelligent algorithms, include as many parameters as possible so that your search explores as broad a set of potential model configurations as possible. Doing so increases the likelihood that you will uncover the best performing configuration.
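To make the contrast concrete, here is a sketch of a narrow, partially fixed search space next to a broader one, written with scikit-optimize's space objects as an illustrative stand-in for any sample-efficient optimizer. The parameter names and bounds are assumptions for illustration, not a recommended configuration.

```python
from skopt.space import Categorical, Integer, Real

# Narrow space: several values fixed up front, as grid search often forces.
narrow_space = [
    Real(1e-4, 1e-2, prior="log-uniform", name="learning_rate"),
    Integer(32, 128, name="batch_size"),
]

# Broad space: let a sample-efficient optimizer explore dimensions
# a grid could never cover at this granularity.
broad_space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 512, name="batch_size"),
    Real(0.0, 0.6, name="dropout"),
    Real(1e-6, 1e-2, prior="log-uniform", name="weight_decay"),
    Integer(1, 6, name="num_layers"),
    Categorical(["adam", "sgd", "rmsprop"], name="optimizer"),
]
```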

See how Jim discusses this recommendation in the context of your parameter optimization workflow.

Apply multimetric optimization

Ideally, you could define a single composite metric to maximize or minimize that captures all of your real-world objectives. In reality, however, most modeling problems require balancing multiple metrics that may compete with each other. In these cases, it can be valuable to maximize or minimize two metrics at the same time so you can evaluate a Pareto frontier of model configurations. This allows you to explore models that balance these competing metrics in a way that satisfies your modeling or business constraints.
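As a small illustration, here is a sketch of extracting the Pareto frontier from a set of completed configurations, assuming both metrics are maximized (for example, accuracy and throughput). The function and sample data are hypothetical; this is the kind of frontier a multimetric optimization lets you evaluate.

```python
def pareto_frontier(points):
    """Return the points not dominated by any other point (both metrics maximized)."""
    frontier = []
    for candidate in points:
        dominated = any(
            other[0] >= candidate[0] and other[1] >= candidate[1] and other != candidate
            for other in points
        )
        if not dominated:
            frontier.append(candidate)
    return frontier

# Each tuple is (metric_1, metric_2) for one model configuration.
results = [(0.90, 120), (0.92, 80), (0.88, 200), (0.92, 100), (0.85, 150)]
print(pareto_frontier(results))  # [(0.90, 120), (0.88, 200), (0.92, 100)]
```

Each point on the frontier represents a different trade-off between the two metrics, so you can pick the configuration that best fits your constraints rather than letting a single blended score pick for you.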

Learn how to implement multimetric optimization with SigOpt.


These are a few of the insights our team relied upon to build Runs to track training, Experiments to automate hyperparameter optimization, and our Dashboard to visualize these jobs throughout the modeling process. 

If you’re interested in seeing the broader webinar context in which we gathered and discussed these results, watch the recording. If you want to try out the product, join our beta program for free access, execute a run to track your training, and launch an experiment to automate hyperparameter optimization.

Use SigOpt free. Sign up today.

Nick Payton, Head of Marketing & Partnerships