Defining, selecting, and optimizing with the right set of metrics is critical to every modeling process, but it can be tough to get right. This month, SigOpt launched a set of Metric Management features to help modelers meet this challenge throughout their training and tuning runs. And in a democast yesterday, SigOpt Research Engineer Harvey Cheng joined SigOpt Product Marketing Lead Barrett Williams to showcase the potential of Metric Management.
In the demo, they ran a hyperparameter tuning experiment to maximize validation accuracy while constraining model size for an image classification task using a CNN. In the course of running this experiment, they teased out a few insights:
- Metrics are tough to define, select, analyze, and optimize for a given model, and are particularly important to get right for models that will be productionized
- Metric Management is a set of SigOpt features that addresses this problem by enabling modelers to track up to 50 metrics, apply up to 4 of them as constraints, and optimize across multiple metrics in any training run or tuning experiment (see the configuration sketch after this list)
- This demo showcases how these features can be applied to training and tuning a CNN for an image classification task, but they are applicable to any model you are developing, whether it is based on deep learning, machine learning, reinforcement learning, or simulation
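To make these three metric strategies concrete, here is a minimal sketch of how they might be declared with the SigOpt Python client. The parameter and metric names (`validation_accuracy`, `model_size_mb`, `inference_time_ms`), bounds, threshold, and budget are illustrative assumptions, not the exact configuration used in the democast:

```python
from sigopt import Connection

# Connect with your own API token (placeholder value)
conn = Connection(client_token="YOUR_API_TOKEN")

# Sketch of an experiment that optimizes one metric, constrains another,
# and simply stores a third for later analysis
experiment = conn.experiments().create(
    name="CNN image classification (sketch)",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="num_filters", type="int", bounds=dict(min=16, max=128)),
    ],
    metrics=[
        # Optimized metric: guides the search
        dict(name="validation_accuracy", objective="maximize", strategy="optimize"),
        # Constraint metric: runs above the threshold are treated as infeasible
        dict(name="model_size_mb", objective="minimize", strategy="constraint", threshold=10.0),
        # Stored metric: tracked for analysis but does not influence the search
        dict(name="inference_time_ms", strategy="store"),
    ],
    observation_budget=60,
)
print(f"Created experiment: https://app.sigopt.com/experiment/{experiment.id}")
```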
Here is a more detailed summary of the democast. Click through to view any segment you missed:
- Introduction to Metric Management, the latest in a long line of innovative features that SigOpt has launched to boost hyperparameter optimization and experiment tracking (3:08)
- Discussion of the typical user problem, including how metrics are tough to define, select, and analyze, and how SigOpt makes it easy to track many metrics at once, evaluate tradeoffs between metrics, and apply metrics as constraints (5:59)
- Overview of the Metric Management capabilities, including how they allow you to track up to 50 metrics, apply metrics as thresholds or constraints, optimize across multiple metrics, assess metric failures, and set your metric strategy across all of your metrics for any given tuning job (8:13)
- Notebook use case that shows how to maximize validation accuracy while constraining the size of a CNN for an image classification task using the German Traffic Sign Recognition Benchmark (10:16)
- Demo walkthrough, including easy installation of SigOpt, experiment setup, and the use of Metric Strategy to track a variety of metrics through the training process, optimize validation accuracy, and set the size of the network as a constraint (see the tuning-loop sketch after this list) (11:53)
- Use of the web dashboard to evaluate the wide variety of metrics that are tracked, constrained on, or optimized across in this tuning experiment (19:30)
- Exploration of the suggestion history, which shows metric failures in the context of the full history of training runs (20:39)
- Evaluation of results in the context of training runs by applying a threshold on the size of the model and evaluating validation accuracy against a variety of other metrics, revealing a number of high-accuracy points well below the metric threshold (21:23)
- Adjustment of the metric threshold in a simple web user experience and reevaluation of the validation accuracy results with an even tighter constraint on memory size (21:58)
- Use of Multimetric Optimization to tune two metrics at the same time, creating a Pareto frontier of results that shows the tradeoff between total compute required (MACs) and validation accuracy. This is used in conjunction with the Metric Constraints feature, which constrains on the size of the model (see the multimetric sketch after this list) (23:46)
- Analysis of the Pareto frontier of runs, including clicking into these charts to further evaluate the tradeoff between accuracy and size (25:13)
- Summary of this case study, including the use of Metric Storage to track up to 50 metrics through training runs, the use of Multimetric Optimization to tune across multiple metrics, the use of Metric Constraints to apply specific metrics as constraints in specific optimization jobs, and, finally, the evaluation of reported observation failures to inform the tuning process (25:57)
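As a companion to the demo walkthrough, here is a minimal sketch of the tuning loop that pairs with the configuration sketch above: ask SigOpt for a suggestion, train, and report every tracked metric back, with failed runs reported as observation failures. The `train_and_evaluate` helper is a hypothetical stand-in for your own training code, and the metric names match the earlier assumed configuration:

```python
# Continues the configuration sketch above: `conn` and `experiment` already exist
for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    try:
        # Hypothetical helper: trains a CNN with the suggested hyperparameters
        # and returns a dict of metric values
        results = train_and_evaluate(suggestion.assignments)
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            values=[
                dict(name="validation_accuracy", value=results["validation_accuracy"]),
                dict(name="model_size_mb", value=results["model_size_mb"]),
                dict(name="inference_time_ms", value=results["inference_time_ms"]),
            ],
        )
    except Exception:
        # Report a failed training run so SigOpt can account for it
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            failed=True,
        )
```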
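And here is a minimal sketch of how the multimetric experiment from the later part of the demo might be configured: two optimized metrics produce a Pareto frontier, while a Metric Constraint keeps model size in check. Again, the names, bounds, and threshold are assumptions for illustration:

```python
multimetric_experiment = conn.experiments().create(
    name="CNN accuracy vs. compute (sketch)",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="num_filters", type="int", bounds=dict(min=16, max=128)),
    ],
    metrics=[
        # Two optimized metrics produce a Pareto frontier of tradeoffs
        dict(name="validation_accuracy", objective="maximize", strategy="optimize"),
        dict(name="total_macs", objective="minimize", strategy="optimize"),
        # Constraint on model size, as in the single-metric experiment above
        dict(name="model_size_mb", objective="minimize", strategy="constraint", threshold=10.0),
    ],
    observation_budget=100,
)
```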
If you want to explore the experiment we ran in more depth, you can find it here. Below is a screenshot from the analysis page in the SigOpt dashboard.
If you joined us, thank you for taking the time! If you’d like to watch the recording, you can find it here and find the slides from the presentation here. If you’re interested in learning more, follow our blog or try our product.
Use SigOpt free. Sign up today.