Best Practices for Metrics, Training, Tuning

Jim Blomo and Nick Payton
Advanced Optimization Techniques, Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning

SigOpt partnered with MLconf on a webinar that focused on practical best practices for metrics, training, and hyperparameter optimization. During this discussion, our Head of Engineering Jim Blomo shared a few best practices for metrics, model training, and hyperparameter tuning. In this post, we share a quick summary of his take. 

Develop a complete set of metrics and thoroughly evaluate them

It is necessary but insufficient to use a variety of traditional AI/ML metrics like precision, recall, accuracy, and AUC, among others. You also need to spend time on a metric discovery process that connects machine learning metrics to business metrics and, at times, introduces new metrics to meet business needs. This step is critical so machine learning can deliver real business value rather than achieve isolated, academic success. Rather than fall into a single-metric fallacy, it is critical to constantly track and evaluate a wide variety of these metrics throughout the machine learning process to balance often competing tradeoffs.
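As a minimal sketch of tracking several metrics side by side rather than a single one, the helper below computes accuracy, precision, and recall from binary predictions. The labels and predictions are illustrative, not from any real model.

```python
# Minimal sketch: evaluate several metrics together instead of relying on one.
def evaluate(y_true, y_pred):
    # Tally the confusion-matrix cells for binary labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Illustrative data: two metrics can agree while a third tells a different story.
metrics = evaluate([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```

In practice you would pull these from a metrics library, but the point stands: a model chosen on accuracy alone may trade away precision or recall that the business actually needs.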

Second, log all metrics in a central place to introduce transparency to both the modeling and business teams. Creating a centralized system of record will make it easy to run analyses and comparisons and, ultimately, make decisions related to and as a consequence of these metrics and how they change across combinations of data, features, architecture, and hyperparameters.
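A central system of record can start as simply as an append-only log of one JSON line per evaluation. The sketch below is a hypothetical stand-in, not any particular tool's API.

```python
import json
import time

# Hypothetical sketch of a central metrics log: one JSON line per evaluation,
# so later analyses and comparisons can run over a single shared file.
def log_metrics(path, run_id, metrics):
    record = {"run_id": run_id, "timestamp": time.time(), **metrics}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Because every record carries a run identifier and timestamp, both modeling and business teams can query the same file to see how metrics moved over time.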

Rigorously track all attributes of your training runs

Many modelers struggle to track all the relevant attributes of any given model they are developing, including the dataset, code, hyperparameters, metrics, architecture type, features, and other metadata that is critical to understanding the broader modeling context. Even when they commit to tracking all of this valuable information, most modelers are still forced to capture it manually. In fact, in response to a poll during this webinar, nearly 2 out of 3 attendees said they still hack together basic run tracking rather than rely on a software solution. 
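To make the attributes above concrete, here is a hypothetical run record bundling them into one structure; none of these field names come from SigOpt or any real tracking tool.

```python
import hashlib
import json

# Hypothetical run record capturing the attributes a modeler should track.
def make_run_record(dataset_path, git_commit, hyperparameters, metrics, architecture):
    record = {
        "dataset": dataset_path,
        "code_version": git_commit,
        "hyperparameters": hyperparameters,
        "architecture": architecture,
        "metrics": metrics,
    }
    # A content hash gives each unique configuration a stable identifier,
    # which makes duplicate runs easy to spot in a central repository.
    record["run_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:12]
    return record
```

Even this toy version shows the payoff of standardization: two runs with identical inputs get the same identifier, while any change to data, code, or hyperparameters produces a new one.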

Implementing an easy-to-use, API-enabled solution like SigOpt Runs is an easy way to facilitate much more rigorous, standardized, and complete tracking of training runs. Doing so will set you up to avoid busywork, while boosting the likelihood that you have a centralized repository of all runs that helps you see the bigger modeling picture. 

Automate Hyperparameter Optimization for Better, More Understandable Results

Being rigorous about metrics and training sets you up to take full advantage of hyperparameter optimization to both boost and more deeply understand model results. But when selecting an approach to hyperparameter optimization, most modelers still rely on simple grid search or manual tuning, both of which limit the breadth of models they can consider. This results in a lot of wasted time on a task that should be fully automated and implemented with sample-efficient algorithms like Bayesian optimization. 
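To see why grid search limits the breadth of models you can consider, a quick illustration with made-up hyperparameter values: even a modest grid multiplies into hundreds of trainings, while a sampling approach can use whatever budget you actually have. (Random search here is just a simple stand-in; sample-efficient methods like Bayesian optimization go further by choosing each point adaptively.)

```python
import itertools
import random

# Illustrative grid over four hyperparameters (values are made up).
grid = {
    "learning_rate": [1e-4, 1e-3, 1e-2, 1e-1],
    "batch_size": [32, 64, 128, 256],
    "dropout": [0.0, 0.25, 0.5],
    "num_layers": [2, 3, 4, 5],
}

# Full grid search: every combination is a separate model training.
grid_points = list(itertools.product(*grid.values()))
print(len(grid_points))  # 4 * 4 * 3 * 4 = 192 trainings

# A sampling approach covers the same space on a fixed budget.
budget = 20
samples = [{k: random.choice(v) for k, v in grid.items()} for _ in range(budget)]
```

Adding one more hyperparameter with five values pushes the grid to 960 trainings; the sampling budget stays at 20.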

Once you decide to go down the automated hyperparameter optimization path, it is important to keep in mind a few key process-related tips. First, search for learning rates and similar hyperparameters in the log space, which may require transformations. Second, follow best practices in documentation wherever available, such as testing individual hyperparameters to get a baseline before running an at-scale job that includes all hyperparameters. Finally, use the metrics and business constraints that you discovered to constrain your search space and select the one or two metrics you plan to optimize for.

These are a few of the insights our team relied upon to build our Runs functionality to track training, Experiments capabilities to automate hyperparameter optimization, and our Dashboard to visualize these jobs throughout the modeling process. 

If you’re interested in seeing the broader webinar context in which we gathered and discussed these results, watch the recording. If you want to try out the product, join our beta program for free access, execute a run to track your training, and launch an experiment to automate hyperparameter optimization.

Jim Blomo Head of Engineering
Nick Payton Head of Marketing & Partnerships