SigOpt partnered with MLconf on a webinar focused on practical best practices for metrics, model training, and hyperparameter optimization. During the discussion, our Head of Engineering Jim Blomo shared a few of these practices. In this post, we build on his thoughts with practical recommendations for using metrics throughout your machine learning process to see the bigger modeling picture.
Connect machine learning metrics to business outcomes
Traditional AI/ML metrics like precision, recall, accuracy, and AUC are necessary but not sufficient. You also need a metric discovery process that connects machine learning metrics to business metrics and, at times, introduces new metrics to meet business needs. This step is critical for machine learning to deliver real business value rather than isolated, academic success.
See how Jim speaks to the value of this metric discovery process and gives an example of how it might proceed in a fraud detection case.
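To make this concrete, here is a small, hypothetical sketch of what a discovered business metric might look like in a fraud detection setting: translating a model's confusion matrix into an expected-dollar-cost figure that sits alongside the usual ML metrics. The cost figures, sample labels, and the fraud_cost_metric helper are illustrative assumptions, not values from the webinar.

```python
# Hypothetical metric discovery for fraud detection: convert a confusion
# matrix into a business-facing cost metric (expected dollars lost) and
# track it next to standard ML metrics. All dollar figures are made up.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

def fraud_cost_metric(y_true, y_pred, avg_fraud_loss=500.0, review_cost=25.0):
    """Estimate dollars lost: missed fraud costs the full loss amount,
    and every flagged transaction incurs a manual review cost."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    missed_fraud_loss = fn * avg_fraud_loss   # fraud the model failed to catch
    review_spend = (tp + fp) * review_cost    # analyst time on flagged cases
    return missed_fraud_loss + review_spend

# Report the business metric alongside the usual ML metrics
y_true = [0, 0, 1, 1, 0, 1, 0, 0]
y_pred = [0, 1, 1, 0, 0, 1, 0, 0]
metrics = {
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "expected_fraud_cost": fraud_cost_metric(y_true, y_pred),
}
print(metrics)
```

A metric like expected_fraud_cost gives business stakeholders a number they can reason about directly, while precision and recall remain available for the modeling team.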
Use many metrics throughout the modeling process
Along with discovering and, at times, computing metrics that connect machine learning to business needs, it is critical to rely on multiple metrics rather than a single one. To avoid the single-metric fallacy, track and evaluate a wide variety of these metrics throughout the machine learning process so you can balance often competing tradeoffs. Tools like SigOpt make it easy to track up to 50 metrics automatically as you work through your machine learning process.
See how Jim applies SigOpt to log metrics in a notebook example.
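As a rough illustration of tracking many metrics per training run, here is a minimal sketch assuming the sigopt Python client's Runs interface (sigopt.create_run and run.log_metric); check the SigOpt documentation for the exact calls in your client version. The metric names and values are placeholders.

```python
# A minimal sketch of logging several metrics for a single training run,
# assuming the sigopt Runs interface; values below are placeholders.
import sigopt

with sigopt.create_run(name="fraud-model-baseline") as run:
    # ... train and evaluate your model here ...
    # Log every metric you care about, not just the one you optimize.
    run.log_metric("accuracy", 0.94)
    run.log_metric("precision", 0.88)
    run.log_metric("recall", 0.71)
    run.log_metric("auc", 0.91)
    run.log_metric("inference_time_ms", 12.3)
    run.log_metric("expected_fraud_cost", 18250.0)
```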
Consistently, transparently, and automatically log your metrics
Log all metrics in a central place to introduce transparency to both the modeling and business teams. A centralized system of record makes it easy to run analyses and comparisons and, ultimately, to make decisions based on these metrics and how they change across data-feature-architecture-hyperparameter combinations. This type of rigor facilitates deeper discussions between machine learning and business teams and, ultimately, creates better outcomes.
See Jim discuss the types of decisions and discussions this type of transparency creates.
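One way to build that system of record is to log the data, architecture, hyperparameters, and provenance next to the metrics for every run. The sketch below assumes the sigopt Runs calls log_dataset, log_model, log_parameter, log_metadata, and log_metric; verify the names against the SigOpt documentation, and treat all logged values as placeholders.

```python
# A sketch of capturing the full data-feature-architecture-hyperparameter
# context alongside metrics, so every run lands in one queryable record.
# All names and values below are illustrative placeholders.
import sigopt

with sigopt.create_run(name="fraud-gbm-week-42") as run:
    run.log_dataset("transactions_2021_q3")      # which data snapshot
    run.log_model("gradient-boosted-trees")      # which architecture
    run.log_parameter("learning_rate", 0.05)     # hyperparameters
    run.log_parameter("max_depth", 6)
    run.log_metadata("git_commit", "abc1234")    # provenance for comparisons
    # ... training ...
    run.log_metric("auc", 0.92)
    run.log_metric("expected_fraud_cost", 16800.0)
```

With this context attached, comparing runs across teams becomes a query against one record rather than a hunt through notebooks and spreadsheets.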
Apply metrics to meet your particular machine learning needs
It may make sense to track certain secondary metrics so you can see how they behave without orienting your optimization to maximize or minimize them. It can also be useful to apply certain metrics as constraints to reflect the requirements of your modeling problem. Finally, it always makes sense to define one or two primary metrics that training and hyperparameter optimization should maximize or minimize. SigOpt makes it easy to track up to 50 metrics, apply up to 4 metrics as constraints, and optimize across multiple metrics at once.
See Jim set up an experiment to automate hyperparameter optimization.
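For reference, the sketch below shows how the three roles a metric can play (stored, constrained, optimized) might be expressed when creating an experiment with the SigOpt Core Connection API. The strategy and threshold fields, parameter bounds, threshold value, and the train_and_eval stub are assumptions for illustration; confirm the exact experiment schema in the SigOpt documentation.

```python
# A sketch of defining stored, constrained, and optimized metrics in one
# experiment, assuming the SigOpt Core Connection API and its metric
# "strategy"/"threshold" fields. Bounds, thresholds, and the training
# stub below are hypothetical.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="fraud-model-hpo",
    parameters=[
        {"name": "learning_rate", "type": "double", "bounds": {"min": 1e-4, "max": 1e-1}},
        {"name": "max_depth", "type": "int", "bounds": {"min": 3, "max": 10}},
    ],
    metrics=[
        # Primary metric: what the optimizer maximizes
        {"name": "auc", "objective": "maximize", "strategy": "optimize"},
        # Constraint metric: keep inference latency under a threshold
        {"name": "inference_time_ms", "objective": "minimize",
         "strategy": "constraint", "threshold": 50},
        # Stored metric: tracked for visibility, not optimized
        {"name": "train_minutes", "strategy": "store"},
    ],
    observation_budget=30,
)

def train_and_eval(assignments):
    # Placeholder for your training code; return a value for every metric above.
    return {"auc": 0.9, "inference_time_ms": 42.0, "train_minutes": 8.5}

for _ in range(experiment.observation_budget):
    suggestion = conn.experiments(experiment.id).suggestions().create()
    results = train_and_eval(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[{"name": k, "value": v} for k, v in results.items()],
    )
```

Splitting metrics this way keeps the optimizer focused on one or two priorities while constraints and stored metrics keep the rest of the picture visible.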
These are a few of the insights our team relied on when building Runs to track training, Experiments to automate hyperparameter optimization, and our Dashboard to visualize these jobs throughout the modeling process.
If you’re interested in seeing the broader webinar context in which we gathered and discussed these results, watch the recording. If you want to try out the product, join our beta program for free access, execute a run to track your training, and launch an experiment to automate hyperparameter optimization.
Use SigOpt free. Sign up today.