Keeping track of it all: recording and organizing model training runs

Nicki Vance and Barrett Williams
Advanced Optimization Techniques, All Model Types, Augmented ML Workflow, Experiment Management, Modeling Best Practices, Training & Tuning

SigOpt’s new Experiment Management is available in a free private beta. Request access here.

More and more modeling teams are looking for reliable tools to collaborate, track their work, and reach the best models efficiently. 37% of enterprises have already launched models in production, and many more will follow. Here are a few pain points we’ve heard too many times from the modeling community.

“We learn in every iteration and in my team we have experienced DL engineers who want to start with complex architecture while others want to try simpler models, but our biggest pain point is that we don’t have a way by which we track everyone’s work and understand which approach is more viable for our problem space…”

—Global Retailer 

“While we run parallel model training jobs to reduce wall-clock time, our ability to track how the model is evolving and to compare our work is ad hoc at best. Tracking the team’s strategies and gaining visibility into parameters and model artifacts is essential to progress.”

—Global Device Manufacturer

Sound familiar?

In this post, you’ll learn how you can keep track of your machine learning progress, organize your model development efforts and use our Experiment Management capabilities, which you can now access as part of a private beta.

Project Analysis with Multiple Charts

SigOpt’s new project analysis dashboard.

Experiment Management with SigOpt

When you’re establishing a repeatable, consistent modeling process, how do you:

  • plot results to compare the models you’ve trained?
  • review the impact of a new feature?
  • find the best hyperparameters for a model?
  • access a version of your model code?
  • look up logs from past training runs?

You’d rather be discovering new research and trying out new techniques than hunting through directories to read ancient training logs or exhume checkpoint metadata.

We describe all of the tasks above as Experiment Management because you’re experimenting with your features and models, testing out hypotheses and developing an understanding of important factors. It’s a complex process.

SigOpt can help manage that process for you by:

  1. recording and storing the parameter configurations, results, and metadata of your training runs 
  2. generating visualizations for analyzing single runs and comparing many runs
  3. organizing contributions from multiple team members involved in a project

Tracking and Storing Training Runs

Our Python client now supports storing information about training runs. With a few lines added to your training code, SigOpt will receive and store:

  • Metric values (e.g. accuracy, F1 score, anything you choose)
  • Parameter values
  • Metadata
    • Model type (RNNs, CNNs, Transformers, etc.)
    • Dataset identifier
    • Any key-value pairs you create
  • Code and git hash (optional)
  • Logs

You can find more details about these fields in our documentation.
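
As a rough illustration of how these fields map onto the client, here’s a minimal sketch. The create_run and log_metric calls mirror the setup examples in the next section, while log_metadata, log_model, and log_dataset are sketched from the field list above and may not match the exact method names in the client, so check the documentation; the model and dataset names are placeholders.

import sigopt

run = sigopt.create_run(name='xgboost-1')

# metric values: anything you choose
run.log_metric('accuracy', 0.91)
run.log_metric('F1', 0.87)

# metadata: model type, dataset identifier, and any key-value pairs
# (method names assumed from the field list above)
run.log_model(type='XGBoost')
run.log_dataset(name='census-2020-q1')
run.log_metadata('notes', 'baseline with default features')

# parameter values, code, and logs can also be recorded; see the documentation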

Simple Setup

To record training run data, you’ll need to import the SigOpt library, load its notebook extension if you’re working in Jupyter, and set a name for your project.

# imports for model frameworks and other libraries

import sigopt
# load the SigOpt notebook extension (when working in Jupyter)
%load_ext sigopt
import os
# training runs created below will be stored under this project
os.environ['SIGOPT_PROJECT'] = 'run-examples'

Then create a training run record with sigopt.create_run(name='xgboost-1') so that you can send metrics, parameters, and metadata to SigOpt.

run = sigopt.create_run(name='xgboost-1')
# training run code for your model of choice
# compute the AUPRC to send to SigOpt
run.log_metric('AUPRC', AUPRC)
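
If you’d rather not manage the run’s lifecycle by hand, here’s a sketch assuming the run object supports the context-manager protocol, with hypothetical training and evaluation helpers standing in for your own code.

# sketch: assuming the run object can be used as a context manager,
# the run is closed out even if training raises an exception
with sigopt.create_run(name='xgboost-1') as run:
    model = train_xgboost(X_train, y_train)      # hypothetical helper
    AUPRC = evaluate_auprc(model, X_val, y_val)  # hypothetical helper
    run.log_metric('AUPRC', AUPRC)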


Here’s an example notebook:

IPython Notebook in which Model and Metrics are Defined

Example model code where metrics and metadata are recorded with SigOpt.

Once you’ve created a run, you can view the information you’ve sent SigOpt on the web dashboard.

Dashboard for Training Run properties, stats, and attributes.

You can see the details of a training run and how its metrics compare against the rest of the runs in your project.

If you have colleagues on your SigOpt team, they’ll be able to access this data. They can contribute to the same project or work in a separate one.

You can also use the terminal to kick off a SigOpt training run from a Python file:

Terminal example using SigOpt's Python library to train a sample model.
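
The same few lines work from a standalone script; the sketch below is a hypothetical train.py that you’d launch with python train.py, with placeholder training code and a placeholder metric value.

# train.py -- a hypothetical standalone script; run it with: python train.py
import os
import sigopt

os.environ['SIGOPT_PROJECT'] = 'run-examples'  # same project as the notebook example

run = sigopt.create_run(name='xgboost-from-terminal')  # hypothetical run name
# training code for your model of choice goes here
run.log_metric('AUPRC', 0.82)  # placeholder value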

Once you’ve run your code, SigOpt provides a powerful interface for viewing all your training run parameters, metric values, and metadata:

Custom views of your training data that you can save and share.

Sort, filter, reorder columns—build and save views of your training data.

A Model-Agnostic Approach

There are many good model frameworks out there, from logistic regression to state-of-the-art deep learning architectures. Instead of restricting your toolbox to a few frameworks, we have prioritized a framework-agnostic and model-agnostic approach. Whether you’re running a simple linear regression in scikit-learn or using Facebook’s DLRM recommender model in PyTorch, we’ve got you covered. As you can see in our documentation, it takes only a few added lines of code to call the SigOpt client and store data with SigOpt.
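
As a concrete illustration of how little changes across frameworks, here’s a sketch built around a scikit-learn logistic regression; the dataset, split, and run name are illustrative, and the SigOpt calls are the same ones shown earlier.

import sigopt
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# illustrative data and model; the two SigOpt lines are the only additions
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

run = sigopt.create_run(name='sklearn-logreg-1')  # hypothetical run name
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
run.log_metric('accuracy', accuracy_score(y_val, model.predict(X_val)))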

Tracking Training Runs with Helpful Visualizations

The metric and parameter values you store with us are used in our web application to generate visualizations for comparing models produced by each run. Here’s a quick look:

A custom set of linked plots helps you connect traits of your model’s training process to different parameters.

Connect the dots between inputs like learning rate and architectural parameters, and behaviors like overfitting, with a mouseover on any of the plots.

Screenshot of parallel coordinates chart.

Parallel coordinates charts help you observe how certain sets of parameters led to better or worse results.

Screenshot of the checkpoints chart.

Compare the training curves of multiple models over a series of epochs.

Some of these charts may appear confusing at first, which is why we allow you to customize your dashboard to match your model type and your modeling process. We’ll discuss common workflows for gradient-boosted trees, deep learning, and other model types in the second blog post in this series.

Integration with SigOpt Tuning

As you explore the problem-solution space of your project, you can, at any time, apply SigOpt’s ensemble of Bayesian optimization techniques to tune promising models, or use grid or random search to explore an area of interest. Each type of hyperparameter optimization generates a SigOpt experiment that suggests variations of that model within the defined parameter space, helping you advance your model.

Our documentation provides guidance on setting up your parameter space so that you can use the code for a single training run to launch a tuning cycle, and we’ll present this functionality in the third blog post in this series.
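
As a rough sketch of what that transition looks like, a tuning cycle defines a parameter space and then reuses the same logging code inside a loop of suggested values. The create_experiment fields, the loop method, and run.params shown here are a sketch and may differ from the interface in the beta, and train_and_evaluate is a hypothetical helper; consult the documentation for the exact setup.

import sigopt

# assumed interface for defining a parameter space and tuning budget;
# check the documentation for the exact fields
experiment = sigopt.create_experiment(
    name='xgboost-tuning',  # hypothetical experiment name
    parameters=[
        dict(name='learning_rate', type='double', bounds=dict(min=1e-3, max=0.3)),
        dict(name='max_depth', type='int', bounds=dict(min=2, max=10)),
    ],
    metrics=[dict(name='AUPRC', objective='maximize')],
    budget=20,
)

for run in experiment.loop():
    with run:
        # train with the suggested values (e.g. run.params.learning_rate),
        # then report the metric exactly as in a single training run
        AUPRC = train_and_evaluate(run.params)  # hypothetical helper
        run.log_metric('AUPRC', AUPRC)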

Looking Forward

In the future, we anticipate adding support for the following, based on customer demand:

  • Storing prediction artifacts such as images, confusion matrices, and more
  • Further chart customization
  • The ability to export charts

Join the Beta

This is our first step in establishing a foundational tool for modeling. With a little bit of setup, having all of your model attributes and metadata structured in one place makes it possible for you to visualize your work and collaborate with colleagues in the same project workspace.

We’d love for you to try out all of these new capabilities. Learn how you can join the beta here.

Use SigOpt free. Sign up today.

Nicki Vance, Software Engineer
Barrett Williams, Product Marketing Lead