Finding your way to the finish line: model comparisons and visualizations

Simon Howey and Barrett Williams
Advanced Optimization Techniques, Company news, Training & Tuning

If you’re a SigOpt veteran, you’ll know that we provide best-in-class optimization for a wide variety of machine learning models. If you’re new, welcome! You’ve arrived just in time to see the modeling process through the eyes of a successful, efficient modeler. SigOpt just added a host of features that help you get better insights into your experiments and projects. You can sign up for beta access to Experiment Management here.

In the first blog post in this series, we explained how to add Runs to your SigOpt experiment. Today we’ll present an ideal workflow for training a model, complete with an explanation of how to use the various visualization and dashboarding tools to supercharge your modeling process.

Single Model Comparisons

It’s rare that a data scientist can pull a model from a research paper, or from a teammate who already solved a different problem, and apply it directly to a new use case. The typical model goes through many iterations before it gets deployed in production. These iterations can include dataset expansion, feature engineering, architecture adjustments, hyperparameter tuning, and more. But many data scientists are still using paper notes, log files, or READMEs in GitHub repositories as a system of record for their modeling process, and even log files are difficult to parse and compare efficiently. With Experiment Management, you can directly compare metrics and parameters from different training runs of the same model.
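To make that concrete, here’s a minimal sketch of what recording a single training run might look like with the sigopt Python client, in the spirit of the Runs workflow from the first post. The method names (`sigopt.create_run`, `run.log_parameter`, `run.log_metric`) and the scikit-learn model are illustrative assumptions; consult the SigOpt docs for the exact API in your client version.

```python
import sigopt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assumed Runs API: open a run, log the parameters you trained with,
# then log the metrics you care about so they appear on the dashboard.
with sigopt.create_run(name="gbm-baseline") as run:
    params = {"max_depth": 4, "learning_rate": 0.1, "n_estimators": 200}
    for name, value in params.items():
        run.log_parameter(name, value)

    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]

    run.log_metric("auprc", average_precision_score(y_test, scores))
```

Once a handful of runs like this are recorded, the comparisons described below happen on the dashboard rather than in scattered notes or log files.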

In real-world projects, many competing metrics can make it difficult to compare runs of a single model and understand their relative performance. This problem only grows as you train and tune many models and start to collaborate with co-workers. That’s why we designed Metric Management: it gives you better insight into performance during the iterative development phase of model building and helps you choose which models to optimize, and when, based on the metrics and visualizations available to you after just a few training runs.

Single Model Visualizations

SigOpt’s Runs functionality gives you the ability to quickly compare and analyze past training runs. It’s important to be able to quickly sort and filter past results, because sometimes a large subset of runs won’t converge or won’t meet the business objectives you’ve identified for the modeling problem at hand. For example, you might only be interested in results with AUPRC values above 0.9 and a false positive incidence of less than 50%. If that’s the case, you can filter training runs, and their corresponding plots will automatically update, as in the following screen recording:

As you can see, tables make it fast to drill down to a useful subset of your runs: you can sort by whatever metric or parameter you choose, then save your view for later use or share it with a collaborator. However, a table rarely tells the whole story: it’s often difficult to understand the relationships between the metrics and parameters you’ve chosen to include. That’s why visualizations can be configured to update in real time based on your selection, helping you quickly make sense of high-dimensional data.

In our experience across a variety of models from a wide range of industries, different visualization types serve different purposes. For example, the parallel coordinates chart quickly shows high-level patterns in the data over many dimensions. You can identify clusters of successes and failures, or determine that a certain parameter has no discernible impact on any positive or negative outcome for your model. On the other hand, scatter plots are useful for establishing the relationship between any two variables you choose. A grid of adjacent scatter plots lets you focus on only the important dimensions, while leaving the rest out of the picture, so to speak. We linked our plots so that you can mouse over results to see how they perform or compare across multiple dimensions simultaneously. You can see how the plots link in the following video:

With Experiment Management, we not only provide a comprehensive set of charts to help you understand your model as you iteratively adjust, train, and tune it, but we also went to great lengths to ensure that the tools you use are coherently linked. Seeing the relationships among filtered runs, metrics, and parameter values across multiple dimensions, whether in scatter plots or parallel coordinates, is essential to understanding the behavior of your best model thus far. With SigOpt’s new ability to track Runs, Experiment Management doesn’t miss a beat: you can track progress across training runs automatically, with just a few lines of code, rather than only tracking observations that you choose to report (often at intervals of many epochs). We are confident that tracking runs will help you make sense of your deep learning models at a much more granular level.
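As a rough illustration of those “few lines of code,” the sketch below simulates a training loop that records progress at every epoch rather than only a final observation. The `run.log_checkpoint` call and the simulated `train_one_epoch` function are assumptions for illustration; check the SigOpt docs for the current checkpoint API in your client version.

```python
import math

import sigopt

def train_one_epoch(epoch):
    # Stand-in for your real training step; returns simulated losses
    # that decay as training progresses.
    return math.exp(-0.3 * epoch), math.exp(-0.25 * epoch) + 0.05

# Assumed Runs API: log_checkpoint records intermediate values each epoch,
# log_metric records the final value once training finishes.
with sigopt.create_run(name="training-progress-demo") as run:
    run.log_parameter("epochs", 10)
    for epoch in range(10):
        train_loss, val_loss = train_one_epoch(epoch)
        run.log_checkpoint({"train_loss": train_loss, "val_loss": val_loss})
    run.log_metric("final_val_loss", val_loss)
```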

Zooming Out to Visually Compare Model Types

Tracking all of the details of your modeling process becomes substantially more painful when you have multiple co-workers contributing to the same modeling project. And as time passes, models often gain additional features, parameters, and metrics, which can make it difficult to compare a newer model to an older one. That’s why we made it easy to compare models within projects, including different model types:

You can see the value of being able to adaptively sort and filter, then do it over again based on new criteria; that way you can see your model from every possible angle before determining that it’s ready for production. For deep learning models in particular, training can get quite expensive, so deciding which models to tune is an important decision with real financial consequences. With multiple tracked or even optimized metrics, it’s not always clear that there is one best set of parameters for all cases, so understanding the tradeoffs between those metrics is important before you decide which model to put into production.
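To make the tradeoff point concrete, here’s a small, generic sketch (plain Python, not a SigOpt API) of how you might identify which runs are Pareto-optimal across two metrics, say accuracy (higher is better) and inference latency (lower is better); the run names and numbers are made up. Any run not on this frontier is beaten by another run on both axes, so the frontier is where the real production decision gets made.

```python
# Generic illustration: find runs that are Pareto-optimal over two metrics.
# Accuracy should be maximized and latency_ms minimized; values are invented.
runs = [
    {"name": "run-a", "accuracy": 0.91, "latency_ms": 45},
    {"name": "run-b", "accuracy": 0.93, "latency_ms": 80},
    {"name": "run-c", "accuracy": 0.89, "latency_ms": 30},
    {"name": "run-d", "accuracy": 0.90, "latency_ms": 60},  # dominated by run-a
]

def dominates(a, b):
    """True if run a is at least as good as b on both metrics and strictly better on one."""
    return (
        a["accuracy"] >= b["accuracy"]
        and a["latency_ms"] <= b["latency_ms"]
        and (a["accuracy"] > b["accuracy"] or a["latency_ms"] < b["latency_ms"])
    )

pareto_front = [
    r for r in runs
    if not any(dominates(other, r) for other in runs if other is not r)
]
print([r["name"] for r in pareto_front])  # ['run-a', 'run-b', 'run-c']
```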

Helpful Collaboration Techniques

Once you’ve trained a few models within your project, you may find that you want to create custom plots to share with a colleague, stakeholder, or manager. You can create custom sets of linked scatter plots using the Widget Creator, as follows:

Once you’ve created your custom set, you can then save or update it, as follows:

Now you’re ready to share your experiment, complete with the custom plot sets you’ve created, via URL with a collaborator. We’ll go into greater detail on some best practices around collaboration in our third and final blog post in this series.

While projects help you organize the modeling process, you can also provide segmented access via Teams in the SigOpt admin dashboard. Individually, these features are useful, but together they level up your modeling experience, eliminating wasted time as you bring more and more models to production. In turn, organizing your modeling projects and workflow this way lends itself to a more collaborative style of modeling.

Join the Beta

With a little bit of setup, having all of your model attributes and metadata structured in one place makes it possible for you to visualize your work and collaborate with colleagues in the same project workspace. We’d love for you to try out all of these new capabilities. Learn how you can join the beta here. If you’d like to head back to the first, introductory post in this series, you can find it here. If you’re interested in learning more about Experiment Management from SigOpt’s Head of Product, sign up for the webinar here.

Use SigOpt free. Sign up today.

Simon Howey Software Engineer
Barrett Williams Product Marketing Lead