Dashboard: Organize, Visualize, Compare Models

Nick Payton and Tobias Andreasen
Deep Learning, Experiment Management, Hyperparameter Optimization, Machine Learning

Once you start using SigOpt Runs to track training runs and organize your modeling attributes, you can take advantage of the SigOpt Dashboard to analyze your models and organize your projects.

The SigOpt Dashboard automatically populates a full history of your Runs and Experiments, including visualizations of your training runs, plots comparing metrics, and analytics like parameter importance to help you understand model behavior. No action is required on your part beyond logging into the Dashboard and making sense of your models. Here are a few simple tips and tricks to help you get the most out of it.

Create a Project

When you start a modeling process, create a Project to organize all of your Runs and Experiments in a single place and make collaboration with your colleagues much easier.
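One way to associate your work with a Project from the command line is via an environment variable, to the best of our understanding of the SigOpt client. This is a sketch, not the only workflow, and the project name below is a placeholder:

```shell
# Hypothetical setup: "churn-model" is a placeholder project name.
# The SigOpt client reads the SIGOPT_PROJECT environment variable and
# files subsequent Runs and Experiments under that Project.
export SIGOPT_PROJECT=churn-model
```

Any Runs or Experiments you then execute in this shell should appear under that Project in the Dashboard.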

Introduce Checkpoints

If you are working on a language, vision, or other deep learning model, it is often useful to create checkpoints to analyze model convergence within any given training run. Here is an example of checkpoints in the SigOpt Dashboard:
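In code, logging checkpoints from a training loop might look like the following sketch. The `sigopt.log_checkpoint` call reflects our understanding of the SigOpt Python client; the loss curve is a stand-in for real training, and the import is guarded so the snippet runs even without the library installed:

```python
try:
    import sigopt  # guarded so the sketch runs without the library installed
except ImportError:
    sigopt = None

def train_epoch(epoch):
    """Stand-in for one epoch of training; returns a geometrically decaying loss."""
    return 0.8 ** epoch

losses = []
for epoch in range(10):
    loss = train_epoch(epoch)
    losses.append(loss)
    if sigopt is not None:
        # Each checkpoint records the metric values at this point in training,
        # which the Dashboard renders as a convergence curve for the run.
        sigopt.log_checkpoint({"train_loss": loss})
```

Checkpoints recorded this way are what power the per-run convergence plots shown in the Dashboard.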

Compare Many Models

Rather than executing a single run for a single model, we always encourage you to execute many runs across a variety of models for more robust comparisons. Here is an example of a comparison of many models in the SigOpt Dashboard:
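A sketch of what "many runs across a variety of models" might look like in code follows. The `sigopt.create_run` and `run.log_metric` calls are based on our reading of the SigOpt Python client; the model names and scores are stand-ins, and the import is guarded so the snippet runs without the library:

```python
try:
    import sigopt  # guarded so the sketch runs without the library installed
except ImportError:
    sigopt = None

# Stand-in validation scores; in practice these come from real training.
BASELINE_SCORES = {
    "logistic_regression": 0.81,
    "random_forest": 0.88,
    "xgboost": 0.90,
}

def evaluate(model_name):
    """Stand-in for training a model and computing validation accuracy."""
    return BASELINE_SCORES[model_name]

results = {}
for model_name in BASELINE_SCORES:
    accuracy = evaluate(model_name)
    results[model_name] = accuracy
    if sigopt is not None:
        # One tracked run per model family, so the Dashboard can compare them.
        with sigopt.create_run(name=f"baseline-{model_name}") as run:
            run.log_metadata("model_family", model_name)
            run.log_metric("accuracy", accuracy)
```

With one run per model family, the Dashboard's comparison views can plot all of them side by side.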

Track Many Metrics

Within SigOpt Metric Strategy, you can store up to 50 metrics, apply up to 4 of them as constraints, and optimize 2 metrics at once. For a robust modeling process, we recommend using these features to explore a wide variety of metrics for any given problem. As you execute Runs and Experiments, these metrics create opportunities for deeper comparison and analysis in the Dashboard. Here is an example of setting a metric as a constraint in SigOpt:
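A hypothetical experiment configuration illustrating the three metric strategies might look like the fragment below. The field names reflect our understanding of SigOpt's experiment configuration format; the metric names, threshold, bounds, and budget are placeholders, so check the SigOpt documentation before using it:

```yaml
name: metric-strategy-example
metrics:
  - name: accuracy            # optimized (up to 2 metrics may be optimized)
    strategy: optimize
    objective: maximize
  - name: inference_time_ms   # constraint: keep latency at or below 50 ms
    strategy: constraint
    objective: minimize
    threshold: 50
  - name: model_size_mb       # stored for analysis only, not optimized
    strategy: store
parameters:
  - name: learning_rate
    type: double
    bounds:
      min: 0.00001
      max: 0.1
    transformation: log
budget: 30
```

Stored metrics still show up in the Dashboard's plots and comparisons, which is what makes logging more than just the optimized metric worthwhile.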

Every run you create is an opportunity to learn more about your modeling process, and also to compare new models against existing ones. This is why the SigOpt Dashboard is built with the flexibility to create the visualizations you need to compare runs and models and make your modeling decisions.

Understand your Modeling Problem Space

As you execute Runs and Experiments, SigOpt populates analytics that help you more deeply understand your modeling problem space, such as parallel coordinates and parameter importance. Here is an example of parallel coordinates within the SigOpt Dashboard:

The SigOpt web dashboard gives you the information you need to feel confident in your modeling. If you are interested in learning more about our visualizations, check out this blog post – Finding your way to the finish line: Model comparisons and visualizations.

If you want to try out the product, sign up for free access, execute a run to track your training, and launch an experiment to automate hyperparameter optimization. If you want to learn more about our products, track industry news, and hear from our research team, follow our blog or subscribe to our YouTube channel.

Nick Payton, Head of Marketing & Partnerships
Tobias Andreasen, Machine Learning Specialist
