ICYMI Recap: Introducing Experiment Management

Nick Payton

In this discussion, Fay Kallel, Head of Product, and Jim Blomo, Head of Engineering, joined Product Marketing Lead Barrett Williams to demo Experiment Management, the latest solution from the SigOpt team. 

SigOpt’s Experiment Management empowers any modeling team to track runs, visualize training, compare metrics, and automate tuning. During the demo, they explore how you would apply this functionality to a fraud detection use case, but it is applicable to any modeling problem and compatible with any modeling stack. For a limited time, you can get access to this solution for free by joining our private beta.

Here are a few highlights from the demo and discussion:

  • SigOpt enables machine learning operations (MLOps) for model development
  • Use the Experiment Management Runs API to track any run when developing a model in any coding environment, and transition to an intelligent tuning job using the Experiments API with just a few additional lines of code (sketched just after this list)
  • Use the Experiment Management interactive dashboard to analyze model behavior with visualizations, comparisons, parallel coordinates, and parameter importance across all training and tuning runs
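
To make the Runs API concrete, here is a minimal sketch of what instrumenting a single fraud detection training run might look like. The sigopt.log_* helper names, the synthetic dataset, and the XGBoost setup are illustrative assumptions based on the demo narration rather than verbatim product documentation, and in the demo these calls execute inside a SigOpt-managed run context (for example, the %%run notebook magic covered later):

    import sigopt                    # SigOpt Python client (pip install sigopt)
    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Stand-in data for the fraud detection use case shown in the demo
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Assumed logging helpers, mirroring the "log parameters, dataset, model,
    # and metadata" steps described in the talk
    sigopt.log_dataset("synthetic-fraud-transactions")
    sigopt.log_model("XGBoost classifier")

    params = dict(max_depth=6, learning_rate=0.1, n_estimators=200)
    sigopt.log_metadata("params", str(params))

    model = xgb.XGBClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    sigopt.log_metric("validation_auc", auc)   # surfaces in the SigOpt dashboard

Once a run is instrumented this way, the same code can be handed to SigOpt's optimizer, which is the "few additional lines" transition described above.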

And here is a more specific summary of the presentation. Click through to view any segment you missed:

  • Machine learning market trends, including that most companies are building their own models, most models fail to make it into production, and companies are productionizing models at an increasingly rapid pace (1:15)
  • Insights from modelers resulting from a survey on the primary problems in the modeling process, including the value of tracking work, hyperparameter optimization, and the need for agnostic ML tooling that accommodates any combination of modeling libraries (2:14)
  • Overview of the machine learning pipeline and ways that it can be supported with API-enabled MLOps technology (4:28)
  • Introduction to the demo use case, which trains a variety of models to address a fraud detection problem and showcases tracking runs, visualizing training, and automating hyperparameter tuning to select the best performing model (5:23)
  • Comparison of the relative performance of trained XGBoost machine learning models and Keras deep learning models, as tracked in a Jupyter notebook using SigOpt (6:45)
  • Evaluation of checkpoints for the Keras deep learning model to assess convergence using visualizations in the SigOpt dashboard (8:07)
  • Application of grid search using SigOpt for hyperparameter tuning with evaluation of initial results in the SigOpt dashboard (8:53)
  • Use of feature analysis to select a new feature set, apply it to the XGBoost model, and evaluate the new training runs in comparison to prior runs across a variety of metrics (9:52)
  • Seamless transition to automated hyperparameter tuning with Bayesian optimization applied to the XGBoost model (10:19)
  • Use of parallel coordinates and metric comparison charts to more deeply evaluate these models and understand model behavior (11:54)
  • Filtering of results in the runs table to look at a cross section of runs that have met specific performance thresholds, with the parallel coordinates and metric comparison charts updated to show only these runs (13:06)
  • User adjustment of the dashboard to exclude the parallel coordinates chart and include the checkpoint charts instead to look more deeply at specific runs (14:09)
  • Broader view into cross-team projects with a summary of their models, modeling problems, and performance across various runs (15:08)
  • Interactivity of dashboard items, including click-through highlighting across various charts according to specific runs and metrics (15:37)
  • Benefits of this approach, including a more systematic, reproducible, collaborative, and visual approach to model development that helps teams explore, understand, and advance their models (16:15)
  • First step is a simple pip install of the SigOpt Python client to start using the API, which is compatible with any modeling library, coding environment, or compute setup (18:24)
  • Second step is to instrument your model, including logging parameters, dataset, model, and any metadata you need to track through the process (see the first sketch after this list) (19:16)
  • Third step is to track a training run to begin collecting information in the SigOpt dashboard, including tracking dozens of metrics as relevant for your modeling problem (20:54)
  • Fourth step is to transition from a manual training run to an automated hyperparameter optimization job using a SigOpt experiment (21:58)
  • Simple change from %%run to %%optimize to convert the job from a single manual training run to an intelligent hyperparameter optimization experiment (see the second sketch after this list) (23:17)
  • Fifth step is to use the interactive web dashboard to evaluate these runs and experiments to better understand model behavior and performance (23:50)
  • Join the private beta, request an enterprise license, or sign up for our Academic program to get access to the product (24:44)
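
For the notebook workflow in steps one through three, here is a hedged sketch of an instrumented training cell. The %%run cell magic is the one named in the talk; %load_ext sigopt, sigopt.get_parameter, and sigopt.log_metric are assumed spellings of the client's notebook integration and should be checked against the SigOpt documentation:

    # Cell 1: after `pip install sigopt`, load the notebook integration
    # (assumed extension name)
    %load_ext sigopt

    %%run
    # Cell 2: a single tracked training run. The imports and the data split
    # (X_train, X_test, y_train, y_test) are assumed to come from earlier
    # cells. Reading hyperparameters through sigopt.get_parameter keeps this
    # cell unchanged when it is later driven by the optimizer.
    max_depth = sigopt.get_parameter("max_depth", default=6)
    learning_rate = sigopt.get_parameter("learning_rate", default=0.1)

    model = xgb.XGBClassifier(max_depth=max_depth, learning_rate=learning_rate)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    sigopt.log_metric("validation_auc", auc)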
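
For steps four through six, the talk describes the transition to automated tuning as a one-word change: the cell body stays the same and %%run becomes %%optimize. How the parameter space, metric, and budget for the tuning job are declared (for example, through the Experiments API mentioned in the highlights above) is not shown in this sketch:

    %%optimize
    # Same cell body as before: SigOpt's Bayesian optimizer now proposes the
    # hyperparameter values that sigopt.get_parameter returns on each run.
    max_depth = sigopt.get_parameter("max_depth", default=6)
    learning_rate = sigopt.get_parameter("learning_rate", default=0.1)

    model = xgb.XGBClassifier(max_depth=max_depth, learning_rate=learning_rate)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    sigopt.log_metric("validation_auc", auc)

Each optimization run then appears alongside the manual runs in the interactive dashboard described in step five.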

You can watch the recording or share it with your colleagues. If you’re interested in learning more, follow our blog or try our product. If you found Experiment Management particularly compelling, join the private beta to get free access.

SigOpt interactive dashboard: performance improvement shown over time on the Results page.

Nick Payton, Head of Marketing & Partnerships

Want more content from SigOpt? Sign up now.