Take the Pain out of Training and Tuning

Taylor Jackle Spriggs and Barrett Williams
Advanced Optimization Techniques, Applied AI Insights, Augmented ML Workflow, Experiment Management

Modeling can be a messy ordeal. At times, it is confusing, frustrating, and disorganized. If you’re still tracking progress in spreadsheets or on a physical notepad, there’s a better option. When you want to decide what approach to take next, it’s essential to have all relevant information at your disposal. In the first post in this series, we reviewed the concept of “runs” and how tracking individual training runs can make your modeling process more successful and repeatable. In the second post, we described a common deep learning-focused workflow you might deploy, complete with dashboard and plot customization.

Today, we are tackling a different part of this tricky problem: how can you combine training and tuning in one workflow to quickly identify and select the best model? We’ll demonstrate how, with just a few lines of code, you can incorporate SigOpt’s best-in-class optimization capabilities into your training and tuning workflows.

Choosing the best model to explore

In our last post, we showed how you can visualize training runs alongside the rest of the plots on the project Analysis page. Today, we’ll begin with a model for which we’ve already logged a few manual training runs from a notebook during the experimentation phase of the modeling process. On a single plot, with model type indicated by color, we can compare the performance of two model types: an XGBoost gradient-boosted tree and a sequential Keras MLP. Given this dataset and classification problem, the XGBoost learner is the clear winner, with a cluster of promising results that score highly on both the AUPRC and F1 metrics.

Plot comparing F1 Score and AUPRC across two models.
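As a refresher on the two metrics in that plot: F1 is the harmonic mean of precision and recall, while AUPRC summarizes the entire precision-recall curve. Here is a pure-Python sketch with made-up confusion-matrix counts (the numbers are illustrative, not taken from the runs above):

```python
# Hypothetical confusion-matrix counts for a single training run
tp, fp, fn = 90, 10, 15  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # fraction of predicted positives that are correct
recall = tp / (tp + fn)     # fraction of actual positives that are found
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f1, 3))  # → 0.9 0.857 0.878
```

AUPRC requires the model’s scores across all thresholds rather than a single cutoff, which is why it is typically computed with a library helper such as scikit-learn’s average_precision_score.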

Building insights that help you find a viable parameter space

As above, it’s helpful to see which model performs better, but for that cluster of results at the top right, it’s also helpful to dig in and determine which hyperparameter regions yield it.

Linked plots with heat map, across multiple parameters.

The linked plots found on the Analysis page can help you better understand which hyperparameter values and ranges contribute to that successful cluster.

Defining the parameter search space

Having established viable ranges for max_depth (the depth of the gradient-boosted trees), log10_learning_rate, min_child_weight, and n_estimators, we can now specify the space in which we want SigOpt to optimize our model. We also tell SigOpt to maximize the AUPRC metric.

To define the parameter ranges and kick off the SigOpt experiment in the notebook, we need only add the %%experiment “cell magic,” which calls SigOpt right from the notebook. In this example, we showcase Experiment Management paired with random search, accessible to all Experiment Management beta users. You can see an example here:

Using "cell magic" to call SigOpt right from the notebook.
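Such a cell might look roughly like the following sketch. The experiment name, parameter bounds, and budget below are illustrative placeholders, and the exact configuration schema is described in the Experiment Management beta documentation:

```python
%%experiment
{
    'name': 'xgboost-auprc-tuning',  # hypothetical experiment name
    'metrics': [{'name': 'AUPRC', 'objective': 'maximize'}],
    'parameters': [
        {'name': 'max_depth', 'type': 'int', 'bounds': {'min': 2, 'max': 12}},
        {'name': 'log10_learning_rate', 'type': 'double', 'bounds': {'min': -4, 'max': 0}},
        {'name': 'min_child_weight', 'type': 'double', 'bounds': {'min': 1, 'max': 10}},
        {'name': 'n_estimators', 'type': 'int', 'bounds': {'min': 50, 'max': 500}},
    ],
    'type': 'random',  # random search, as discussed above
    'budget': 30,      # illustrative number of runs
}
```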

Then, in the notebook discussed above, simply change %%run to %%optimize to execute the SigOpt experiment, as follows:

Updating "cell magic" for smart optimization.
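The body of the cell stays the same training code you were already running; only the magic on the first line changes. A sketch follows, in which the sigopt.params accessor and sigopt.log_metric calls follow the Runs tracking API covered in the earlier posts (the data variables X_train, y_train, X_test, and y_test are stand-ins defined elsewhere in the notebook):

```python
%%optimize
from sklearn.metrics import average_precision_score, f1_score
import xgboost as xgb

# Each optimization loop, SigOpt supplies new hyperparameter values
model = xgb.XGBClassifier(
    max_depth=sigopt.params.max_depth,
    learning_rate=10 ** sigopt.params.log10_learning_rate,
    min_child_weight=sigopt.params.min_child_weight,
    n_estimators=sigopt.params.n_estimators,
)
model.fit(X_train, y_train)

# Report the metrics SigOpt is tracking for this experiment
probs = model.predict_proba(X_test)[:, 1]
sigopt.log_metric('AUPRC', average_precision_score(y_test, probs))
sigopt.log_metric('F1 Score', f1_score(y_test, model.predict(X_test)))
```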

If you implemented your model in a Python file, you can define the parameter ranges in a sigopt.yml file.

sigopt.yml lets you configure parameter bounds.
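As a rough sketch of what that file could contain (the field names and bounds here are illustrative; the beta documentation has the authoritative schema):

```yaml
# sigopt.yml -- illustrative sketch, not the exact beta schema
experiment:
  name: xgboost-auprc-tuning
  metrics:
    - name: AUPRC
      objective: maximize
  parameters:
    - name: max_depth
      type: int
      bounds:
        min: 2
        max: 12
    - name: log10_learning_rate
      type: double
      bounds:
        min: -4
        max: 0
    - name: min_child_weight
      type: double
      bounds:
        min: 1
        max: 10
    - name: n_estimators
      type: int
      bounds:
        min: 50
        max: 500
  budget: 30
```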

With just a single command, you can now launch the optimization job via the command-line interface (CLI). In this example, we’re using the Python client available with the Experiment Management beta:

Training output via Python client and SigOpt CLI.

Now that the experiment is created and running, we can switch over to the Project page in SigOpt (this view is enabled only once you’ve signed up for the Experiment Management beta here). There, you can expand the experiment to see the best run so far and its optimized metric value (in this case, again, AUPRC).

Comparison of random search and SigOpt across runs.

Additional metrics are listed below the Best Results table; other tracked metrics for this experiment include F1 Score (as seen above), as well as the dollar value of errors caused by incorrect predictions. Depending on your use case, these other metrics may be invaluable when selecting your final parameter set. (And they might even inspire you to use Metric Constraints.)

Intelligent Optimization with just a single line

Optimizing your model with random search may seem helpful at first, but SigOpt is designed to tune your model as efficiently as the latest research permits. All you need to do is remove the line type='random' in the %%experiment cell of your notebook, and intelligent optimization will be enabled for your experiment. You should then see the performance of your model improve significantly faster, again with the best results bubbling up to the top of the table.

More performance improvement shown over time in Results page.
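Concretely, the one-line change is just deleting the random-search setting from the %%experiment cell (the experiment name, budget, and parameter list below are illustrative placeholders):

```python
%%experiment
{
    'name': 'xgboost-auprc-tuning',
    'metrics': [{'name': 'AUPRC', 'objective': 'maximize'}],
    'parameters': [...],   # your existing parameter ranges, unchanged
    # 'type': 'random',    # <- removing this line enables intelligent optimization
    'budget': 30,
}
```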

You can really see the difference that intelligent optimization makes on the Analysis page. Create a new scatter plot with AUPRC on one axis and F1 Score on the other, colored by experiment. The resulting plot shows how much better the runs from our intelligently optimized experiment perform compared to random search and manual experimentation.

Comparing optimized and unoptimized experiments on the Analysis page.

Comparing across all runs to find the best found run

Once we’ve run a few experiments, we can compare runs, even across experiments, to determine whether we’ve found a winning model. To focus on the best results, we set up a filter as follows, so that we examine only the runs that yield an AUPRC value above 0.9:

Graphing runs as filtered by the table.
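If you also pull run data down programmatically, the same filter is easy to express in plain Python. A sketch with made-up run records (these are not actual SigOpt client calls, just illustrative dictionaries):

```python
# Hypothetical run records, e.g. as exported from the Runs table
runs = [
    {'name': 'run-1', 'AUPRC': 0.87, 'F1 Score': 0.82},
    {'name': 'run-2', 'AUPRC': 0.93, 'F1 Score': 0.88},
    {'name': 'run-3', 'AUPRC': 0.91, 'F1 Score': 0.90},
]

# Keep only runs with AUPRC above 0.9, best first
best = sorted(
    (r for r in runs if r['AUPRC'] > 0.9),
    key=lambda r: r['AUPRC'],
    reverse=True,
)
print([r['name'] for r in best])  # → ['run-2', 'run-3']
```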

When we switch back to view plots, we see the following:

The plot of filtered runs.

At this stage, you can return to the Runs page to view the statistics and metrics of your best run compared to prior runs, along with the final Parameter Values for your best found run:

Runs page displaying final, best found Parameter Values.

Streamlining your workflow with SigOpt Experiment Management

As you can see, SigOpt now enables you to leverage valuable information from past runs to help you select the best model and define your parameter space for further optimization. Using the same code you use to track Runs in SigOpt, you only need to add a .yml configuration file stating your parameter bounds and one line of “cell magic” in your notebook. With a one-line code change, it’s simple to use SigOpt’s intelligent optimization to search for the best hyperparameters for the model you’re training. Lastly, comparing all prior runs in a single dashboard gives you the confidence that you’ve chosen the best model for your data and your chosen metric. You can read more about Experiment Management in the documentation, or if you’d like to review prior posts in this series, you can find them here and here. If you’re interested in signing up for our Experiment Management private beta, you can do so here.

Taylor Jackle Spriggs Software Engineer
Barrett Williams Product Marketing Lead

Want more content from SigOpt? Sign up now.