Habana and SigOpt, both Intel companies, collaborated with MLConf to host a webinar showing how SigOpt was used to optimize a home-grown Habana optimizer. This blog recaps the video, showing how SigOpt brings value to customers and how Habana specifically used SigOpt for their MLPerf submission.
The Habana® Gaudi® compute efficiency and integration bring new levels of price-performance to both cloud and on-premises data center customers. SigOpt is a model development platform that makes it easy to track runs, visualize training, and scale hyperparameter optimization for any type of model built with any library on any infrastructure. Habana and SigOpt, both Intel companies, recently collaborated on improving model training time and reducing the computational resources required to find the optimal hyperparameters for a model. The result was a reduction in training time for an MLPerf model beyond what grid search achieved, while using fewer Gaudi®-hours than the grid search approach. This post highlights how Habana and SigOpt achieved these results. We hope AI developers can take advantage of the cost efficiency of Habana® Gaudi® and leverage SigOpt’s hyperparameter optimization to accelerate model development on Gaudi®.
Below we break out the major portions of the webinar:
1:33 – Introduction of speakers Steve Stein, Basam Barakat, and Evelyn Ding.
3:51 – Steve provides an overview of the talk: he spends five minutes covering SigOpt and how developers use it to manage their experiments, then Habana explains how they used a home-grown optimizer and SigOpt for an MLPerf submission, and the session wraps up with Q&A.
4:23 – Modeling is messy. Design experiments the right way using the SigOpt Intelligent Experimentation Platform. Here is an expanded overview of SigOpt and the Design, Explore, Optimize Framework if you would like to learn more.
6:11 – SigOpt empowers modelers with just a few lines of code to easily connect your training environment to the SigOpt dashboard and optimizers.
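To illustrate the "few lines of code" integration pattern, here is a minimal sketch of the suggest-evaluate-report loop that hyperparameter services like SigOpt expose. The real SigOpt client requires an API token and uses its own function names; here random sampling plays the optimizer's role and the objective is a toy function, so the example runs anywhere.

```python
import random

# Stand-in for a SigOpt-style experiment loop: the optimizer suggests
# hyperparameters, the training code reports a metric back, and the best
# observed result is tracked. All names here are illustrative, not the
# actual SigOpt client API.

def suggest(space):
    """Draw one hyperparameter assignment from the search space."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in space.items()}

def train_and_score(params):
    """Placeholder training run: the score peaks at lr=0.1, batch_frac=0.5."""
    return -((params["lr"] - 0.1) ** 2) - ((params["batch_frac"] - 0.5) ** 2)

space = {"lr": (0.001, 1.0), "batch_frac": (0.1, 1.0)}
best_params, best_score = None, float("-inf")

for _ in range(50):                      # experiment budget
    params = suggest(space)              # "suggestion" from the optimizer
    score = train_and_score(params)      # run training with those params
    if score > best_score:               # "observation" reported back
        best_params, best_score = params, score

print(best_params, best_score)
```

In the real integration, the suggest and report steps are calls into SigOpt's cloud API, so the dashboard tracks every run automatically.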
7:46 – Here is how SigOpt specifically adds value to each component of Design, Explore, Optimize. This helps you better design experiments to get the most out of your experimentation process.
9:03 – SigOpt is a part of the larger Intel Stack designed to boost AI performance of your most important AI workloads.
9:55 – Introduction of Habana speakers Basam Barakat and Evelyn Ding.
11:33 – Overview of Habana’s work to optimize their submission for an MLPerf benchmark. The goal was to improve on the Habana home-grown optimizer while running fewer experiments, thus conserving compute resources.
17:07 – Overview of Habana’s “home-grown” search optimizer and their results.
21:12 – Habana’s results from optimizing their MLPerf submission with their home-grown optimizer.
22:04 – Habana chose SigOpt because its easy-to-use, cloud-based API allowed quick integration into their training system. The ability to support parallel evaluations also sped up experiment runtime. Finally, responsive customer support, with hints and code examples, made SigOpt easy to adopt.
24:14 – Here is how Habana implemented their SigOpt experiment.
25:54 – Here is a graphic showing how Habana set up their experimentation environment.
HPE Execution Diagram with SigOpt
35:22 – Habana extracted significant value from SigOpt, particularly from the dashboard and its visualizations of experiment progress and metric tracking. Below is a table comparing the SigOpt optimizer with the home-grown optimizer.
| Metric | Habana Grid Search Optimization | SigOpt |
| --- | --- | --- |
| Converge Epoch Reduction (time) | 28% | +6% |
40:31 – Questions from the webinar attendees
55:52 – Wrap-up. Thank you for reading and watching!
To try SigOpt today, you can access it for free at sigopt.com/signup.