Tuesday, December 3, 2019
10am–11am PST / 1pm–2pm EST
Summary
In this webinar, SigOpt ML Engineer Meghana Ravikumar will build an image classifier trained on the Stanford Cars dataset to evaluate two approaches to transfer learning (fine-tuning and feature extraction) and the impact of Multitask Optimization, a more efficient form of Bayesian optimization, on these techniques. Once we identify the more performant transfer learning technique for Stanford Cars, we will use image augmentation to double the size of the dataset and boost the classifier's performance. Instead of manually tuning the hyperparameters associated with image augmentation, we will use Multitask Optimization to learn them, guided by the downstream image classifier's performance. Alongside model performance, we will also explore the features of these augmented images and their downstream implications for the classifier.
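To make the two techniques concrete ahead of the webinar, here is a minimal PyTorch sketch of fine-tuning versus feature extraction, plus an augmentation pipeline whose knobs are the kind of hyperparameters Multitask Optimization would tune. The ResNet-18 backbone, the specific transforms, and the helper names below are illustrative assumptions, not the webinar's actual code.

```python
import torch.nn as nn
from torchvision import models, transforms

def build_classifier(num_classes=196, feature_extraction=False):
    """Adapt a pre-trained ResNet for Stanford Cars (196 classes).

    feature_extraction=True freezes the pre-trained backbone so only the
    new classification head is trained; False fine-tunes all weights.
    """
    model = models.resnet18(pretrained=True)  # backbone choice is illustrative
    if feature_extraction:
        for param in model.parameters():
            param.requires_grad = False  # freeze backbone weights
    # Replace the final layer; it stays trainable in both modes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def augmentation_pipeline(rotation_deg, brightness, contrast):
    """Build an augmentation pipeline from tunable knobs.

    rotation_deg, brightness, and contrast are the kind of augmentation
    hyperparameters an optimizer would search over rather than hand-tune.
    """
    return transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(rotation_deg),
        transforms.ColorJitter(brightness=brightness, contrast=contrast),
        transforms.ToTensor(),
    ])
```

In a setup like this, each candidate augmentation setting would be scored by training the classifier on the augmented data and reporting its validation accuracy back to the optimizer.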
Our goal is to draw on a rigorous set of experimental results that can help us answer the question: how can resource-constrained teams make trade-offs between efficiency and effectiveness using pre-trained models?
Tune in to hear the results of this experiment, discuss how broadly they generalize, and learn techniques for building high-performing models under time and resource constraints.
Motivation
Nearly every company faces constraints in its model development process. In this talk, we will see how leveraging efficient training and optimization techniques allows any team to build high-performing models.
We look forward to seeing you there—please sign up below:
[gravityform id="47" title="false" description="true"]