Insights from Interviewing Dozens of Modelers

Nick Payton
Deep Learning, Experiment Management, Hyperparameter Optimization, Time Series

SigOpt partnered with MLconf on a webinar that focused on practical best practices for metrics, training, and hyperparameter optimization. During this discussion, our Head of Product Fay Kallel shared a few insights she gained from interviewing dozens of modelers. In this post, we share some of these takeaways with you. 

MLOps Tools Need to Be Framework-Agnostic

A theme across most companies, big and small, was how critical it is to invest in tooling that is agnostic to whichever modeling framework an individual modeler is comfortable with or prefers for a given problem.

A few years ago, some of these companies decided to standardize on TensorFlow and built a set of tools around this package. As more modelers needed to apply frameworks like PyTorch instead, these teams came under pressure to rework their tooling to be framework-agnostic. Most of them have completed this process and now maintain a fully agnostic set of tools (though standardized on Python).

They cited two ways in which this more agnostic approach was productive for their teams. First, it allowed them to compete more effectively for AI/ML talent. Second, it allowed that talent to produce high-quality models more quickly by applying the frameworks they already knew from their studies.
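To make this concrete, here is a minimal sketch of what framework-agnostic tooling can look like: a logger that accepts plain Python floats, so the same interface serves TensorFlow and PyTorch training loops alike. The RunLogger name and interface are illustrative, not any particular product's API.

```python
# Hypothetical sketch: a framework-agnostic logging interface.
# Nothing here imports TensorFlow or PyTorch; each training loop
# passes plain Python floats, so the same tooling serves both.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class RunLogger:
    """Collects metrics as plain Python values, regardless of framework."""
    history: Dict[str, List[float]] = field(default_factory=dict)

    def log_metric(self, name: str, value: float) -> None:
        # float() accepts plain numbers and NumPy scalars; framework
        # tensors are converted upstream (loss.item() in PyTorch,
        # loss.numpy() in TensorFlow) before they reach the logger.
        self.history.setdefault(name, []).append(float(value))


# A PyTorch loop would call: logger.log_metric("loss", loss.item())
# A TensorFlow loop would call: logger.log_metric("loss", loss.numpy())
logger = RunLogger()
logger.log_metric("loss", 0.42)
```

Keeping the boundary at plain Python values is what lets a team swap frameworks without reworking the tooling behind it.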

Collaboration on Modeling Projects Takes Too Much Effort

Most companies noted how important it was to treat modeling like a team sport. Whether defining modeling problems, translating modeling metrics to business outcomes, gut-checking baselines, or comparing model performance, getting the perspective of others tends to improve the result.

But many also complained that facilitating this type of collaboration was painfully manual. Between logging metrics in a shared document and explaining runs over email, teams employed many ad hoc methods to pull other team members into their projects. Because it took so much additional effort, collaboration was often deprioritized, even though teams knew it would create better modeling outcomes.

As a result, most modelers expressed interest in tooling that automates some of these tedious tasks, like tracking runs.
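As an illustration, here is a minimal sketch of what automating run tracking might look like: a context manager that records parameters, metrics, and wall-clock time to a shared directory teammates can browse instead of trading emails. The tracked_run helper and its file layout are hypothetical, not a real product API.

```python
# A minimal sketch of automated run tracking, assuming a shared
# directory teammates can read; all names here are hypothetical.

import json
import time
import uuid
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def tracked_run(params: dict, out_dir: str = "runs"):
    # Each run gets a short unique id and a record of its inputs.
    record = {"id": uuid.uuid4().hex[:8], "params": params, "metrics": {}}
    start = time.time()
    try:
        yield record  # training code fills record["metrics"]
    finally:
        # Always persist the run, even if training raised an error.
        record["wall_clock_s"] = round(time.time() - start, 2)
        path = Path(out_dir)
        path.mkdir(exist_ok=True)
        (path / f"{record['id']}.json").write_text(json.dumps(record, indent=2))


# Usage: every run is logged the same way, with no manual write-ups.
with tracked_run({"lr": 3e-4, "batch_size": 64}) as run:
    run["metrics"]["val_accuracy"] = 0.91  # stand-in for a real training loop
```

Even this small amount of automation replaces the document-and-email workflow with artifacts anyone on the team can inspect.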

Hyperparameter Selection Is Critical, but Painful, in Deep Learning

All modelers discussed the importance of training and tuning models to reach the level of performance their projects required, and singled out hyperparameter optimization as critical for deep learning.

Although data-related tasks remain the most time-intensive part of getting this process right, many also cited practical challenges that make hyperparameter tuning particularly cumbersome. Some pointed to the DevOps-like work of managing machines so that hyperparameter optimization can run in parallel. Others noted that the synchronous nature of certain optimizers extends wall-clock time to the point where these jobs are not worth pursuing. And many still cited the pain of methods like grid search.

Some of these teams were experimenting with or already using more sample-efficient techniques like Bayesian optimization, especially if they had in-house DevOps teams to manage job parallelization. Others were still hacking it together themselves, using grid search where possible and testing other libraries where necessary.
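For readers who want to see the difference in practice, here is a minimal sketch of sample-efficient tuning using scikit-optimize's gp_minimize, one open-source implementation of Bayesian optimization. The toy objective and parameter names are stand-ins for a real training-and-validation loop, not a recommendation of any team's setup.

```python
# A sketch of sample-efficient tuning with Bayesian optimization,
# using scikit-optimize (pip install scikit-optimize). The toy
# objective stands in for "train a model, return validation loss".

from skopt import gp_minimize
from skopt.space import Integer, Real


def objective(params):
    lr, num_layers = params
    # Hypothetical loss surface with a minimum near lr=0.01, 4 layers.
    return (lr - 0.01) ** 2 + 0.05 * abs(num_layers - 4)


search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(1, 8, name="num_layers"),
]

# 25 evaluations total, versus the hundreds a fine grid would need;
# each call proposes the next point from a Gaussian-process surrogate.
result = gp_minimize(objective, search_space, n_calls=25, random_state=0)
print("best params:", result.x, "best loss:", result.fun)
```

The contrast with grid search is the point: instead of exhaustively covering a fixed grid, the optimizer spends each expensive training run where the surrogate model expects the most improvement.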

Take the Next Step

These are a few of the insights our team relied upon to build our Runs functionality to track training, Experiments capabilities to automate hyperparameter optimization, and our Dashboard to visualize these jobs throughout the modeling process. 

If you’re interested in seeing the broader webinar context in which we gathered and discussed these results, watch the recording. If you want to try out the product, join our beta program for free access, execute a run to track your training, and launch an experiment to automate hyperparameter optimization.

Nick Payton, Head of Marketing & Partnerships