5 Signs You Need to Invest in Hyperparameter Optimization

Barrett Williams
Augmented ML Workflow, Modeling Best Practices, Training & Tuning

As you build out your modeling practice, and the team necessary to support it, how will you know when you need a managed hyperparameter optimization solution to keep your team productive? When you first start to optimize your business’s fraud detection algorithm or recommender system, you can tune simpler models with easy-to-code techniques such as grid search or random search. As the parameter count in your models increases, you may find that your team is starting to test out research-oriented (typically open-source) Bayesian optimization packages. A novel approach to a growing challenge is certainly welcome, but at what point do these packages serve only the needs of individual researchers rather than the whole team? In this post, we’ll explain how you’ll know when you need a more robust optimization platform to serve your team’s needs.
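To make that starting point concrete, here is a minimal sketch of the early, easy-to-code stage using scikit-learn’s RandomizedSearchCV. The estimator, parameter ranges, and synthetic dataset are illustrative placeholders, not a recommended configuration.

```python
# Minimal sketch: tuning a simple model with random search (scikit-learn).
# The estimator, parameter ranges, and dataset here are illustrative only.
from scipy.stats import loguniform, randint
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "learning_rate": loguniform(1e-3, 3e-1),  # sample on a log scale
        "max_depth": randint(2, 8),
        "n_estimators": randint(50, 400),
    },
    n_iter=25,          # number of sampled configurations
    scoring="roc_auc",
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

This works well while the search space stays small; the rest of this post is about what happens when it no longer does.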

In the first post of this series, I wrote about the progression from manual to efficient model tuning, and in the second post I discussed specific research-oriented, open-source packages you might use to advance your modeling workflow, along with some of their strengths and shortcomings. Today, I’m going to examine a third and final question: how will you know when you and your team need a managed hyperparameter optimization solution?

You start collaborating with others on your modeling project

In isolation, you may find that each data scientist is capable of using her own optimization package, ML library, and notebook setup. But what about when two modelers on the same team want to collaborate? It’s one thing to share snippets of Python code via a notebook or a gist, but a system of record that shares up-to-the-minute progress across an entire team can help one modeler seek assistance from a peer, or let team members in multiple time zones keep working on the same model around the clock without losing progress to the overhead of sharing results by hand. With an experiment management platform that includes an interactive web dashboard, you can visualize your team’s results in ways that your peers might not have envisioned. Tracking history and organizing your work is critical if you don’t want to lose valuable hours to duplicated effort or the overhead of building your own system for sharing progress.

You need to decide which model to deploy to production

As time passes, your data science team will inevitably generate multiple versions of the same model, and very often, models with competing architectures. Without any infrastructure, peer modelers may be training against different metrics, or using visualizations that result in an apples-to-oranges comparison. To decide efficiently and effectively which models to deploy, you’ll need multiple tracked metrics for competing models, and you may need to compare training runs that track the same metric. This process can also help you decide that you need to optimize against a second metric, or switch metrics altogether. Only when you can compare your own model across its lifecycle, as well as against competing models within a project, can you realistically decide which model to deploy to production.
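Before you adopt a full platform, one lightweight way to keep comparisons apples-to-apples is to score every candidate against the same metric set. The sketch below uses scikit-learn’s cross_validate with illustrative candidate models and metrics; your own candidates, metrics, and data will differ.

```python
# Minimal sketch: evaluating competing models against the same tracked metrics.
# The candidate models and metrics below are illustrative, not a recommendation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
metrics = ["roc_auc", "average_precision"]  # same metrics for every candidate

for name, model in candidates.items():
    scores = cross_validate(model, X, y, scoring=metrics, cv=5)
    summary = {m: float(np.mean(scores[f"test_{m}"])) for m in metrics}
    print(name, summary)
```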

You develop a variety of models that require different optimization strategies

It’s important to give your data scientists the freedom to use the best libraries for the task at hand; XGBoost might make more sense for a fraud detection use case, while PyTorch might be suitable for pedestrian detection in images. With different parameter counts, depths, and architectures, you might find that grid search works for a lower parameter count, but quickly becomes infeasible as the number of parameters you need to tune grows or the time to train each model increases. Parzen-estimator-based approaches are useful strategies for large parameter spaces, while Gaussian-process-based methods are often more efficient for lower-dimensional, mixed-parameter searches. Of course, if each and every data scientist is using a different optimization package for her distinct model, you’ll suddenly have a team whose members each face different barriers to collaboration, and your MLOps or infrastructure team will have to support multiple redundant packages, often at great expense. At the optimization stage, it makes sense for whole teams to standardize on tooling that offers multiple strategies behind a single API and that is capable of optimizing any model.
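As a hedged illustration of that strategy trade-off, the sketch below runs a Tree-structured Parzen Estimator (TPE) search with the open-source Optuna package; the objective function and search space are placeholders standing in for real model training, and this is not SigOpt’s API. For lower-dimensional, mixed searches, a Gaussian-process-based tool such as scikit-optimize’s gp_minimize could play the analogous role.

```python
# Minimal sketch: a Tree-structured Parzen Estimator (TPE) search with Optuna.
# The objective below is a stand-in for real model training; in practice you
# would train your XGBoost/PyTorch model and return a validation metric.
import optuna


def objective(trial):
    # A handful of mixed parameters; real searches may include many more.
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    max_depth = trial.suggest_int("max_depth", 2, 12)
    subsample = trial.suggest_float("subsample", 0.5, 1.0)
    # Placeholder "validation loss" so the sketch runs end to end.
    return (
        (learning_rate - 0.01) ** 2
        + (max_depth - 6) ** 2 * 1e-3
        + (subsample - 0.8) ** 2
    )


study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(seed=0),  # Parzen-estimator-based strategy
)
study.optimize(objective, n_trials=50)
print(study.best_params)
```

Swapping the sampler, or the package, is exactly the kind of per-model decision that becomes hard to support once every data scientist makes it independently.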

The scale of your training and tuning jobs is becoming a bottleneck

As companies grow their data science practice, teams expand, and the number of models in production often expands as well. Large workloads begin to run up against cloud compute budget caps, and both tuning and training on cloud infrastructure become a productivity bottleneck. At the same time, for many of our customers, model performance is directly tied to revenue, for the business as a whole or at least for specific lines of business. If scale is starting to become a gating factor for your teams, it’s time to leave cobbled-together open-source optimization packages behind and switch to a managed solution. Many of the freely available optimizers break down under high experiment volume, high parallelism, or simultaneous access by too many users, so it makes more sense to leave the engineering work of scaling to a third party that specializes not only in more effective algorithmic optimization but also in scaling the underlying infrastructure.

You spend too many resources maintaining your optimizer

The need for specialization brings us to our final point: with copious resources already devoted to a growing data science team, does it make sense to also allocate headcount for the maintenance and scaling of MLOps tools? Perhaps at the scale of a 100,000-employee company there is some efficiency to be gained by designing and supporting modeling tools in house, but in our experience even larger organizations gain efficiencies from purchasing a managed solution. Building a robust optimization solution in house requires engineering hours, UI and UX design, and ongoing algorithmic research, and standing up a team responsible for building and maintaining this category of MLOps systems is uniquely challenging. A managed optimization solution, by contrast, improves through use with a wide variety of customers, enabling its infrastructure to scale adaptively and robustly to your business’s unique needs.

How to cope? Try a managed solution on for size

If your data team started out by iteratively training different models and parameter sets with grid search or random search, and you’re now noticing rising compute bills and models lingering with metrics not quite ready for deployment to production, it makes sense to try a managed solution. If you’re supporting a growing team and your infrastructure is struggling to keep up, or your data scientists now use a balkanized set of collaboration, monitoring, modeling, and optimization tools, consider SigOpt. If your needs only extend to hyperparameter optimization, reach out for a demo of our classic product. If your modeling team is looking for a robust collaboration platform, sign up for our Experiment Management beta here.

Use SigOpt free. Sign up today.

Barrett Williams, Product Marketing Lead

Want more content from SigOpt? Sign up now.