The SigOpt Academic Program

Ben Hsu
All Model Types, SigOpt 101

SigOpt was founded by academics to empower the world’s experts, and we are committed to supporting the academic community. This support takes a few forms. We publish and present research on applied optimization techniques at leading conferences, including 9 peer-reviewed papers to date. Our research team offers internships and full-time roles so that experts can keep advancing optimization and machine learning research and development. And we give researchers complimentary access to our enterprise solution through our academic program.

In this blog post, I’d like to talk more about our academic program, which supports researchers at universities and non-profits who publish peer-reviewed work and advance the frontier of scientific knowledge.

The idea for SigOpt has its roots in academia. While our CEO Scott Clark was completing his Ph.D. at Cornell, he noticed that a critical component of research was often a domain expert tweaking what they had built via trial and error. After completing his Ph.D., Scott developed the open-source package MOE to solve this problem, and used it to optimize machine learning models and A/B tests at Yelp. But an open-source package solved only half the problem: it did not provide the scalability, quality, and interface needed for broad adoption. SigOpt was founded in 2014 to bring this technology to every expert in every field by wrapping this powerful research in a simple platform and seamless interface.

As Scott discovered as a Ph.D. student at Cornell, researchers across many disciplines constantly face parameter optimization problems. Deep neural networks have complex architectures and hyperparameters that currently need to be tuned by hand. Researchers in molecular biology want to use computer programs to automate drug discovery. In materials science, free parameters arise when designing fabrication processes for optoelectronic materials. What these problems share is one or more quantitative metrics and numerous free parameters. In almost every field of study, these parameters govern the quality of the results produced, and tuning them effectively often consumes excessive time and resources. This is exactly the problem that SigOpt is built to solve.

Consistent with our mission, our optimization solution can help any expert solve any variant of this parameter optimization problem. Our product provides a black-box optimizer that bolts onto any of these problems via a simple API. Since researchers no longer need to manually tune these systems or attempt to visualize 10+ hyperparameters in their heads, they save time and can focus on the domain-specific problems of their research. Through our academic program, we provide access to our solution for free to any researcher who intends to publish peer-reviewed work.
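To give a sense of what "bolting on" looks like in practice, here is a minimal sketch of an optimization loop using the SigOpt Python client; the parameter names, bounds, and toy objective below are placeholders for illustration, not a prescription:

```python
# A minimal sketch of a SigOpt optimization loop. Parameter names,
# bounds, and the objective are illustrative placeholders.
from sigopt import Connection

def evaluate_model(assignments):
    # Placeholder objective: substitute your own model training and
    # evaluation here, returning the metric you want to maximize.
    lr = assignments["learning_rate"]
    depth = assignments["max_depth"]
    return -((lr - 0.01) ** 2) - ((depth - 6) ** 2) / 100.0

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Example hyperparameter search",
    parameters=[
        dict(name="learning_rate", type="double",
             bounds=dict(min=1e-4, max=1e-1)),
        dict(name="max_depth", type="int",
             bounds=dict(min=2, max=12)),
    ],
    observation_budget=40,
)

for _ in range(experiment.observation_budget):
    # Ask SigOpt for the next configuration to try, evaluate it,
    # and report the result back.
    suggestion = conn.experiments(experiment.id).suggestions().create()
    value = evaluate_model(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        value=value,
    )
```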

Researchers can also take advantage of our Experiment Insights Dashboard and advanced features. The Experiment Insights Dashboard serves as a “lab notebook” for researchers by facilitating analysis and reproducibility. Every single optimization experiment is logged to a visual interface, ensuring that no results are lost. Advanced features like Multimetric Optimization, Conditional Parameters, and Orchestrate are also accessible to researchers to solve the most difficult and niche research problems. Learn more about the advanced features by visiting our documentation.
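As one illustration, a Multimetric Optimization experiment trades off two competing objectives at once. The sketch below, with invented metric names and placeholder results, shows roughly how that can be expressed with the Python client:

```python
# Sketch of a Multimetric Optimization experiment. Metric names,
# bounds, and reported values are illustrative placeholders.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Accuracy vs. inference time",
    parameters=[
        dict(name="num_layers", type="int", bounds=dict(min=1, max=8)),
    ],
    # Two competing objectives; SigOpt explores the trade-off
    # (the Pareto frontier) between them.
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_time", objective="minimize"),
    ],
    observation_budget=80,
)

suggestion = conn.experiments(experiment.id).suggestions().create()
# ... train and measure using suggestion.assignments ...

# Multimetric observations report one value per metric.
conn.experiments(experiment.id).observations().create(
    suggestion=suggestion.id,
    values=[
        dict(name="accuracy", value=0.93),         # placeholder
        dict(name="inference_time", value=0.041),  # placeholder
    ],
)
```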

In the few years that we have offered this program, we are proud to report that:

  • 200+ individual researchers and research labs from around the world are using SigOpt for free
  • 25+ papers have been published using and/or citing SigOpt
  • SigOpt has been applied to a wide range of use cases across artificial intelligence, biology, materials science, and physics, including quantum physics

In an effort to inspire others and share some of the amazing work that researchers have performed with SigOpt, here are two examples:

SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties

Researchers from the Pacific Northwest National Laboratory wanted to use information encoded in the SMILES format (a standard supported by many cheminformatics software packages) to predict complex chemical properties. To do this, they developed SMILES2vec, “a deep RNN that automatically learns features from SMILES to predict chemical properties, without the need for additional explicit feature engineering.” What is novel about their use case is that they used SigOpt to tune the network topology of their neural network in addition to standard hyperparameters. To do this, they used our Conditional Parameters feature, which allows users to define structured search spaces; a sketch of such a search space follows below.
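Here is a sketch of an experiment definition using Conditional Parameters; the conditional, parameter names, and bounds are invented for illustration and are not taken from the paper:

```python
# Sketch of a conditional search space. The conditional picks a
# branch of the architecture, and branch-specific parameters are
# only active when their condition holds. All names are illustrative.
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

experiment = conn.experiments().create(
    name="Conditional architecture search",
    conditionals=[dict(name="rnn_type", values=["gru", "lstm"])],
    parameters=[
        # Only searched when rnn_type == "gru".
        dict(name="gru_units", type="int",
             bounds=dict(min=32, max=512),
             conditions=dict(rnn_type=["gru"])),
        # Only searched when rnn_type == "lstm".
        dict(name="lstm_units", type="int",
             bounds=dict(min=32, max=512),
             conditions=dict(rnn_type=["lstm"])),
        # Unconditioned parameters apply to every branch.
        dict(name="learning_rate", type="double",
             bounds=dict(min=1e-4, max=1e-1)),
    ],
    observation_budget=60,
)
```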

DeepDIVA: A Highly-Functional Python Framework for Reproducible Experiments

A group of researchers in the Document Image and Voice Analysis Group (DIVA) at the University of Fribourg recently published a paper detailing DeepDIVA, a framework for reproducing deep learning experiments. The framework incorporates key model development functionality: integration with leading deep learning frameworks, visualizations, code versioning, and hyperparameter optimization. They wanted to include hyperparameter optimization because they recognized its importance in the modeling workflow, but they did not want to build a new system from scratch. They integrated SigOpt because it proved effective at tuning a wide cross-section of models, which let them stay focused on developing new tools to solve problems in reproducibility.

If you are an academic working on peer-reviewed research and need parameter tuning, we would love to help supercharge your research with SigOpt. As part of the academic program, you’ll receive access to our complete product, including the optimization engine, enterprise platform, and Experiment Insights Dashboard. To apply, fill out this form, and we’ll be in touch soon.

Happy optimizing!

Ben Hsu, Product Manager