The Most Advanced Optimization Solution for Deep Learning
Deep neural networks are highly effective at solving problems across a wide range of use cases, from understanding images to interpreting language to automatically recommending similar products. Tuning the hyperparameters of these models is crucial to their success, but is difficult because the models have many hyperparameters and long training times.
SigOpt’s optimization solution enables deep learning engineers to effectively tune their models and keep track of important metadata during the iterative model development process.
Effective Hyperparameter Tuning for Deep Neural Networks
SigOpt’s optimization solution was built with deep learning in mind, and we regularly benchmark our optimization techniques on deep learning surrogate functions. Traditional methods like manual tuning, grid search, and random search slow down experimentation, achieve subpar results, and waste computational resources. Support for mixed parameter types and seamless integration with all deep learning frameworks mean that our optimization solution easily bolts on top of your model.
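The "bolts on top of your model" pattern is a suggest-and-observe loop: the optimizer proposes hyperparameter assignments, your training code reports the resulting metric, and the optimizer uses that feedback to propose the next point. The sketch below shows the shape of that loop; the function names, the toy objective, and the random sampler are illustrative stand-ins, not SigOpt's actual client API.

```python
import random

def train_and_evaluate(assignments):
    # Placeholder objective: in a real workflow this would train the
    # network with the suggested hyperparameters and return a
    # validation metric. (Hypothetical helper for illustration.)
    lr, batch = assignments["learning_rate"], assignments["batch_size"]
    return -(lr - 0.01) ** 2 - (batch - 64) ** 2 / 1e4

def suggest(space, rng):
    # An adaptive optimizer would propose points based on past
    # observations; uniform random sampling stands in here so the
    # loop runs end to end.
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}

space = {"learning_rate": (1e-5, 1e-1), "batch_size": (16, 256)}
rng = random.Random(0)
best = None
for _ in range(20):
    assignments = suggest(space, rng)          # optimizer proposes a point
    value = train_and_evaluate(assignments)    # user code evaluates it
    if best is None or value > best[1]:        # keep the best observation
        best = (assignments, value)
```

Because the optimizer only sees assignments in and metric values out, the same loop wraps any framework's training code unchanged.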

Advanced Features for Deep Learning
The SigOpt optimization engine includes numerous advanced features specifically designed to tune deep learning models.
- Black-box Constraints let you avoid out-of-memory errors and model failures while tuning neural networks.
- Conditional Parameters let practitioners automate the tuning of architecture choices, data transformations, and other structured search spaces.
- Multimetric Optimization explores two distinct metrics simultaneously, such as maximizing accuracy while minimizing model complexity.
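As a rough illustration of how these features might combine in one experiment definition, the configuration below declares mixed parameter types, two optimized metrics, and a constraint metric guarding against out-of-memory failures. Treat the field names as an assumed schema for illustration, not a guaranteed one.

```python
# Illustrative experiment configuration (assumed field names):
# mixed parameter types, two optimized metrics, and a constraint metric.
experiment_config = {
    "name": "CNN architecture search",
    "parameters": [
        {"name": "learning_rate", "type": "double",
         "bounds": {"min": 1e-5, "max": 1e-1}},
        {"name": "num_filters", "type": "int",
         "bounds": {"min": 16, "max": 256}},
        {"name": "activation", "type": "categorical",
         "categorical_values": ["relu", "tanh"]},
    ],
    "metrics": [
        # Multimetric optimization: explore the accuracy/latency trade-off.
        {"name": "accuracy", "objective": "maximize"},
        {"name": "inference_latency_ms", "objective": "minimize"},
        # Constraint metric: reject configurations that exhaust GPU memory.
        {"name": "gpu_memory_gb", "strategy": "constraint", "threshold": 16},
    ],
    "observation_budget": 100,
}
```

A single declaration like this is what lets the optimizer reason jointly about the search space, the objectives, and the feasibility constraints.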
Track and Analyze Experiments
Experiment reproducibility is one of the great challenges in deep learning. As modelers explore new datasets, architectures, and training techniques, it is important to track the exact hyperparameters (and other settings) that led to a given result. Our solution includes an Experiment Insights dashboard that ensures every model trained and every experiment run is tracked and available for analysis.
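To make the reproducibility requirement concrete, here is a minimal sketch of run tracking using a local JSON-lines log; it is a stand-in for a hosted dashboard, and the function name and record fields are hypothetical.

```python
import hashlib
import json
import time

def record_run(assignments, metrics, metadata, path):
    """Append one training run to a JSON-lines log so every result can
    be traced back to its exact hyperparameters and settings.
    (Illustrative local logging; a hosted dashboard replaces this.)"""
    entry = {
        "timestamp": time.time(),
        "assignments": assignments,  # the exact hyperparameters used
        "metrics": metrics,          # resulting metric values
        "metadata": metadata,        # e.g. git commit, dataset version
    }
    # Deterministic short id derived from the run's contents.
    digest = hashlib.sha1(json.dumps(entry, sort_keys=True).encode())
    entry["run_id"] = digest.hexdigest()[:12]
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["run_id"]
```

Logging the settings alongside the metrics, rather than in a separate notebook or spreadsheet, is what makes a result reproducible months later.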
