Multimetric Updates in the Experiment Insights Dashboard
Multimetric Optimization enables you to intelligently weigh tradeoffs between competing objectives by discovering a frontier of the best possible models across those objectives.
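As a rough sketch of what a multimetric experiment looks like with the SigOpt Python client (the metric names, parameter ranges, and the train_and_evaluate helper below are illustrative assumptions, not code from the dashboard post):

```python
from sigopt import Connection

conn = Connection(client_token="YOUR_API_TOKEN")

# Two competing metrics: SigOpt searches for the frontier of their
# tradeoff rather than a single best point.
experiment = conn.experiments().create(
    name="Accuracy vs. inference time",
    parameters=[
        dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
        dict(name="num_layers", type="int", bounds=dict(min=1, max=8)),
    ],
    metrics=[
        dict(name="accuracy", objective="maximize"),
        dict(name="inference_time", objective="minimize"),
    ],
    observation_budget=60,
)

while experiment.progress.observation_count < experiment.observation_budget:
    suggestion = conn.experiments(experiment.id).suggestions().create()
    # train_and_evaluate is a hypothetical user-supplied function.
    accuracy, inference_time = train_and_evaluate(suggestion.assignments)
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id,
        values=[
            dict(name="accuracy", value=accuracy),
            dict(name="inference_time", value=inference_time),
        ],
    )
    experiment = conn.experiments(experiment.id).fetch()
```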
Academic Interview with the DIVA team from the University of Fribourg
The NeurIPS conference is being held in Montreal this year, where leaders in the community will present innovative research across tutorials, the main conference proceedings, and workshops.
Highlight: Bayesian Optimization of High Transparency, Low Haze, and High Oil Contact Angle Rigid and Flexible Optoelectronic Substrates
Our research team at SigOpt has been very fortunate to collaborate with outstanding researchers around the world, including through our academic and internship programs.
Highlight: A Nonstationary Designer Space-Time Kernel
Highlight: Efficient Nonmyopic Batch Active Search
Highlight: Automating Bayesian Optimization with Bayesian Optimization
All Model Types, Modeling Best Practices
Uncertainty 3: Balancing Multiple Metrics with Uncertainty
Here at SigOpt, we provide a tuning platform for practitioners to develop machine learning models efficiently. To stay at the cutting edge, we regularly attend conferences such as the International Conference on Machine Learning (ICML).
The research team at SigOpt works to provide the best Bayesian optimization platform for our customers. In our spare time, we also engage in research projects in a variety of other fields.
SigOpt 101, Simulations & Backtests
SigOpt Enters Strategic Investment Agreement with In-Q-Tel
This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced or published and work that has cited us, and share interesting machine learning research that our research team has found. We hope you find these updates valuable and informative!
Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.
We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound, and used a slightly different interpretation to better inform the optimization. In our article, we present a number of examples of this clustering-guided UCB method applied to various optimization problems.
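For context, here is a minimal sketch of the standard UCB acquisition that the clustering-guided method builds on (the kappa exploration weight and the candidate values are illustrative):

```python
import numpy as np

def ucb(mu, sigma, kappa=2.0):
    # Upper confidence bound: favor points with a high predicted mean
    # (exploitation) or high predictive uncertainty (exploration).
    return mu + kappa * sigma

# Posterior mean and standard deviation at three candidate points.
mu = np.array([0.2, 0.5, 0.4])
sigma = np.array([0.30, 0.05, 0.20])
next_point = np.argmax(ucb(mu, sigma))  # index of the point to sample next
```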
All Model Types, Machine Learning, Training & Tuning
In this blog post, we are going to show solutions to some of the most common problems we’ve seen people run into when implementing hyperparameter optimization.
In this post, we will show you how Bayesian optimization was able to dramatically improve the performance of a reinforcement learning algorithm in an AI challenge.
Advanced Optimization Techniques, All Model Types
Building a Better Mousetrap via Multicriteria Bayesian Optimization
In this post, we want to analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
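A minimal sketch of that setup, assuming a hypothetical run_model function standing in for the stochastic system in the post:

```python
import numpy as np

def mean_variance_metrics(params, n_trials=30):
    # Evaluate the stochastic model repeatedly and summarize its random
    # output; the sample mean (to maximize) and the sample variance
    # (to minimize) become the two competing metrics.
    outcomes = np.array([run_model(params) for _ in range(n_trials)])  # run_model is hypothetical
    return outcomes.mean(), outcomes.var(ddof=1)
```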
All Model Types, Augmented ML Workflow
Solving Common Issues in Distributed Hyperparameter Optimization
SigOpt’s API for hyperparameter optimization leaves us well-positioned to build exciting features for anyone who wants to perform Bayesian hyperparameter optimization in parallel.
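As a hedged sketch of the parallel pattern (the experiment ID and the train_and_evaluate helper are hypothetical), each worker simply requests its own open suggestion and reports back, with SigOpt coordinating the suggestions so workers receive distinct points:

```python
from multiprocessing import Pool

from sigopt import Connection

EXPERIMENT_ID = 12345  # hypothetical experiment ID

def evaluate_one(_):
    # Each worker opens its own connection, requests a suggestion,
    # evaluates it, and reports the observation back.
    conn = Connection(client_token="YOUR_API_TOKEN")
    suggestion = conn.experiments(EXPERIMENT_ID).suggestions().create()
    value = train_and_evaluate(suggestion.assignments)  # hypothetical helper
    conn.experiments(EXPERIMENT_ID).observations().create(
        suggestion=suggestion.id, value=value)

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        pool.map(evaluate_one, range(20))  # 20 evaluations across 4 workers
```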
In this post we will discuss multicriteria optimization, the situation in which you need to optimize a model for more than a single metric, and show how to use SigOpt to solve these problems.
Machine Learning, Training & Tuning
Bayesian Optimization for Collaborative Filtering with MLlib
In this post we will show how to tune an MLlib collaborative filtering pipeline using Bayesian optimization via SigOpt. Code examples from this post can be found in our GitHub repo.
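The post's actual code lives in the repo linked above; as a rough sketch under the DataFrame API (the column names and train/test split are assumptions), the tunable surface looks like this:

```python
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS

def als_rmse(train_df, test_df, rank, reg_param):
    # rank and regParam are exactly the kind of hyperparameters an outer
    # Bayesian optimization loop would suggest.
    als = ALS(rank=rank, regParam=reg_param,
              userCol="userId", itemCol="itemId", ratingCol="rating",
              coldStartStrategy="drop")
    model = als.fit(train_df)
    predictions = model.transform(test_df)
    evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
                                    predictionCol="prediction")
    return evaluator.evaluate(predictions)  # metric to minimize
```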
Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.
In this post, we review the history of some of the tools implemented within SigOpt, then discuss the original approach to this black-box optimization problem, known as full factorial experimentation or grid search. Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
Advanced Optimization Techniques, Deep Learning
Deep Neural Network Optimization with SigOpt and Nervana Cloud
In this post, we will detail how to use SigOpt with the Nervana Cloud and show how SigOpt and Nervana are able to reproduce, and beat, the state-of-the-art performance reported in two papers.
Applied AI Insights, Simulations & Backtests
Winning on Wall Street: Tuning Trading Models with Bayesian Optimization
SigOpt gives customers the opportunity to build better machine learning and financial models by providing an efficient path to maximizing the key metrics that define their success. In this post we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
This blog post provides solutions for comparing different optimization strategies for any optimization problem, using hyperparameter tuning as the motivating example.
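One common device for such comparisons is the best-seen trace; here is a minimal sketch (run_strategy is a hypothetical stand-in for any strategy being compared):

```python
import numpy as np

def best_seen_trace(values):
    # Running best metric value after each evaluation; plotting these
    # traces at a fixed evaluation budget puts strategies on equal footing.
    return np.maximum.accumulate(values)

# Repeat each stochastic strategy several times and compare distributions
# of traces rather than a single lucky run.
traces = np.array([best_seen_trace(run_strategy()) for _ in range(20)])
median_trace = np.median(traces, axis=0)
```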
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
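The post itself targets the TensorFlow API of its time; as a hedged modern sketch, the CNN would expose its architecture and training knobs as tunable arguments:

```python
import tensorflow as tf

def build_cnn(num_filters, kernel_size, dropout_rate, learning_rate):
    # Every argument is a hyperparameter the optimizer can tune.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(num_filters, kernel_size, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```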
Advanced Optimization Techniques, Deep Learning
Unsupervised Learning with Even Less Supervision Using Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
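As a sketch of that two-stage idea (the exact unsupervised method and hyperparameters in the post may differ), cluster-distance features feed a supervised classifier, and both stages expose knobs to tune jointly:

```python
from sklearn.cluster import MiniBatchKMeans
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def pipeline_accuracy(X, y, n_clusters, max_depth, learning_rate):
    # Unsupervised stage: learn cluster-distance features from the raw data.
    features = MiniBatchKMeans(n_clusters=n_clusters).fit_transform(X)
    # Supervised stage: classify using the learned representation.
    clf = XGBClassifier(max_depth=max_depth, learning_rate=learning_rate)
    return cross_val_score(clf, features, y, cv=5).mean()  # maximize this
```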
SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.
In this first post on integrating SigOpt with machine learning frameworks, we’ll show you how to use SigOpt and scikit-learn to train and tune a model for text sentiment classification in under 50 lines of Python.
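A hedged sketch of what such a pipeline can look like (the vectorizer, classifier, and hyperparameters here are illustrative; the post's own code may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def sentiment_accuracy(texts, labels, C=1.0, ngram_max=2):
    # C and ngram_max are the kind of hyperparameters SigOpt would tune.
    pipeline = make_pipeline(
        TfidfVectorizer(ngram_range=(1, ngram_max)),
        LogisticRegression(C=C, max_iter=1000),
    )
    return cross_val_score(pipeline, texts, labels, cv=5).mean()
```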
SigOpt gives customers the ability to report uncertainty alongside their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the noise.
Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
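For a concrete instance, here is the squared-exponential covariance, one of the most common choices (the length scale is an illustrative hyperparameter):

```python
import numpy as np

def squared_exponential(x1, x2, length_scale=1.0):
    # Nearby inputs are strongly correlated; the correlation decays
    # smoothly as the distance between them grows.
    return np.exp(-0.5 * ((x1 - x2) / length_scale) ** 2)

# Covariance matrix over a handful of 1-D inputs.
x = np.array([0.0, 0.5, 1.0, 2.0])
K = squared_exponential(x[:, None], x[None, :])
```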
Rescale gives users the dynamic computational resources to run their simulations and SigOpt provides tools to optimize them. By using Rescale’s platform and SigOpt’s tuning, efficient cloud simulation is easier than ever.