Research & Company Blog

Augmented ML Workflow, Deep Learning
Academic Interview with the DIVA team from the University of Fribourg
DeepDIVA: A framework for model reproducibility
Advanced Optimization Techniques, All Model Types
Multimetric Updates in the Experiment Insights Dashboard
Multimetric Optimization enables you to intelligently weigh tradeoffs between competing objectives by discovering a frontier of the best possible models across those objectives.
All Model Types, Modeling Best Practices
Uncertainty 3: Balancing Multiple Metrics with Uncertainty
This is the third of three blog posts in which we explore the concept of uncertainty, or noise, and its implications for Bayesian optimization.
All Model Types, Modeling Best Practices
Uncertainty 2: Bayesian Optimization with Uncertainty
This is the second of three blog posts in which we explore the concept of uncertainty, or noise, and its implications for Bayesian optimization.
All Model Types, Modeling Best Practices
Uncertainty 1: Modeling with Uncertainty
This is the first of three blog posts in which we explore the concept of uncertainty, or noise, and its implications for Bayesian optimization.
Advanced Optimization Techniques, Machine Learning
AutoML at ICML 2018
Here at SigOpt, we provide a tuning platform for practitioners to develop machine learning models efficiently. To stay at the cutting edge, we regularly attend conferences such as the International Conference on Machine Learning (ICML).
Advanced Optimization Techniques, All Model Types
New Advanced Feature: Constraints
SigOpt is constantly working to extend the capabilities of our platform by supporting different types of optimization problems.
All Model Types, Modeling Best Practices
Circulant Binary Embeddings
The research team at SigOpt works to provide the best Bayesian optimization platform for our customers. In our spare time, we also engage in research projects in a variety of other fields.
SigOpt 101, Simulations & Backtests
SigOpt Enters Strategic Investment Agreement with In-Q-Tel
We’re happy to announce a strategic investment and technology development agreement with In-Q-Tel (IQT).
All Model Types, SigOpt 101
SigOpt Winter Update
Happy New Year! 2017 was a great year for SigOpt.
All Model Types, Augmented ML Workflow
SigOpt Working with AWS to Provide PrivateLink Support for Customers on AWS
AWS PrivateLink is a new solution that will enable SigOpt to connect directly with any AWS customer that has an Amazon Virtual Private Cloud (VPC).
All Model Types, SigOpt 101
SigOpt Fall Update
This is the first edition of our new quarterly newsletter. In these updates, we will discuss newly released features, showcase content we have produced, published, or been cited in, and share interesting machine learning research that our research team has found. We hope you find these valuable and informative!
All Model Types, Augmented ML Workflow
SigOpt is Now Available on AWS Marketplace
Today we announce the availability of SigOpt on AWS Marketplace. Now, with a single click, data scientists and researchers can access SigOpt’s powerful optimization-as-a-service platform designed to automatically fine-tune their machine learning (ML) and artificial intelligence (AI) workloads.
All Model Types, SigOpt 101
At SigOpt, We’re Stronger as a Team
No one person is going to have all of the answers, so we're focused on making ourselves stronger as a team.
All Model Types, SigOpt 101
SigOpt Wins Barclays 2017 Open Innovation Challenge
Congratulations to the other innovative startups that were honored this year!
All Model Types, Modeling Best Practices
Covariance Kernels for Avoiding Boundaries
Here at SigOpt, Gaussian processes and reproducing kernel Hilbert spaces (RKHS) are important components of our Bayesian optimization methodology.
All Model Types, SigOpt 101
SigOpt Named A Gartner “Cool Vendor” in AI Core Technologies
Today, we’re honored to share that Gartner has listed us in its Cool Vendor 2017 report for AI Core Technologies.
All Model Types, Modeling Best Practices
Expected Improvement vs. Knowledge Gradient
In this blog post, we discuss another ingredient of Bayesian optimization: the sampling policy (or acquisition function).
All Model Types, Modeling Best Practices
Clustering Applied to Acquisition Functions
We have revisited one of the traditional acquisition functions of Bayesian optimization, the upper confidence bound (UCB), and looked to a slightly different interpretation to better inform its optimization. In our article, we present a number of examples of this clustering-guided UCB method applied to various optimization problems.
All Model Types, Machine Learning, Training & Tuning
Common Problems in Hyperparameter Optimization
In this blog post, we are going to show solutions to some of the most common problems we’ve seen people run into when implementing hyperparameter optimization.
All Model Types, SigOpt 101
Announcing SigOpt Organizations
We are excited to announce SigOpt Organizations, the next step in the evolution of our web dashboard.
Reinforcement Learning, Training & Tuning
Using Bayesian Optimization for Reinforcement Learning
In this post, we will show you how Bayesian optimization was able to dramatically improve the performance of a reinforcement learning algorithm in an AI challenge.
Advanced Optimization Techniques, All Model Types
Building a Better Mousetrap via Multicriteria Bayesian Optimization
In this post, we want to analyze a more complex situation in which the parameters of a given model produce a random output, and our multicriteria problem involves maximizing the mean while minimizing the variance of that random variable.
All Model Types, Augmented ML Workflow
Solving Common Issues in Distributed Hyperparameter Optimization
SigOpt’s API for hyperparameter optimization leaves us well-positioned to build exciting features for anyone who wants to perform Bayesian hyperparameter optimization in parallel.
Advanced Optimization Techniques, All Model Types
Intro to Multicriteria Optimization
In this post we will discuss the topic of multicriteria optimization, when you need to optimize a model for more than a single metric, and how to use SigOpt to solve these problems.
Machine Learning, Training & Tuning
Bayesian Optimization for Collaborative Filtering with MLlib
In this post we will show how to tune an MLlib collaborative filtering pipeline using Bayesian optimization via SigOpt. Code examples from this post can be found in our GitHub repo.
All Model Types, SigOpt 101
We Raised $6.6 Million To Amplify Your Research
Today is a big day at SigOpt. Since the seed round we secured last year, we’ve continued to build toward our mission to ‘optimize everything,’ and are now helping dozens of companies amplify their research and development and drive business results with our cloud Bayesian optimization platform.
All Model Types, Training & Tuning
Breaking Free of the Grid
In this post, we review the history of some of the tools implemented within SigOpt, and then we discuss the original solution to the black-box optimization problem, known as full factorial experimentation or grid search. Finally, we compare both naive and intelligent grid-based strategies to the latest advances in Bayesian optimization, delivered via the SigOpt platform.
Advanced Optimization Techniques, Deep Learning
Deep Neural Network Optimization with SigOpt and Nervana Cloud
In this post, we will detail how to use SigOpt with the Nervana Cloud and show how SigOpt and Nervana are able to reproduce, and beat, state-of-the-art performance from two papers.
Applied AI Insights, Simulations & Backtests
Winning on Wall Street: Tuning Trading Models with Bayesian Optimization
SigOpt helps customers build better machine learning and financial models by giving them an efficient path to maximizing the key metrics that define their success. In this post, we demonstrate the relevance of model tuning on a basic prediction strategy for investing in bond futures.
All Model Types, Training & Tuning
Evaluating Hyperparameter Optimization Strategies
This blog post provides solutions for comparing different optimization strategies for any optimization problem, using hyperparameter tuning as the motivating example.
All Model Types, Modeling Best Practices
Dealing with Troublesome Metrics
At SigOpt, our goal is to help our customers build better models, simulations, and processes by maximizing key metrics for them.
Advanced Optimization Techniques, Deep Learning
TensorFlow ConvNets on a Budget with Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and TensorFlow to efficiently search for an optimal configuration of a convolutional neural network (CNN).
Advanced Optimization Techniques, Deep Learning
Unsupervised Learning with Even Less Supervision Using Bayesian Optimization
In this post on integrating SigOpt with machine learning frameworks, we will show you how to use SigOpt and XGBoost to efficiently optimize an unsupervised learning algorithm’s hyperparameters to increase performance on a classification task.
All Model Types, Augmented ML Workflow
Lessons from a RESTful API Redesign
When building the new and improved v1 API, we stepped back and chose to more closely adhere to RESTful architectural principles.
Applied AI Insights, Machine Learning
Using Model Tuning to Beat Vegas
Is it possible to use optimized machine learning models to beat Vegas? The short answer is yes; read on to find out how.
All Model Types, Modeling Best Practices
Intuition behind Gaussian Processes
SigOpt allows experts to build the next great model and apply their domain expertise instead of searching in the dark for the best experiment to run next. With SigOpt, you can conquer this tedious, but necessary, element of development and unleash your experts on designing better products with less trial and error.
All Model Types, Modeling Best Practices
Profile Likelihood vs. Kriging Variance
This is the first post in our SigOpt in Depth series, which deals with more technical aspects of topics that appear in the SigOpt Fundamentals series.
All Model Types, Modeling Best Practices
Likelihood for Gaussian Processes
Using the best approximation gives our customers the fastest path to optimal behavior, which minimizes the costs of experimentation.
All Model Types, Modeling Best Practices
Automatically Tuning Text Classifiers
In this first post on integrating SigOpt with machine learning frameworks, we’ll show you how to use SigOpt and scikit-learn to train and tune a model for text sentiment classification in under 50 lines of Python.
All Model Types, Modeling Best Practices
Approximation of Data
SigOpt gives customers the opportunity to define uncertainty for their observations; using this knowledge, we can balance observed results against their variance to make predictions and identify the true behavior behind the uncertainty.
All Model Types, Modeling Best Practices
Intuition Behind Covariance Kernels
Gaussian processes are powerful because they allow you to exploit previous observations about a system to make informed, and provably optimal, predictions about unobserved behavior. They do this by defining an expected relationship between all possible situations; this relationship is called the covariance and is the topic of this post.
Applied AI Insights, Simulations & Backtests
Making a Better Airplane using SigOpt and Rescale
Rescale gives users the dynamic computational resources to run their simulations and SigOpt provides tools to optimize them. By using Rescale’s platform and SigOpt’s tuning, efficient cloud simulation is easier than ever.
All Model Types, SigOpt 101
We Raised Seed Financing from Andreessen Horowitz
When we created SigOpt, our mission was simple: optimize everything. What do we mean by that?
All Model Types, Modeling Best Practices
Picking the Right Metric
Deciding exactly what you want is hard.
Machine Learning, Training & Tuning
Tuning Machine Learning Models
SigOpt provides a simple API and web interface for quickly and easily leveraging cutting-edge optimization research to tune your machine learning models for you.