Kisaco Research designates SigOpt as a Leader in AI Software Optimization

Scott Clark, Barrett Williams, and Nick Payton
All Model Types, SigOpt Company News

With more and more businesses building robust, repeatable machine learning pipelines, enhancing business efficiency with AI is no longer merely a research project. Enterprises need ways to scale their modeling efforts across teams, enabling collaboration and reliability in production. SigOpt, acquired by Intel in October 2020, was just named a leader in artificial intelligence (AI) software optimization for its work to systematize ML, enabling data scientists to consistently improve their models and bring the best ones into production.

Michael Azoff from Kisaco Research describes SigOpt as prominent in the machine learning community for its scalable, high-performing hyperparameter optimization solution. It can be applied not only to machine learning models, but to any parameterized system, such as a simulation or backtesting for a trading strategy.

“SigOpt’s expertise is in algorithm and platform engineering, taking operational tasks that are time intensive for machine learning (ML) and making them easy and seamless, using AI to assist in this task. Hyperparameter optimization is a key example. The aim is to help AI developers by enabling them to create models that run faster and perform better, by creating better configurations of model parameters. The company also focuses on helping its customers with real-world ML applications, ensuring they make an impact in production. SigOpt recognizes enterprise AI as falling across a spectrum of use cases, from one end of commodity models comprising relatively straightforward implementations of usually existing or tweaked-existing ML models, to the other end of differentiated models where the goal is to unlock hard problems, and where the best ML model can make a significant difference to the business.”

— Michael Azoff, Kisaco Research

SigOpt API Structure and Placement

To many data scientists, adjusting hyperparameters is either a dark art or a time-consuming chore. Although it can be computationally intensive, Bayesian optimization is one of the most sample-efficient tuning strategies for most models, typically outperforming both grid and random search. Optimization alone can help a data scientist arrive at a better model faster. Informed by several years of enterprise customer feedback, SigOpt now enables users to track and manage experiments, tune against two metrics while tracking many others, and scale across a parallel compute cluster of up to 100 nodes.
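The tuning workflow described above follows a suggest/observe loop: repeatedly ask the optimizer for a configuration, evaluate it, and report the result back. The sketch below is purely illustrative; the search space, the stand-in objective, and the random sampler (which substitutes for the Bayesian optimizer SigOpt runs as a service) are all hypothetical, and only the loop structure mirrors the pattern.

```python
import random

# Hypothetical search space over two common hyperparameters.
SPACE = {
    "learning_rate": (1e-4, 1e-1),  # continuous
    "max_depth": (2, 12),           # integer-valued tree depth
}

def suggest(rng):
    """Propose one configuration. A Bayesian optimizer would pick
    points adaptively; random sampling stands in to stay runnable."""
    return {
        "learning_rate": rng.uniform(*SPACE["learning_rate"]),
        "max_depth": rng.randint(*SPACE["max_depth"]),
    }

def evaluate(params):
    """Stand-in objective with a known optimum near lr=0.01, depth=6,
    in place of a real model's validation metric."""
    lr, depth = params["learning_rate"], params["max_depth"]
    return -((lr - 0.01) ** 2) - 0.001 * (depth - 6) ** 2

def tune(budget=30, seed=0):
    """The suggest/observe loop: propose, evaluate, record the best."""
    rng = random.Random(seed)
    best_params, best_value = None, float("-inf")
    for _ in range(budget):
        params = suggest(rng)      # "suggestion"
        value = evaluate(params)   # train and measure
        if value > best_value:     # "observation" bookkeeping
            best_params, best_value = params, value
    return best_params, best_value
```

In the real service, `evaluate` would train a model and report its metric back over the API, and each observation refines the optimizer's model of the objective.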

Many businesses already have a data center configured to train models, but for those that don’t, SigOpt offers Orchestrate, a tool designed to spin up Kubernetes clusters of machines on Amazon Web Services (AWS) so you can train and tune your models in parallel, reducing the total wall-clock time it takes to train a robust, optimized model.

Fraud Detection Example Analysis Page

From the report:

Kisaco defines Artificial Intelligence Software Optimization as a set of tools that can perform any of the following:

  • Compress the model.
  • Reduce latency in the model (i.e., accelerate operation).
  • Reduce power consumption by the model.
  • Increase the model throughput.
  • Reduce the cost of working with the model.

Optimization's Role in the ML Stack

While other Intel software, such as the OpenVINO™ toolkit, the AI Analytics Toolkit, and LPOT, can optimize a model for specific target hardware, SigOpt can optimize any modeling knobs the data scientist chooses to expose to its API. These parameters typically include the depth of a tree in a classifier, the batch size, or the learning rate. SigOpt can also tune parameters that affect hardware or compiler performance.
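Knobs like these can be declared as a list of named, typed, bounded parameters. The definitions below are a hedged illustration in that style; the specific parameter names, ranges, and the `within_bounds` helper are hypothetical, not taken from any particular SigOpt experiment.

```python
# Illustrative parameter definitions: each knob gets a name, a type,
# and bounds. Hardware-level knobs can be exposed the same way.
parameters = [
    dict(name="max_depth", type="int", bounds=dict(min=2, max=12)),
    dict(name="learning_rate", type="double", bounds=dict(min=1e-4, max=1e-1)),
    dict(name="batch_size", type="int", bounds=dict(min=16, max=512)),
    dict(name="num_threads", type="int", bounds=dict(min=1, max=16)),  # compiler/runtime knob
]

def within_bounds(assignment, parameters):
    """Check a proposed assignment against the declared bounds."""
    by_name = {p["name"]: p for p in parameters}
    return all(
        by_name[k]["bounds"]["min"] <= v <= by_name[k]["bounds"]["max"]
        for k, v in assignment.items()
    )
```

The point of declaring the space up front is that the optimizer, not the data scientist, decides where in that space to sample next.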

The report also highlights SigOpt’s long-standing customer Two Sigma Investments, which asserts that consistent use of SigOpt has reduced its model training time by a factor of eight. As Two Sigma Chief Innovation Officer Matt Adereth explains, “Experimentation is a key part of Two Sigma’s business and it is only as good as the process which enables it. We rely on SigOpt to reduce time spent on these tasks, making our research process more efficient.” More specifically, the report highlights the following as SigOpt’s differentiating capabilities:

  • API enabled
  • ML framework agnostic
  • Integration with any stack
  • Reliable at scale
  • Designed to augment experts
  • Algorithmically differentiated
  • Designed to be best-in-class
  • Designed to fit into existing workflows

SigOpt cites a customer, a large global technology consulting firm with 3,000 ML developers, whose ML groups were working in silos and suffering from a lack of standardization. SigOpt helped automate architecture search and model optimization, which resulted in a 30% productivity gain per employee and yielded nearly $50M in projected value across the firm.

To get started optimizing your models, sign up here, or if you’d like to receive updates from us, please sign up for those here.

Example Analysis Page for an xgboost Model

SigOpt’s Typical Enterprise Workloads:

Some customers use SigOpt to tune time-series models or backtests, while others use it to tune recommender systems. Because SigOpt is agnostic to model type, library, and application, it is entirely possible to train a vision model on a cluster and tune it against multiple metrics, such as accuracy and latency.

Some customers set thresholds on certain metrics for their models, intentionally discarding models that don’t meet business-defined requirements such as maximum inference latency.
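A business-defined metric threshold of this kind amounts to a feasibility filter over tuning results: discard anything that violates the requirement, then rank what remains. A minimal sketch, with entirely hypothetical model names and numbers:

```python
# Hypothetical tuning results: accuracy and inference latency per model.
candidates = [
    {"name": "model_a", "accuracy": 0.94, "latency_ms": 35.0},
    {"name": "model_b", "accuracy": 0.96, "latency_ms": 80.0},
    {"name": "model_c", "accuracy": 0.91, "latency_ms": 12.0},
]

def best_under_threshold(candidates, max_latency_ms):
    """Drop models that violate the latency requirement, then pick the
    most accurate survivor; return None if nothing qualifies."""
    feasible = [c for c in candidates if c["latency_ms"] <= max_latency_ms]
    return max(feasible, key=lambda c: c["accuracy"]) if feasible else None
```

Note that the most accurate model overall (`model_b` here) is intentionally rejected when it misses the latency budget; the threshold encodes a hard business requirement, not a preference.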

Where to begin:

If your business is already serving models in production, or your data science team is close to deploying its first model, it’s worth trying SigOpt for free, whether to optimize models already in production or models still in development, to see how much closer you can get to meeting your business needs.

If you’re interested in trying SigOpt, you can sign up for free access here.

Editor’s note: you can also find this post published on the Intel AI blog.

Scott Clark, Ph.D., Co-Founder & Chief Executive Officer
Barrett Williams, Product Marketing Lead
Nick Payton, Head of Marketing & Partnerships