Optimizing at AI & HPC Scale

Eddie Mattia

SigOpt Summit

SigOpt is hosting our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021. It is virtual and free to attend, so sign up today at sigopt.com/summit. Attendees will be able to join the talks and panels, meet with speakers in breakout rooms for deeper discussions, and network with each other. To give you a sense of the Summit, we are publishing a series of blog posts in advance of the event. The prior post focused on exploration. This post focuses on optimization.

Challenges to Optimizing at Scale

Experimentation is a key factor in the success of modeling organizations. In AI and HPC workflows, however, it tends to be costly and difficult to manage. The right combination of tools and processes can empower modelers with approaches that yield satisfying results. To experiment intelligently, modelers must be able to flexibly design experiments, explore their modeling problem, and optimize models for objective metrics within constraints.

This post focuses on the optimization aspect of these critical components of an experimentation workflow. It is important to have efficient and flexible optimization tools to augment an iterative workflow of designing experiments and exploring the modeling task space. In AI and HPC contexts, experiments are often set up to optimize over a parameter space for target metrics like accuracy, simulation error, and inference latency. Real-world scenarios also contain guardrail metrics that need to be incorporated in the experiment design to appropriately constrain the optimization process. Optimizers that produce results modelers can trust help them make design decisions faster, and quickly (in)validating design choices saves expert data scientists their most valuable resource: time.
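To make the pattern concrete, here is a minimal sketch of optimizing a target metric subject to a guardrail constraint. This is a generic illustration in plain Python (random search over a hypothetical objective), not the SigOpt API; the parameter name, metric values, and latency budget are all invented for the example.

```python
import random

def evaluate(params):
    """Stand-in for a real training or simulation run (hypothetical objective).

    Returns a target metric to maximize ("accuracy") and a guardrail
    metric to constrain ("latency_ms")."""
    x = params["learning_rate"]
    accuracy = 1.0 - (x - 0.1) ** 2   # peaks near learning_rate = 0.1
    latency_ms = 50 + 400 * x         # latency grows with the parameter
    return accuracy, latency_ms

def constrained_random_search(budget=100, latency_budget_ms=100.0, seed=0):
    """Maximize accuracy subject to a latency guardrail, via random search."""
    rng = random.Random(seed)
    best = None
    for _ in range(budget):
        params = {"learning_rate": rng.uniform(0.0, 1.0)}
        accuracy, latency_ms = evaluate(params)
        if latency_ms > latency_budget_ms:
            continue  # guardrail violated: discard this configuration
        if best is None or accuracy > best[0]:
            best = (accuracy, params)
    return best
```

In practice a Bayesian optimizer would replace the random sampler, and the constraint would inform which configurations are suggested next rather than simply filtering results after the fact.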

AI and HPC Modeler Needs

An intelligent experimentation framework should cover a variety of modeler needs to be useful in complex optimization processes like these. First, it should simultaneously track model inputs and outputs and surface them in a set of insightful dashboard views. Second, it should offer diverse global optimization approaches that take asynchronous parallelism into account with minimal implementation overhead. Third, it should seamlessly allow users to bring external optimizers built for specific modeling tasks while providing powerful, general optimizers as part of the framework. Fourth, it should make it easy for users to design experiments in the presence of complex metrics. Finally, it should be easy to instrument any model code to be tracked and optimized in just a few lines. A framework covering these user stories makes AI and HPC modelers more efficient and yields results that satisfy stakeholders.

SigOpt’s Role – Evolve the Product and Educate Users

SigOpt has evolved its Intelligent Experimentation platform with these criteria in mind. For example, SigOpt offers a powerful yet accessible feature set for defining and optimizing in complex parameter spaces and metric spaces. Additionally, SigOpt tracks experimentation with minimal user code changes required so modelers can easily visualize, reproduce, and share the results of Training Runs and Experiments from a managed web application.
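The idea of tracking runs with minimal code changes can be illustrated with a small sketch. This is a hypothetical helper written for this post, not the SigOpt client itself: a context manager that records a run's parameters and metrics so a dashboard could later visualize them.

```python
from contextlib import contextmanager

# Hypothetical in-memory run store; a real platform would persist runs
# and back its dashboards and visualizations with them.
RUNS = []

@contextmanager
def tracked_run(name, **params):
    """Record one training run's inputs (params) and outputs (metrics)."""
    record = {"name": name, "params": params, "metrics": {}}
    RUNS.append(record)
    yield record["metrics"]  # the caller logs metrics into this dict

# Instrumenting existing model code is then just a few lines:
with tracked_run("baseline", learning_rate=0.1, batch_size=32) as metrics:
    # ... existing training code would run here ...
    metrics["accuracy"] = 0.93  # illustrative value
```

The point of the design is that the training code itself is untouched; wrapping it in the context manager is the only change needed to capture inputs and outputs together.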

All of this is available for free, and there are many opportunities to dive deep on these concepts and tools. Optimization will be a theme at our upcoming user Summit, where a series of talks and panels with experts in industry and academia will discuss these topics in depth.

If you want to experience how SigOpt can impact your workflow, sign up in seconds at sigopt.com/signup. If you want to learn from our customers, sign up for the SigOpt Summit for free at sigopt.com/summit. We look forward to seeing you there!

Eddie Mattia, Machine Learning Specialist
