SigOpt was acquired by Intel back in late 2020. Since then, we have empowered many different parts of Intel – from software to hardware and beyond.
One of our first collaborators from Intel, Ke Ding, has spoken about how SigOpt helped him more efficiently tune the deep learning recommendation model in the MLPerf benchmark series. He has also spoken about how upcoming Intel hardware, including the highly anticipated Sapphire Rapids chip, has been designed to run these valuable ML workloads efficiently.
We’ve also worked with several members of Habana Labs, an Intel company, who have spoken about how Habana Labs is working from within Intel to improve its MLPerf submissions. Basem Barakat and Evelyn Ding spoke about the v1.0 and v1.1 MLPerf submissions. They used SigOpt for hyperparameter optimization to speed up their model experimentation process. The SigOpt platform also gave them insights into how specific hyperparameters can lead to high-performing model behavior.
David Austin spoke on a popular topic for many data scientists – Kaggle. He is a Kaggle Grandmaster, and we enjoyed hearing his thoughts on revisiting a competition he had already won in first place – classifying ships versus icebergs in very low-resolution images – and improving on his results. He described some best practices for conducting Intelligent Experimentation with SigOpt to improve upon his own high-performing baselines.
One of our latest collaborators from Intel is Jian Zhang. He gave a very helpful survey of the state of recommender systems, and he walked through a new end-to-end solution his group had developed, which incorporates SigOpt. His work has helped reduce the high costs the average data scientist faces when developing recommendation systems.
If you’d like to hear more about how we have collaborated with Intel and beyond, we highly recommend watching the full talks.