Experimental Design in Materials Science

Luis Bermudez
Bayesian Optimization, Experiment Management, Materials Science, Modeling Best Practices

SigOpt hosted our first user conference, the SigOpt AI & HPC Summit, on Tuesday, November 16, 2021. It was virtual and free to attend, and you can access content from the event at sigopt.com/summit. More than anything else, we were excited to host this Summit to showcase the great work of some of SigOpt’s customers. Today, we share some best practices for experiment design from Michael McCourt (Head of Engineering at SigOpt), Vishwanath Hegadekatte (R&D Manager at Novelis), Marat Latypov (Assistant Professor at University of Arizona), and Paul Leu (Associate Professor at University of Pittsburgh). Specifically, we will share the discussion that Michael McCourt had with the rest of the panel about how they design their experiments for materials science research.

Michael: How does SigOpt help with finite element simulations and materials science?

Vishwanath: When Marat was part of my group, he was the one who introduced us to SigOpt in the first place. The presentation I gave today was actually put together by Marat before he left Novelis. Coming back to your question, and Marat can vouch for this: when we started using SigOpt, the idea was to use Bayesian optimization in place of the regular Six Sigma methodologies, where you use a factorial design for your design of experiments. Novelis is a traditional metals company, so we spend a lot of time in the lab doing a lot of experiments, and these experiments are time consuming and expensive. For us, the biggest impact of a cleverer design of experiment methodology, i.e. Bayesian optimization, would be the reduction in the number of experiments that we run in our lab.

Michael: What are some of the benefits of Bayesian Optimization?

Vishwanath: With Bayesian optimization there is one good thing that comes out for free: the underlying surrogate model. The surrogate model tries to learn the lay of the land in order to find the global minimum. It does a pretty good job of learning that landscape, and the surrogate model can then be used in place of many of the experiments that we would otherwise run, be they computational experiments or physical experiments.
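To make the surrogate idea concrete, here is a minimal sketch of fitting a Gaussian process surrogate to a handful of expensive observations and then querying it cheaply. It uses scikit-learn rather than SigOpt’s internal modeling, and the `run_experiment` function, the two-dimensional setting, and all numbers are illustrative stand-ins rather than anything from Novelis’s work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Stand-in for an expensive lab or simulation experiment over a 2-D process setting.
def run_experiment(x):
    return -np.sin(3 * x[0]) - x[1] ** 2 + 0.7 * x[1]

# A handful of completed (expensive) observations.
rng = np.random.default_rng(0)
X_observed = rng.uniform(0, 1, size=(12, 2))
y_observed = np.array([run_experiment(x) for x in X_observed])

# The surrogate learns the "lay of the land" from those observations...
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_observed, y_observed)

# ...and can then stand in for further experiments: cheap predictions with uncertainty.
X_candidates = rng.uniform(0, 1, size=(5, 2))
mean, std = surrogate.predict(X_candidates, return_std=True)
for x, m, s in zip(X_candidates, mean, std):
    print(f"setting {np.round(x, 2)} -> predicted {m:.3f} +/- {s:.3f}")
```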

Michael: What is it like for Academia to work with SigOpt?

Paul: We were looking at SigOpt as a better form of optimization. With this in mind, we used machine learning and Bayesian optimization, and we did some benchmarking in collaboration with Mike and Harvey over at SigOpt, comparing Bayesian optimization with genetic algorithms. Our work, if you had a chance to look at the presentation, was focused on glass. We focused on trying to improve the transparency of the glass, with the goal of improving its anti-reflection properties. We looked at the transparency of the glass across different wavelengths. For a lot of optoelectronic applications, such as displays and windows, you want to maximize transparency in the visible wavelengths. We also looked at solar module glass, where you want to maximize transparency over all the solar wavelengths. Similar to Vish, we did a combination of computer simulations and physical experiments. Our experiments were also done in batches: we ran them in batches of five based on suggestions from the Bayesian model, while the computer simulations were done sequentially. Our experience has been very similar in that we really wanted to reduce the number of computer simulations that we did, as well as the number of physical experiments.
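As a rough illustration of the batched workflow Paul describes, the sketch below runs repeated rounds of five open suggestions against a SigOpt experiment using the SigOpt Core Python client as it existed around the time of this Summit. The parameter names, metric name, budget, and the `run_simulation_or_lab_experiment` function are hypothetical placeholders, not the actual setup from the glass study.

```python
from sigopt import Connection

conn = Connection(client_token="SIGOPT_API_TOKEN")  # your API token

# Hypothetical experiment: tune a nanostructured glass geometry to maximize transparency.
experiment = conn.experiments().create(
    name="Glass anti-reflection (illustrative)",
    parameters=[
        dict(name="pitch_nm", type="double", bounds=dict(min=100, max=800)),
        dict(name="height_nm", type="double", bounds=dict(min=50, max=1000)),
        dict(name="fill_fraction", type="double", bounds=dict(min=0.1, max=0.9)),
    ],
    metrics=[dict(name="mean_transparency", objective="maximize")],
    observation_budget=60,
)

for _ in range(12):  # 12 rounds x 5 suggestions = 60 observations
    # Ask for a batch of five suggestions, to be run in parallel in the lab.
    suggestions = [conn.experiments(experiment.id).suggestions().create() for _ in range(5)]
    for s in suggestions:
        # run_simulation_or_lab_experiment is a placeholder for the expensive evaluation.
        transparency = run_simulation_or_lab_experiment(s.assignments)
        conn.experiments(experiment.id).observations().create(
            suggestion=s.id,
            values=[dict(name="mean_transparency", value=transparency)],
        )
```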

Michael: What are the benefits of Bayesian Optimization?

Paul: I think the way experiments are traditionally designed, you’re looking through a very large design parameter space, with seven or eight variables, possibly more. What a lot of people do is fix all of the variables except one and then systematically vary that one variable. So you end up searching a very limited part of the whole design parameter space. By using Bayesian optimization we’re able to explore and exploit the whole space much more thoroughly.
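As a quick numerical sketch of the contrast Paul draws: a one-factor-at-a-time design spends its entire budget along a single axis, while even naive random sampling with the same budget covers every dimension (Bayesian optimization then adds the explore/exploit logic on top of that coverage). The seven-dimensional unit cube and the budget of 40 runs are arbitrary choices for illustration.

```python
import numpy as np

n_dims, budget = 7, 40  # a 7-variable design space and a fixed experiment budget

# One-factor-at-a-time: hold six variables at a baseline and sweep only the first.
baseline = np.full(n_dims, 0.5)
ofat = np.tile(baseline, (budget, 1))
ofat[:, 0] = np.linspace(0.0, 1.0, budget)  # every run lies on a single line

# Naive space-filling alternative with the same budget.
rng = np.random.default_rng(0)
random_design = rng.uniform(0.0, 1.0, size=(budget, n_dims))

# Per-dimension range covered by each design: OFAT only moves in one dimension.
print("OFAT spread per dimension:  ", np.ptp(ofat, axis=0).round(2))
print("Random spread per dimension:", np.ptp(random_design, axis=0).round(2))
```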

Michael: Why is there resistance to Machine Learning?

Marat: This is a very good point. I think one of the key factors here is the cultural shift, which was also mentioned in one of the talks. We already brought up engineering statistics approaches like Six Sigma; people learn them during their engineering training, so they are familiar and more comfortable using them. When we bring in these new machine learning or data driven approaches, there is often resistance, because people see them as a black box and they have not seen successful case studies relevant to their work. So there is a lot of hesitation and resistance from the very people who could take advantage of speeding up these calculations. Tuning machine learning models is a great use case, but when I came across these Bayesian optimization approaches, and SigOpt in particular, we felt that the best application would be experimental work or plant trials, because each observation there is really expensive compared to either a physics based simulation, even a time consuming one, or a machine learning training run. So we felt that we could see a lot of payoff in reducing the number of observations when we ran our optimization campaigns.

Michael: Why does Machine Learning need a Cultural Shift?

Vishwanath: Companies like Novelis, and a lot of our competitors in the aluminum industry and in other industries, spent a lot of money 15 to 20 years ago training their people in Six Sigma methodologies. These people know how that tool works, so they are confident that it works and feel they don’t need to learn anything new. They go forward with what they already know rather than learn something new. So our thought was that we need to address this at the source: let us teach Bayesian optimization as a module within our internal training programs so that people are exposed to it. One thing a company like SigOpt could do is offer some kind of equivalent to a Six Sigma Black Belt, a training protocol that people can get certified in, so that it counts within the company as an external training. That helps the employees, because they are getting trained in something new. This could help. At the end of the day, we need a cultural shift.

Michael: How do we get Materials Scientists and Data Scientists to work together?

Paul: We have a new National Science Foundation center. It’s an IUCRC, an Industry-University Cooperative Research Center, and, as I mentioned in the intro, its focus is Materials Data Science for Reliability and Degradation. As Marat mentioned, a lot of the experimental data is very sparse and very limited, specifically with regard to reliability data: how material properties change over time, and how different functionalities change over time. Our center is focused on how these properties or functionalities change under various stressors. The stressor could be exposure to ultraviolet light, changes in temperature or temperature cycling, or even exposure to scratching or to salts. I think this is an important area, especially with regard to bridging research and the applied development that happens in industry. And I think this could be an area where Bayesian optimization also makes a big impact, simply because of the cost of an experiment that requires extended exposure to some sort of stressor, even under accelerated testing. The amount of data you can get is very little, and it takes a long time to run these experiments.

Michael: How do you decide what design parameters to use? How do you decide which metrics to explore or optimize?

Paul: I think based on our experience, it’s probably best to record as much data as possible, even if you don’t think you’re going to use it. It’s better to record it because you might come back to it later. As Vish said, a lot of those decisions are based on your physical intuition and on your experience working with the design problem or the material. But a lot of times your physical intuition can be wrong, and that’s why it’s best to keep the problem as open as you can and allow the Bayesian optimization to guide the process. A lot of people can come up with new designs or products using only their physical intuition, but that approach is still fairly limited. By working together with Bayesian optimization, we’ve shown that we can come up with much better designs and much better properties and functionalities.

Michael: How can SigOpt be improved?

Paul: Based on the multi-objective optimization problems that we’ve collaborated on, I know that there is not just one single optimum. Instead, there is a Pareto frontier of optima, which could be a curve, or a surface in higher dimensions. We’ve done some post hoc analysis on the solutions that fall on this curve, on the surface that forms the Pareto frontier, and we’ve been able to get some physical insight. For instance, certain types of designs seem to be best for anti-reflection, and it seemed like a certain angle was favored by some of the structures on the Pareto frontier. We weren’t exactly sure why that angle was necessarily the best, but at least it seemed like there was some sort of relationship there, some underlying physics that makes these types of solutions optimal. I think that’s where the underlying scientific knowledge can help us better understand the results. By incorporating the underlying physics, we may be able to improve and better understand the optimization results.
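For readers who want to reproduce this kind of post hoc analysis, here is a small, self-contained sketch that extracts the nondominated points (the Pareto frontier) from a set of observed metric values. The two metrics and the example numbers are made up for illustration; they are not data from the glass study.

```python
import numpy as np

def pareto_front(points):
    """Return indices of nondominated points, assuming every objective is maximized."""
    points = np.asarray(points, dtype=float)
    nondominated = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        # Point i is dominated if another point is >= in every objective and > in at least one.
        dominated = np.any(
            np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        )
        if dominated:
            nondominated[i] = False
    return np.where(nondominated)[0]

# Illustrative observations: (visible transparency, haze reduction) for candidate designs.
observations = np.array([
    [0.91, 0.40],
    [0.88, 0.62],
    [0.95, 0.35],
    [0.84, 0.58],  # dominated by [0.88, 0.62]
    [0.90, 0.30],  # dominated by [0.91, 0.40] and [0.95, 0.35]
])
print("Designs on the Pareto frontier:", pareto_front(observations))  # -> [0 1 2]
```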

Conclusion

To learn more about experiment design approaches for materials science research, I encourage you to watch the full talk. To see if SigOpt can drive similar results for you and your team, sign up to use it for free.

Luis Bermudez, AI Developer Advocate

Want more content from SigOpt? Sign up now.