Aluminum design is an incredibly complicated business. Not only do you have to get the design right in the model, it also has to work in the real world.
In this week’s episode of Experiment Exchange, “How Novelis Applies Cutting-Edge Methodologies to Optimize Aluminum Design,” Michael McCourt talks with Vishwanath Hegadekatte, R&D Manager at Novelis. They discuss the challenging process of designing, optimizing, and testing aluminum alloys—as well as how to curb the environmental impact by designing alloys with recycled materials.
Below is a transcript from the interview. Subscribe to listen to the full podcast episode, or watch the video interview.
Q: Tell us about what you’re working on at Novelis.
We’re doing quite a few exciting things at Novelis, the world’s largest producer of aluminum sheets. Our aluminum sheets are used in making beverage cans, cars, and trucks, for example. More recently, with our acquisition, we have entered the aerospace market as well. We also have a presence in the specialty market. Within R&D, we’ve been using artificial intelligence for about three and a half years, looking at various applications. One of them is designing new aluminum alloys with attractive properties: exploring this new design space and speeding up the design process. In this case, we use Bayesian optimization specifically, using SigOpt to find alloys with optimum performance.
Q: What sort of metrics are you studying?
In this particular case, we were looking at 6000 series aluminum alloys. They have applications in the automotive industry, and the performance requirements were around strength, ductility, corrosion performance, and material anisotropy. Taking all of these together, we were trying to determine the best alloy with optimum performance.
Q: How do you go about defining the optimum when you have so many different metrics at play?
There are quite a few heuristics involved. We talk to our metallurgists and our engineers who work with our customers in order to get requirements on what our customers are looking for in the next generation of aluminum alloys from Novelis. That’s how we know what kind of performance we need to target.
What we’ve realized over the course of the last three and a half years is that data-driven models are not sufficient for this work, and we need to add physics-based models to the mix. We use the term “physics-informed machine learning”: physics-based models generate data, and that data is used to train the machine learning models. You end up making hundreds of thousands of predictions, and the question then is how do we select the right ones to try out in our lab? That’s where all these performance criteria come in, and that’s where Bayesian optimization comes in as well.
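To make that pattern concrete, here is a minimal Python sketch, assuming a made-up property model and invented composition ranges rather than Novelis’s actual models: an expensive physics-based calculation generates training data, and a fast data-driven surrogate learns from it so it can make very large numbers of cheap predictions.

```python
# A minimal sketch of "physics-informed machine learning" as described above:
# a physics-based model generates the training data, and a data-driven model
# is trained on it. The property model and element ranges are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def physics_model(mg, si):
    """Hypothetical physics-based property calculation (e.g., yield strength, MPa)."""
    return 150 + 400 * mg * si - 300 * (mg - 0.6) ** 2

# 1. Run the (expensive) physics-based model to build a training set.
X_train = rng.uniform(0.2, 1.2, size=(500, 2))
y_train = physics_model(X_train[:, 0], X_train[:, 1])

# 2. Train a fast machine learning surrogate on the physics-generated data.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# 3. The surrogate can now make hundreds of thousands of cheap predictions;
#    performance criteria and Bayesian optimization decide what goes to the lab.
candidates = rng.uniform(0.2, 1.2, size=(100_000, 2))
predictions = surrogate.predict(candidates)
print("top predicted strength:", predictions.max())
```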
Q: Have you found a good deal of literature on physics-informed ML? Has that been something you’ve been able to read about and learn more about?
Generally the perception is that industry lags behind academia. In this case, that’s not true; we are on par with academic research at the moment. The state of the art is physics-informed machine learning, and I’m seeing that a lot these days.
The key thing here for us is that we’re mixing laboratory experiments with physics-based models. Physics-based models by themselves are computationally intensive, but even more expensive for us is the experiment that we do in the laboratory. In this case, we’ve seen that Bayesian optimization helps us quite a bit in reducing the number of observations that we make in the laboratory. Each batch, which consists of around seven to ten chemistries that we can try out in the lab, comes out of 100,000 predictions, so we have to choose the eight or ten that we can actually try. After this selection, we have a process that involves testing the material, processing the material, then testing the material again. All of that takes us about four months for the data to come back to our dataset and close the loop.
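The closed loop he describes, in which a small batch is selected from a huge candidate pool, sent to the lab, and the results fed back, can be sketched as below. This is a hedged illustration only: it assumes a Gaussian-process surrogate and a simple upper-confidence-bound rule as a stand-in for SigOpt’s batch suggestions, and the chemistries and the “lab” function are invented.

```python
# A hedged sketch (not Novelis's actual pipeline) of the batch loop described
# above: from a large pool of candidate chemistries, pick a handful per cycle,
# "run the lab", and feed the results back to close the loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def lab_campaign(chemistries):
    """Placeholder for the months-long cast / process / test cycle (made up)."""
    mg, si, cu = chemistries.T
    return 150 + 400 * mg * si - 200 * cu + rng.normal(0, 5, len(chemistries))

# Candidate pool: ~100,000 hypothetical chemistries (Mg, Si, Cu weight fractions).
pool = rng.uniform([0.3, 0.4, 0.0], [1.2, 1.4, 0.4], size=(100_000, 3))

# Seed the loop with a small initial set of lab results.
X = pool[rng.choice(len(pool), 10, replace=False)]
y = lab_campaign(X)

for cycle in range(3):                      # each cycle is one lab campaign
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mean, std = gp.predict(pool, return_std=True)
    ucb = mean + 2.0 * std                  # simple acquisition: exploit + explore
    batch = pool[np.argsort(ucb)[-8:]]      # pick ~8 chemistries for the lab
    X = np.vstack([X, batch])
    y = np.concatenate([y, lab_campaign(batch)])

print("best observed score:", y.max())
```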
This is not the only activity we do. We’re also encouraging our R&D metallurgists who are not using machine learning for their alloy design to use Bayesian optimization to design their experiments, so that they work with much smaller sets of experiments than they normally would.
Q: I’d love to dig a little deeper into a key topic you talked about here, which is that you’re not just looking for an overall optimum; you’re sending 7 to 10 options out for testing in the lab. Can you explain why that is?
The optimum is not always the best choice. The short answer is that there is a human in the loop, and so heuristics are involved. We actually sit with our metallurgists and then decide which seven or ten chemistries we want to try out in our lab. The decision is made based on what kind of application a particular alloy could be good for. Even within the 6000 series aluminum alloys, you will see that the material can be used for structural applications as well as other applications, and they have different performance requirements. Take, as an example, an alloy for an exterior panel. For this type of panel, qualities like corrosion performance and formability are more important. For a structural application like a door intrusion beam, you need more strength. So we compromise on certain things while trying to optimize something else. That’s how we end up with the seven, eight, or up to ten chemistries which we generally try.
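As a toy illustration of that trade-off, the same shortlist of candidates can rank differently depending on application-specific priorities. The alloys, property values, and weights below are invented purely for the example; they are not Novelis data.

```python
# Toy illustration (made-up numbers): the same candidates rank differently
# depending on which application's priorities you apply.
candidates = {
    "alloy_A": {"strength": 310, "formability": 0.55, "corrosion": 0.90},
    "alloy_B": {"strength": 260, "formability": 0.80, "corrosion": 0.95},
    "alloy_C": {"strength": 340, "formability": 0.45, "corrosion": 0.80},
}

# Hypothetical priority weights for two different applications.
weights = {
    "exterior_panel": {"strength": 0.2, "formability": 0.4, "corrosion": 0.4},
    "intrusion_beam": {"strength": 0.7, "formability": 0.2, "corrosion": 0.1},
}

def score(props, w):
    # Normalize strength roughly to [0, 1] so the weighted sum is comparable.
    return (w["strength"] * props["strength"] / 400
            + w["formability"] * props["formability"]
            + w["corrosion"] * props["corrosion"])

for app, w in weights.items():
    ranked = sorted(candidates, key=lambda a: score(candidates[a], w), reverse=True)
    print(app, "->", ranked)
```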
Q: What should we be looking forward to from Novelis next?
Next year is going to be quite exciting for us because there are several projects coming to fruition. The alloys that we just tested look very encouraging. The bigger question for us is: how do we make a more sustainable product? We are testing ways to design alloys where we increase the scrap content as well as reduce the prime aluminum, which we think could have an impact on sustainability.
We also work on customer products like beverage cans. We’ve built an autonomous design framework for beverage can lids, for example. In this case, we’re using Bayesian optimization to run physics-based models, which have been validated with experiments, so we know they’re very good. Unlike our alloy design case, where we cannot describe the physics completely and so have to work with physics plus experiments, in this case we can work with just the physics-based models.
We’ve been looking at a couple of Bayesian optimization tools for this. What we saw was that SigOpt performs extremely well when it comes to moving towards the optimum. We’ve tried this with a single metric as well as with multiple metrics, the latter of which requires balancing a Pareto frontier of results. Every iteration of SigOpt was driving the computations towards the Pareto frontier, towards where we need to be. When we want to move quickly towards the optimum and have several suggestions on the Pareto frontier, SigOpt has performed extremely well.
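For readers unfamiliar with the term, the Pareto frontier is the set of non-dominated designs: those for which no other design is at least as good on every metric and strictly better on at least one. The sketch below computes that set for made-up two-metric data; it is a generic filter for illustration, not a description of SigOpt’s internals.

```python
# A small sketch of identifying the Pareto frontier (non-dominated set) when
# balancing two metrics that are both being maximized; the data is invented.
import numpy as np

rng = np.random.default_rng(1)
# Columns: e.g., predicted strength and predicted formability for 500 designs.
points = rng.random((500, 2))

def pareto_front(values):
    """Return a boolean mask of non-dominated points (maximizing all columns)."""
    n = values.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if mask[i]:
            # A point dominates i if it is >= on every metric and > on at least one.
            dominates = (np.all(values >= values[i], axis=1)
                         & np.any(values > values[i], axis=1))
            if dominates.any():
                mask[i] = False
    return mask

frontier = points[pareto_front(points)]
print(len(frontier), "designs on the Pareto frontier")
```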
From SigOpt, Experiment Exchange is a podcast where we find out how the latest developments in AI are transforming our world. Host and SigOpt Head of Engineering Michael McCourt interviews researchers, industry leaders, and other technical experts about their work in AI, ML, and HPC — asking the hard questions we all want answers to. Subscribe through your favorite podcast platform including Spotify, Google Podcasts, Apple Podcasts, and more!