Using SigOpt to Optimize Neural Networks in Biology

Nick Payton, Yuta Tokuoka, Takahiro G. Yamada, Daisuke Mashiko, Zenki Ikeda, Noriko F. Hiroi, Tetsuya J. Kobayashi, and Kazuo Yamagata & Akira Funahashi
Biology, CNN, Convolutional Neural Networks, Hyperparameter Optimization, Segmentation, Vision

SigOpt offers its software free to academics, nonprofits, and labs performing research they plan to publish. Dozens of publications across a wide variety of disciplines have made use of SigOpt through this program. Learn more about the SigOpt Academic Program or join it to get access to SigOpt today.

Researchers from Keio University, Kindai University, Sanyo-Onoda University, and the University of Tokyo recently collaborated on a project that applied deep learning to a biological use case. Specifically, they applied 3D convolutional neural network segmentation to extract quantitative criteria of the nucleus during mouse embryogenesis. They recently published their findings in npj Systems Biology and Applications.

In the course of their research, they used SigOpt to explore their modeling problem and optimize their model. In this interview with Akira Funahashi and Kazuo Yamagata, we explore this research, how they utilized SigOpt, and what’s next for this collaboration.

What is your research subject?

Embryogenesis occurs in a highly dynamic three-dimensional environment. Although we continue to acquire information on embryogenesis, much remains to be discovered. 

The goal of this research is to help inform studies of embryogenesis by acquiring the three-dimensional positions of cells at high resolution. Our hypothesis is that uncovering this information will provide useful data both for our team and for other research groups to explore. In this sense, we hope that this cell position data will help reveal the mechanisms of embryogenesis itself.

To this end, we developed the Quantitative Criteria Acquisition Network (QCANet), a new convolutional neural network-based instance segmentation algorithm designed for this task.

For whom is this research most valuable?

This research is most valuable for biological and computational biology researchers who have focused their research on embryology. It is particularly valuable for researchers who have spent time acquiring quantitative criteria – such as the number of nuclei, the synchrony of cell division, and the rate of development – from embryogenesis. Now that the basic technology for quantitative evaluation of embryo quality has been established, we expect it to be applied to assisted reproductive technology and fertility treatment. In many ways, this research builds on some of these past explorations with the goal of providing a more robust, detailed, and revealing picture of mouse embryogenesis.

More generally, the Quantitative Criteria Acquisition Network (QCANet) we developed is generalizable for any bioimage analysis. 

In the realm of artificial intelligence, it is also potentially useful for any deep learning researcher exploring convolutional neural networks for image segmentation tasks. We learned a few potentially valuable lessons related to data preparation, time series, instance segmentation, exploration of parameters, and the application of hyperparameter optimization to our model.

What made you interested in it?

When we began developing QCANet, excellent semantic segmentation algorithms such as U-Net had been proposed, but no instance segmentation algorithm for biological images was available. To perform instance segmentation, we had to solve the difficult problem of identifying crowded objects. We attempted to solve this problem by combining two CNNs.

Which models did you use and how did you select them?

In the course of our work, we developed the Quantitative Criteria Acquisition Network (QCANet), a new convolutional neural network (CNN) based instance segmentation algorithm. We specifically designed this algorithm for 3D fluorescence microscopy images of cell nuclei in early embryos.

This sounds specific, but the structure is actually rather simple: it combines conventional semantic segmentation algorithms, and it can be easily applied to any bioimage analysis.
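To make the idea of combining two networks concrete: one common way to merge a semantic foreground mask with per-nucleus detections is marker-seeded region growing, where each detected nucleus seeds one instance label. This is a simplified 2D illustration of the general principle, not necessarily the published QCANet procedure; the function, grid, and marker coordinates are all hypothetical.

```python
from collections import deque

def label_instances(mask, markers):
    """Assign an instance ID to each foreground pixel by growing
    regions outward from marker pixels (breadth-first flood fill).

    mask:    2D list of 0/1 -- semantic foreground (e.g. the "nucleus" class)
    markers: list of (row, col) seed points, one per detected nucleus
    Returns a 2D list of instance IDs (0 = background).
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    queue = deque()
    for inst_id, (r, c) in enumerate(markers, start=1):
        labels[r][c] = inst_id
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]  # inherit the seed's instance ID
                queue.append((nr, nc))
    return labels

# Two touching nuclei form a single connected foreground blob;
# two markers split the blob into two instances.
mask = [
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
]
labels = label_instances(mask, markers=[(0, 0), (0, 4)])
```

This captures why a detection network is needed at all: a semantic mask alone cannot separate crowded, touching nuclei, but a seed point per nucleus lets a simple region-growing step carve the mask into instances.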

In this particular case, we compared QCANet to 3D Mask R-CNN, which is state-of-the-art for instance segmentation, and found that it outperformed this baseline on this particular task.

Why did you select SigOpt to optimize these models?

The number of hyperparameters made a classic grid search infeasible for this model, which left us weighing random search against Bayesian optimization. Given constraints on the availability of computing resources, we wanted a sample-efficient hyperparameter optimization method, which pointed us toward Bayesian optimization over random search. We also considered an evolutionary search, but found it would be more time- and resource-intensive without any expected benefit: we anticipated that Bayesian optimization would outperform evolutionary search with fewer training runs.

Typically, the challenge with Bayesian optimization is integrating a package that works with your particular problem and scaling this package so that it takes advantage of your computing resources. As a hosted solution with a fully scalable backend that can run up to 100x in parallel, SigOpt addressed this challenge for us. This is why we went with SigOpt.
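To give a sense of what this kind of integration looks like, here is a minimal sketch of the suggest/observe loop in SigOpt's Python client. The parameter names, bounds, metric name, budget, and the train_and_evaluate stub are hypothetical illustrations, not the configuration used in the paper.

```python
def train_and_evaluate(assignments):
    """Hypothetical stand-in for training the model with the suggested
    hyperparameters and returning a validation score in [0, 1].
    A real implementation would train on the 3D microscopy volumes."""
    lr = assignments["learning_rate"]
    return max(0.0, 0.9 - abs(lr - 0.01) * 5)

if __name__ == "__main__":
    # Requires `pip install sigopt` and an API token from the SigOpt dashboard.
    from sigopt import Connection

    conn = Connection(client_token="YOUR_API_TOKEN")
    experiment = conn.experiments().create(
        name="QCANet hyperparameters (sketch)",
        parameters=[
            dict(name="learning_rate", type="double",
                 bounds=dict(min=1e-5, max=1e-1)),
            dict(name="batch_size", type="int", bounds=dict(min=1, max=16)),
        ],
        metrics=[dict(name="validation_iou", objective="maximize")],
        observation_budget=60,
    )
    for _ in range(experiment.observation_budget):
        # SigOpt's backend proposes the next configuration to try...
        suggestion = conn.experiments(experiment.id).suggestions().create()
        value = train_and_evaluate(suggestion.assignments)
        # ...and the observed result refines its model of the search space.
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            values=[dict(name="validation_iou", value=value)],
        )
```

Because the Bayesian model lives behind the hosted API, running this loop from several workers in parallel requires no extra coordination code on the user's side, which is the scaling property discussed above.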

How would you characterize the benefit of using SigOpt?

We benefited from SigOpt in three ways. First, by using SigOpt we avoided wasting our own time integrating and executing an open-source optimization package. It was automatic, seamless, and very easy to scale. Second, it solved problems associated with running Bayesian optimization in parallel and at scale, so we didn’t have to spend engineering time figuring this out. Finally, it comes with a dashboard that we used to analyze parameter importance, evaluate parallel coordinates, and generally keep track of our work.

More important than these particular benefits of SigOpt, however, is that it ultimately uncovered higher-performing configurations of our model, which helped us beat the state-of-the-art baseline we used in our research. 

What would you most love to see as an improvement to SigOpt’s product to assist you in further research?

Two things come to mind. First, we would appreciate the ability to analyze specific data points in SigOpt after a hyperparameter optimization experiment to more deeply understand correct and incorrect results in the context of the model configurations that were tested. Second, we would appreciate any tooling that provided additional detail on the importance of different model attributes to the maximization or minimization of specific metrics. These two elements would contribute to explainability, which is critical in our research area. 

How do you expect to continue evolving this research in the future?

Segmentation is an important and challenging problem in bioimage analysis aimed at revealing biological phenomena such as embryonic development. The QCANet developed in this study identified cell nuclei with the highest reported accuracy in three model organisms: mouse, nematode (C. elegans), and Drosophila. Although the fluorescence microscopy images of C. elegans and Drosophila spanned a wide range of developmental stages, from a few hundred cells to several thousand, QCANet successfully identified nuclei across all of them, making it a potentially very useful tool as a foundation for developmental biology.

For future application to human embryos, a quantitative index that judges embryo quality without staining will be necessary. Because this study targeted model organisms, we introduced a fluorescent substance into each embryo to stain the cell nuclei. Going forward, it will be necessary to identify cell nuclei from unstained microscopy images, and based on the findings of this study, we are developing an algorithm to identify cells from unstained images.

Use SigOpt free. Sign up today.
Nick Payton Head of Marketing & Partnerships
Yuta Tokuoka Guest Author
Takahiro G. Yamada Guest Author
Daisuke Mashiko Guest Author
Zenki Ikeda Guest Author
Noriko F. Hiroi Guest Author
Tetsuya J. Kobayashi Guest Author
Kazuo Yamagata & Akira Funahashi Guest Author