
Researchers convert 2D images into 3D with deep learning



A UCLA research team has developed a technique that extends the capabilities of fluorescence microscopy, which allows researchers to precisely label parts of living cells and tissue with dyes that glow under special lighting. The researchers use artificial intelligence to turn two-dimensional images into stacks of virtual three-dimensional slices showing activity inside organisms.

In a study published in Nature Methods, the researchers also reported that their framework, called "Deep-Z," was able to fix errors or aberrations in images, such as when a sample is tilted or curved. Further, they demonstrated that the system could take 2D images from one type of microscope and virtually create 3D images of the sample as if they had been obtained by a different, more advanced microscope.

"This is a very powerful new method enabled by deep learning to perform 3D imaging of live specimens, with the least exposure to light, which can be toxic to specimens," said senior author Aydogan Ozcan, UCLA Chancellor's Professor of Electrical Engineering and Computer Engineering and Deputy Director of the California NanoSystems Institute at UCLA

In addition to sparing samples from potentially harmful doses of light, this system could give biologists and life scientists a new tool for 3D imaging that is simpler, faster and much less expensive than current methods. The ability to correct for aberrations may allow scientists studying living organisms to collect data from images that would otherwise be unusable. Investigators could also gain virtual access to expensive and complicated equipment.

This research builds on an earlier technique that Ozcan and his colleagues developed, which allowed them to render 2D fluorescence microscope images in super-resolution. Both techniques advance microscopy by relying on deep learning – using data to "train" a neural network, a computer system inspired by the human brain.

Deep-Z was trained using experimental images from a scanning fluorescence microscope, which takes pictures focused at multiple depths to achieve 3D imaging of samples. In thousands of training runs, the neural network learned how to take a 2D image and infer accurate 3D slices at different depths within a sample. The framework was then tested blindly – fed with images that were not part of its training – and the virtual images it produced were compared with the actual 3D slices obtained by a scanning microscope, providing an excellent match.
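To make the training setup described above concrete, here is a minimal sketch in PyTorch. Everything in it is an illustrative assumption rather than the authors' published implementation: the toy network (DeepZNet), the idea of feeding a per-pixel map encoding the target depth alongside the 2D image, the L1 loss and all hyperparameters are stand-ins, and the real Deep-Z model is considerably more sophisticated.

```python
# A minimal sketch of Deep-Z-style supervised training, assuming a toy
# network and random stand-in data. Names, architecture, loss and
# hyperparameters are illustrative, not the published method.
import torch
import torch.nn as nn

class DeepZNet(nn.Module):
    """Toy model: maps a 2D image plus a per-pixel target-depth map
    (2 input channels) to the virtually refocused slice (1 channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, image, depth_map):
        return self.net(torch.cat([image, depth_map], dim=1))

model = DeepZNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One illustrative training step: the input is a 2D image and a map
# naming the target depth; the ground truth is the mechanically
# scanned slice at that depth (random tensors stand in for both).
image = torch.rand(8, 1, 64, 64)
depth_map = torch.full((8, 1, 64, 64), 0.3)  # normalized target depth
true_slice = torch.rand(8, 1, 64, 64)

optimizer.zero_grad()
pred_slice = model(image, depth_map)
loss = loss_fn(pred_slice, true_slice)
loss.backward()
optimizer.step()
print(f"reconstruction loss: {loss.item():.4f}")
```

Blind testing, as described above, would then run the trained model on held-out 2D images and compare its virtual slices against scanned ground truth the network never saw.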

Ozcan and his colleagues applied Deep-Z to images of C. elegans, a roundworm that is a common model organism in neuroscience because of its simple and well-understood nervous system. By converting a 2D movie of a worm into 3D, frame by frame, the researchers were able to track the activity of individual neurons within the worm's body. And starting with one or two 2D images of C. elegans taken at different depths, Deep-Z produced virtual 3D images that allowed the team to identify individual neurons within the worm, matching a scanning microscope's 3D output, except with much less light exposure to the living organism.
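The frame-by-frame conversion the team describes can be pictured as one network call per frame and target depth, turning a 2D recording into a four-dimensional (time, depth, height, width) hyperstack. The sketch below assumes a hypothetical refocus function standing in for a trained Deep-Z-style model; its placeholder body is not real physics, and only the bookkeeping around it reflects the workflow in the article.

```python
# Sketch of frame-by-frame 2D-movie-to-3D conversion. `refocus` is a
# hypothetical stand-in for a trained network; its body is a dummy.
import numpy as np

def refocus(frame: np.ndarray, z: float) -> np.ndarray:
    """Placeholder for a trained model returning the virtual slice of
    `frame` refocused to depth `z` (in microns). Not real physics."""
    return frame * np.exp(-abs(z) / 10.0)

movie = np.random.rand(10, 128, 128)   # 10 frames, each a 2D image
depths = np.linspace(-10.0, 10.0, 11)  # virtual z-planes, in microns

# Build a (time, z, y, x) hyperstack from the purely 2D recording;
# neuron activity can then be tracked through the virtual volume.
hyperstack = np.stack(
    [np.stack([refocus(frame, z) for z in depths]) for frame in movie]
)
print(hyperstack.shape)  # (10, 11, 128, 128)
```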

The researchers also found that Deep-Z could produce 3D images from 2D surfaces where samples were tilted or curved – even though the neural network was trained only with 3D slices that were perfectly parallel to the surface of the sample.

"This feature was actually very surprising," said Yichen Wu, a UCLA graduate student who co-authored the publication. "With that, you can see through curvature or other complex topology that is very challenging to imagine."

In other experiments, Deep-Z was trained with images from two types of fluorescence microscopes: wide-field, which exposes the entire sample to a light source, and confocal, which uses a laser to scan the sample part by part. Ozcan and his team showed that their framework could then use 2D wide-field microscope images of samples to produce 3D images nearly identical to ones taken with a confocal microscope.

This transformation is valuable because a confocal microscope creates images that are sharper and have more contrast than those from a wide-field microscope. On the other hand, the wide-field microscope captures images at lower cost and with fewer technical requirements.
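In code, the cross-modality experiment comes down to how training pairs are assembled: wide-field images as inputs, registered confocal images of the same field of view as targets. The sketch below shows only that pairing step, with random tensors and assumed names standing in for data; the actual registration and training pipeline is not described here.

```python
# Sketch of cross-modality data pairing: wide-field inputs matched to
# registered confocal targets. Dataset layout and all names are
# illustrative assumptions, with random tensors standing in for data.
import torch
from torch.utils.data import DataLoader, Dataset

class PairedModalityDataset(Dataset):
    """Yields registered (wide_field, confocal) image pairs; a real
    pipeline would load co-registered acquisitions from disk."""
    def __init__(self, wide_field, confocal):
        assert wide_field.shape == confocal.shape
        self.wide_field, self.confocal = wide_field, confocal

    def __len__(self):
        return len(self.wide_field)

    def __getitem__(self, i):
        return self.wide_field[i], self.confocal[i]

# Stand-in data: 32 registered single-channel 64x64 image pairs.
wf, cf = torch.rand(32, 1, 64, 64), torch.rand(32, 1, 64, 64)
loader = DataLoader(PairedModalityDataset(wf, cf), batch_size=8,
                    shuffle=True)

for wide, confocal in loader:
    # Each batch feeds the same kind of training step sketched earlier;
    # only the input/target modalities differ.
    print(wide.shape, confocal.shape)
    break
```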

"This is a platform that is generally applicable to different pairs of microscopes, not just the wide-field-to-confocal conversion," said first author Yair Rivenson, UCLA assistant adjunct professor of electrical and computer engineering. "Each microscope has its own advantages and disadvantages. With this framework, you can get the best of both worlds by using AI to connect different types of microscopes digitally."

Other authors are graduate students Hongda Wang and Yilin Luo, postdoctoral scholar Eyal Ben-David, and Laurent Bentolila, scientific director of the California NanoSystems Institute's Advanced Light Microscopy and Spectroscopy Laboratory, all of UCLA; and Christian Pritz of the Hebrew University of Jerusalem in Israel.

The research was supported by the Koç Group, the National Science Foundation and the Howard Hughes Medical Institute. Imaging was performed at CNSI's Advanced Light Microscopy and Spectroscopy Laboratory and the Leica Microsystems Center of Excellence.

