The views, information, or opinions expressed in the Industry News RSS feed belong solely to the author and do not necessarily represent those of IDEX Health & Science and its employees.
Article obtained from Photonics RSS Feed.
A deep learning approach to image reconstruction, developed by a team at Rensselaer Polytechnic Institute (RPI), generates comprehensive molecular images of organs and tumors in living organisms at high quality and ultrafast speed. The team’s new approach leverages compressed sensing-based imaging, a signal processing technique that can reconstruct images from a limited set of point measurements. The new method builds on earlier work by RPI researchers, who proposed a way to use compressed sensing-based imaging to acquire comprehensive molecular data sets.
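To make the compressed sensing idea concrete, here is a minimal, self-contained sketch (not the RPI team's actual pipeline): a sparse signal is recovered from far fewer random linear measurements than unknowns using the iterative shrinkage-thresholding algorithm (ISTA), a standard solver for this class of problem. All dimensions and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5          # signal length, measurements, nonzero entries

# A k-sparse "scene" and its compressed measurements y = A @ x.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true

def ista(A, y, lam=0.02, iters=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L         # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Note that this iterative optimization is exactly the kind of slow, per-image computation that a trained network can replace with a single forward pass.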
While the earlier research produced more complete images, the processing of the data and formation of the image could take hours. To enable near-real-time visualization of molecular events, the team built a convolutional neural network (CNN) architecture called Net-FLICS (for fluorescence lifetime imaging with compressed sensing). Net-FLICS uses deep learning to improve image reconstruction.
In vivo intensity and mean lifetime reconstructions at 4 hours and 6 hours post-injection. Courtesy of RPI/Light: Science & Applications.
The researchers designed a large simulated data set to train Net-FLICS to directly reconstruct images from raw time-resolved compressed sensing data. Net-FLICS demonstrated the ability to reconstruct images based on both in vitro and in vivo experimental data and achieved superior performance at low photon count levels.
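The training strategy described above can be sketched in miniature. The example below is a hypothetical stand-in for Net-FLICS, not its real architecture: a single linear "decoder" layer is trained by gradient descent on simulated (measurement, image) pairs, so that reconstruction at inference time is one fast matrix multiply rather than an iterative solve. All sizes, the sensing matrix, and the learning rate are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, n_train = 64, 32, 5000

A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))   # fixed simulated sensing matrix

# Simulated sparse "images" and their compressed measurements.
X = np.zeros((n_train, n))
for i in range(n_train):
    idx = rng.choice(n, 4, replace=False)
    X[i, idx] = rng.normal(0, 1, 4)
Y = X @ A.T

# Train decoder weights W to map measurements back to images.
W = np.zeros((m, n))
lr = 0.1
for _ in range(300):
    grad = Y.T @ (Y @ W - X) / n_train        # gradient of mean squared error
    W -= lr * grad

mse = np.mean((Y @ W - X) ** 2)               # training reconstruction error

# Inference on a new scene is a single fast forward pass.
x_new = np.zeros(n)
x_new[[3, 10, 40]] = 1.0
x_rec = (A @ x_new) @ W
```

A real network replaces the single linear layer with stacked convolutional layers, but the economics are the same: the cost is paid once during training, and each new reconstruction is nearly instantaneous.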
In addition to providing an overall snapshot of the subject being examined, including the organs or tumors that researchers have visually targeted with fluorescence, the new imaging process can reveal information about the successful intracellular delivery of drugs by measuring the decay rate of the fluorescence.
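The lifetime measurement mentioned above rests on a simple model: fluorescence intensity decays exponentially, I(t) = I0·exp(−t/τ), and the lifetime τ is what reports on the fluorophore's local environment. A minimal, noiseless sketch of extracting τ with a log-linear fit (the 1.2 ns lifetime and time-gate settings are assumed values, not from the study):

```python
import numpy as np

tau_true = 1.2e-9                      # assumed 1.2 ns fluorescence lifetime
t = np.linspace(0, 6e-9, 60)           # time gates spanning 6 ns
intensity = 1000.0 * np.exp(-t / tau_true)

# log(I) = log(I0) - t/tau is linear in t, so a first-degree
# polynomial fit recovers the decay rate from its slope.
slope, _ = np.polyfit(t, np.log(intensity), 1)
tau_est = -1.0 / slope
```

In practice the decay is sampled from sparse photon counts, which is why the paper's performance at low photon count levels matters.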
“This technique is very promising in getting a more accurate diagnosis and treatment,” said professor Pingkun Yan. “This technology can help a doctor better visualize where a tumor is and its exact size.”
Further development is required before the new approach can be used in a clinical setting. To accelerate the technique’s progress, the researchers have incorporated simulated data based on modeling. Yan said the research indicates that the model extends accurately to actual experimental data.
“For deep learning usually you need a very large amount of data for training, but for this system we don’t have that luxury yet because it’s a very new system,” Yan said.
“At the end, the goal is to translate these to a clinical setting. Usually when you have clinical systems you want to be as fast as possible,” researcher Marien Ochoa said.
The research was published in Light: Science & Applications (https://doi.org/10.1038/s41377-019-0138-x).