The views, information, or opinions expressed in the Industry News RSS feed belong solely to the author and do not necessarily represent those of IDEX Health & Science and its employees.

    Deep Learning Is Used to Recover Objects in Low Light

    Article obtained from Photonics RSS Feed.

    A new imaging technique, developed by engineers at Massachusetts Institute of Technology (MIT), demonstrates that deep neural networks (DNNs) can reveal transparent features, such as those of biological tissues and cells, in images taken with very little light. The researchers used a DNN to reconstruct transparent objects from images of those objects taken in near-total darkness.

    From an original transparent etching (far right), MIT engineers produced a photograph in the dark (top left). They then attempted to reconstruct the object using first a physics-based algorithm (top right) and then a trained neural network (bottom left), before combining both the neural network with the physics-based algorithm to produce the clearest, most accurate reproduction (bottom right) of the original object. Courtesy of A. Goy, K. Arthur, S. Li, and G. Barbastathis.
    To begin, the researchers consulted a database of 10,000 integrated circuit patterns. Instead of etching each of the 10,000 patterns onto its own glass slide, the researchers used a phase spatial light modulator to display each pattern, re-creating the same optical effect that an actual etched slide would have.

    The researchers pointed a camera at an aluminum frame containing the light modulator. They then used the device to reproduce each of the 10,000 patterns from the database. The researchers shielded the entire setup from ambient light and used the light modulator to rapidly cycle through each pattern, much like a slide carousel. They took images of each transparent pattern under very low lighting conditions (about one photon per pixel). This produced “salt-and-pepper” images that resembled little more than static on a television screen.
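    At the photon budget quoted above (about one photon per pixel), the dominant noise is Poisson shot noise. The following sketch, which is illustrative and not the researchers' code, shows why such images look like salt-and-pepper static:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical clean intensity image, scaled so the mean photon
    # count per pixel is about 1 -- the regime described in the article.
    clean = rng.random((64, 64))
    clean *= 1.0 / clean.mean()

    # Photon detection is shot-noise limited: each pixel records a
    # Poisson-distributed photon count whose mean is the true intensity.
    noisy = rng.poisson(clean).astype(float)

    # At ~1 photon/pixel most pixels read 0 or 1, which is why the raw
    # frames resemble "salt-and-pepper" static rather than the object.
    print(noisy.mean())         # average photon count per pixel
    print((noisy == 0).mean())  # fraction of pixels that detect nothing
    ```

    With a mean of roughly one photon per pixel, a large fraction of pixels detect no photons at all, so almost none of the object's structure is visible in any single frame.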

    The team developed a DNN to identify transparent patterns from dark images, and then fed the DNN each of the 10,000 grainy photographs taken by the camera, along with their corresponding patterns.
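    The training protocol described above is standard supervised learning on (grainy measurement, true pattern) pairs. A deliberately tiny stand-in, using a single linear layer rather than the authors' actual deep architecture, with illustrative sizes and synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for the training data: binary "etched" patterns paired
    # with shot-noise-limited measurements (sizes are illustrative only).
    n_pixels = 16 * 16
    patterns = (rng.random((200, n_pixels)) > 0.5).astype(float)
    measurements = rng.poisson(patterns + 0.1).astype(float)  # 0.1 = background light

    # One linear layer trained by gradient descent on mean-squared error --
    # the smallest possible "network" that still learns from input/target pairs.
    n = len(patterns)
    W = np.zeros((n_pixels, n_pixels))
    lr = n / np.linalg.norm(measurements) ** 2  # step size safe for this quadratic loss
    for _ in range(100):
        pred = measurements @ W
        grad = measurements.T @ (pred - patterns) / n
        W -= lr * grad

    final_loss = np.mean((measurements @ W - patterns) ** 2)
    ```

    The real network is deep and nonlinear, but the loop is the same idea: show the model a noisy image, compare its output to the known pattern, and adjust the weights to shrink the error.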

    The researchers had set their camera to take images slightly out of focus. Defocusing provides the DNN with some evidence, in the form of ripples in the detected light, that a transparent object may be present. Such ripples serve as a visual flag that a neural network can use to determine that an object is somewhere in an image, even though it is otherwise hidden.

    Defocusing also creates blur, which can muddy a neural network’s computations. To counter this, the researchers incorporated the physical law of light propagation into the neural network. “It’s better to include this knowledge in the model, so the neural network doesn’t waste time learning something that we already know,” said professor George Barbastathis.
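    The "physical law of light propagation" can be written down exactly, which is why it makes sense to build it into the model rather than have the network relearn it. One standard formulation is the angular spectrum method; the sketch below is a plausible illustration, not the paper's specific physics model, and also shows the defocus ripples mentioned above:

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Free-space propagation of a complex optical field over distance z
        using the angular spectrum method: decompose the field into plane
        waves (FFT), advance each wave's phase, and recombine (inverse FFT)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies [1/m]
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z) * (arg > 0)   # transfer function; drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # A weak phase object (like a transparent etching) has uniform intensity
    # in focus, but a short propagation turns its phase into intensity
    # ripples -- the defocus cue described in the article. Values below
    # (wavelength, pixel pitch, distance) are hypothetical.
    phase = np.zeros((64, 64))
    phase[24:40, 24:40] = 0.5                 # hypothetical etched square, in radians
    field = np.exp(1j * phase)                # pure phase: intensity is flat
    out = angular_spectrum_propagate(field, wavelength=633e-9, dx=10e-6, z=500e-6)
    ```

    Because this propagation operator is known in closed form, a "physics-informed" network only has to learn what the physics does not already explain, such as the statistics of the noise and the prior structure of the objects.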

    After training the neural network on the 10,000 images of different integrated circuit patterns, the team created a completely new pattern, not included in the original training set. They took an image of this pattern, again in darkness, fed it into the neural network, and compared the patterns that the neural network reconstructed, both with and without the physical law embedded in the network. The researchers found that both methods reconstructed the original transparent pattern reasonably well, but the “physics-informed reconstruction” produced a sharper, more accurate image.

    The team repeated their experiments with a totally new data set, consisting of more than 10,000 images of more general and varied objects, including people, places, and animals. After training, the researchers fed the neural network a completely new image, taken in the dark, of a transparent etching of a scene with gondolas docked at a pier. Again, they found that the physics-informed reconstruction produced a more accurate image of the original, compared to reproductions without the physical law embedded.

    “We have shown that deep learning can reveal invisible objects in the dark,” said researcher Alexandre Goy. “This result is of practical importance for medical imaging to lower the exposure of the patient to harmful radiation, and for astronomical imaging.”

    The research was published in Physical Review Letters (https://doi.org/10.1103/PhysRevLett.121.243902). 

    Dec. 19, 2018
