When people think of night vision, they usually picture an image in green and black, but that may be about to change. Researchers have found a way for cameras to capture a color image at night, as if it had been taken during the day.
On April 6, the journal PLOS ONE published a paper in which American researchers present an optimized algorithm with a deep learning architecture that can transform a night scene into an image of how a person might see it in visible light during the day.
At night, people cannot see colors and contrast because of the lack of light; to do so, they need to illuminate the area or use night vision goggles, which produce a greenish image. Overcoming the limits of those monochrome viewers would make it possible for anyone to see, and take photos, as if it were daytime, which would be of great help in tactical military reconnaissance work, among other applications.
To achieve this, the researchers used a monochromatic camera sensitive to visible and infrared light to acquire a database of printed images, including faces, under multispectral illumination spanning the spectrum visible to the standard human eye.
They then optimized a convolutional neural network with a U-Net architecture to predict visible-spectrum images from near-infrared images, with the algorithm trained by deep learning on these spectrally illuminated captures.
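To make the idea concrete, here is a minimal sketch in Python (PyTorch) of a U-Net-style model that maps a stack of monochrome near-infrared captures to a 3-channel RGB prediction. The channel counts, depth, and image size are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal U-Net-style sketch: predict RGB from stacked NIR captures.
# Layer sizes and resolution are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, out_channels=3):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)   # encoder level 1
        self.enc2 = conv_block(32, 64)            # encoder level 2
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)           # 64 skip + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)            # 32 skip + 32 upsampled
        self.out = nn.Conv2d(32, out_channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                          # skip connection 1
        e2 = self.enc2(self.pool(e1))              # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.out(d1))         # RGB values in [0, 1]


# Example: a batch of 4 images, 3 NIR illumination channels, 128x128 pixels.
model = TinyUNet(in_channels=3, out_channels=3)
nir_stack = torch.rand(4, 3, 128, 128)
rgb_pred = model(nir_stack)
print(rgb_pred.shape)  # torch.Size([4, 3, 128, 128])
```

The skip connections are what let a U-Net keep fine spatial detail while still learning the color mapping from the low-resolution bottleneck.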
To learn the spectral reflectance of the cyan, magenta, and yellow inks, they printed a rainbow color palette and recorded its response at each wavelength. They then printed several images and placed them under multispectral illumination in front of a monochromatic (black-and-white) camera mounted on a dissecting microscope focused on the image.
In total, they printed a library of more than 200 human faces from the "Labeled Faces in the Wild" dataset with a Canon printer and CMYK inks. The prints were photographed under different illumination wavelengths and then used to train machine learning models to predict color (RGB) images from captures taken under a single wavelength or a combination of wavelengths, as sketched below.
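The following sketch shows one way such training examples could be assembled: each monochrome capture under one illumination wavelength becomes one input channel, and the original RGB image is the target. The file layout, wavelength values, and helper function are hypothetical, chosen only to illustrate the pairing.

```python
# Hypothetical sketch of building one training example: stack monochrome
# captures (one per assumed illumination wavelength) and pair them with the
# RGB ground truth.  File names and wavelengths are illustrative only.
import numpy as np
from PIL import Image

WAVELENGTHS_NM = [700, 750, 800]  # illustrative NIR illumination wavelengths


def load_example(face_id, root="dataset"):
    """Return (nir_stack, target) as float arrays in [0, 1], shape (3, H, W)."""
    channels = []
    for wl in WAVELENGTHS_NM:
        mono = Image.open(f"{root}/{face_id}_{wl}nm.png").convert("L")
        channels.append(np.asarray(mono, dtype=np.float32) / 255.0)
    nir_stack = np.stack(channels, axis=0)             # (3, H, W) input

    rgb = Image.open(f"{root}/{face_id}_rgb.png").convert("RGB")
    target = np.asarray(rgb, dtype=np.float32) / 255.0
    target = np.transpose(target, (2, 0, 1))            # (3, H, W) target
    return nir_stack, target
```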
For all experiments, they followed standard machine learning practice: they divided the database into three parts, reserving 140 images for training, 40 for validation, and 20 for testing. To compare performance between the different models, they evaluated several image-reconstruction metrics, along the lines of the sketch below.
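Here is a minimal sketch of that 140/40/20 split and of two common reconstruction metrics, mean squared error and peak signal-to-noise ratio. The paper reports several metrics; the exact set and the shuffling details shown here are assumptions.

```python
# Sketch of the 140/40/20 train/validation/test split and two simple
# reconstruction metrics (MSE and PSNR) for images scaled to [0, 1].
import numpy as np

rng = np.random.default_rng(seed=0)
indices = rng.permutation(200)                 # 200 printed face images
train_idx, val_idx, test_idx = indices[:140], indices[140:180], indices[180:]


def mse(prediction, target):
    """Mean squared error between two images in [0, 1]."""
    return float(np.mean((prediction - target) ** 2))


def psnr(prediction, target):
    """Peak signal-to-noise ratio in decibels for images in [0, 1]."""
    return float(10.0 * np.log10(1.0 / mse(prediction, target)))


# Example with random stand-in images:
pred = rng.random((128, 128, 3))
truth = rng.random((128, 128, 3))
print(f"MSE:  {mse(pred, truth):.4f}")
print(f"PSNR: {psnr(pred, truth):.2f} dB")
```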
The researchers noted that this study is a step toward predicting scenes in the human visible spectrum from near-infrared illumination that is imperceptible to the eye.
They said that the work "suggests that the prediction of high-resolution images depends more on the training context [of the machine] than on the spectroscopic signatures of each ink," and that it should serve as a step toward night vision video, whose feasibility will depend on how many frames per second the system can process.