Scientists have shown that a machine learning system can produce a full-color recreation of a picture from nothing but infrared photos. The result could point night-vision technology toward a promising new era.
Although it may seem that we can see every color, our eyes detect only a small slice of the electromagnetic spectrum. Human-visible light spans wavelengths from about 400 nanometers (which the brain interprets as violet) to 700 nm (red). Someone standing in a windowless room lit only by an 800 nm bulb would see nothing at all.
A mosquito or even a snake, however, might see that room perfectly well, and so would a human looking through an infrared camera. Capturing images in infrared light is not difficult from a technological standpoint; the hard part is rendering those images in a way a human observer can interpret.
In the latest study, the researchers put infrared pictures to a far more sophisticated use. They began by producing images of color palettes and human faces. They then built a dataset from those images using a monochromatic camera, which can be tuned to capture only a narrow band of wavelengths, photographing the faces across a range of visible and near-infrared wavelengths.
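The article describes no code, but the data-gathering step can be sketched in a few lines: each scene is photographed once per wavelength with the monochrome sensor, and the single-band frames are stacked into one multi-channel input paired with a visible-light RGB target. The snippet below is a minimal illustration using NumPy; the function name `make_training_pair`, the image size, and the normalization are assumptions, not details from the study.

```python
import numpy as np

# Hypothetical helper: each scene is photographed once per near-infrared band
# with a monochrome sensor, so one training pair is
# (stacked infrared bands, visible-light RGB target).
def make_training_pair(band_images, rgb_image):
    """band_images: list of 2-D uint8 arrays, one per wavelength band.
    rgb_image: (H, W, 3) uint8 array captured under visible light."""
    infrared_stack = np.stack(band_images, axis=-1).astype(np.float32) / 255.0
    target = rgb_image.astype(np.float32) / 255.0
    return infrared_stack, target

# Example with synthetic 128x128 frames at three infrared bands.
bands = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(3)]
rgb = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
x, y = make_training_pair(bands, rgb)
print(x.shape, y.shape)  # (128, 128, 3) (128, 128, 3)
```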
With these data, the team built on years of research and innovation to construct and evaluate a deep learning system that takes infrared photographs of a scene and estimates how that scene would appear under visible light. And it worked: using deep U-Net-based architectures, the system turned a set of three infrared photos into a full-color image that matched a conventional photograph of the same subject remarkably closely.
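For readers curious what a "U-Net-based architecture" looks like in practice, here is a minimal encoder-decoder sketch in PyTorch that maps a three-band infrared stack to an RGB estimate. It illustrates the general technique rather than the authors' model: the layer widths, the `TinyUNet` name, and the choice of PyTorch are all assumptions.

```python
import torch
import torch.nn as nn

# Minimal U-Net-style sketch (not the study's architecture): a 3-channel
# infrared input is encoded, decoded with skip connections, and projected
# to a 3-channel RGB estimate.
class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=32):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.enc1 = block(in_ch, base)           # encoder level 1
        self.enc2 = block(base, base * 2)        # encoder level 2
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)    # concatenated with enc2 skip
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)        # concatenated with enc1 skip
        self.head = nn.Conv2d(base, out_ch, 1)   # 1x1 conv to RGB

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))      # RGB values in [0, 1]

# Example: one 128x128 three-band infrared image in, RGB estimate out.
model = TinyUNet()
infrared = torch.rand(1, 3, 128, 128)
rgb_pred = model(infrared)
print(rgb_pred.shape)  # torch.Size([1, 3, 128, 128])
```

In a setup like this, the skip connections let fine spatial detail from the infrared input pass straight to the decoder, so the network mainly has to learn the mapping from infrared band intensities to plausible colors.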