
Google Brain has figured out how to turn pixelated images into high-res photos


Your heavily pixelated images may not stay blurry for long if Google's latest algorithm catches on.

Google Brain, Google's deep learning research team, is bringing a staple of science fiction to the real world: it is developing software that can sharpen blurry images, much like the photo-enhancing techniques investigators use in CSI.

The process combines two neural networks, trained with machine learning, to infer the colours and patterns hidden in a pixelated photo.

Researchers started with 8 × 8 pixel images and applied what they describe as "zoom in… now enhance" technology to them.


First, the software used a "conditioning" network to compare the low-res input against existing high-res photos in its database. It downscaled those high-res images to the same size so it could match their pixel colours against the input and find similar images.

A second network, which the researchers call the prior network, then added high-res details to sharpen the blurred photo.
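The two-stage pipeline described above can be sketched as follows. This is only an illustrative toy, not Google Brain's actual implementation: the function names are hypothetical, and simple NumPy operations (nearest-neighbour upsampling and a box filter) stand in for the real conditioning and prior networks, which are deep neural networks trained on face data.

```python
import numpy as np

def conditioning_network(lowres):
    # Stand-in for the conditioning network: expand the 8x8 input to
    # 32x32 by nearest-neighbour repetition, giving a coarse estimate.
    # (The real network is a trained CNN, not a fixed upsampler.)
    return lowres.repeat(4, axis=0).repeat(4, axis=1)

def prior_network(coarse):
    # Stand-in for the prior network: average each pixel with its
    # neighbours to mimic the "added detail" pass. (The real prior
    # network synthesises plausible high-frequency detail instead.)
    padded = np.pad(coarse, 1, mode="edge")
    return sum(padded[i:i + 32, j:j + 32]
               for i in range(3) for j in range(3)) / 9.0

def super_resolve(lowres):
    # The article's pipeline: coarse estimate first, then a detail pass.
    coarse = conditioning_network(lowres)
    return prior_network(coarse)

lowres = np.random.rand(8, 8)   # a toy 8x8 "pixelated" input
highres = super_resolve(lowres)
print(highres.shape)            # (32, 32)
```

The key design point the sketch preserves is the division of labour: one stage maps the tiny input to a plausible coarse image, and a separate stage is responsible for filling in fine detail.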

More than 1,000 colour images of different human faces were used in the research.

While the images generated by Google Brain were high-resolution and contained well-defined facial features, they weren’t exact copies of the originals. But some did look eerily similar.


To test whether the technology worked, researchers showed volunteers real photos of celebrities alongside Google Brain reconstructions and asked them to pick which one was taken with a camera. One in 10 people chose the Google Brain image as the genuine camera photo.

While the technology provides useful AI research, it could also be misused to unmask people who are entitled to remain anonymous in certain circumstances.

Researchers concluded: “Our human evaluations indicate that samples from our model on average look more photo realistic than a strong regression based conditioning network alone.”

The study is published on arXiv.org.