Google AI Neural Networks Able to Enhance a Garbled Picture’s Resolution and Fill in the Missing Details!

It is now possible to transform a garbled or heavily pixelated, low-quality image into a clear photo of a person or object. This used to be seen only in movies or on the CSI TV series, but Google AI systems have now made it possible.

Three computer scientists from Google Brain, Google's central AI team, recently revealed that it is not only possible to enhance a picture's resolution; their systems can also fill in missing details in the process, reports Wired UK.

The latest breakthrough is described in a paper entitled “Pixel Recursive Super Resolution”, completed by the three researchers from the Silicon Valley firm after training their system on small 8×8-pixel images of celebrity faces and photos of bedrooms.

The combination of a conditioning neural network and a prior neural network analyzed the images to produce higher-resolution 32×32-pixel versions.

The process sees a blurry, almost unrecognizable picture turned into something that clearly represents a human or a room.

A two-pronged approach

The AI system takes a two-pronged approach. The conditioning network takes the low-resolution image and compares it to high-resolution images to determine whether a face or a room is in the image. The researchers explain that the low-res image can be compared to a high-res image by scaling the larger image down to the same 8×8-pixel size.
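That downscaling step can be illustrated with simple average pooling: shrink a 32×32 image to 8×8 by averaging each 4×4 block of pixels, so it matches the size of the low-res input. This is a minimal sketch with random stand-in data, not the paper's actual preprocessing code:

```python
import numpy as np

def downscale(image, factor):
    """Average-pool an HxW(xC) image down by `factor` in each spatial dimension."""
    h, w = image.shape[:2]
    return image[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

# A hypothetical 32x32 RGB image reduced to the same 8x8 size
# as the low-res input, so the two can be compared pixel by pixel.
hi_res = np.random.rand(32, 32, 3)
lo_res = downscale(hi_res, 4)
print(lo_res.shape)  # (8, 8, 3)
```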

The three computer scientists of Google Brain explained that when some details do not exist in the source image, the challenge lies not only in ‘deblurring’ an image but also in generating new image details that appear plausible to a human observer.

When both images are the same size, it is relatively simple for the AI to identify similar pixels and shapes between the different versions. For example, the system can recognize an ear of a particular shape and match it against the pixels in another image, telling the AI it is looking at a face.

Once the first network has completed its role, the Google researchers use a second network, the PixelCNN, to add extra pixels to the 8×8 image.

The PixelCNN adds detail by using what it knows about certain types of images. Lips are likely to be a shade of pink, so pink pixels are added to areas identified as such.
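A PixelCNN is an autoregressive model: it generates an image one pixel at a time, each new pixel conditioned on the ones produced so far. The following toy loop sketches only that sampling pattern; `predict_distribution` is a uniform stand-in for the trained network, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_distribution(image, y, x):
    """Stand-in for the network: returns a probability distribution
    over 256 intensity values for pixel (y, x), given the pixels
    generated so far. A real PixelCNN would compute this with
    masked convolutions over `image`."""
    probs = np.ones(256)
    return probs / probs.sum()

# Generate a 32x32 grayscale image pixel by pixel, in raster order.
image = np.zeros((32, 32), dtype=np.uint8)
for y in range(32):
    for x in range(32):
        probs = predict_distribution(image, y, x)
        image[y, x] = rng.choice(256, p=probs)
```

In the real model, the predicted distribution encodes knowledge such as "lip regions tend toward pink", which biases the sampled values accordingly.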

At the end of each neural network's process, the Google researchers combined the results to create a final image. They compare the process of adding details to the way an artist works.

They pointed out that, by incorporating prior knowledge of faces and their typical variations, an artist is able to paint believable details.
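The paper combines the two networks at the level of per-pixel predictions: each network outputs scores (logits) over the possible intensity values for a pixel, the scores are summed, and the final value is sampled from the resulting distribution. A minimal sketch of that combination step, with made-up logit vectors standing in for the networks' actual outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical logits over 256 intensity values for one pixel,
# one vector from each network.
conditioning_logits = rng.normal(size=256)  # "what matches the low-res input"
prior_logits = rng.normal(size=256)         # "what looks plausible here"

# Sum the logits, normalize, and sample the pixel's final value.
probs = softmax(conditioning_logits + prior_logits)
pixel_value = rng.choice(256, p=probs)
```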

Testing the system on humans

To prove the AI-generated images are believable, the researchers tested their system on human volunteers.

A group of participants was shown a true low-resolution image alongside the one created by the AI and asked to guess which came from a camera.

When looking at the images of celebrities, the human judges believed the artificially created shot had been taken by a camera 10% of the time.

They noted that a score of 50% would mean the algorithm had perfectly confused the subjects.
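The "fooling rate" here is just the fraction of trials in which participants picked the generated image, and 50% is chance level in a two-way choice, meaning the real and generated images would be indistinguishable. A quick illustration with made-up counts (the paper's actual trial numbers are not given here):

```python
# Hypothetical tally from a two-alternative forced-choice test:
# participants picked the AI-generated celebrity image as the
# camera photo in 40 out of 400 trials.
picked_generated = 40
trials = 400
fool_rate = picked_generated / trials
print(f"{fool_rate:.0%}")  # 10%
# A rate of 50% would mean participants were guessing at chance:
# the two images would be indistinguishable.
```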

In the future, with further development, similar systems could add detail to low-resolution pictures and video.

One obvious candidate is blurry CCTV footage, although the method has yet to be tested on any such image databases, and the AI's creations are currently the machine's best guesses rather than entirely accurate portrayals.

Google Brain’s researchers describe the neural network as hallucinating the extra information, reports The Guardian.
