Google's incredible image enhancer looks like something out of 'CSI'

Two special agents gather in front of a computer screen operated by a brainy technician.


"Enhance!" says Special Agent Smith. "That's our man." The lead detective has just instructed the techie to zoom in and sharpen the image, sometimes several times over, and in about 30 seconds a blurry frame has become a usable portrait. Until now, that only worked on television.

Pixelated images, "potato" cameras, and pictures deliberately blurred to hide people's faces could become a thing of the past thanks to Google's latest algorithm development. According to a new report, Google Brain has developed software that produces plausible images from small, pixelated photos. Per the report, the magic comes from the combination of two neural networks. The conditioning network takes the low-resolution image and compares it against high-resolution images to determine what it is looking at, such as whether the image contains a face or a room. The prior network, which uses an implementation of PixelCNN, then adds realistic, high-resolution details to the 8×8 source image. As Ars Technica explained, if there's a brown pixel towards the top of an 8×8 face, the prior network might identify it as an eyebrow; so, when the image is scaled up, it might fill in the gaps with an eyebrow-shaped collection of brown pixels.

This is important to keep in mind, lest we fall for it just like on TV: the system is not recovering the original image. Instead, it is using machine learning to guess what the original might have looked like before it was downsized to 64 pixels. In tests, upscaled images of celebrities fooled 10 percent of human participants into believing they were genuine, where 50 percent would imply an ideal score. When images of a bedroom were used, 28 percent of human subjects were fooled by the computed image. Last month, Google showed off RAISR, a related tool meant to save bandwidth through heavy image compression and restoration.
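The two-network combination described above can be sketched in miniature. The toy code below (plain NumPy, with both networks stubbed out by hypothetical random functions rather than trained models) only illustrates the shape of the idea: for each output pixel, logits from the conditioning network and the prior network are summed, turned into a probability distribution over 256 intensity values, and a value is sampled, pixel by pixel in raster order.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Stand-ins for the two trained networks (hypothetical, untrained):
# - conditioning network: looks at the whole 8x8 input and emits, for each
#   of the 32x32 output pixels, logits over 256 intensity values
# - prior network (PixelCNN-style): emits logits for the next pixel given
#   the output pixels generated so far
def conditioning_logits(low_res):          # (8, 8) -> (32, 32, 256)
    upsampled = low_res.repeat(4, axis=0).repeat(4, axis=1)
    return rng.normal(size=(32, 32, 256)) * 0.1 + upsampled[..., None] * 0.01

def prior_logits(generated_so_far, y, x):  # context -> (256,)
    return rng.normal(size=256) * 0.1

low_res = rng.integers(0, 256, size=(8, 8)).astype(float)
cond = conditioning_logits(low_res)

# Generate the 32x32 output one pixel at a time, sampling each value from
# softmax(prior logits + conditioning logits).
out = np.zeros((32, 32), dtype=np.uint8)
for y in range(32):
    for x in range(32):
        probs = softmax(prior_logits(out, y, x) + cond[y, x])
        out[y, x] = rng.choice(256, p=probs)

print(out.shape)  # (32, 32)
```

A real implementation would replace both stubs with deep networks trained jointly, but the per-pixel sampling loop is what lets the prior keep the invented details (eyebrows, bedposts) self-consistent.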
Some of the network's hunches are spot on, as is evident in the press photos. Researchers also used the system to recreate photos of bedrooms from low-resolution samples, essentially showing that any type of data that can be fed to a neural network might be partially reconstructed, when information is missing, through this type of system. Because the output is a guess rather than a recovery, an enhanced image could never serve as evidence; rather, the "zoom in, enhance" capability could offer investigators a lead where there was none to begin with. Truly restoring detail that was never captured is impossible, but it looks like a bunch of Google researchers have gotten pretty darn close.
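Why can the output only ever be a guess? Downsampling is many-to-one: distinct high-resolution images can collapse to the exact same 8×8 thumbnail, so no algorithm, however clever, can tell from the thumbnail alone which original produced it. The short demonstration below (plain NumPy, using simple 4×4 average pooling as a stand-in for whatever downsizing produced the low-res input) constructs two different 32×32 images with identical 8×8 versions.

```python
import numpy as np

def downsample(img):
    # 4x4 average pooling: 32x32 -> 8x8, a stand-in for the blur/downsize
    # that produced the low-resolution input
    return img.reshape(8, 4, 8, 4).mean(axis=(1, 3))

rng = np.random.default_rng(1)

# Two different hypothetical 32x32 images...
a = rng.integers(0, 256, size=(32, 32)).astype(float)
b = a.copy()
# ...that differ inside one 4x4 block but keep that block's average the same
b[0, 0] += 8
b[0, 1] -= 8

# Both collapse to the identical 8x8 thumbnail
print(np.array_equal(downsample(a), downsample(b)))  # True
```

This is exactly why the networks must hallucinate plausible detail instead of recovering real detail, and why the result can point an investigation somewhere but can never prove anything.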

