News

MIT and Google’s new algorithms can retouch your photos before you take them

The system is based on editing techniques of professional photographers.

Want to take great selfies but can’t be bothered to do all that photo-editing before posting on Instagram or anywhere else for that matter?

Well, here’s some good news: machine-learning algorithms might soon be able to do your retouching work for you.

A new system developed by researchers at the Massachusetts Institute of Technology (MIT) and Google can automatically edit photos so they look as if they had been retouched by a professional photographer.

The program runs in real-time, allowing you to see the retouched version of a photo in the viewfinder before you even take the picture.

While it’s worth pointing out that most smartphones and cameras already process imaging data in real-time, this machine-learning algorithm takes a subtler approach: it tailors its response to each individual image rather than applying general rules.

The work builds on an earlier MIT project that performed a similar process on a cloud server, which sent back a “transform recipe” – a template for retouching the image on the phone.

“Google heard about the work I’d done on the transform recipe,” said Michael Gharbi, an MIT graduate student and study author.

“They themselves did a follow-up on that, so we met and merged the two approaches.

“The idea was to do everything we were doing before but, instead of having to process everything on the cloud, to learn it. And the first goal of learning it was to speed it up.”

The researchers used 5,000 raw and retouched images to create their machine-learning program. The retouched images were edited by five different photographers.

The system was also trained to reproduce other image-processing algorithms, such as the HDR (high-dynamic-range) processing you see on your phone.

Most of the processing work is done by the algorithm on a low-resolution copy of the image and the results are then applied to the high-resolution photo on the camera, making the entire process fast and seamless.
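The idea of computing an edit on a small copy and transferring it to the full-resolution frame can be sketched in a few lines. This is a deliberately simplified illustration, not the researchers’ actual method (which uses a learned bilateral grid); the `enhance_full_res` and `brighten` functions below are hypothetical stand-ins.

```python
import numpy as np

def enhance_full_res(image_hr, edit_fn, low_size=(64, 64)):
    """Sketch: run an expensive edit on a low-res copy, then apply
    the resulting per-pixel gain map to the full-res image."""
    h, w = image_hr.shape[:2]
    # Downsample by sampling a grid of pixels (a real system would filter first).
    ys = np.linspace(0, h - 1, low_size[0]).astype(int)
    xs = np.linspace(0, w - 1, low_size[1]).astype(int)
    image_lr = image_hr[np.ix_(ys, xs)]
    # The costly edit runs only on the small copy.
    edited_lr = edit_fn(image_lr)
    # Summarise the edit as a multiplicative gain map...
    gain_lr = edited_lr / np.maximum(image_lr, 1e-6)
    # ...and upsample the gain (nearest-neighbour here) to full resolution.
    row_idx = np.minimum((np.arange(h) * low_size[0]) // h, low_size[0] - 1)
    col_idx = np.minimum((np.arange(w) * low_size[1]) // w, low_size[1] - 1)
    gain_hr = gain_lr[np.ix_(row_idx, col_idx)]
    return np.clip(image_hr * gain_hr, 0.0, 1.0)

# Hypothetical "retouch": simple brightening as a stand-in for a learned edit.
brighten = lambda img: np.clip(img * 1.3, 0.0, 1.0)
photo = np.random.default_rng(0).random((480, 640))  # fake greyscale frame
result = enhance_full_res(photo, brighten)
```

Because the edit itself only ever touches the 64×64 copy, the cost of the expensive step no longer scales with the size of the photo – which is the property that makes the real system fast enough to run live in a viewfinder.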

“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” said Jon Barron of Google.

“Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones.

“This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”

Researchers presented their work at Siggraph, a digital graphics conference.