AI Disrupting Digital Photography
AI’s impact on photography
If you weren’t aware of the strong connection between AI and photography, you’re not alone. Yet over the past decade, AI has driven sizeable advances in digital photography and is set to keep changing the way we take photos. Thanks to computational photography and supervised machine learning algorithms, simple smartphone cameras can now perform advanced image enhancements, such as digital zoom, HDR merging, and digital refocusing.
Wait a minute, what is computational photography?
According to Marc Levoy, a pioneer in the field, computational photography covers a variety of “computational imaging techniques that enhance or extend the capabilities of digital photography” while still producing an ordinary photograph as output. Beyond what camera hardware can do alone, computational photography employs software algorithms to alter or enhance photographic images.
How does AI enhance digital photography?
It is commonly believed that the quality of digital photography is largely a function of the hardware one is working with – the camera’s lens and sensor. Those components have indeed improved considerably in the past decade, and now offer unprecedented quality and speed.
Almost uniformly across manufacturers, specs have improved so much that it hardly matters which brand of hardware you choose today – you’re likely to get superb photographs either way. What is starting to make a tangible difference, however, is the software embedded in digital cameras, including smartphone cameras. Hence, despite similar hardware, there are sizeable differences between photos taken with a Google Pixel phone and those taken with the latest iPhone model.
The upside of artificially enhanced photos
To begin with, computational photography has extended the capabilities of digital cameras to unprecedented heights. On top of those advances, some hardware manufacturers have also invested in incorporating AI into their camera software. Google, for instance, has made notable progress with the AI embedded in the Google Camera app, and has developed its own AI-focused microchip, included in Google Pixel phones starting with the Pixel 2.
For a while now, Google has also used the photos that users upload to the (now defunct) Google+ social network and the Google Photos app to train its AI algorithms to recognize, categorize, and group images, improve image quality, and add extra features (e.g. the blurred-background effect). These features not only help users up their photography game, but also further the development of AI capable of analyzing and enhancing photos in ways previously unthinkable.
AI-enabled camera apps
Until recently, taking reasonably good photos meant lugging around heavy cameras, attaching a different lens for each type of shot, and fiddling with countless settings to capture optimal light. AI-enabled phone camera apps have now made professional photography equipment nearly obsolete. While specific camera lenses may still be required for certain shots or environments, most amateur photographers find that phone cameras are not only more convenient to use, but also provide nearly the same quality of output – or sometimes even better – depending on the setting.
This result is largely due to a game-changing component that smartphone cameras now pack: a neural processing unit (NPU). When it comes to impact on quality, the NPU is as important as the more traditional parts that make up a camera – the image processor and the CPU. The NPU will become even more central to machine learning and AI going forward, as these workloads require fast, efficient processing to run properly, especially on end-user mobile devices.
Business applications of computational photography and AI
Now that AI can analyze and enhance photographs, different algorithms can be employed to achieve targeted results. Algorithms that reconstruct depth, refocus, adjust viewpoints, or merge HDR exposures can turn simple digital lenses into complex equipment that not only captures but analyzes and interprets inputs to provide sophisticated insights.
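To make the HDR-merging idea concrete, here is a minimal sketch of exposure fusion in the spirit of Mertens-style well-exposedness weighting: pixels near mid-gray in each frame get higher weight, and the frames are blended accordingly. This is a toy illustration assuming NumPy; real camera pipelines add multi-scale blending, alignment, and denoising.

```python
import numpy as np

def merge_hdr(exposures, sigma=0.2):
    """Blend differently exposed frames using a simple well-exposedness
    weight (a toy version of exposure fusion, not a production pipeline).

    exposures: list of float arrays in [0, 1], all the same shape.
    """
    stack = np.stack(exposures)                      # (n, H, W)
    # Gaussian weight peaking at mid-gray, so well-exposed pixels dominate.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)    # normalize per pixel
    return (weights * stack).sum(axis=0)

# Usage: fuse under-, mid-, and over-exposed versions of a brightness ramp.
under = np.linspace(0.0, 0.4, 8).reshape(1, 8)
mid   = np.linspace(0.2, 0.8, 8).reshape(1, 8)
over  = np.linspace(0.6, 1.0, 8).reshape(1, 8)
fused = merge_hdr([under, mid, over])
```

Because the result is a per-pixel weighted average, the fused image always stays within the range spanned by the input exposures, preserving detail in both shadows and highlights.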
As neural processing units become more advanced, phone cameras powered by machine learning can enable other applications that we haven’t yet thought of, for instance in health and wellness. A simple example of such a use case is the mobile app Welltory, which uses the smartphone camera to record videos of one’s fingertips, which are then used to measure HRV (heart rate variability) – a predictor of general health.
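The fingertip-video idea above boils down to photoplethysmography: each heartbeat changes how much light the fingertip absorbs, so the camera's per-frame brightness pulses with the pulse. Below is a toy sketch of that pipeline (peak detection plus RMSSD, a standard HRV metric); it is an illustration of the general technique, not Welltory's actual algorithm, and real apps add heavy filtering and motion-artifact rejection.

```python
import math

def rmssd(frame_brightness, fps):
    """Estimate HRV (RMSSD, in milliseconds) from a fingertip video's
    per-frame mean red-channel brightness. Toy sketch only."""
    # Each heartbeat appears as a local brightness maximum.
    peaks = [i for i in range(1, len(frame_brightness) - 1)
             if frame_brightness[i] > frame_brightness[i - 1]
             and frame_brightness[i] >= frame_brightness[i + 1]]
    # Inter-beat intervals (IBIs), converted from frames to milliseconds.
    ibis = [(b - a) * 1000.0 / fps for a, b in zip(peaks, peaks[1:])]
    # RMSSD: root mean square of successive IBI differences.
    diffs = [(y - x) ** 2 for x, y in zip(ibis, ibis[1:])]
    return math.sqrt(sum(diffs) / len(diffs)) if diffs else 0.0

# Usage: a synthetic 30 fps signal with slightly irregular beat spacing.
signal = [0.0] * 120
for idx in (10, 40, 73, 103):   # beat peaks at these frame indices
    signal[idx] = 1.0
print(rmssd(signal, fps=30))    # prints 100.0
```

The irregularity between beats (1000 ms vs. 1100 ms intervals here) is exactly what RMSSD quantifies, which is why HRV is treated as a proxy for general health.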
Computational photography features such as the following can come in handy across sectors and applications:
- Shadow detection and correction can support surveillance technology
- Image reconstruction can restore badly damaged or aged documents
- Automatic facial and body photographic enhancements have proven popular with consumers
As computational power and algorithms in photography are perfected, the use cases for this technology will become virtually limitless, thus opening the gates for additional B2B and B2C mobile applications that have the potential to disrupt traditional processes and ways of working.
Ready to brainstorm on how to take advantage of computational photography in your business? Contact the Pegus Digital team for a no-obligation consultation.