Cameras are among the most popular features of smartphones, typically identified as the second most common reason to upgrade to a new device. Instagram grew off the back of creative filters, effectively becoming some people’s camera. Snapchat popularised the idea of a straight-to-camera app, layering augmented reality (AR) lenses and filters over the camera for many of its users, and Facebook has followed suit with its own Frame Studio and Camera Effects platform. Google has long pushed the idea of visual search, with Google Lens (now available via Google Photos) being the latest iteration.
All of these additions have required users to adopt some other app in place of the phone’s built-in camera. Apple quietly added QR code detection in the middle of last year, so that, without launching any special software, simply pointing the phone’s camera at a QR code prompts the user to follow the embedded link, completely removing the pain point of having to install, and remember to use, a dedicated QR code app.
At MWC, both Samsung and LG were showing off “AI-enhanced” cameras. LG’s new V30S ThinQ sported an “AI Cam” that dynamically tags whatever the camera sees, drawing on a dictionary of several thousand tags. Samsung’s new flagship S9 ships with its AI agent, Bixby, built into the camera. Users can switch the camera into translation, beauty (augmented-reality makeup, with cross-selling), shopping (product identification and purchase), and food (food identification and calorie logging) modes. In each case computer vision identifies the object in view, and detailed supplementary information is provided automatically.
Why does this matter?
This trend will undoubtedly continue, with vendors improving the capabilities of their camera apps and, perhaps, allowing plug-ins for those apps as another way (alongside apps, widgets, extensions, and accessories) of customising phones.
However, the real prize is inclusion of such capabilities in the default camera applications for the two big mobile platforms — something we expect to see this year — with announcements likely at Google’s I/O (May 8th) and Apple’s WWDC (June 4th).
What are the implications for the future?
Up to this point, smartphone cameras have been about capturing photos and videos. They have been responsible for an explosion in creativity and media sharing, with the number of photos taken each year increasing from 80 billion (Kodak’s peak film usage in 1999) to an estimated 10 trillion in 2017 (of which more than 3 trillion were shared via social media). Fast forward a few years and it is reasonable to assume a key use — perhaps even the main use — of a smartphone’s camera will be as an input interface: identifying objects and behaviour in the world around it and suggesting a “next best action”.
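The “identify, then suggest” pattern described above can be sketched as a simple dispatch from a computer-vision label to a recommended action. This is a hypothetical illustration, not any vendor’s actual API: the label set, the action strings, and the function name are all invented for the sketch, and a real implementation would get the label from an on-device vision model rather than take it as a parameter.

```python
# Hypothetical sketch of a camera-as-input pipeline: a vision model labels
# what the current frame contains, and a dispatcher maps that label to a
# suggested "next best action". All names here are illustrative.

RECOGNISED_ACTIONS = {
    "qr_code":  "Open the embedded link",
    "text":     "Translate or copy the text",
    "product":  "Show prices and a purchase option",
    "food":     "Identify the dish and log its calories",
    "landmark": "Show visitor information",
}

def next_best_action(label: str) -> str:
    """Map a vision label for the current frame to a suggested action.

    Unrecognised labels fall back to the camera's traditional job:
    simply capturing the photo.
    """
    return RECOGNISED_ACTIONS.get(label, "Just take the photo")

print(next_best_action("product"))  # -> Show prices and a purchase option
print(next_best_action("sunset"))   # -> Just take the photo
```

The design point of the sketch is the fallback: when nothing actionable is recognised, the camera behaves exactly as it does today, so the “input interface” role is additive rather than disruptive.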
For brands this presents an exciting opportunity to link physical products near-seamlessly to virtual experiences and, perhaps, a stepping stone towards a world where all-day, every-day augmented reality provides an always-on digital overlay for the real world.