Augmented Reality

The Camera As Platform

When the operating system moves to the viewfinder, the world will change forever

Every day, nearly 2.6bn people carry a smartphone. Mobile computing has freed us from our tether to the desk, no longer restricting our computing to a single location. Now, all sorts of incredible things can happen while we’re out and about with a tiny supercomputer in our pocket.

The fundamental infrastructure of the internet has developed over the years – operating systems and graphical user interfaces, servers, APIs, app stores and the cloud. But after years of progress, the mobile computing era is coming to an end and the paradigm that will replace it is coming into focus – spatial computing.

In this new era, the physical world is not only content; it is also the interface and the distribution channel. This is possible because we are on the precipice of shifting the OS layer from the mobile phone to the camera. Put simply, the camera will bring the internet and the real world into a single time and space. Brand new worlds will enter our field of view, modular and stackable like those grey NES game cartridges of yore.

First introduced to capture content, the smartphone camera increasingly enhances the experience of the world around us. Pokémon Go, Snapchat, Facebook Live and Instagram Stories have introduced behaviours which allow the camera to serve new purposes. By giving us tools to augment our selfies, Snapchat has taught us the camera can be interactive. The next step is making the world our canvas.

Focus on sectors

Industries from entertainment and retail to broadcast and travel will be transformed. What if we could make use of that much-loathed act of audience members recording a live concert on their phones? Coachella is trying to own this behaviour by empowering artists to see it as a new channel for storytelling, extending the performance into and through the camera lens.

For the subscription brand FabFitFun, the camera creates a space for content and commerce to live together. Transactions will soon be processed via the lens. The editorial brand voice, the metadata, the ‘buy’ button – all will be visible as we look through the camera at our physical subscription box and the products it contains.

Many media companies are also considering how to extend TV broadcasts by meeting viewers wherever they are. What if our relationships with our favourite broadcast content could extend through our cameras into our living rooms, or be tagged with our friends’ live-tweeted comments, without battling for attention as a second-screen experience?

The camera is no longer a passive tool but the new start menu. It is the next great consumption experience and the biggest technology opportunity in a decade. The internet, long confined to the four-by-two smartphone screen, is now a blank canvas as wide and broad as your entire field of view.

The way forward

The first driving factor in this transition is scale. For a decade, most mobile phones have come with a high-resolution camera as standard, densely populating the planet with lenses. In 2000, there was a desktop computer every 27.79 square miles; by 2010, there were 16.5 smartphones in the same area, and that increased again to 59.07 in 2017. Density varies between urban and rural areas, but on average there are now 78.5 cameras in any given square mile. This higher figure is because cameras are everywhere, not just in smartphones: they’re in tablets and laptops, keyless locks, refrigerators and automobiles. In my office, I am surrounded by at least 30 cameras that I know about. There are many more in my home.

This wide distribution makes the camera appealing as a platform, but more capable, intelligent software is the second key enabler. We’ve seen a wave of announcements from Apple, Microsoft and Google heralding ‘extended reality’ capabilities, while Facebook, Google and Snapchat have released their own programmable camera offerings. Everyone is counting on the camera becoming the developer’s playground and the consumer’s next obsession.

In Figure 1, we see hardware getting smaller and cheaper to the point that it’s in everything, while on the y-axis the software layer oscillates between the bundling and unbundling of programs and applications. The personal computer, for example, runs many programs and applications at once across multiple tabs, while early mobile handsets could only handle a single function and, even now, the smartphone can only display one app at a time. This is about to change.

The spatial computing era

With the camera, the fight for pixels falls by the wayside. We are moving towards a computing era in which the camera will run applications just as the mobile phone does. Except that, with the physical world as our canvas, we’ll find ourselves parallel processing in a way never before possible.

From ENIAC (heralded as a “giant brain” when introduced at the University of Pennsylvania in 1946) to Apple’s iPhone, the evolutionary trend of technology has been to fit more and more into smaller and smaller boxes. The screen on your phone, the TV in your house, and everything in between is a virtual representation of the world. In some cases, we have begun to augment those representations, with Pokémon Go opening the eyes of millions to what’s possible creatively.

But we’re just getting started. What comes next is looking through the camera at the entire world.

Rather than building a smaller box, we have started to eliminate the box entirely, and the physical world becomes more than just content: it becomes both the interface and the distribution channel. The combination of the context provided by the data in our phone, the canvas of the real world and our own curiosity will allow us to interact with the world in new ways.

Imagine having OS-level access to the camera, enabling mobile apps to be replaced by ‘lenses’ – camera apps – that provide an interaction fabric on top. Imagine that the camera is a clearing house for an increasing number of sensors, both in the phone and in any IoT-enabled objects out there.

The world around us then becomes programmable and accessible to a digital information flow, ushering in a new level of always-on, multi-dimensional computing. Rather than today’s push-pull paradigm, in which we must ask for the data we receive, we are entering a persistent, always-available relationship with digital information.
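No such OS layer exists yet, but as a purely hypothetical sketch, a ‘lens’ in this model might register itself with the camera the way an app registers with a phone today: declaring which sensor streams it listens to and which real-world triggers should wake it, then rendering its overlay whenever that context appears. Every name and interface below is invented for illustration.

```typescript
// Hypothetical sketch only – no such camera OS exists today.
// A "lens" declares the context it cares about and draws an overlay
// on the live camera view whenever that context appears.

type SensorStream = "gps" | "accelerometer" | "depth" | "iot";

interface ContextEvent {
  trigger: string;                       // e.g. "storefront:recognised"
  location: { lat: number; lon: number };
  payload: Record<string, string>;       // whatever the recogniser extracted
}

interface Lens {
  id: string;
  sensors: SensorStream[];                // streams this lens subscribes to
  matches(event: ContextEvent): boolean;  // should this lens wake up?
  render(event: ContextEvent): string;    // overlay to draw in the viewfinder
}

// A minimal "camera OS": every registered lens stays always on, and each
// context event fans out to whichever lenses match it.
class CameraOS {
  private lenses: Lens[] = [];

  register(lens: Lens): void {
    this.lenses.push(lens);
  }

  // Called continuously as the camera and sensors observe the world.
  onContext(event: ContextEvent): string[] {
    return this.lenses
      .filter((lens) => lens.matches(event))
      .map((lens) => lens.render(event));
  }
}

// Example lens: surfaces offers when the camera recognises a storefront.
const storeLens: Lens = {
  id: "favourite-store",
  sensors: ["gps"],
  matches: (e) => e.trigger === "storefront:recognised",
  render: (e) => `Latest offers from ${e.payload.storeName}`,
};

const os = new CameraOS();
os.register(storeLens);

console.log(
  os.onContext({
    trigger: "storefront:recognised",
    location: { lat: 51.5072, lon: -0.1276 },
    payload: { storeName: "Example & Co" },
  })
);
// -> [ "Latest offers from Example & Co" ]
```

The point of the sketch is the inversion: applications no longer wait to be opened, because the world’s context decides which of them speak.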

For example, you’re in a new city looking for directions. The Maps app uses your GPS data and accelerometer to determine your location and lay the correct path at your feet. You come across a branch of your favourite store, which shows you details of its latest offers. In the same view, the National Geographic ‘lens’ provides contextual information about the historical building you’re passing. And all this happens without you having to request it, thanks to the persistent, always-on paradigm of spatial computing.
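To make the persistent paradigm concrete, the same walk could be written out as a sequence of recognitions, each waking a different lens. This is a toy sketch, and every trigger name and lens here is invented.

```typescript
// Purely illustrative: the walk through a new city, expressed as context
// triggers that wake different lenses without any explicit request.

type Trigger = "route:requested" | "storefront:recognised" | "landmark:recognised";

const lenses: Record<Trigger, (detail: string) => string> = {
  "route:requested":       (d) => `Maps lens: path to ${d} laid at your feet`,
  "storefront:recognised": (d) => `Store lens: latest offers from ${d}`,
  "landmark:recognised":   (d) => `National Geographic lens: the story of ${d}`,
};

// The camera keeps observing; each recognition fires the matching lens.
const walk: Array<[Trigger, string]> = [
  ["route:requested", "your hotel"],
  ["storefront:recognised", "your favourite store"],
  ["landmark:recognised", "the old town hall"],
];

for (const [trigger, detail] of walk) {
  console.log(lenses[trigger](detail));
}
```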

Eventually, imagination becomes the only rational limit. We are about to make the whole world a programmable playground, with the camera as our gateway and guide. In this world, looking through the camera will eclipse the experience of using your computer browser or your smartphone screen. And when lightweight glasses come of age, what took its first faltering steps as Google Glass will come to fruition and we’ll come to accept and even expect lenses in our field of view all of the time. Welcome to spatial computing. The smartphone’s future is all about the camera.

Allison Wood is founder and CEO of Camera IQ