Pete Ashton is an artist, mostly working with cameras, based in Birmingham. He is a Birmingham Open Media Fellow and is part of the Goodbye Wittgenstein residency exchange with A3 Projects Space in Birmingham and qujOchÖ in Linz, Austria. His practice embraces photo walks, camera obscuras and rabbits. His current research concerns cameras, images and artificial intelligence.
I recently bought myself a laser, which, as childhood ambitions go, was a rather thrilling experience. Not a laser pointer but an actual scientific instrument for measuring things. It’s a LiDAR module (as in “light radar”) which shoots out beams in a 270-degree arc ten times a second and measures the time it takes for them to bounce back. It converts this into distance and spits a torrent of numbers down a USB cable to my Mac. These numbers can be turned into a graphical representation of what’s in front of the LiDAR, or something else entirely.
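The time-of-flight arithmetic behind this is straightforward: light travels at a known speed, and the beam makes a round trip, so halving the echo time gives the distance. A minimal sketch of that sum (not this particular module’s actual firmware):

```python
# Time-of-flight ranging: the arithmetic a LiDAR performs for every pulse.
# This is a generic illustration, not the module's actual firmware.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_echo(round_trip_seconds):
    """Convert a pulse's round-trip time into a one-way distance in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# An echo arriving after 40 nanoseconds puts the surface about 6 metres away,
# right at the edge of this module's range.
print(distance_from_echo(40e-9))  # ≈ 5.996 metres
```

At these ranges the timings are vanishingly small, which is why a scientific instrument is needed rather than a stopwatch.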
LiDARs mounted on planes measure topographic detail with astonishing accuracy and cost silly money. Mine cost a grand, has a range of about 6 metres and is usually used in robotics for autonomous navigation, yet it’s accurate enough to be used for 3D scanning of rooms and objects. But I bought it to use as a camera.
Using a LiDAR to make art is not a new thing. There was a nice piece by ScanLAB on the big screen at London’s Photographers’ Gallery last year and Radiohead did that video way back in 2008, so it’s been around a bit. But if I were to point this small 10cm cube at you, you probably wouldn’t think you were being photographed. You probably wouldn’t think anything was happening at all.
What it means to make a “photograph” has undergone such a seismic disruption over the last couple of decades that the term is almost meaningless. We can say that a camera is a chamber into which reflected light is allowed to enter under controlled conditions (lens focus, aperture size, shutter time), but after that pretty much everything is up for grabs. How the light is recorded and in what format, how that information (analogue or digital) is processed and how the resulting image is distributed and displayed – all these choices have grown exponentially as computing power and access to technology have expanded our ability to make and consume images. You might even say you don’t need a camera at all.
As such I’m not really sure what a photograph is anymore. Maybe photography is just the initial capture of light in a place and time, supplying the raw material for what we might call “image production” or “visual data manipulation”. Or maybe there are photographs but they don’t exist in isolation. They’re part of the “stream”, juxtaposed thoughtfully, algorithmically or randomly with each other and the surrounding world.
When I think about visual culture it’s this mass of images, and how we might process them, which comes to mind: visual art as data, zeros and ones which can be churned by a computer. This is where the power lies.
Photoshop, which recently celebrated its 25th anniversary, can seem like magic, especially when using newer functions like Content Aware Fill, but behind the skeuomorphic analogies of the interface it’s just maths. Each pixel of a photo has numbers assigned to it representing its colour. By selectively applying mathematics to those numbers, the manner in which the photograph represents reality is changed.
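The “just maths” claim is easy to demonstrate. Treat a tiny greyscale image as a grid of numbers and a brightness adjustment is nothing more than arithmetic applied to each one. A toy illustration of the principle, not Photoshop’s actual code:

```python
# Image editing as arithmetic on pixel values.
# Greyscale toy image: 0 = black, 255 = white. Purely illustrative.
image = [
    [ 10,  50,  90],
    [130, 170, 210],
]

def brighten(pixels, amount):
    """Add a constant to every pixel, clamping to the valid 0-255 range."""
    return [[min(255, p + amount) for p in row] for row in pixels]

print(brighten(image, 60))
# [[70, 110, 150], [190, 230, 255]]
```

Every filter, curve and clone stamp is a fancier version of the same move: numbers in, different numbers out.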
While Photoshop is mostly managed by a human operator, we’re starting to see this editing of reality being automated. My favourite example from last year was Google introducing a new feature where multiple photos of the same scene are merged into images where everyone is smiling with their eyes open, even though that moment never happened. You don’t pose for the photo – Google’s robot poses you.
More recently, neural networks (complex computer algorithms that mimic some basic brain functions), like Tom White’s @smilevector, have been employed to literally turn that frown upside down, creating authentic-looking smiles in grumpy photos. This is achieved by processing thousands of smiles into mathematical expressions which can be applied to a miserable digital photograph. The maths is complicated and requires a lot of computing power, but computing power is always increasing. It’s said the time-to-Snapchat-filter for advanced image manipulation techniques is probably down to six months, so soon we’ll all be able to alter the mood of our precious moments with a tap.
The speed at which this stuff is moving ought to be shocking, but I suspect complacency is more likely. It’s common knowledge that UK cities are among the most surveilled in the world, with cameras on every corner, and we’ve mostly accepted this as a culture. But the implications of ubiquitous surveillance plus massive computing power are so huge as to seem fantastical. Every so often it crops up in the movies, such as the “satellites and gunships” algorithm-drone analogy of Captain America’s Project Insight, or the “hack all the cellphones and find anyone” God’s Eye of Furious 7, but the execution is understandably absurd, so any useful discussion of the ideas behind them is cut short. Even the more “realistic” depictions in the likes of Bourne or Mr Robot suffer from a credibility gap, which is odd given some of the ridiculous things the flickering screen has convinced us are real.
Maybe this is because it’s not very visually interesting. A curious phenomenon of computer vision is that it doesn’t really produce visuals. Those films you may have seen of how self-driving cars “see” the road are really visualisations to help the programmers debug. The computer doesn’t see anything – it just churns the data and moves the car accordingly.
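The gap between data and picture is easy to see with the LiDAR itself: each reading is just an angle and a distance, and turning those into plottable points is an entirely optional extra step. A minimal sketch with hypothetical readings:

```python
import math

# Hypothetical LiDAR readings: (angle in degrees, distance in metres).
# This list of numbers is all the sensor "sees".
readings = [(0, 2.0), (90, 3.0), (180, 1.5)]

def to_cartesian(angle_deg, distance):
    """Convert one polar reading into the x, y point a plot would draw."""
    theta = math.radians(angle_deg)
    return (distance * math.cos(theta), distance * math.sin(theta))

points = [to_cartesian(a, d) for a, d in readings]
# A robot navigates on the raw numbers; only a human needs the picture.
```

The visualisation exists for us, not for the machine.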
For me, this contradiction is at the heart of thinking about modern photography. We think cameras are for capturing images and making visual representations, and we get righteously indignant at perceived abuses of these representations of ourselves (witness John Oliver only managing to get a reaction to the Snowden leaks when he told people the government could see their more intimate home-pics). But the visual output of cameras is increasingly a byproduct. The networked surveillance machine doesn’t care about aesthetics. It just cares about where your image fits in its mathematical representation of reality. And your image is just another data point alongside your browsing history, credit rating, loyalty cards, mobile phone location, fitness data, social media activity, and so on.
When our persons are abstracted to such an extreme degree, particularly when the greatest threat appears to be adverts, is it possible for us to care? Are we calmly drifting into an Orwellian nightmare? Or are the privacy campaigners over-reacting?
As artists our most basic job is to represent the world in a way that encourages people to consider their place in it. As photographers we select the reflected light in a specific time and place and present it as a two-dimensional field of shades to provoke a reaction in the viewer. This selection is the key as it gives agency to the human pressing the shutter and makes the image a subjective representation of reality.
But the profiles generated from our data are presented as an objective truth, and if it’s wrong then it’s completely wrong. There’s no room for nuance in the world of zeroes and ones, where the struggle seems less about finding beauty than about analysing and replicating it. This makes being an artist who works with data an interesting challenge.
Any artist who works with computers is a data artist, and in the last 20 years that has come to include photographers. When processing their images many will hit “Auto” or choose a fancy filter, some will push their RAW files through the prescribed algorithms of Adobe Lightroom, while others write custom code in openFrameworks or Processing to interrogate the camera’s output. But all are working with data and maths.
Photography has always been the best example of the intersection of art and technology. Photographs can be great art and can change our perception of the world. But they are always co-authored to some degree by machines, and this puts photographers in the perfect position to consider and critique our new data-driven reality.
Because the manner in which Facebook and Google and the NSA and GCHQ are capturing, processing and presenting the world to us is not that different to the work of a photographer capturing, processing and presenting an image. We know all too well the power and limitations of mechanical representation through years of struggling to get that “perfect” photo. We know that the so-called objective reality of data holds both truth and fiction, is both pure and flawed, and most importantly is completely open to interpretation.
I feel like photography in the 2010s is at the stage painting was in the late 19th century. Painting was employed to accurately represent visual reality and the rise of photography freed it from this responsibility and allowed the riot of the 20th century to occur. In the last two decades photography, defined as single moments captured and rendered as single objects, has been superseded by something we might call Computational Datavizography, though I pray we think of a better name soon. This frees photography to go crazy but it also gives us the right, maybe the duty, to apply our knowledge and wisdom to critiquing these new capture devices and processes as they attempt to tell us about our world.
And that is why my new camera is a laser.