Getty was right to set the jazz police on AI images, writes Jason Walsh.
Wandering home last night from France’s national library, the Bibliothèque nationale de France – site François-Mitterrand, I felt my phone buzz in my pocket. Having spent the day being dragged from my work to respond to a pablum of electronic inducements and digital chivvying, I decided to keep plodding along the pavement rather than answer the alert.
My mistake. The message, it turns out, was from the photo archive Getty Images, to which I am a contributor. The terms of service had been updated, but in an unusually intriguing fashion: images created by artificial intelligence (AI) had been banned.
AI art is hardly new. Back in my art school days I encountered the work of Aaron, a painting machine invented by University of California at San Diego art professor Harold Cohen in 1973. It was a striking discovery, to say the least, and, unsurprisingly, got a mention in my thesis on the basis of the curious relationship between the symbolic nature of computer languages and image-making. In truth though, thanks to happily fallible human memory, I can neither remember precisely what I said about Aaron, nor can I muster the will to dig around the boot of my car, which functions as an adjunct storage space, to find a copy.
Getty’s objection to AI is worth quoting in full, however:
Effective immediately, Getty Images will cease to accept all submissions created using AI generative models (eg, Stable Diffusion, Dall‑E 2, MidJourney, etc.) and prior submissions utilizing such models will be removed.
There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models.
These changes do not prevent the submission of 3D renders and do not impact the use of digital editing tools (eg, Photoshop, Illustrator, etc.) with respect to modifying and creating imagery.
So, two cheers for Getty Images, then. A fortnight ago, the freelance editor of an e-mail newsletter published by The Atlantic landed himself in a world of notifications when, faced with a budget of zero dollars to illustrate a story, he asked the Dall-E algorithm to push the pixels for him. An innocent error, and an easy one to understand. Equally easy to understand was the enraged response from people who are paid to paint.
Now, illustration is not quite fine art – for precisely the same reason journalism and other forms of what JG Ballard called “invisible literature” are not – but, nevertheless, it is a skilled endeavor. How, in the long term, photographers and illustrators might defend themselves from the onslaught of functional facsimiles of their trade is no clearer to me than how writers, lorry drivers and the rest of us soon to be superannuated scribblers, daubers and drivers might. And yet, one thing is clear: Dall-E is neither an artist nor is it artificial intelligence.
Science fiction author William Gibson wrote a few memorable lines of dialogue on the subject of AI in his 1984 novel Neuromancer:
– Just thinking out loud… How smart’s an AI, Case?
– Depends. Some aren’t much smarter than dogs. Pets.
Let’s be clear about this: artificial intelligence does not exist. Some day it might exist (though, frankly, it would be appropriate for it to invent itself rather than for it to spring from a start-up), and one does not have to be a conservative to say we may come to rue the day it does. For now, however, what is sold to us as artificial intelligence is little more than natural banality. Indeed, one worry about AI art, be it intended to replace painting, music, photography, dance, film or anything else, is the existing propensity of the culture industry, no technology needed, to endlessly reproduce pleasing pastiches of the past rather than bother with the risky business of trying something new.
More broadly, there are already maps for these territories. Walter Benjamin’s 1935 essay, The Work of Art in the Age of Mechanical Reproduction, argues that reproduction diminishes art and yet, paradoxically, frees it from ritual to become a new thing in itself. Indeed, anyone who owns a painting will tell you it takes years to accustom oneself to it, and yet, who among us cannot feel the resonance of a photograph? Of course, this is all moot: art survived the camera, just as it will survive AI. Neither was the objet d’art the ne plus ultra, nor was recorded music lacking the human spark.
I am not quite ready, then, to join the ranks of Douglas Adams’s imaginary philosopher Majikthise, who said: “You just let the machines get on with the adding up and we’ll take care of the eternal verities, thank you very much.” The problem with the fruit of the information technology industry is not that it is encroaching on human genius, or even on human labor. Computers are merely an aspect of those, after all. No, the problem is that much of what it produces is so very boring, predictable and ugly, designed to indulge us rather than challenge us. And that is squarely the fault of humans, not computers.
What might not survive the rise of the robots, then, is taste.