AI can produce pictures, but can it create art for itself?
Some of the earliest surviving cave art is image-making in its purest sense: pigment blown around an outstretched human hand, leaving its stenciled outline on the rock.
Examples like this have been found across the world -- at sites in Indonesia, Argentina and France, among others -- dating back as far as 39,000 years. They're the breathtaking first examples of what we humans, even in the 21st century, still think of as art -- acts of material-shaping, image-making and self-representation.
Creativity is something we closely associate with what it means to be human. But with digital technology now enabling machines to recognize, learn from and respond to humans and the world -- from digital assistants to driverless cars -- an inevitable question follows: Can machines be creative? And will artificial intelligence ever be able to make art?
Since the 1960s, artists have persistently explored how computers might produce art independently of humans. Pioneers like the Hungarian-born Vera Molnár created some of the first code-based drawing programs. In the '80s and '90s, the British artist William Latham applied principles from genetics and evolution in simulators that would "evolve" animated, organic sculptural forms without human direction.
But with the accelerating pace of research into artificial cognition, backed by vast computing power, and science's growing interest in neural networks and deep learning, the idea that computers might truly "create" works of art has gained currency.
After Spotify stirred controversy by promoting obscure or pseudonymous artists who create generic music for playlists such as "Peaceful Piano" or "Ambient Chill," technology writer David Pogue asked, "Why couldn't Spotify, or any music service, start using AI to generate free music to save itself money?" Pogue noted Spotify's telling hire of AI researcher François Pachet, who, at Sony, had been working on AI software that writes music.
Elsewhere, the art market website Artnet reported earlier in the year that a Parisian art collector had bought an image "made by artificial intelligence" -- or rather, created using a computer program by Obvious, a French art collective whose strapline is "creativity is not only for humans." The software "trained" itself using a set of historical paintings for reference, before producing an image that resembles an 18th-century portrait, though it is itself entirely new.
What lies behind this -- and many other -- recent art experiments is the use of "generative adversarial networks" (GANs). GANs are neural networks that learn from examples rather than following rules programmed by humans. Put simply, these systems pit two programs against each other: one, the "discriminator," learns from a database of real examples (in this instance, pictures of old paintings) and judges whether an image it is shown is genuine; the other, the "generator," produces images that try to "fool" the first into accepting them as real. Gradually, the generator develops ever-more convincing (but completely invented) images in the "style" of the database examples.
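To make the adversarial idea concrete, here is a toy sketch in Python. It is my illustration, not any artist's or researcher's actual code: real GANs pit two neural networks against each other over images, while here each "player" is reduced to a single number so the feedback loop stays visible. The "real paintings" are just samples drawn near an assumed mean, and the generator's whole "style" is the mean it samples from.

```python
import random

random.seed(0)

REAL_MEAN = 10.0   # where "real" examples come from (stand-in for the database)
gen_mean = 0.0     # generator parameter: where its fakes come from
boundary = 0.0     # discriminator parameter: calls x "real" if x > boundary

for _ in range(5000):
    real = random.gauss(REAL_MEAN, 1.0)  # a sample from the real database
    fake = random.gauss(gen_mean, 1.0)   # the generator's attempted forgery
    # Discriminator update: move the decision boundary toward the midpoint
    # of the two samples -- the best single threshold separating real from fake.
    boundary += 0.05 * ((real + fake) / 2.0 - boundary)
    # Generator update: nudge output toward the side the discriminator labels
    # "real". At equilibrium half its fakes pass the test, which happens
    # exactly when its mean matches the real data's mean.
    gen_mean += 0.01 if fake <= boundary else -0.01

# After training, the generator's samples are statistically hard to tell
# apart from the real ones: gen_mean has drifted to roughly REAL_MEAN.
print(round(gen_mean, 1))
```

The same two-player dynamic, with image-generating networks in place of single numbers, is what lets a GAN invent new pictures "in the style of" its training set.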
In an important example of GANs' use in art, researchers at New Jersey's Rutgers University last year published results of a study in which their machines tried to create images that humans would think of as plausible enough to be real artworks from history. (The most convincing results were, perhaps unsurprisingly, images that looked like contemporary abstract paintings.)
It could be argued that the ability of machines to learn what things look like, and then make convincing new examples, marks the advent of "creative" AI. After all, if we enjoy a piece of music or artwork, why should it be disturbing to know that it was entirely machine-made? Isn't the pleasure or the interest we derive from it proof enough?
In George Orwell's classic dystopian novel "1984," the masses are entertained with songs made by a machine called a versificator that are "composed without any human intervention whatever." Perhaps not dissimilarly, big data is today used to analyze what viewers and listeners most want to see and hear.
Netflix famously commissioned "House of Cards" based on its own data, which suggested that viewers who liked the original UK series also liked films directed by David Fincher and those starring Kevin Spacey. YouTubers busily churn out videos in direct response to the search patterns of Internet users. And with companies like Microsoft developing programs that can generate realistic images based on users' descriptions of what they want to see, the idea of virtual creators making culture that humans want to consume seems to be edging ever-closer.
Some commentators and futurists have concluded that, eventually, we'll no longer need human artists. But the current excitement around "creative" machines -- those making artworks that could once only be made by humans -- barely gets to grips with the question of how artistic originality really works.
Machines have, it is true, been able to invent new works. Nothing exactly like them exists, so they're "new" in some basic sense. But what machines produce is only really a sophisticated variation on an established corpus of pre-existing art. Moreover, this approach to inventiveness rests on human evaluation and judgment -- we are the ones who confirm whether the work is "good" or "bad." Yes, CAPTCHA, this is an image of a bridge.
And yet, originality in art -- real creativity -- isn't about confirming what has already been done, but about doing something differently and for a reason. What characterizes human originality is intentional difference. Human artists have always had reasons for trying to make something new -- thoughts, criticisms, frustrations, passions, insights, hopes, ideals and all kinds of other motives. Newness, if not merely some random output we just happen to like, is the result of an artist intending to do something differently. And what is critical in that intention are the reasons for it.
Art needs a reason
A century ago, the artist Marcel Duchamp decided it was no longer interesting to make artworks the way they had always been made. Instead, he chose an ordinary object -- a urinal -- and presented it as "Fountain." His reasons for doing so have vexed and inspired critics, artists and the public ever since.
Duchamp's reasons were, in part, to do with abandoning the tired, conventional wisdom that assumed art had to look like a certain kind of object. He decided to think differently about what an artwork could be, though his rationale took another 50 years to be accepted. Even today, people still profoundly disagree about his motives. Originality lies in questioning the reasons for what has become commonplace, and Duchamp was not trying to generate the most "likes" for his work. No machine has thus far chosen not to make art.
What Duchamp's originality outlines is the way human creativity works as part of a culture's conversation with itself about what things are and the way we give them value. An artist presents a work and says, "What do you think of this?" The audience may like it or not like it, but more than this, humans argue over the value of what the artist has made. That's how all art criticism starts out. Liking something isn't enough to make it good. With the development of machine learning, it's as if machines are getting better at asking, "Do you like this?" and yet, machines cannot yet answer why we like what they make -- and have no reason to make it for themselves.
The outline of a hand on a cave wall seems to say, "We are here." Looking at their own handprints on the rock, early humans must have been aware that they had creatively changed their world -- and, by making a mark on it, had become aware of themselves. If machines achieve the same thing, it probably won't be through making images for humans to like. It will be the moment that an algorithm, or an artificial neural network, shows to itself that it exists.
What will the digital equivalent of the hand's outline on the rock look like? Well, like nothing we've ever seen.