You’ve probably heard that automation is becoming commonplace in more fields of human endeavor. Or, in headline-speak: “Are Robots Coming for Your Job?”
You may also have heard that the last bastions of human exclusivity will probably be creativity and artistic judgment. Robots will be washing our windows long before they start creating masterpieces. Right?
Not necessarily. In reporting a story for CBS Sunday Morning, for example, I recently visited Rutgers University’s Art and Artificial Intelligence Laboratory, where Ahmed Elgammal’s team has created artificial-intelligence software that generates beautiful, original paintings.
Software is doing well at composing music, too. At Amper Music (www.ampermusic.com), you can specify what kind of music you want based on mood, instrumentation, tempo and duration. You click “Render,” and boom! There’s your original piece, not only composed but also “performed” and “mixed” by AI software.
Amper’s software doesn’t write melodies. It does, however, produce impressive background tracks—that is, mood music. This company is going after stock-music houses, companies that sell ready-to-download music for reality TV shows, Web videos, student movies, and so on.
I found these examples of robotically generated art and music to be polished and appealing. But something kept nagging at me: What happens in a world where effort and scarcity are no longer part of the definition of art?
A mass-produced print of the Mona Lisa is worth less than the actual Leonardo painting. Why? Scarcity—there’s only one of the original. But Amper churns out another professional-quality original piece of music every time you click “Render.” Elgammal’s AI painter can spew out another 1,000 original works of art with every tap of the enter key. It puts us in a weird hybrid world where works of art are unique—every painting is different—but require almost zero human effort to produce. Should anyone pay for these things? And if an artist puts AI masterpieces up for sale, what should the price be?
That’s not just a thought experiment, either. Soon the question “What’s the value of AI artwork and music?” will start to affect flesh-and-blood consumers. In fact, it already has.
Last year the music-streaming service Spotify lured AI researcher François Pachet away from Sony, where he’d been working on AI software that writes music.
Earlier, reporters at the online trade publication Music Business Worldwide discovered something fishy about many of Spotify’s playlists: according to the report, songs within them appeared to be credited to nonexistent composers and bands. These playlists have names like Peaceful Piano and Ambient Chill and feature exactly the kind of atmospheric, melody-free music that AI software is good at producing.
Is Spotify using software to compose music to avoid paying royalties to human musicians? The New York Times reported that the tracks with pseudonyms have been played 500 million times, which would ordinarily have cost Spotify $3 million in payments.
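For the arithmetically curious, the Times figures imply a per-stream rate, which you can check with a quick back-of-the-envelope calculation (the two input numbers come from the report above; the per-stream figure is simply what they imply, not an official Spotify rate):

```python
# Implied per-stream royalty from the reported figures:
# 500 million plays that would ordinarily have cost about $3 million.
plays = 500_000_000
payments_usd = 3_000_000

per_stream_usd = payments_usd / plays
print(f"Implied royalty per stream: ${per_stream_usd:.4f}")  # about $0.006
```

Fractions of a cent per play, in other words, which is why the totals only matter at the scale of hundreds of millions of streams.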
But Spotify says Pachet was hired to create tools for human composers. And it has flatly denied that the tracks in question were created by “fake” artists to avoid royalties: though posted under pseudonyms, they were written by actual people receiving actual money for work that they own. (It’s still possible Spotify is paying lower royalties to these mysterious music producers.) But the broader issue remains. Why couldn’t Spotify, or any music service, start using AI to generate royalty-free music to save itself money? Automation is already on track to displace millions of human taxi drivers, truck drivers and fast-food workers. Why should artists and musicians be exempt from the same economics?
Should there be anything in place—a union, a regulation—to stop that from happening? Or will we always value human-produced art and music more than machine-made stuff? Once we’ve answered those questions, we can tackle the really big one: When an AI-composed song wins the Grammy, who gets the trophy?