I originally intended to use this week's post to outline my thoughts on how exactly we'd disclose our use of AI in the creation of our words and images in an MBH4H ethics statement. However, a few things happened last week that made me realize this is probably not the best time to write about ethics and AI in the same sentence, let alone an entire piece.
Let me explain.
On June 19, OpenAI's CTO Mira Murati sat down for an interview at her alma mater, Dartmouth's Thayer School of Engineering, where she said the following:
"I think it's [AI] going to be a collaborative tool, especially in creative spaces where more people will become more 'creative' [her air quotes].” She went on to say [29:30 in the video], "Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place, you know, if the content that comes out of it is not very high quality."
As someone well used to tech leaders in Silicon Valley pontificating about "making the world a better place" regardless of the potential human cost, I still found Murati's level of hubris and lack of compassion astounding. For her to imply that creatives who lose their livelihoods are somehow at fault for not being good enough made me want to go home and slam all the doors. As Joss Fong wrote on Threads, "There's an audacity crisis happening in California and it needs to be addressed."
Of course, OpenAI executives talking about creative job loss is nothing new. As AdAge pointed out in its recap of all things AI at this year's Cannes Lions International Festival of Creativity, a comment Sam Altman made back in March was the topic of much conversation: he stated that "95% of what marketers use agencies, strategists and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI."
And it isn't just OpenAI exacerbating this burgeoning audacity crisis. Wired rammed the point home further last week with some excellent investigative journalism by Dhruv Mehrotra and Tim Marchman, who revealed that Perplexity was effectively gaslighting digital media. Forbes had accused Perplexity of ripping off a recent scoop, and when asked for comment, Aravind Srinivas, Perplexity's CEO, claimed to the AP that "We are actually more of an aggregator of information and providing it to the people with the right attribution." It subsequently turned out that statement was not entirely true.
Mehrotra and Marchman called bullshit. Literally. They wrote that not only was Perplexity scraping content (including from Condé Nast publications such as Wired), but the company was doing so using a "hidden" server and ignoring the Robots Exclusion Protocol (aka robots.txt), the voluntary standard that tells web crawlers which parts of a site they are not allowed to capture, and one that sites like MBH4H here on Substack rely on.
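(If you haven't come across it before, robots.txt is nothing exotic: it's a plain-text file sitting at the root of a website. As a generic illustration, not any particular site's actual file, a minimal version that asks every crawler to leave the whole site alone looks like this:)

```
# Applies to every crawler that chooses to honor the standard
User-agent: *
# Ask them not to fetch anything on the site
Disallow: /
```

The catch is right there in the word "voluntary": nothing technically stops a crawler from ignoring the file, which is exactly what Wired says Perplexity did.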
The *chef's kiss* kicker was Marchman's follow-up piece, headlined "Perplexity Plagiarized Our Story About How Perplexity Is a Bullshit Machine." Perplexity plagiarizing the very piece that accused Perplexity of plagiarism is not only audacious, it's unconscionable.
I tend to be more of an optimist when it comes to new technologies, but I find the move-fast-and-take-things attitude demonstrated by these two billion-dollar AI companies to be utterly reprehensible; it's shitty behavior, pure and simple.
It's also more than a little demoralizing. I'm beginning to see generative AI as more of an existential threat to the creative industry than a benefit. I now wonder whether, as with my initial reaction to Apple's iPad "Crush" ad, my earlier enthusiastic optimism was blinded by my age. Until recently, I didn't see generative AI as that big of a deal because, to be blunt, the biggest threat to my career in the creative industry is being 61 years old.
But after reading the many thoughts and concerns other creatives have shared over these past few months (most of them much younger than I am), and, last week, listening to some of the most powerful and wealthy people in AI blatantly tell us how little they value our work, I think the people in our industry who say it's impossible to see generative AI as anything other than an explicit threat to our livelihoods may have a point.
That said, I still don't believe that all AI is an equal threat. It is essential to distinguish between using AI as a tool (removing or extending backgrounds in Photoshop, using Grammarly to copyedit text, or even brainstorming an article with ChatGPT or Gemini) and using AI as a robot: entering a prompt in Midjourney and using whatever it comes up with as final art.
I would argue that art made by a human using an AI tool as part of their creative process is not AI-generated art. Regardless, we should have clear ethical statements that disclose when and how we use these tools. Just because the AI industry behaves unethically doesn't mean we should.
Last week, I announced that Ross and I had made the decision to write an ethics statement for MBH4H, specifically with this use of AI in mind. We’re keen to create an outline with a clear rationale for how and when we will disclose the use of AI in our creative works. We’re also eager to contribute to a positive discussion over ethical standards for using AI within the creative community.
It would be helpful to widen the conversation among artists, writers, photographers, and others within the creative industry and discuss ways to take a more active and direct approach to mitigating the impact of this technology, which, as I have said before, cannot be wished away.
Of course, this conversation has already started. I wrote last week about Jingna Zhang, the founder of Cara, who channeled her concerns about generative AI into creating an entirely new platform for artists. And just this week, noted sci-fi author John Scalzi wrote a post stating that, after learning AI-generated artwork had been used for the cover of one of his books published in Italy, he will henceforth have it written into all of his book contracts "that cover art must be created by a human artist."
I applaud Scalzi for taking this stance, and for echoing what I think is the reasonable distinction between using generative AI to create artwork and using AI as a tool. Scalzi said:
For anyone about to chime in about ‘AI’ features in drawing programs, Photoshop, etc, I will note I think there is a distinct creative difference between using these programs as tools to foster human creativity, and using these programs to substitute for human creativity. If you can’t parse the salient difference between those, that’s on you.
With those in the AI industry now saying the quiet part out loud, it's time for more of us in the creative industry to find ways of making AI work for us, not against us.