Brad Bird's original pitch for The Iron Giant, his masterpiece of a movie, was: "What if a gun had a soul and didn't want to be a gun?"
It is such a wonderfully poignant idea that as soon as I first read it, I felt it crawl into my brain and set up camp. It is now toasting marshmallows over a fire while the moon illuminates a range of mountains reflecting off the lake in the distance.
I also find Bird’s premise somehow relevant when thinking about whether AI (especially in the context of creativity) is good or bad. Maybe it’s neither. Or perhaps it’s both. Regardless, what’s clear is that, unlike with the Iron Giant, the more “human” an AI behaves, the more threatened by it we seem to be. So the answer to whether AI is good or bad might be, “It depends: is it a clever tool, or is it creative in its own right?”
When I think of AI as a tool, albeit a clever one, it is largely beneficial. But when I view AI as a substitute creative, as having authorial intent in some capacity, it becomes undeniably more threatening in my mind. It’s no longer a tool to help me; it’s here to replace me.
Of course, that’s a sweeping generalization: AI isn’t a binary choice between good and bad. There are lots of variables at play between the two. For example, suppose it turns out that an amazingly useful new creative AI tool has been trained on the stolen work of other creatives. In that case, I have to make a value judgment: do the benefits of using it outweigh the ethical cost? The more that AI is like a digital version of a food mixer, automating the work I would normally do myself by hand, the more my metaphorical slider moves towards the good side. If I had to make a call on the current state of AI today, I’d probably place the slider somewhere around 60-40, good-to-bad. Maybe 70-30. As I said, it depends.
During yesterday's WWDC keynote, Apple unveiled "Apple Intelligence," their first and long-expected foray into AI. And they took their time; the live stream had been in full effect for well over an hour before Craig Federighi even mentioned it. But even though it might not have been explicit, AI was lurking implicitly in the wings from the get-go.
When Apple Intelligence was finally revealed, it was done in the most Apple-y way possible: through the lens of privacy. Apple Intelligence is “aware of your personal data without collecting your personal data,” said Federighi, who labored the point that “powerful intelligence goes hand in hand with powerful privacy.” These are sentiments no doubt intended to reassure billions of users and make them comfortable diving into the deep end of AI, knowing that Apple is providing the life preserver.
And it worked, or at least it certainly did for me. By focusing on understanding personal context, Apple is using Apple Intelligence to make incredibly useful tools designed to help me do many of the things I already do, only better and more intelligently. This is what good AI looks like, I thought, and I moved my metaphorical slider to roughly 90-10 Iron Giant-to-Terminator.
Then came the announcement of Genmoji, and I moved the slider back to around 50-50.
After their recent epic fail of crushing the instruments of creativity in a hydraulic press, it was interesting that Apple seemed to go out of their way to emphasize that Genmoji and Image Wand are focused on creating images for personal use. The focus is on making "totally original images to make everyday conversations more enjoyable" in iMessage or creating artwork to "accompany a children's story." Image Playground—which will be available system-wide across Notes, Freeform, Keynote, Pages, etc.—is more akin to home cooking than opening your own restaurant.
The presentation seemed framed as very much AI art for amateurs, with the apparent intent not to rile up the creator community, which is doubtless still a little sore after the “Crush” video. The images certainly had that "AI look" to them. At the risk of sounding like a total snob, I would reclassify the three image styles of Sketch, Illustration, and Animation as Tacky, Tackier, and Tackiest.
Regardless of the emphasis on using Apple Intelligence to create images for personal use, Apple has been somewhat vague about the sources of training data used for Genmoji and other image-generation tools. I am not suggesting anything nefarious here—according to John Giannandrea during the live event at the Steve Jobs theater, “a large amount of the data was actually created by Apple.” But Giannandrea also said that “we start with the investment we have in web search.” Does that mean Apple got it from Google? We just don’t know.
Perception is the primary deciding factor on the sliding scale of whether AI is fundamentally good or bad.
If Adobe’s ongoing travails are anything to go by, it would be better for Apple to get out in front of this. Apple has been somewhat transparent in their post-keynote Q&A, but Adobe’s recent missteps show how even a suspicion of opacity often leads creatives to draw the worst possible conclusion; the fact that Adobe clarified their terms and conditions does not mean that the negative perceptions many hold towards the company have gone away.
Photoshop is the perfect example. Despite the epic fail of “Skip the photoshoot,” I find the AI tools within Photoshop to be astonishingly useful. As someone who has spent hours extending backgrounds in images “by hand,” being able to do that in seconds using AI is mind-boggling; an Iron Giant 10 out of 10 on the good/bad scale. But the perception that Adobe is using my skills to train AI to ultimately replace me tends to dial up the Terminator negativity a tad.
It’s the same with Genmoji. I may find them tacky, but clearly millions, nay billions, of people will think they’re magic and awesome and have zero misgivings about using them—and taking Apple at their word with regard to the training data, their customers have every right to do so.
And therein lies the heart of the “is AI good or bad” debate: It’s both, and the difference between the two is largely down to who is using the AI and why. To bring this back to another food analogy: I love to use AI like a food mixer, but I get incredibly nervous when other people use it to replace the chef.