AI is a good tool and a bad robot
I like my creative products to be productive, boring, and liability-free
In a recent piece on MBH4H, I stated that I am happy to use Generative AI, the tool, to help me create something; I am not happy for AI, the robot, to do the work for me. My reasoning? Generative AI is a bad robot.
Generative AI is not only a bad robot because of the work it delivers—which, from a creative perspective at least, tends to be a bit meh, IMHO—but because its use is increasingly opening up legal and intellectual property liabilities. Because Generative AI doesn’t so much create as it does copy.
Last week, in a Q&A session with investors, Nintendo President Shuntaro Furukawa made it clear that the company intends to avoid using Generative AI when making its own games because of potential IP issues. As reported by TweakTown, Furukawa said:
“In the game industry, AI-like technology has long been used to control enemy character movements, so game development and AI technology have always been closely related.
“Generative AI, which has been a hot topic in recent years, can be more creative, but we also recognize that it has issues with intellectual property rights.
“We have decades of know-how in creating optimal gaming experiences for our customers, and while we remain flexible in responding to technological developments, we hope to continue to deliver value that is unique to us and cannot be achieved through technology alone.”
Nintendo famously guards its IP with a rod of iron and is widely regarded as one of the most litigious companies in gaming and entertainment. So, it’s hardly surprising that it is hesitant to use any technology that could weaken its defense.
But I also consider Nintendo to be one of the most creative companies on the planet. As Furukawa says in his statement, Nintendo has decades of experience creating and building some of the most popular and respected video games ever made. Why on Earth would it use Generative AI technology that may cause unintended consequences—including losing control of its IP—when it’s doing just fine without it? There’s clearly not enough upside.
Nintendo’s decision to avoid using Generative AI is wise not only from a creative and legal perspective but also from an ethical brand values standpoint.
2024 may well go down as the games industry’s annus horribilis, with over 10,000 jobs lost and a deep, pervasive unease about the impact AI will have on the jobs that remain. It gives plenty of people pause that Microsoft (no surprise there) and EA Games have taken a diametrically opposed approach to Nintendo’s. Whereas Nintendo sees AI as a tool, Microsoft and EA seem to be well and truly on team robot.
For a specific example of the dangers that AI presents to a brand’s image in the creative sphere, look no further than Figma. Last Tuesday, Figma continued the recent trend of companies moving fast and taking things when it released, and then quickly pulled, its new Generative AI-powered Make Designs feature because the tool appeared to be flagrantly copying the design of Apple’s iOS Weather app. As Jay Peters at The Verge reported, “Figma CEO Dylan Field posted a thread on [Twitter] early Tuesday morning detailing the removal, putting the blame on himself for pushing the team to meet a deadline, and defending the company’s approach to developing its AI tools.”
In his post on Twitter (still not calling it by any other name, still not sorry), Field was at pains to stress that “the Make Design feature is not trained on Figma content, community files or app designs. In other words, the accusations around data training in this tweet are false.”
I’m not surprised he called this out so directly. Figma is learning the hard way that the creatives and creators who make up the bulk of its user base are deeply concerned that their work is being scraped without their knowledge or (express) permission, and without compensation—the fact that we’re paying these companies for their services only adds insult to potential injury.
In his reporting for The Verge, Peters spoke directly with Figma CTO Kris Rasmussen. Peters: “I asked him point blank if Make Designs was trained on Apple’s app designs. His response? He couldn’t say for sure. Figma was not responsible for training the AI models it used at all.” Rasmussen confirmed to Peters that the AI models used by Make Designs are OpenAI’s GPT-4o and Amazon’s Titan Image Generator G1.
Even though it appears the blame for this particular snafu should be laid at the door of OpenAI and Amazon, it doesn’t really matter; the mere perception that creatives’ designs are being used for training by Figma is damaging enough, regardless of whether the training was done by another AI company or the scraped work belonged to Apple. And it certainly doesn’t help that as of August 15th, everyone who uses Figma will have to opt out to prevent the company from using their work as training data.
Adobe looked set to do the same earlier this year, until massive pushback from a user base outraged by language that seemed to give Adobe permission to use their work as training data forced the company to update its terms & conditions in June. As WIRED reported at the time, with Adobe “caught in the crossfire of intellectual property lawsuits, the ambiguous language used to update the terms shed light on a climate of acute skepticism among artists, many of whom overrely on Adobe for their work.”
Adobe updated its terms to expressly say that no user's work stored locally or in the cloud will be used as training data, the only exception being anything submitted to Adobe Stock.
These missteps by Adobe and Figma are symptomatic of a growing trend of companies, especially companies catering to the creative community, being seemingly pressured by their investors and stockholders to incorporate AI into their products, regardless of whether it makes sense or provides any real benefit to users. For example, I wouldn’t be surprised if the “deadline” Figma’s CEO Dylan Field referenced in his tweet was motivated more by financial goals than by a need to serve his customers.
Field wouldn’t be the first CEO to come under such pressure. “Powered by AI” is a marketing phrase far more relevant to the financial sector than to the customer base; it is currently a massive factor in securing funding, which is hardly surprising, as AI is attracting the lion’s share of investment in today’s bullish market. So much so that some financial analysts accuse AI of “flattering” stockholders by promising unrealistic, or at least unproven, ROI.
Personally, I think this financial feeding frenzy is why we’re seeing companies launch bad new AI products and services, apparently without even a modicum of real-world testing, then hurriedly scream, “Turn it off, turn it off!” when the product completely shits the bed and makes everyone mad.
So, to those companies marketing their Generative AI to the creative industry, I would suggest the following: let’s take a deep breath and pause. Just because you can doesn’t mean you should. But if you are going to make an AI robot, ask yourself whether you’re making it for your financial backers or your customers. If it’s the latter, may I suggest a robot designed to take on repetitive, mundane tasks, one that frees us from spending hours on straightforward things and allows us to focus on the creativity part. Just give us the tools we need to work faster and smarter. We can provide the rest.
Oh, and one more thing: stop stealing other people's work. Thank you.