
When Creators Rise Against Generative AI
Unless you’ve been entirely disconnected from both technology and the news cycle, you know that Artificial Intelligence (AI) has stirred up a fervent debate, especially over the last year. Many of us were astounded when a short line of directive text returned impressive compositions ranging from code to poetry, to art, or even videography, but the realization that there was – and is – a catch quickly set in. The outputs generated by AI, which wow audiences with responses to requests as oddly specific as “create a 3D render of a panda balancing a beach umbrella on its head while on vacation in Mallorca”, are not entirely original. Instead, they are assembled from millions of reference pieces that were often lifted and used to train AI models without permission from their authors. Many of us have seen replicas of the Mona Lisa floating around, or have toyed with the likes of an Andy Warhol “pop art” portrait generator, but something hit differently when AI began scraping artwork from all corners of the digital world and piecing it together into new works that were essentially credit-less. With the looming threat of AI suffocating creators out of work, the idea of permitted plagiarism on account of this mass-scale “borrowing” of art becomes even more questionable.
During the New Year’s weekend, information leaked detailing a database of over 16,000 artists whose works were used to train Midjourney, the wildly popular AI model used for visual creation. On it were household names like Frida Kahlo and Norman Rockwell, but also thousands of present-day artists who did not consent to their work being made available for free use and replication in this way. From digital artists to game designers, illustrators, hobbyists, and even a 6-year-old child, creators’ pieces were exploited while their protections against AI proved to be nonexistent. The art community was furious. They still are.
Midjourney developers caught discussing laundering, and creating a database of Artists (who have been dehumanized to styles) to train Midjourney off of. This has been submitted into evidence for the lawsuit. Prompt engineers, your “skills” are not yours https://t.co/wAhsNjt5Kz pic.twitter.com/EBvySMQC0P
— Jon Lam #CreateDontScrape (@JonLamArt) December 31, 2023
What is important to note is that we are moving past the point where complaints against AI are met with shrugs. Creator communities are moving the needle. Their voices are being heard, and across the landscape, action is beginning to take place. Academic institutions like The University of Chicago (which has been working on Glaze, a system designed to protect human artists by disrupting style mimicry) are putting brainpower toward tools that can help artists protect their work; legal professionals are stepping up to say that what’s going on is systemically wrong and needs to be addressed; and global governments are moving toward policies and initiatives that demand informed, responsible AI use, complete with guidelines and expectations. The wheels are turning. There is certainly a lot of room left for growth, and the wheels must turn faster, but such are the challenges of positive change.
“It’s clear that entire works are being used wholesale without permission. This tells me that copyrighted materials were used to train the models, and there are not any sufficient guardrails in place to prevent output of infringing content.”
Scott Sholder, an intellectual property lawyer representing The Authors Guild in a suit against OpenAI
🧵 The historic NYT v. @OpenAI lawsuit filed this morning, as broken down by me, an IP and AI lawyer, general counsel, and longtime tech person and enthusiast.
— Cecilia Ziniti (@CeciliaZin) December 27, 2023
Tl;dr – It's the best case yet alleging that generative AI is copyright infringement. Thread. 👇 pic.twitter.com/Zqbv3ekLWt
Further demonstrating the power of community response is that it’s no longer just AI giants like OpenAI or Midjourney coming under fire for what is openly being considered plagiarism, or copyright infringement. Non-AI companies are being held to new standards as well. With the risk that Generative AI poses to the livelihoods of creators (“will AI take our jobs” is still one of the largest questions on many minds), a new sense of vigilance has arisen. When companies like Wizards of the Coast (responsible for Magic: The Gathering) or Wacom (the digital art equipment manufacturer) are caught using AI, rather than responsibly commissioning creators, to produce materials such as those used in advertising, creator communities take notice and demand – not ask – that they do better. Considering how quickly corporate apology statements and promised policy changes follow these missteps, we don’t doubt that the level of criticism from creators and consumers everywhere is even more overwhelming than what is visible on social media. Even Wizards of the Coast gave a nod to this, beginning their due apology for AI use with credit to the community, stating that “as you, our diligent community, pointed out, it looks like some AI components […] crept into our marketing creative.” It is also relevant to note that this isn’t Wizards of the Coast’s first brush with criticism around AI use; AI art was previously found in a “Dungeons & Dragons” book the group was producing. That, too, was discovered and escalated by the community.
Creators have taken note that silence does nothing but allow companies to continue pushing the envelope, and they are taking control of the conversation. Where unions like the Writers Guild of America (WGA) or SAG-AFTRA were famously able to negotiate new rights agreements for their members – protecting them, their professional careers, and their income from the unfair competition that AI poses – individual creators don’t usually have these same luxuries. Banding together creates a perhaps unofficial, yet still effective, union in its own right.
In the past few days, Wizards of the Coast was caught using AI on ad campaign pieces after saying they wouldn't. Wacom got caught as well and deleted, which is crazy considering their products, and looks like Apex Legends too. Jobs are going in real time, makes me nauseous. pic.twitter.com/EGBA1INMPZ
— Reid Southen (@Rahll) January 7, 2024
There is no doubt that we (humans, that is) also learn from existing materials. The entire premise of education is based on using previous knowledge or creations to instruct others until, and even after, they are able to begin creating independently. Whether we are talking about generalized or specialized education, we study what already exists in order to produce something new. Writers read to write better, artists draw inspiration from existing art and learn techniques that have been refined for ages, engineers follow formulas and regulatory codes that have been pre-established, and so on. Yet we manage to produce something new in a way that differs from AI’s skillset. In fact, most experts are determined to create work that is new and different, and that rivals or challenges what already exists, if not improving upon it.
While humans absorb existing works as a source of inspiration and education, applying a deep personal and emotional understanding to what they learn, AI operates differently. AI systems analyze vast amounts of data, learning patterns and styles, but without the contextual understanding or the personal touch humans inherently bring. This process, often devoid of the nuances of cultural, ethical, and emotional undertones, raises concerns. It should now go without saying that when AI uses works without explicit permission, it can inadvertently undermine the original creator’s rights and intentions. Unlike a human learning from an example and adding a personal, transformative touch, AI may replicate styles or elements too closely, risking infringement on the original creator’s intellectual property.
“Outside of the legal intricacies, there's the common sense argument: what justification can there be for training a model that competes with human creators on those creators' work, without any permission or payment?”
— neil turkewitz (@neilturkewitz) January 10, 2024
What justification indeed?
🙏🏽 @ednewtonrex https://t.co/4CP8sGQzy2
Protecting work and intellectual property in the age of AI is vital. It’s not just about safeguarding the financial interests of creators, but also about preserving the integrity and uniqueness of human creativity. When creators assert their rights, it’s not a matter of recognition and income alone (though these are massive factors that shouldn’t be understated); they are advocating for the value of the human touch in art and innovation, and for the credit and respect due in exchange for the arduous work behind pieces that were ultimately ripped in seconds and thrown into a pile of training data. This resistance is not a stand against the use of AI in creativity, but a call for responsible and respectful use that recognizes and honors the hard work and originality of human creators. It is our collective responsibility to contribute to a future where AI aids creativity without diminishing the irreplaceable value of human ingenuity and emotion in artistic and intellectual works. We stand with the creators raising their voices, along with important questions on responsible AI use, and with the work taking place to make sure that the future ahead of us all is one we can be proud of.