Generative AI is a Reverse Mechanical Turk;

Generative AI is having a rocky start due to innovators trying to run roughshod over stakeholders... and humanity.

 

I, Own That

Successful technology innovators have a long history of, and a reputation for, proceeding carelessly, with little regard for the concerns of regulators and others. The “move fast and break things” approach sometimes yields seemingly positive outcomes with few repercussions. Done in a laboratory or sandbox environment (with obvious safety concerns addressed), it is an excellent approach: it accelerates iteration and provides a safe place to explore concepts without artificial constraints. When that kind of experimentation is taken out of the sandbox and into the real world, things do not always go smoothly.

Recently, the Recording Industry Association of America (RIAA) filed lawsuits against the AI-based startups Suno and Udio. These businesses sell access to their own AI tools for generating music. The lawsuits claim that the two startups used copyrighted material to train their AI models (almost certainly true) and that this use violates the copyrights of artists and other rights-holders. The counterargument is that this kind of sampling is protected under fair use, but there is no way these models were trained on anything less than entire songs, and ingesting whole works is not defensible as fair use. By effectively incorporating the entirety of copyrighted works into their public products, they are undeniably violating those copyrights and profiting from the original work of others without compensation.

Under U.S. and international copyright law, the transformation of a work must be substantial and must sufficiently bear its author's personality to be original, and therefore to qualify as a “derivative work.” A work that does not meet that bar is likely to be treated as illegal copying. AI algorithms do not have personalities, because they are not people. The output of an AI algorithm bears the personalities of the artists whose original works it was trained on. AI executives are hoping to capitalize on the personalities and work of artists and performers while cutting them out of any profit, erasing their agency, and undermining the value of their contributions and their worth as individuals. That was made abundantly clear when OpenAI blatantly stole Scarlett Johansson’s voice and personality earlier this year. The obvious product of the AI industry, on its current course, is dehumanization.

 

Shinier than yours, meatbag!

Individual expression is the defining characteristic of the human experience. The human need for connection is satisfied only when a person feels seen and heard. We attempt to make ourselves seen, in large part, by expressing ourselves. Acknowledgment and positive encouragement foster quality connections. The quality of relationships in a person’s life is the most reliable predictor of happiness. Individual expression is also the fuel of creative markets. It not only supplies value in the eyes of the consumer, it also motivates the creator, so long as the creator gets their due. Innovation and productivity are positively correlated with the harnessing of those human motivations. If AI technology can displace humans from these pursuits by matching or exceeding our abilities, we should ask ourselves whether that is a desirable outcome. We might agree that automation and artificial intelligence are welcome to displace us from certain pursuits, repetitive physical labor chief among them. But supplanting the artistic and creative energies of individual expression would have much more far-reaching consequences.

Artists deserve to be paid for their creative output. Paying artists is not only fair; it enables the creation of more and higher-quality art. Copyright holders stifle creativity when they pursue claims too aggressively, but a legal system that upholds copyright creates an immense incentive that is essential to the health of the artistic community and its output. Art is not the only potential victim of generative AI. Scientific and technological advancements, software, branding, and personal identities are all at risk. Downgrading intellectual property (IP) rights is especially dangerous to Western nations with white-collar economies. The Western-led international order has been a rigorous defender of IP rights around the globe; even Communist China is forced to pretend it respects and defends those rights if it wants equal access to international markets. If legal protections are severed from intellectual property the moment it is transformed through an AI algorithm, then AI models become intellectual-property-laundering schemes. Intellectual property would be rendered virtually worthless; returns on R&D investment would likely crater, taking scientific advancement and creative output with them.

 

In an insane world, it was the sanest choice.

How do we respect the rights of artists and other rights-holders without squelching the amazing potential that AI has to enhance our experiences in meaningful ways? I talked about Suno recently in the article ‘Running On A Tick();’, where it was used to create a fantastic piece of music that contributed enormously to the enjoyment of a very trivial bit of private conversation. The point I made there was that the music (and artwork) would never have existed without AI, because the usual methods of production require immense investments of time and, usually, money. By lowering the barrier to this kind of production, we gain the opportunity to enhance and tailor experiences to any situation or individual, regardless of import or profit. When I imagine the possibilities that would otherwise never materialize, that is when I become truly convinced of the value of generative AI. Any solution must ensure wide access to generative AI that enhances individual expression and the human experience without supplanting it.

Striking the right balance requires integrating the generative AI industry into existing systems for royalties and licensing. That is only possible if there is transparency about training data. The AI algorithm’s IP holder also deserves to get paid, so AI firms will have to work with publishers and rights-holders to agree on a remuneration scheme. It is already technically feasible to segment the training data, but AI firms like to pretend that attribution is impossible, as if AI can solve cancer but the problem of attributing sources is insurmountable. Bullshit. If tech firms believe they can bully the markets into accepting their antisocial behavior, they will not bother to solve the attribution problem. Future legislation regulating the commercial use of data-trained algorithms should require public transparency of data-source attributions and private, certified validation of the algorithm’s output against those sources. Anything less will invite a dramatic erosion of intellectual property rights.
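
To make that concrete, here is a minimal sketch, in Python, of what a public attribution ledger paired with private output validation could look like. The names (SourceRecord, AttributionLedger, validate_output) and the flat royalty split are my own illustrative assumptions, not any firm’s real system; a production scheme would need influence-weighted attribution negotiated with publishers and rights-holders.

# A minimal, hypothetical sketch of the attribution idea; not a real product API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SourceRecord:
    """One licensed work in the training corpus."""
    source_id: str        # e.g. an ISRC or publisher catalog number
    rights_holder: str
    license_id: str       # the negotiated license covering AI training

@dataclass
class AttributionLedger:
    """Public registry of every source a model was trained on."""
    sources: dict = field(default_factory=dict)

    def register(self, record: SourceRecord) -> None:
        self.sources[record.source_id] = record

    def validate_output(self, attributed_ids: list) -> dict:
        """Certified check: every source credited for a generated output
        must map to a registered, licensed work; shares drive royalties."""
        unlicensed = [s for s in attributed_ids if s not in self.sources]
        if unlicensed:
            raise ValueError(f"Output attributes unlicensed sources: {unlicensed}")
        shares = {}
        for s in attributed_ids:
            holder = self.sources[s].rights_holder
            shares[holder] = shares.get(holder, 0.0) + 1.0 / len(attributed_ids)
        return shares

# Usage: the model (not shown here) reports which training sources most
# influenced a generation; the ledger turns that report into royalty shares.
ledger = AttributionLedger()
ledger.register(SourceRecord("ISRC-0001", "Artist A / Label X", "LIC-2024-17"))
ledger.register(SourceRecord("ISRC-0002", "Artist B / Publisher Y", "LIC-2024-22"))
print(ledger.validate_output(["ISRC-0001", "ISRC-0002"]))
# -> {'Artist A / Label X': 0.5, 'Artist B / Publisher Y': 0.5}

The point of the sketch is only that the bookkeeping side is mundane; the hard part is the model-side attribution, which is exactly the part the AI firms claim is impossible.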

Unless we want all future IP to belong to algorithm designers or current IP owners, the output of algorithms must belong to the end-user under certain circumstances. Digital synthesizers and instruments are undoubtedly constructed from elements protected by intellectual property rights. However, the end-user can create with them freely without being encumbered by obscure legal concerns—because the product designers negotiated those licensing issues with IP owners before going into production. The output of an AI algorithm must work the same way. When the end-user supplies a substantial, original prompt and/or seed media, and the algorithm’s output is a substantial transformation that sufficiently bears the end-user's intent and personality, it is a derivative work that must be attributed to the end-user. Failing to protect derivative works will stifle innovation.

 

BathWater->ParentContainer()->DeleteAll();

It is easy to dismiss community and legal concerns as “artificial constraints” and to scoff at executives, judges, and regulators for their intransigence or ineptitude. However, those institutions exist for good, human reasons and form part of a framework that nurtures freedom, individuality, and prosperity. With the addition of laws requiring commercial algorithm products to be transparent about their data sources, our current legal frameworks are perfectly capable of dealing with AI advancements and innovations. Uncharted social and market upheavals are not required for this technology to prosper. As far as “move fast and break things” goes… go break your own things.

 
Geordi

For those about to rock, we salute you.
