The obsession with protecting access to lyrics is one of the strangest long-running legal battles to me. I will skip tracks on Spotify sometimes specifically because there are no lyrics available. Easy access to lyrics is practically an advertisement for the music. Why do record companies not want lyrics freely available? In most cases, it means they aren't available at all. How is that a good business decision?
They probably fear a domino effect if they let go of this. And so they defend it vehemently to avoid setting a precedent.
Think about compositions, samples, performance rights, and so on. There is a lot more at stake.
One amusing part of lyrics on Spotify, to me, is how they don't seem to track which songs are instrumentals and use that to suppress the message about not knowing the lyrics. An instrumental will pop up and it will say something like "Sorry, we don't have the lyrics to this one yet".
The only thing funnier than that is when they do have the lyrics to a song that probably doesn't need them, like Hocus Pocus by Focus: https://open.spotify.com/track/2uzyiRdvfNI5WxUiItv1y9?si=7a7...
Oh they track that, it's in their API as the "instrumentalness" score: https://developer.spotify.com/documentation/web-api/referenc...
The fact that they don't do anything with that information is unrelated.
I've also seen cases where they list lyrics for a song that doesn't have any (usually an instrumental jazz version of an old standard).
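For what it's worth, querying that score is trivial. A minimal sketch, assuming a valid OAuth access token (placeholder below) and Spotify's standard Web API audio-features endpoint; the track ID is the Focus track linked above:

    import requests

    # Sketch: read Spotify's "instrumentalness" score for one track.
    # ACCESS_TOKEN is a placeholder; the endpoint requires OAuth.
    TRACK_ID = "2uzyiRdvfNI5WxUiItv1y9"  # Hocus Pocus by Focus
    ACCESS_TOKEN = "..."

    resp = requests.get(
        f"https://api.spotify.com/v1/audio-features/{TRACK_ID}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    features = resp.json()

    # Spotify's docs suggest values above ~0.5 indicate instrumental
    # tracks, which could gate the "we don't have the lyrics" banner.
    if features["instrumentalness"] > 0.5:
        print("Likely instrumental; don't apologize for missing lyrics.")
    else:
        print("Likely has vocals; missing lyrics are a real gap.")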
The content industries should have been the ones to invent LLMs, but their head is so stuck in the past and in regressive thinking about how they protect their revenue streams that they're incapable of innovating. Publishing houses should have been the ones to have researchers looking into how to computationally leverage their enormous corpus of data. But instead, they put zero dollars into actual research and development and paid the lawyers instead. And so it leads to attitudes like this.
That's always been the case, eg. how they were latecomers to streaming.
“The content industries.”
Why would people invest in destroying what they love?
There is no destruction.
He meant, the stream of free money from unsuspecting monkeys.
The composition and lyrics are owned separately from the recorded performance.
I'm pretty sure you could even have lyrics with a separate copyright from the composition itself. For example, you can clearly have lyrics without the music and you can have the composition alone in the case that it is performed as an instrumental cover or something.
This is a tough one for the HN crowd. It's like that man not sure which button to push meme.
1) RIAA is evil for enforcing copyrights on lyrics?
2) OpenAI is evil for training on lyrics?
I know nuance takes the fun out of most online discussions, but there's a qualitative difference between a bunch of college kids downloading mp3s on a torrent site and a $500 billion company whose goal, among other things, is to become the primary access point to all things digital.
Should young adults be allowed to violate copyright, but no one else? The damages caused seem far worse than an LLM being able to reproduce song lyrics.
Is it simply "we like college kids" and "we hate OpenAI" that dictates this?
Why not both? As the GP mentioned, lyrics are also invaluable for people besides training for AI.
I think you mean the RIAA
RAII is a different kind of (necessary) evil
Indeed, too much C++. Edited.
Very true. Just the other day, another “copyright is bad” post on the front page. Today it's "copyright is good" because otherwise people might get some use out of the material in LLMs.
Considering this is Hacker News, it seems to be such an odd dichotomy. Sometimes it feels like anti-hacker news. The halcyon days of 2010 are long gone. Now we apparently need to be angry at all tech.
LLMs are amazing and I wish they could train on anything and everything. LLMs are the smartphone to the fax machines of Google search.
Sounds like it was never about copyright as a principle, only symbolic politics (i.e. copyright benefits megacorps? copyright needs to be weaker! copyright hurts megacorps? copyright needs to be stronger!)
Actually in Germany it's GEMA
It's a good decision because it must be an incredibly small minority of people who only listen to music when the lyrics can be displayed. I'd imagine most people aren't even looking at the music-playing app while listening to music. Regardless, the lyrics are copyrighted, the rights holders get license fees from parties that do license them, and they make money that way. Likely much more money than they would make from the streams they are losing from you.
I think it depends on the music. Most people will have a greatly improved experience when listening to opera if they have access to (translated) lyrics. Even if you know the language of an opera, it can be extremely difficult for a lot of people to understand the lyrics due to all the ornamentation.
I think having the lyrics reproducible in text form isn't the problem. Many sites have been doing that for decades and as far as I know record companies haven't gone after them. But these days with generative AI, they can take lyrics and just make a new song with them, and you can probably see why artists and record companies would want to stop that.
Plus, from TFA,
"GEMA hoped discussions could now take place with OpenAI on how copyright holders can be remunerated."
> I think having the lyrics reproducible in text form isn't the problem. Many sites have been doing that for decades and as far as I know record companies haven't gone after them.
Reproducing lyrics in text form is, in fact, a problem, independent of AI. The music industry has historically been aggressively litigious in going after websites which post unlicensed song lyrics[0]. There are many arcane and bizarre copyright rules around lyrics. E.g., if you've ever watched a TV show with subtitles where there's a musical number but none of the lyrics are subtitled, you might think it was just laziness, but it's more likely the subtitlers didn't have permission to translate and subtitle the lyrics. And many songs on Spotify which you'd assume would have lyrics available just don't, because Spotify doesn't have the rights to publish them.
[0] https://www.billboard.com/music/music-news/nmpa-targets-unli...
Thanks. Maybe that misconception was the problem. Taking a hammering in downvotes, lol
Had a couple of drive-by downvotes... Is it that stupid an opinion? Granted I know nothing about the case except for what's in TFA
It's like saying that movie studios haven't gone after Netflix over movies, so what's the issue with hosting pirated movies on your own site. The reason movie studios don't go after Netflix is that they have a license to show it.
I'm not one of the downvoters, but it may be this: "Many sites have been doing that for decades and as far as I know record companies haven't gone after them."
Record companies have in fact, for decades, been going after sites for showing lyrics. If you play guitar, for example, it's almost impossible to find chords/tabs that include the lyrics because sites get shut down for doing that.
Hmm, alright. I actually do play guitar and used to find chords/tabs with lyrics easily. I haven't been doing that for maybe 10-15 years. Anyway, maybe those sites were paying for a license and I just never considered it.
If anything, AI would scramble the lyrics more than a human "taking lyrics to make a new song from them".
Maybe, but it's also possible to get an AI to produce a song with the exact same lyrics. And a human copying lyrics would also be a copyright issue in any case.
But anyway it seems I misinterpreted the issue and record companies have always been against reproduction of lyrics whether an AI or human is doing it
> Had a couple of drive-by downvotes... Is it that stupid an opinion?
While I do not agree with your take, FWIW I found your comment substantive and constructive.
You seem to be making two points that are both controversial:
The first is that generative AI makes the availability of lyrics more problematic, given new kinds of reuse and transformation it enables. The second is that AI companies owe something (legally or morally) to lyric rights holders, and that it is better to have some mechanism for compensation, even if the details are not ideal.
I personally do not believe that AI training is meaningfully different from traditional data analysis, which has long been accepted and rarely problematized.
While I understand that reproducing original lyrics raises copyright issues, this should only be a concern in terms of reproduction, not analysis. Example: Even if you do no data analysis at all and your random character generator publishes the lyrics of a famous Beatles song (or other forbidden numbers) by sheer coincidence, it would still be a copyright issue.
I also do not believe in selective compensation schemes driven by legal events. If a legitimate mechanism for rights holders cannot be constructed in general, it is poor policy craftsmanship to privilege the music industry specifically.
Doing so relieves the pressure to find a universal solution once powerful stakeholders are satisfied. While this might be seen as setting a useful precedent by small-scale creators, I doubt it will help them.
Likely because you're a "luddite", which, in the current atmosphere of HN and other tech spaces, means you have a problem with a "research institution" (one with a separate for-profit enterprise face that it wears when it feels like it) having free and open access to the collected works of humanity so it can create a plagiarism machine that it can then charge people to access.
I don't respect this opinion but it is unfortunately infesting tech spaces right now.
Simon Willison had an analysis of Claude's system prompt back in May. One of the things that stood out was the effort they put in to avoiding copyright infringement: https://simonwillison.net/2025/May/25/claude-4-system-prompt...
Everyone knows that these LLMs were trained on copyrighted material, and as a next-token prediction model, LLMs are strongly inclined to reproduce text they were trained on.
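A toy sketch of why that is (an invented one-line "lyric" and a bigram counter, nothing like a real transformer): train a next-token model on a single line, and greedy decoding spits the line back verbatim, because every token has exactly one observed successor.

    from collections import defaultdict

    # Toy next-token model: record each token's observed successors.
    training_text = "a tiny model happily memorizes every single training line".split()
    successors = defaultdict(list)
    for cur, nxt in zip(training_text, training_text[1:]):
        successors[cur].append(nxt)

    # Greedy decoding: always emit the most frequent successor.
    token = training_text[0]
    output = [token]
    while successors[token]:
        token = max(set(successors[token]), key=successors[token].count)
        output.append(token)

    # Prints the training data verbatim: memorization via prediction.
    print(" ".join(output))

Real LLMs are vastly larger and see popular lyrics many times across the web, which pushes them toward the same verbatim behavior.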
All AI companies know they're breaking the law. They all have prompts effectively saying "Don't show that we broke the law!". That we continue to have tech companies consistently breaking the law and nothing happens is an indictment of our current economy.
The whole industry is based on breaking the law. You don't get to be Microsoft, Google, Amazon, Meta, etc without large amounts of illegality.
And the VC ecosystem and valuations are built around this assumption.
And it's a question of whether we accept breaking the law for the possibility of having the greatest technological advancement of the 21st century. In my opinion, the legal system has become a blocker for a lot of innovation, not only in AI but elsewhere as well.
This is a point that I don't see discussed enough. I think Anthropic decided to purchase books in bulk, tear them apart to scan them, and then destroy those copies. And that's the only source of copyrighted material I've ever heard of that is actually legal to use for training LLMs.
Most LLMs were trained on vast troves of pirated copyrighted material. Folks point this out, but they don't ever talk about what the alternative was. The content industries, like music, movies, and books, have done nothing to research or make their works available for analysis and innovation, and have in fact fought industries that seek to do so tooth and nail.
Further, they push the narrative that people who pirate works are stealing from the artists, when the vast majority of the money a customer pays for a piece of copyrighted content goes to the publishing industry. This is essentially the definition of rent seeking.
Those industries essentially tried to stop innovation entirely, and they tried to use the law to do that (and still do). So, other companies innovated over the copyright holder's objections, and now we have to sort it out in the courts.
> So, other companies innovated over the copyright holder's objections, and now we have to sort it out in the courts.
I think they try to expand copyright from "protected expression" to "protected patterns and abstractions", or in other words "infringement without substantial similarity". Otherwise why would they sue AI companies? It makes no sense:
1. If I wanted a specific author, I would get the original works; it is easy. Even if I am cheap, it is still much easier to pirate than to use generative models. In fact, AI is the worst infringement tool ever invented - it almost never reproduces faithfully, and it is slow and expensive to use. Much more expensive than copying, which is free, instant, and makes perfect replicas.
2. If I wanted AI, it means I did not want the original; I wanted something else. So why sue people who don't want the originals? The only reason to use AI is when you want to steer the process to generate something personalized. It is not to replace the original authors; if that were what I needed, no amount of AI would be able to compare to the originals. If you look carefully, almost all AI outputs get published in closed chat rooms, with a small fraction being shared online, and even then not in the same venues as the original authors. So the market substitution logic is flimsy.
You're using the phrase "actually legal" when the ruling in fact meant it wasn't piracy after the change. Training on the shredded books was not piracy. Training on the books they downloaded was piracy. That is where the damages come from.
Nothing in the ruling says it is legal to start outputting and selling content based off the results of that training process.
I think your first paragraph is entirely congruent with my first two paragraphs.
Your second paragraph is not what I'm discussing right now, and was not ruled on in the case you're referring to. I fully expect that, generally speaking, infringement will be on the users of the AI, rather than the models themselves, when it all gets sorted out.
>Nothing in the ruling says it is legal to start outputting and selling content based off the results of that training process.
Nothing says it's illegal, either. If anything the courts are leaning towards it being legal, assuming it's not trained on pirated materials.
>A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites.
I'm saying that LLMs are worthwhile, useful tools, and that I'm glad we built them, and that the publishing industry, which holds the copyright on the material that we would use to train the LLMs, has had no hand in developing them, has done no research, and has actively tried to fight the process at every turn. I have no sympathy for them.
The authors have been abused by the publishing industry for many decades. I think they're just caught in the middle, because they were never going to get a payday, whether from AI or selling books. I think the percentage of authors that are commercially successful is sub 1%.
You’re willing to eliminate the entire concept of intellectual property for the possibility that something might be a technological advancement? If creators are the reason you believe this advancement can be achieved, are you willing to provide them the majority of the profits?
Without agreeing or disagreeing with your view, I feel like the issue with that paradigm is inconsistency. If an individual "pirates", they get fines and possible jail time, but if a large enough company does it, they get rewarded by stockholders and at most a slap on the wrist from regulators. If as a society we've decided that the restrictions aren't beneficial, they should be lifted for everyone, not just ignored when convenient for large corporations. As it stands right now, the punishments are scaled inversely to the amount of damage that the one breaking the law is actually capable of doing.
Training on copyrighted material is not illegal. Even in the lawsuit against Anthropic it was found to be fair use.
Pirating material is a violation of copyright, which some labs have done, but that has nothing to do with training AI and everything to do with piracy.
Why wouldn’t training be illegal? It’s illegal for me to acquire and watch movies or listen to songs without paying for them. If consuming copyrighted material isn’t fair use, then it doesn’t make sense that AI training would be fair use.
I don’t read this as “don’t show we broke the law,” I read it as “don’t give the user the false impression that there’s any legal issue with this generated content.”
There’s nothing law breaking about quoting publicly available information. Google isn’t breaking the law when it displays previews of indexed content returned by the search algorithm, and that’s clearly the approach being taken here.
I think in the end they will just pay off copyright holders. The German GEMA is mostly interested in rent-seeking through whatever means available, it's basically the whole point of the organization.
They'll easily be paid off once all legal avenues are exhausted for OpenAI. Though they'll of course keep fighting in court in the hopes of some more favorable negotiating position.
While I partially understand (but not support) the hate against AI due to possible plagiarism and "low effort generation" of works, think about the whole process: if model providers are held liable for generating output that resembles lyrics or very short texts that fall under copyright law, they will just change their business model.
E.g. why offer lame chat agents as a service when you can keep the value generation in-house? E.g. have a strategy board that identifies possible use cases for your model, then spin off a company that just does agentic coding, music generation. Just cut off the end users/public from model access, and flood the market with AI-generated apps/content/works yourself (or with selected partners). Then have a lawyer check right before publishing.
So this court decision may turn everything worse? I don't know.
The fact they don't already do that sounds to me like the things produced by AI are not worth the investment. Especially since the output is not copyrightable, right?
If there was a lot of gold to find they wouldn't sell the shovels.
There is a lot of value in specialization. It allows capitalism to do its magic to elevate the best uses of your technology without yourself taking on any of the risk. Trying to in-house everything often smothers innovation and leads to bad resource allocation. It can be done, but in fields with a lot of ongoing innovation it's extremely hard to get right.
There is a reason that Cisco doesn't offer websites, and you are probably actively ignoring whatever websites your ISP has. ASML isn't making chips, and TSMC isn't making chip designs.
If there were such immense value in spinning off and selling models separately, you can bet that would happen - without a court saying so. In the end, running these models is a costly job and you'd want to squeeze out every bit of value.
A media generation company that is forced to publish uncopyrightable works, because it cannot open the use of its media generators to the public since that would violate copyright - that does sound like a big win for everyone but that company.
Because that's the only business model that the management of these model-provider companies suspects has a chance of generating income, in the current state of things.
> While I partially understand (but not support) the hate against AI due to possible plagiarism
There's no *possible* plagiarism; every piece of AI slop IS the result of plagiarism.
> E.g. have a strategy board that identifies possible use cases for your model, then spin off a company that just does agentic coding, music generation.
Having lame chat agents as a service does not preclude them from doing this. The fact that they are only selling the shovels should be somewhat insightful.
> Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued.
Another glimpse into the "mind" of a tech corporation allowing itself full freedom to profit from the vast body of human work available online, while explicitly declining any societal responsibility at all. It's the user's fault, he wrote an illegal prompt! We're only providing the "technology"!
This is largely how it works for nearly all copyrightable work. I can draw Mickey Mouse, but legally I'm not doing anything wrong until I try to sell it. It certainly doesn't put Crayola or Adobe at legal risk for me to do so.
Another instance of GEMA fighting an American company. Anyone who was on the German internet in the first half of the last decade remembers the "not available in your country" error messages on YouTube, because Google didn't make a deal with GEMA.
I don't think that we will end up here with such a scenario: lyrics are pervasive and probably also quoted in a lot of other publications. Furthermore, it's not just about lyrics but one can make a similar argument about any published literary work. GEMA is for music but for literary publications there is VG Wort who in fact already have an AI license.
I rather think that OpenAI will license the works from GEMA instead. Ultimately this will be beneficial for the likes of OpenAI because it can serve as a means to keep out the small players. I'm sure that GEMA won't talk to the smaller startups in the field about licensing.
Is this good for the average musician/author? These organizations will probably distribute most of the money to the most popular ones, even though AI models benefit from quantity of content rather than popularity.
Can't they just ask for copies of the lyrics they are not allowed to use and s/lyrics//g the training set? I imagine the volume of text that will be removed would be relatively miniscule.
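In spirit, yes. A minimal sketch of that idea (hypothetical names; exact-match deletion only, so paraphrases, partial quotes and reformatted copies would all survive a literal s///):

    # Hypothetical scrub pass: delete known lyric texts from a corpus.
    # `blocked_lyrics` would have to come from the rights holders.
    def scrub_corpus(documents, blocked_lyrics):
        for doc in documents:
            for lyrics in blocked_lyrics:
                doc = doc.replace(lyrics, "")
            yield doc

    docs = ["intro... Never dance alone tonight ...outro", "unrelated doc"]
    blocked = ["Never dance alone tonight"]  # invented line, not a real lyric
    print(list(scrub_corpus(docs, blocked)))

The hard part is that lyrics rarely appear in scraped data as clean canonical strings, so a real pipeline would need normalization and near-duplicate matching.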
I made a living from GEMA payments some while back, but dear lord, so much of how the institution does what it does feels so bad and zero-sum. Might just be that the world would be better off without it. It does something important for right holders for sure, but (and I understand, I am heavily back-seating here without offering a solution) there must be better ways to go about it.
Now, without the Filmförderung, all those grim, dark arthouse movies where people yell "Scheisse!" in Berlin stairwells would never get made. And all that public-committee-pleasing shovelware, looking extra cute and boring, clogging up the app stores with zero sales - what would we do without that?
Take anything popular streaming-wise and ask yourself whether it would have gotten through, and if it was stopped, by what and by whom. Fire that, to fix Germany's media sector.
Nah. It’s so easy for OpenAI to modify their output. I’m already seeing them restrict news article re-generation by newspaper name. They do it to reduce liability. There’s also a big copyright infringement case coming up in the USA this year, and being able to point to responsiveness to complaints will be a key part of their legal defense I bet.
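Output-side restriction like that is cheap to approximate. A rough sketch of one common approach (hypothetical, not OpenAI's actual mechanism): before returning a response, check it for long n-gram overlap against an index of protected texts.

    # Hypothetical guardrail: refuse output that shares a long n-gram
    # with any protected text. Illustrative only.
    def ngrams(words, n):
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def looks_infringing(generated, protected_texts, n=8):
        gen = ngrams(generated.lower().split(), n)
        return any(gen & ngrams(t.lower().split(), n) for t in protected_texts)

    protected = ["licensed lyric text would be indexed here ..."]
    if looks_infringing("model output goes here", protected):
        print("Blocked: overlaps a protected work.")

Filters like this reduce liability at the output without changing what the model memorized, which is presumably why the court focused on the stored copy as well.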
You can modify the output but the underlying model is always susceptible to jail breaks.
A method I tried a couple months ago to reliably get it to explain to me how to cook meth step by step still works. I’m not gonna share it, you just have to take my word on this.
I believe you, but you only need to establish a safety standard where jailbreaking is required by the end-user to show you are protecting property in good faith, AFAIK.
Lyrics produced some of the first AI slop I noticed after ChatGPT was launched in late 2022, even if the large models hadn’t been trained on them specifically. Overnight there were a bunch of different advertising-laden sites that clearly scraped Genius or other lyric websites, and then had GPT generate commentaries on what the lyrics supposedly mean, so that these would get picked up by search engines.
The result was mostly comical, the commentaries for vacuous pop music all sounded more or less the same: “‘Shake Your Booty’ by KC and the Sunshine Band expresses the importance of letting one’s hair down and letting loose. The song communicates to listeners how liberating it is to gyrate one’s posterior and dance.” Definitely one of the first signs that this new tech was not going to be good for the web.
I am curious what happens if they call their bluff on this and cut off ChatGPT in Germany. Not that I think OpenAI is doing the right thing, just, I don’t think a country’s government can justify no commercial LLMs to its populace.
There are 80 million Germans. If you were OpenAI, or its shareholders, would you leave that market open for a competitor? No, you'd make a version of your product without the lyrics. More EU countries are going to follow and reach the same conclusion, especially now that Germany has set a legal precedence. Should OpenAI just pull out of a market with 500 million people and leave it to Claude, Perplexity or someone else entirely?
It doesn't appear that modern LLMs are really that hard to build, expensive perhaps, but if you have a monopoly on a large enough market, price isn't really your main concern.
> More EU countries are going to follow and reach the same conclusion, especially now that Germany has set a legal precedence.
That's not how laws and regulations work in European or even EU countries. Courts/the legal system in Germany can not set legal precedents for other countries, and countries don't use legal precedents from other countries, as they obviously have different laws. It could be cited as an authority, but no one is obligated to follow that.
What could happen for example, would be that EU law is interpreted through the CJEU (Court of Justice of the European Union), and its rulings bind EU member states, but that's outside of what individual countries do.
Sidenote: I'm not a native English speaker, but I think it's "precedent", not "precedence" - similar words, but the first one is specifically what I think you meant.
> That's not how laws and regulations work in European or even EU countries
Yes, even just looking at other court cases in Germany, the role of precedent is "in general" not quite as powerful (as courts are supposed to follow what the law says, not what other courts say). To be clear, this is quite a bit oversimplified. Other court rulings do still matter in practice, especially from higher courts. But it's very different from how it is commonly presented to work in the US (can't say if it actually works that way there).
But also, EU member states do synchronize the general working of many laws to make a unified market practically possible, and this does include the general way copyright works (by implementing different country-specific laws which all follow the same general framework, so details can differ).
And the parts which are the same are pretty clear about that:
- if you distribute a copy of something, it's a copyright violation, no matter the technical details
A human memorizing the content and then reproducing it would still make it a copyright infringement, so it should be pretty obvious that this applies to LLMs too, where you could potentially even argue that it's not just "memorizing it" but storing it compressed and a bit lossy...
And that honestly isn't just the case in Germany or the EU; the main reason AI companies have mostly gotten away with it so far is judges being pressured to rule leniently, as "it's the future of humanity", "the country wouldn't be able to compete", etc. etc. Or in other words, corruption (as politicians are supposed to change laws if things change, not tell judges to not do their job properly).
> countries don't use legal precedents from other countries, as they obviously have different laws
The seminal authority for all copyright laws, the Berne Convention, is ratified by 181 countries. Its latest revisions are TRIPS (concerning authorship of music recordings) and the WIPO Copyright Treaty (concerning digital publication), both of which are ratified by the European Union as a whole. It's not directly obvious to me that EU member states have different laws in this particular area.
That said, the EU uses the civil law model and precedent doesn't quite have the same weight here as it does under common law.
I think you're right, also not a native English speaker.
No, you're right that a German ruling can't influence e.g. the similar lawsuit against Suno in Denmark, but as you point out, it can, and most likely will, be cited, and I think it's often the case that this carries a lot of weight.
>That's not how laws and regulations work in European or even EU countries. Courts/the legal system in Germany can not set legal precedents for other countries, and countries don't use legal precedents from other countries, as they obviously have different laws. It could be cited as an authority, but no one is obligated to follow that.
Do you have some sort of different understanding of copyright law where it's legal to commercially use lyrics (verbatim, mind you) without a license?
There are many competing providers of commercial LLMs with equal capabilities, so another vendor would probably be happy to serve a leading Western market of 83 million people.
But I'd be surprised if that was generally the case. It's easy to see why ChatGPT 1:1 reproducing a song's lyrics would be a copyright issue. But creating a derivative work based on the song?
What if I made a website that counts the number of alliterations in certain songs' lyrics? Would that be copyright infringement, because my algorithm uses the original lyrics to derive its output?
If this ruling really applied to any algorithm deriving content from copyright-protected works, it would be pretty absurd.
But absurd copyright laws would be nothing new, so I won't discount the possibility.
> But creating a derivative work based on the song?
1. It wouldn't matter, as a derivative work still needs a license for the original,
2. except if it's not derivative but just inspired,
and the court case was about it being pretty much _the same work_.
OpenAI's defense also wasn't that it's derived or inspired but, to quote:
> Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued.
and the court order said, more or less:
- if it can reproduce the song lyrics, it means it stored a copy of the song lyrics somehow, somewhere (memorization), but storing copies requires a license and OpenAI has no license
- if it outputs a copy of the song lyrics, it means it's making another copy of them and giving it to the user, which is copyright infringement
And this makes sense: if a human memorizes a song and then writes it down when asked, it still is, and always has been, copyright infringement (else you could just launder copyright by hiring people to memorize things and then write them down, which would be ridiculous).
And technically speaking, LLMs are at their core a lossy compressed storage of their training content plus statistical models about it. And to be clear, that isn't some absurd around-five-corners reasoning; it's a pretty core aspect of their design. These are things that were well known even before LLMs became a big deal and OpenAI got huge investments. OpenAI pretty much knew about this being a problem from the get-go. But like any recent big US "startup", following the law doesn't matter.
Its technically being an unusual form of lossy compressed storage means that the memorization counts as a copyright infringement (under current law).
But I would argue the law should be improved in that case, so that under some circumstances "memorization" in LLMs is treated like "memorization" in humans (i.e. not an illegal copy, until you make it one by writing it down). But you can't make it all circumstances, because, like mentioned, you can use the same tech for basically lossy file compression, and you don't want people to launder copyright by training an LLM on a single text/song/movie and then distributing that...
That seems like a really broad interpretation of "technically memorization" that could have unintended side effects (like, say, banning equations that could be used to generate specific lyrics), but I suppose some countries consider loading into RAM a copy already. I guess we're already at absurdity.
They clearly didn't do that properly, or we wouldn't have the current lawsuit.
The lawsuit was also not about whether it is or isn't copyright infringement. It was about who is responsible (OpenAI, or the user who tries to bait it into making another illegal copy of song lyrics).
A model outputting song lyrics means it has them stored somehow, somewhere. Just because the storage is a lossy compressed obscure hyper-dimensional transformation of some kind doesn't mean it didn't store an illegal copy - or it wouldn't have been able to output it. _Technical details do not protect from legal responsibilities (in general)._
You could (maybe should) add new laws which in some form treat things memorized by an LLM the same as if a human had memorized them, but currently LLMs have no special legal treatment when it comes to storing copies of things.
No, it’s specifically about (mostly) verbatim producing big chunks of lyrics in the output. The court PR specifically mentioned memorization, retaining training data, multiple times.
This assumes that tech companies can act above the law because they've got a new feature to jam down our throats. Have you considered that not everyone wants that? Or that it might not be the best thing?
Conversely, last week we had Spain being willing to cut off Cloudflare (!) to protect football match royalties.
> I don’t think a country’s government can justify no commercial LLMs to its populace.
Counter-argument: can any country's government justify allowing its population and businesses to become completely dependent on an overseas company which does not comply with its laws? (For Americans, think "China" in this case)
I come from the country with the world’s oldest continuous parliament, and they change the law all the time. Arguably that’s all the majority of politicians do.
> I don’t think a country’s government can justify no commercial LLMs to its populace
They're not saying no LLMs, they're saying no LLMs using lyrics without a license. OpenAI simply need to pay for a license, or train an LLM without using lyrics.
But lyrics are just one example. Are you saying that training experiments must filter out all substrings from the training input that bear too close a resemblance to a substring of a copyrighted work?
Obviously there's a limit, reproducing a single sentence is unlikely to be copyright infringement just because there are only so many words in a language; but if reproducing some text would be copyright infringement if a human did it, I don't see why LLM companies should get a free pass.
If it's really essential that they train their models on song lyrics, or books, or movie scripts, or articles, or whatever, they should pay license fees.
That is a separate opinion, but with respect to the question at hand, the utilitarian value of being able to ask a computer "what are the lyrics to x" and having it produce them outweighs whatever small ideological sanctity the music labels assign to being able to gatekeep the written words of a composition to a small blessed few. It's not like chat gpt is serving up the mp3 file to you. So correct, it is insane to me that mere reproduction of just the lyrics is afforded such weighty copy protection.
(Vis-à-vis, I take it you write a certified letter to Universal before reproducing Happy Birthday in public? ;) That is actually a far more egregious violation indeed, as it is both a performance of the copyrighted work and in front of an audience - neither of which is the case for the chatbot - yet one we all seem to understand to be fair use.)
This obviously applies to all copyrighted works. I could sue OpenAI when it reproduces my source code that I published on the Internet.
They already "filter" the code to prevent it from happening (reproducing exact works). My guess it is just superficially changing things around so it is harder to prove copyright violations.
Of course the models are not human, but if you consider this situation as if they are persons, then the question becomes: May a person read lyrics and tell it to someone when asked, and the court's ruling basically says no, this may not happen, which makes little sense.
I guess the main difference between the situation with language models and humans is one of scale.
I think the question should be viewed like this, if I as a corporation do the same thing but just with humans, would it be legal or not. Given a hypothetical of hiring a bunch of people, having them read a bunch of lyrics, and then having them answer questions about lyrics. If no law prohibits the hypothetical with people, then I don't see why it should be prohibited with language models, and if it is prohibited with people, then there should be no specific AI ruling needed.
All this being said, Europe is rapidly becoming even more irrelevant than it was, living off the largesse of the US and China; it's like some uncontacted tribe ruling that satellites can't take aerial photos of them. It's all good and well, just irrelevant. I guess Germany can always go the route of North Korea if they want.
> "May a person read lyrics and tell it to someone when asked"
If you sell tickets to an event where you read the lyrics aloud, it's commercial performance and you need to pay the author. (Usually a cover artist would be singing, but that's not a requirement.)
So it's not like a human can recite the lyrics anywhere freely either.
If someone hires me as a secretary and they ask me what the lyrics of a song are, there is no law that prohibits me from telling them if I know, and I don't have to license the lyrics in order to do so.
If they hire me primarily to recite lyrics, then sure, that would probably be some manner of infringement if I don't license them. But I feel like the case with a language model is much more the former than the latter.
As soon as you take the LLM output and publicize it, it turns around and is a lot more akin to having your secretary read out the lyrics publicly. If you don't publicize it in any way, how would the copyright holder ever find out?
But the LLM is not advertised as a lyrics DB, and it in no way guarantees that it will reproduce the lyrics accurately, and similarly the copyright holder will never know that it's reproducing the lyrics unless it snoops on my conversations with it or goes and asks it directly.
But then with the analogy, if I'm a secretary and the copyright holder of lyrics calls me and asks if I know the lyrics of one of their songs, I don't think it's infringement to say yes and then repeat it back to them.
The LLM is not publicising anything; it's just doing what you ask it to do. It's the humans using it who publicise the output.
> May a person read lyrics and tell it to someone when asked, and the court's ruling basically says no, this may not happen, which makes little sense.
I think the difference here is that your example is what a search engine might do, whereas AI is taking the lyrics, using them to create new lyrics, and then passing them off as its own.
> whereas AI is taking the lyrics, using them to create new lyrics, and then passing them off as its own.
Is this not something every single creative person ever has done? Is this not what creating is? We take in the world, and then create something based on that.
These people would stream German schlager to every screen and speaker in Europe and charge for it 100 EUR monthly per breathing person, if they could. They are violent.
Guess even AI can’t resist singing along! But seriously, copyright laws don’t hit pause just because it’s “machine learning.” Time for AI to learn the lyrics and the legal notes.
With AI slop showing up everywhere, there’s a real danger that folks will just no longer be motivated to produce real original content.
With all major models now basically trained on nearly all available data, beyond the financial AI bubble about to burst there's also a big content bubble that's about exhausted, as folks are just pumping out slop vs producing original creative human output. That may be the ultimate long-term tragedy of the present AI hype cycle. Expect “made by a human” to soon be a tag associated with premium brands and customer experiences.
It is of no cost to me when someone else writes a book, plays a song or draws a picture. It is also true that, basically whatever I ever do, someone else has done better. This does not stop me from doing those things because the value within them is in doing them.
We have cars, buses and planes, yet people do partake in pilgrimages. The process matters, even if only personally.
AI slop is like 90’s websites and desktop publishing - there’s a novelty for AI-newbie-creators driving them to churn out lazy crap, while being oblivious to how it lands with strangers.
Tastes will mature, society will more vocally mock this crap, and we’ll stop seeing the sloppier stuff come out of reputable locations.
You assume that the public recognizes AI slop for what it is. Across platforms now, people are readily engaging with blatant AI text posts and generated images as if they were bona fide. In fact, if you point out that the poster is a bot, you may well get some flak from the community.
> Expect “made by a human” to soon be a tag associated with premium brands and customer experiences.
I went to a grammar school and I write in mostly pretty high-quality sentences with a bit of British English colloquialism. I spell well, spend time thinking about what I am saying and try to speak clearly, etc.
I've always tried to be kind about people making errors, but I am currently retraining my mind to see spelling mistakes and grammar errors as inherent authenticity. Because one thing ChatGPT and its ilk cannot do -- I guess architecturally -- is act convincingly like those who misspell, accidentally coin new eggcorns, accidentally use malapropisms, or use novel but terrible grammar.
And you're right: IMO the rage against the cultural damage AI will do is only just beginning, and I don't think people have clocked on to the fact that economic havoc is built-in, success or failure.
The web/AI/software-tech industry will be loathed even more than it is now (and this loathing is increasingly justified)
> one thing ChatGPT and its ilk cannot do -- I guess architecturally -- is act convincingly like those who misspell, accidentally coin new eggcorns, accidentally use malapropisms, or use novel but terrible grammar
Just wait a few more years until the majority of ChatGPT training data is filled with misspellings, accidental eggcorns, malapropisms and terrible grammar.
> folks will just no longer be motivated to produce real original content.
Honestly if your only motivation for creating art was “computers can’t do what I do” then… I don’t want to be too gatekeepy about it, but that doesn’t sound like you’re a ‘real’ artist to me. Real artists create art because they enjoy doing it, not because it’s the exclusive domain of humans.
You don’t need to be special, you don’t need to be the best, you don’t need to even be good or successful or recognized or appreciated (although of course all those things are nice) - you just have to be creating art.
> With AI slop showing up everywhere, there’s a real danger that folks will just no longer be motivated to produce real original content.
I think people would still produce original things as long as they have the means for doing it. I guess we could say it is our nature. My fear is AI monopolizing the wealth that once would go to supporting people producing art.
This. I still produce original things and will continue to do so until I am incapable anymore. What's changed, though, is that I no longer put or discuss those things on the open internet because there's no realistic way to prevent it from getting used to train genAI models.
Plastic/synthetics are the slop of the physical world. They're a side product of extracting oil and gas so they're extremely cheap.
Yet if you look at synthetics by volume, probably 99% of them are used just because they're cheaper than the natural alternative. Yes, some have characteristics that are novel, but by and large everything we do with plastics is ultimately based on "they're cheaper".
Other countries are currently going through the same. KODA is running a similar lawsuit on behalf of the Danish musicians, they can now point to Germany as an example, making it much easier for them to win.
I'm not sure about the problem here; lyrics are public - you can search '$songname lyrics' and get the result on a website (or even on the search engine results page). What's the issue with an LLM producing those lyrics if you ask?
Long ago the first site I remember to do this was lyrics.ch, which was long since shut down by litigation. I'm not endorsing the status quo here, but if the licensing system exists it is obviously unfair to exempt parties from it simply because they're too big to comply.
Member when music sites were suing YouTube for music videos, and now they are begging people to watch them there and YT view counts are a bragging topic?
Soon music industry will be begging OpenAI for exposure of their content, just like the media industry is begging Google for scraping.
However, the lyrics are shown because the user requested them; shouldn't the user be liable instead? The same way social networks are not liable for content uploaded by users? I think there is a somewhat double standard here.
Of course, maybe OpenAI et al should have gotten a license before training on the lyrics, or avoided training on copyrighted content. But the first would be expensive and the latter would require them to develop actual intelligence.
Why should the user be liable? They didn't reproduce the copyrighted work and the machine is totally capable of denying output (like it already does for other categories of material).
At the very least, the users being liable instead of OpenAI makes no sense. Like arresting only drug users and not dealers.
There are countries where drug consumption/possession is penalized too. There is a similar example in another area: for instance, in Sweden, Norway and Belize, selling sex (aka prostitution) is legal, but buying it is not. So, your example actually exists in world legislation.
I'm just asking where we are going to put the line and why.
You had originally said the user should be liable instead of OpenAI being liable.
> However, the lyrics are shown because the user requested them; shouldn't the user be liable instead?
I would imagine the sociological rationale for allowing sex work would not map to a multi-billion-dollar company.
And to add, the social network example doesn't map because the user is producing the content and sharing it with the network. In OpenAI's case, they are creating and distributing copyrighted works.
No, the edited wording still conveys the same meaning. My edit was to fix another grammar typo.
The social networks are distributing such content AND benefiting from selling ads on them. Adding ads on top is a derivative work.
Personally I'm on the side of penalizing the side that provides the input, not the output:
- OpenAI training on copyrighted works.
- Users requesting custom works based on copyrighted IP
That is my opinion on how it should be layered, that's it. I'm happy to discuss why it should or shouldn't be that way. As I put in another comment, my concern is that mandating copyright filtering on each generative tool would end up propagating to every single digital tool, which as a society we don't really want.
I am curious why you are of the opinion that the user should be in trouble for requesting the copyright material and not the provider of the material. I feel like there is a distinction in something that was local-first compared to a SaaS. Like a local AI model that reproduced copyrighted works for your own use might not be problematic compared to a remote model reproducing a copyrighted work and distributing it over the internet to you. Most jurisdictions treat remote access across jurisdictional boundaries differently than completely local acts.
> However, the lyrics are shown because an action is the user so, shouldn't be the user be liable instead?
Same goes for websites where you can watch piracy streams. "The action is the user pressing play" sounds like it might win you an internet argument, but I'm 99% sure none of the courts will play those games, you as the operator who enabled whatever the user could do ends up liable.
I think that is completely different. Piracy websites do only one thing. Chatbots are different.
My concern is where we are going to put the line: if I type a copyrighted song in Word, is Microsoft liable? If I upload lyrics to ChatGPT and ask it to analyze or translate them, is that a copyright violation?
I totally understand your line of thinking. However, the one I'm suggesting could be applied as well and it has precedents in law (intellectual authors of crimes are punishable, not only the perpetrators).
Not really. YouTube is not liable as long as they remove the content after a copyright complaint and operate other such mechanisms.
The problem is, if OpenAI is liable for reproducing copyrighted content, so will be other products such as word processors, video editors and so on. So, as a society, where will we put the line?
Are we going to tolerate some copyright infringement in these tools, or are we going to pursue copyright infringement even in other tools, now that we already have the tools to detect it?
We cannot have double standards, law should be applied equally to everyone.
I do think that overall making OpenAI liable for output is a bad precedent, because of repercussions beyond AI tools. I'm all fine with making them liable for having trained on copyrighted content and so on...
How does OpenAI being liable for reproducing copyrighted material imply that a word processor should be as well? Last time I checked, word processors don't have a black box text generator trained on pre-existing works: a word processor only has the text that the user types into it.
> Not really. Youtube is not liable as long as they remove the content after a copyright complain and other mechanisms.
They have to take action precisely because they're liable for the material on their platform.
If that were the case, then Google wouldn't receive DMCA takedowns of piracy links; instead, rights holders would go after the users searching for pirated content. The former is more prevalent than the latter because:
one, it requires invasion of privacy - you have to serve up everyone's search results;
two, it requires understanding of intent.
The same issue applies here. First, OpenAI would need to share all chats for courts to sift through, and second, how do you judge intent? If someone asks for a German pop song and OpenAI decides to output Bochum - whose fault is that?
The obsession with protecting access to lyrics is one of the strangest long-running legal battles to me. I will skip tracks on Spotify sometimes specifically because there are no lyrics available. Easy access to lyrics is practically an advertisement for the music. Why do record companies not want lyrics freely available? In most cases, it means they aren't available at all. How is that a good business decision?
They probably fear a domino effect if they let go of this. And so they defend it vehemently to avoid setting a precedent.
Think about compositions, samples, performance rights, and so on. There is a lot more at stake.
One amusing part of lyrics on Spotify to me is how they don't seem to track which songs are instrumentals or not and use that to skip the message about them not knowing the lyrics. An instrumental will pop up and it will say something like "Sorry, we don't have the lyrics to this one yet".
The only thing funnier than that is when they do have the lyrics to a song that probably doesn't need them, like Hocus Pocus by Focus: https://open.spotify.com/track/2uzyiRdvfNI5WxUiItv1y9?si=7a7...
Oh they track that, it's in their API as the "instrumentalness" score: https://developer.spotify.com/documentation/web-api/referenc...
The fact that they don't do anything with that information is unrelated.
I’ve also seen cases where they list lyrics for a song that doesn’t have any (usually an instrumental jazz version of an old standard).
The content industries should have been the ones to invent LLMs, but their head is so stuck in the past and in regressive thinking about how they protect their revenue streams that they're incapable of innovating. Publishing houses should have been the ones to have researchers looking into how to computationally leverage their enormous corpus of data. But instead, they put zero dollars into actual research and development and paid the lawyers instead. And so it leads to attitudes like this.
That's always been the case, eg. how they were latecomers to streaming.
“The content industries.”
Why would people invest in destroying what they love?
There is no destruction.
He meant, the stream of free money from unsuspecting monkeys.
The composition and lyrics are owned separately from the recorded performance.
I'm pretty sure you could even have lyrics with a separate copyright from the composition itself. For example, you can clearly have lyrics without the music and you can have the composition alone in the case that it is performed as an instrumental cover or something.
This is a tough one for the HN crowd. It's like that man not sure which button to push meme.
1) RIAA is evil for enforcing copyrights on lyrics?
2) OpenAI is evil for training on lyrics?
I know nuance takes the fun out of most online discussions, but there's a qualitative difference between a bunch of college kids downloading mp3's on a torrent site and a $500 billion company who's goal among other things is to become the primary access point to all things digital.
Should young adults be allowed to violate copyright and no one else? The damages caused seem far worse than an LLM being able to reproduce song lyrics.
Is it simply "we like college kids" and "we hate OpenAI"? That dictates this?
Why not both? As the GP mentioned, lyrics are also invaluable for people besides training for AI.
I think you mean the RIAA
RAII is a different kind of (necessary) evil
Indeed, too much C++. Edited.
Very true. Just the other day, another “copyright is bad” post on the front page. Today its copyright is good because otherwise people might get some use of material in LLMs.
Considering this is hacker news, it seems to be such an odd dichotomy. Sometimes it feels like anti-hacker news. The halcyon days of 2010 after long gone. Now we need to apparently be angry at all tech.
LLMs are amazing and I wish they could train on anything and everything. LLMs are the smartphone to the fax machines of Google search.
Sounds like it was never about copyright as a principle, only symbolic politics (ie. copyrights benefit megacorps? copyright needs to be weaker! copyright hurts megacorps? copyright needs to be stronger!)
Actually in Germany it's GEMA
It's a good decision because it must be an incredible minority of people who only listen to music when the lyrics can be displayed. I'd imagine most people aren't even looking at the music playing app while listening to music. Regardless, they are copyrighted and they get license fees from parties that do license them and they make money that way. Likely much more money than they would make from the streams they are losing from you.
I think it depends on the music. Most people will have a greatly improved experience when listening to opera if they have access to (translated) lyrics. Even if you know the language of an opera, it can be extremely difficult for a lot of people to understand the lyrics due to all the ornamentation.
I think having the lyrics reproducible in text form isn't the problem. Many sites have been doing that for decades and as far as I know record companies haven't gone after them. But these days with generative AI, they can take lyrics and just make a new song with them, and you can probably see why artists and record companies would want to stop that.
Plus, from TFA,
"GEMA hoped discussions could now take place with OpenAI on how copyright holders can be remunerated."
Getting something back is better than nothing
I didn't downvote, but
> I think having the lyrics reproducible in text form isn't the problem. Many sites have been doing that for decades and as far as I know record companies haven't gone after them.
Reproducing lyrics in text form is, in fact, a problem, independent of AI. The music industry has historically been aggressively litigious in going after websites which post unlicensed song lyrics[0]. There are many arcane and bizarre copyright rules around lyrics. e.g. If you've ever watched a TV show with subtitles where there's a musical number but none of the lyrics are subtitled, you might think it was just laziness, but it's more likely the subtitlers didn't have permission to translate&subtitle the lyrics. And many songs on Spotify which you'd assume would have lyrics available, just don't, because they don't have the rights to publish them.
[0] https://www.billboard.com/music/music-news/nmpa-targets-unli...
Thanks. Maybe that misconception was the problem. Taking a hammering in downvotes, lol
Had a couple of drive-by downvotes... Is it that stupid an opinion? Granted I know nothing about the case except for what's in TFA
It's like saying that movie studios haven't gone after Netflix over movies, so what's the issue with hosting pirated movies on your own site. The reason movie studios don't go after Netflix is that they have a license to show it.
I'm not one of the downvoters, but it may be this: "Many sites have been doing that for decades and as far as I know record companies haven't gone after them."
Record companies have in fact, for decades, been going after sites for showing lyrics. If you play guitar, for example, it's almost impossible to find chords/tabs that include the lyrics because sites get shut down for doing that.
Hmm, alright. I actually do play guitar and used to find chords/tabs with lyrics easily. I haven't been doing that for maybe 10-15 years. Anyway, maybe those sites were paying for a license and I just never considered it
If anything, AI would scramble the lyrics more than a human "taking lyrics to make a new song from them".
Maybe, but it's also possible to get an AI to produce a song with the exact same lyrics. And a human copying lyrics would also be a copyright issue in any case.
But anyway it seems I misinterpreted the issue and record companies have always been against reproduction of lyrics whether an AI or human is doing it
> Had a couple of drive-by downvotes... Is it that stupid an opinion?
While I do not agree with your take, FWIW I found your comment substantive and constructive.
You seem to be making two points that are both controversial:
The first is that generative AI makes the availability of lyrics more problematic, given new kinds of reuse and transformation it enables. The second is that AI companies owe something (legally or morally) to lyric rights holders, and that it is better to have some mechanism for compensation, even if the details are not ideal.
I personally do not believe that AI training is meaningfully different from traditional data analysis, which has long been accepted and rarely problematized.
While I understand that reproducing original lyrics raises copyright issues, this should only be a concern in terms of reproduction, not analysis. Example: Even if you do no data analysis at all and your random character generator publishes the lyrics of a famous Beatles song (or other forbidden numbers) by sheer coincidence, it would still be a copyright issue.
I also do not believe in selective compensation schemes driven by legal events. If a legitimate mechanism for rights holders cannot be constructed in general, it is poor policy craftsmanship to privilege the music industry specifically.
Doing so relieves the pressure to find a universal solution once powerful stakeholders are satisfied. While this might be seen as setting a useful precedent by small-scale creators, I doubt it will help them.
Likely because you're a "luddite", which in the current atmosphere of HN and other tech spaces means you have a problem with a "research institution" (one with a separate for-profit enterprise face that it wears when it feels like it) having free and open access to the collected works of humanity so it can create a plagiarism machine that it can then charge people to access.
I don't respect this opinion but it is unfortunately infesting tech spaces right now.
Simon Willison had an analysis of Claude's system prompt back in May. One of the things that stood out was the effort they put in to avoiding copyright infringement: https://simonwillison.net/2025/May/25/claude-4-system-prompt...
Everyone knows that these LLMs were trained on copyrighted material, and as a next-token prediction model, LLMs are strongly inclined to reproduce text they were trained on.
All AI companies know they're breaking the law. They all have prompts effectively saying "Don't show that we broke the law!". That we continue to have tech companies consistently breaking the law and nothing happens is an indictment of our current economy.
The whole industry is based on breaking the law. You don't get to be Microsoft, Google, Amazon, Meta, etc. without large amounts of illegality.
And the VC ecosystem and valuations are built around this assumption.
And it's a question of whether we accept breaking the law for the possibility of having the greatest technological advancement of the 21st century. In my opinion, the legal system has become a blocker for a lot of innovation, not only in AI but elsewhere as well.
This is a point that I don't see discussed enough. I think Anthropic decided to purchase books in bulk, tear them apart to scan them, and then destroy those copies. And that's the only source of copyrighted material I've ever heard of that is actually legal to use for training LLMs.
Most LLMs were trained on vast troves of pirated copyrighted material. Folks point this out, but they don't ever talk about what the alternative was. The content industries, like music, movies, and books, have done nothing to research or make their works available for analysis and innovation, and have in fact fought industries that seek to do so tooth and nail.
Further, they use the narrative that people that pirate works are stealing from the artists, where the vast majority of money that a customer pays for a piece of copyrighted content goes to the publishing industry. This is essentially the definition of rent seeking.
Those industries essentially tried to stop innovation entirely, and they tried to use the law to do that (and still do). So, other companies innovated over the copyright holder's objections, and now we have to sort it out in the courts.
> So, other companies innovated over the copyright holder's objections, and now we have to sort it out in the courts.
I think they try to expand copyright from "protected expression" to "protected patterns and abstractions", or in other words "infringement without substantial similarity". Otherwise why would they sue AI companies? It makes no sense:
1. If I wanted a specific author, I would get the original works, it is easy. Even if I am cheap it is still much easier to pirate than use generative models. In fact AI is the worst infringement tool ever invented - it almost never reproduces faithfully, it is slow and expensive to use. Much more expensive than copying which is free, instant and makes perfect replicas.
2. If I wanted AI, it means I did not want the original; I wanted something else. So why sue people who don't want the originals? The only reason to use AI is when you want to steer the process to generate something personalized. It is not to replace the original authors; if that were what I needed, no amount of AI would be able to compare to the originals. If you look carefully, almost all AI outputs get published in closed chat rooms, with a small fraction being shared online, and even then not in the same venues as the original authors. So the market substitution logic is flimsy.
You're using the phrase "actually legal" when the ruling in fact meant it wasn't piracy after the change. Training on the shredded books was not piracy. Training on the books they downloaded was piracy. That is where the damages come from.
Nothing in the ruling says it is legal to start outputting and selling content based off the results of that training process.
I think your first paragraph is entirely congruent with my first two paragraphs.
Your second paragraph is not what I'm discussing right now, and was not ruled on in the case you're referring to. I fully expect that, generally speaking, infringement will be on the users of the AI, rather than the models themselves, when it all gets sorted out.
>Nothing in the ruling says it is legal to start outputting and selling content based off the results of that training process.
Nothing says it's illegal, either. If anything the courts are leaning towards it being legal, assuming it's not trained on pirated materials.
>A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn't illegal but that Anthropic wrongfully acquired millions of books through pirate websites.
https://www.npr.org/2025/09/05/g-s1-87367/anthropic-authors-...
I don’t follow. You’re punishing the publishing industry by punishing authors?
I'm saying that LLMs are worthwhile, useful tools, that I'm glad we built them, and that the publishing industry, which holds the copyright on the material that we would use to train the LLMs, has had no hand in developing them, has done no research, and has actively tried to fight the process at every turn. I have no sympathy for them.
The authors have been abused by the publishing industry for many decades. I think they're just caught in the middle, because they were never going to get a payday, whether from AI or selling books. I think the percentage of authors that are commercially successful is sub 1%.
You’re willing to eliminate the entire concept of intellectual property for a possibility something might be a technological advancement? If creators are the reason you believe this advancement can be achieved, are you willing to provide them the majority of the profits?
That's an absolutely good tradeoff. There's no longer any need for copyright. Patents should go next. Only trademarks can stay.
Without agreeing or disagreeing with your view, I feel like the issue with that paradigm is inconsistency. If an individual "pirates", they get fines and possible jail time, but if a large enough company does it, they get rewarded by stockholders and at most a slap on the wrist from regulators. If as a society we've decided that the restrictions aren't beneficial, they should be lifted for everyone, not just ignored when convenient for large corporations. As it stands right now, the punishments scale inversely with the amount of damage the one breaking the law is actually capable of doing.
Training on copyrighted material is not illegal. Even in the lawsuit against Anthropic it was found to be fair use.
Pirating material is a violation of copyright, which some labs have done, but that has nothing to do with training AI and everything to do with piracy.
There is US precedent for training being deemed not fair use. https://www.dglaw.com/court-rules-ai-training-on-copyrighted...
Why wouldn’t training be illegal? It’s illegal for me to acquire and watch movies or listen to songs without paying for them. If consuming copyrighted material isn’t fair use, then it doesn’t make sense that AI training would be fair use.
I don’t read this as “don’t show we broke the law,” I read it as “don’t give the user the false impression that there’s any legal issue with this generated content.”
There’s nothing law breaking about quoting publicly available information. Google isn’t breaking the law when it displays previews of indexed content returned by the search algorithm, and that’s clearly the approach being taken here.
Masked token prediction is reconstruction. It goes far beyond “quoting.”
And training on mountains of open source code with no attribution is exactly the same.
The code models should also be banned, and all output they've generated subject to copyright infringement lawsuits.
The sloppers (OpenAI, etc.) may get away with it in the US, but the developed world has far more stringent copyright laws.
And the countries that have massive industries based on copyright aren't about to let them evaporate for the benefit of a handful of US tech-bros.
I think in the end they will just pay off copyright holders. The German GEMA is mostly interested in rent-seeking through whatever means available; it's basically the whole point of the organization.
They'll easily be paid off once all legal avenues are exhausted for OpenAI. Though they'll of course keep fighting in court in the hopes of some more favorable negotiating position.
If the copyright costs get too high then we'll just use Chinese AI, unless they try to ban that, too.
You know, I'm a bit of a lyricist myself. These very words are lyrics to a tune in my head, and thus enjoy the increased legal protection of lyrics.
While I partially understand (but do not support) the hate against AI due to possible plagiarism and "low effort generation" of works, think about the whole process: if model providers are held liable for generating output that resembles lyrics, or very short texts that fall under copyright law, they will just change their business model.
E.g. why offer lame chat agents as a service when you can keep the value generation in-house? Have a strategy board that identifies possible use cases for your model, then spin off a company that just does agentic coding or music generation. Just cut off the end users/public from model access, and flood the market with AI-generated apps/content/works yourself (or with selected partners). Then have a lawyer check right before publishing.
So this court decision may make everything worse? I don't know.
The fact that they don't already do that sounds to me like the things produced by AI are not worth the investment. Especially since the output is not copyrightable, right?
If there was a lot of gold to find they wouldn't sell the shovels.
There is a lot of value in specialization. It allows capitalism to do its magic to elevate the best uses of your technology without yourself taking on any of the risk. Trying to inhouse everything often smothers innovation and leads to bad resource allocation. It can be done, but in fields with a lot of ongoing innovation it's extremely hard to get right
There is a reason that Cisco doesn't offer websites, and you are probably actively ignoring whatever websites your ISP has. ASML isn't making chips, and TSMC isn't making chip designs
If there were such immense value in spinning off and selling models separately, you can bet that would happen, without a court saying so. In the end, running these models is a costly job and you'd want to squeeze out every bit of value.
> Then have a lawyer checking right before publishing.
Your cheap app just got really expensive
> make everything worse?
A media generation company that is forced to publish uncopyrightable works, because it cannot make access to these media generators public since that would violate copyright: that does sound like a big win for everyone but that company.
How is that worse?
> why offering lame chat agents as a service
Because that's the only business model that the management of these model-provider companies suspects has a chance of generating income, at the current state of things.
> While I partially understand (but not support) the hate against AI due to possible plagiarism
There's no *possible* plagiarism; every piece of AI slop IS the result of plagiarism.
> E.g. have a strategy board that identifies possible use cases for your model, then spin off a company that just does agentic coding, music generation.
Having lame chat agents as a service does not preclude them from doing this. The fact that they are only selling the shovels should be somewhat insightful.
This sounds like a much more niche product that doesn't justify the over half-trillion dollars invested into it so far.
For AI to have a positive ROI, it has to be highly applicable to basically every industry, and has to be highly available.
I found this bit very revealing:
> Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued.
Another glimpse into the "mind" of a tech corporation allowing itself full freedom to profit from the vast body of human work available online, while explicitly declining any societal responsibility at all. It's the user's fault, he wrote an illegal prompt! We're only providing the "technology"!
This is largely how it works for nearly all copyrightable work. I can draw Mickey Mouse, but legally I'm not doing anything wrong until I try to sell it. It certainly doesn't put Crayola or Adobe at legal risk for me to do so.
I am torn because on one hand, fuck record companies. On the other hand, fuck AI companies torrenting, stealing and defrauding.
Another instance of GEMA fighting an American company. Anyone who was on the German internet in the first half of the last decade remembers the "not available in your country" error messages on YouTube, because Google didn't make a deal with GEMA.
I don't think we will end up with such a scenario here: lyrics are pervasive and probably also quoted in a lot of other publications. Furthermore, it's not just about lyrics; one can make a similar argument about any published literary work. GEMA is for music, but for literary publications there is VG Wort, who in fact already have an AI license.
I rather think that OpenAI will license the works from GEMA instead. Ultimately this will be beneficial for the likes of OpenAI because it can serve as a means to keep out the small players. I'm sure that GEMA won't talk to the smaller startups in the field about licensing.
Is this good for the average musician/author? These organizations will probably distribute most of the money to the most popular ones, even though AI models benefit from quantity of content rather than popularity.
https://www.vgwort.de/veroeffentlichungen/aenderung-der-wahr...
Can't they just ask for copies of the lyrics they are not allowed to use and s/lyrics//g the training set? I imagine the volume of text that would be removed is relatively minuscule.
They should ask for lyrics they are allowed to use. The volume of the text that's left would be minuscule.
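For what it's worth, "s/lyrics//g" would in practice be less a single regex and more an n-gram scrub against a reference set. A minimal sketch, purely illustrative; the shingle size, threshold, and function names are my assumptions, not anyone's actual pipeline:

    # Minimal sketch of a lyric scrub over a training corpus.
    # Shingle size, threshold, and names are illustrative assumptions.

    def shingles(text, n=8):
        """Overlapping n-word shingles, case- and punctuation-folded."""
        words = [w.strip(".,!?\"'").lower() for w in text.split()]
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def build_lyric_index(flagged_lyrics):
        """Union of shingles over every lyric the rights holders flagged."""
        index = set()
        for lyric in flagged_lyrics:
            index |= shingles(lyric)
        return index

    def scrub(training_docs, lyric_index, max_hits=2):
        """Keep only documents sharing at most max_hits shingles with the index."""
        return [doc for doc in training_docs
                if len(shingles(doc) & lyric_index) <= max_hits]

The catch, as the joke above hints, is recall: lyrics are quoted, misquoted, and translated all over the web, so a scrub like this either misses copies or deletes far more than just lyrics.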
It would be so hilarious if GEMA was actually useful for once and not a detriment to society and artists in general.
However, of course, OpenAI will ignore this; at worst nothing will change, and at best they get a slap on the wrist and a fine and continue scraping.
You can’t take that stuff out of the models at this point anyway.
I made a living from GEMA payments some while back, but dear lord, so much of how the institution does what it does feels so bad and zero-sum. Might just be that the world would be better off without it. It does something important for rights holders for sure, but (and I understand, I am heavily back-seat driving here without offering a solution) there must be better ways to go about it.
Now, without the Filmförderung (public film funding), all those grim dark arthouse movies where people yell "Scheisse!" in Berlin stairwells would never get made. And all that committee-pleasing shovelware, extra-cute, boring, and clogging up the app stores with zero sales: what would we do without that? Take anything popular streaming-wise and ask yourself whether it would have gotten through, and if it was stopped, by what and by whom. Fire that, to fix Germany's media sector.
Nah. It’s so easy for OpenAI to modify their output. I’m already seeing them restrict news article re-generation by newspaper name. They do it to reduce liability. There’s also a big copyright infringement case coming up in the USA this year, and being able to point to responsiveness to complaints will be a key part of their legal defense I bet.
You can modify the output but the underlying model is always susceptible to jail breaks. A method I tried a couple months ago to reliably get it to explain to me how to cook meth step by step still works. I’m not gonna share it, you just have to take my word on this.
I believe you, but you only need to establish a safety standard where jailbreaking is required by the end-user to show you are protecting property in good faith, AFAIK.
Why is this so problematic? You can read all this stuff in old papers and patents that are available on the web.
And if you are not capable of doing that, you will likely not succeed with the ChatGPT instructions either.
It'd be equally hilarious if that VC money were used to actually better society by crushing GEMA in court.
But realistically, all that will happen is that the "Pauschalabgabe" (the German flat-rate copyright levy) is extended to AI subscriptions, making stuff more expensive for everyone.
Damn I didn’t even consider the second part…
Lyrics produced some of the first AI slop I noticed after ChatGPT was launched in late 2022, even if the large models hadn’t been trained on them specifically. Overnight there were a bunch of different advertising-laden sites that clearly scraped Genius or other lyric websites, and then had GPT generate commentaries on what the lyrics supposedly mean, so that these would get picked up by search engines.
The result was mostly comical, the commentaries for vacuous pop music all sounded more or less the same: “‘Shake Your Booty’ by KC and the Sunshine Band expresses the importance of letting one’s hair down and letting loose. The song communicates to listeners how liberating it is to gyrate one’s posterior and dance.” Definitely one of the first signs that this new tech was not going to be good for the web.
I am curious what happens if they call their bluff on this and cut off ChatGPT in Germany. Not that I think OpenAI is doing the right thing, just, I don’t think a country’s government can justify no commercial LLMs to its populace.
There are 80 million Germans. If you were OpenAI, or its shareholders, would you leave that market open for a competitor? No, you'd make a version of your product without the lyrics. More EU countries are going to follow and reach the same conclusion, especially now that Germany has set a legal precedence. Should OpenAI just pull out of a market with 500 million people and leave it to Claude, Perplexity or someone else entirely?
It doesn't appear that modern LLMs are really that hard to build; expensive, perhaps, but if you have a monopoly on a large enough market, price isn't really your main concern.
> More EU countries are going to follow and reach the same conclusion, especially now that Germany has set a legal precedence.
That's not how laws and regulations work in European or even EU countries. Courts/the legal system in Germany can not set legal precedents for other countries, and countries don't use legal precedents from other countries, as they obviously have different laws. It could be cited as an authority, but no one is obligated to follow that.
What could happen for example, would be that EU law is interpreted through the CJEU (Court of Justice of the European Union), and its rulings bind EU member states, but that's outside of what individual countries do.
Sidenote: I'm not a native English speaker, but I think it's "precedent", not "precedence"; similar words, but the first one is specifically what I think you meant.
> That's not how laws and regulations work in European or even EU countries
Yes. Even just looking at other court cases within Germany, the role of precedent is in general not quite as powerful (courts are supposed to follow what the law says, not what other courts say). To be clear, this is quite a bit oversimplified; other courts' rulings do still matter in practice, especially if they come from higher courts. But it's very different from how it is commonly presented to work in the US (I can't say whether it actually works that way there).
But EU member states do synchronize the general workings of many laws to make a unified market practically possible, and this includes the general way copyright works (each country implements its own specific laws, but all follow the same general framework, so details can differ).
And the parts which are the same are pretty clear about this:
- if you distribute a copy of something, it's a copyright violation no matter the technical details
A human memorizing the content and then reproducing it would still commit copyright infringement, so it should be pretty obvious that this applies to LLMs too, where you could potentially even argue that it's not just "memorizing" the content but storing it compressed and a bit lossy...
And that honestly isn't just the case in Germany, or the EU. The main reason AI companies have mostly gotten away with it so far is judges being pressured to rule leniently, as "it's the future of humanity", "the country wouldn't be able to compete", etc. Or in other words, corruption (politicians are supposed to change laws if things change, not tell judges not to do their job properly).
> countries don't use legal precedents from other countries, as they obviously have different laws
The seminal authority for all copyright laws, the Berne Convention, is ratified by 181 countries. Its latest revisions are TRIPS (concerning authorship of music recordings) and the WIPO Copyright Treaty (concerning digital publication), both of which are ratified by the European Union as a whole. It's not directly obvious to me that EU member states have different laws in this particular area.
That said, the EU uses the civil law model and precedent doesn't quite have the same weight here as it does under common law.
US copyright law originates in the constitution and the US does not follow a number of elements of the Berne convention, such as moral rights.
> I think it's "precedent", not "precedence",
I think you're right, also not native English speaker.
No, you're right that a German ruling can't influence e.g. the similar lawsuit against Suno in Denmark, but as you point out, it can, and most likely will, be cited, and I think that often carries a lot of weight.
>That's not how laws and regulations work in European or even EU countries. Courts/the legal system in Germany can not set legal precedents for other countries, and countries don't use legal precedents from other countries, as they obviously have different laws. It could be cited as an authority, but no one is obligated to follow that.
Do you have some sort of different understanding of copyright law where it's legal to commercially use lyrics (verbatim, mind you) without a license?
There are many competing providers of commercial LLMs with equal capabilities, so another vendor would probably be happy to serve a leading Western market of 83 million people.
Yeah? Which commercial provider’s model do you think was trained without using lyrics?
The point is that some other vendor will do the work to implement the filtering required by Germany even if OpenAI doesn't.
I would imagine providers who want to comply will scan the LLM's output and pay a license fee to the owner if it contains lyrics.
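Something like this minimal sketch, assuming a reference database of licensed lyrics; the names and threshold are made up for illustration:

    import difflib

    LICENSED_LYRICS = {            # hypothetical: rights holder -> lyric text
        "ExampleVerlag": "...",    # placeholder entry
    }

    def longest_shared_run(a, b):
        """Length of the longest verbatim character run shared by a and b."""
        m = difflib.SequenceMatcher(None, a, b)
        return m.find_longest_match(0, len(a), 0, len(b)).size

    def holders_to_pay(completion, threshold=80):
        """Rights holders whose lyrics the completion reproduces near-verbatim."""
        return [holder for holder, lyric in LICENSED_LYRICS.items()
                if longest_shared_run(completion, lyric) >= threshold]

Paraphrases and translations would sail straight through a check like this, of course.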
They scan for commercial work already. Isn’t the law about training, not output?
Perhaps; I didn't read the court ruling.
But I'd be surprised if that was generally the case. It's easy to see why ChatGPT 1:1 reproducing a song's lyrics would be a copyright issue. But creating a derivative work based on the song?
What if I made a website that counts the number of alliterations in certain songs' lyrics? Would that be copyright infringement, because my algorithm uses the original lyrics to derive its output?
If this ruling really applied to any algorithm deriving content from copyright-protected works, it would be pretty absurd.
But absurd copyright laws would be nothing new, so I won't discount the possibility.
> But creating a derivative work based on the song?
1. It wouldn't matter, as derivative work still needs the original license,
2. except if it's not derivative but just inspired,
and the court case was about it being pretty much _the same work_.
OpenAI's defense also wasn't that it's derived or inspired but, to quote:
> Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued.
and the court order said, more or less:
- if it can reproduce the song lyrics, it means it stored a copy of the song lyrics somehow, somewhere (memorization), but storing copies requires a license and OpenAI has no license
- if it outputs a copy of the song lyrics, it means it's making another copy of them and giving it to the user, which is copyright infringement
And this makes sense: if a human memorizes a song and then writes it down when asked, it still is, and always has been, copyright infringement (else you could launder copyright by hiring people to memorize things and then write them down, which would be ridiculous).
And technically speaking, LLMs are at their core a lossy compressed store of their training content plus statistical models about it. To be clear, that isn't some absurd around-five-corners reasoning; it's a pretty core aspect of their design, and it was well known even before LLMs became a big deal and OpenAI got huge investments. OpenAI pretty much knew this was a problem from the get-go. But as with any recent big US "startup", following the law doesn't matter.
Its technically being an unusual form of lossy compressed storage means the memorization counts as copyright infringement (under current law).
But I would argue the law should be improved here, so that under some circumstances "memorization" in LLMs is treated like "memorization" in humans (i.e. not an illegal copy, until you make it one by writing it down). You can't make it all circumstances, though, because, as mentioned, you can use the same tech to do what is basically lossy file compression, and you don't want people to launder copyright by training an LLM on a single text/song/movie and then distributing that...
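To make the memorization point concrete, here is a minimal sketch of the kind of probe that reasoning implies; `generate` stands in for any model call and is an assumption, not anyone's real interface:

    import difflib

    def recall_ratio(generate, opening, continuation):
        """Fraction of the true continuation that comes back as one verbatim run."""
        output = generate(opening)
        m = difflib.SequenceMatcher(None, output, continuation)
        match = m.find_longest_match(0, len(output), 0, len(continuation))
        return match.size / max(len(continuation), 1)

    # A ratio near 1.0 for text the model only ever saw in training is hard
    # to square with "there is no copy stored in the weights".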
That seems like a really broad interpretation of "technically memorization" that could have unintended side effects (like say banning equations that could be used to generate specific lyrics), but I suppose some countries consider loading into RAM a copy already. I guess we're already at absurdity
>But creating a derivative work based on the song?
You need a license to create derivative works.
They clearly didn't do that properly, or we wouldn't have the current lawsuit.
The lawsuit was also not about whether it is or isn't copyright infringement. It was about who is responsible (OpenAI, or the user who tries to bait it into making another illegal copy of song lyrics).
A model outputting song lyrics means it has them stored somehow, somewhere. Just because the storage is a lossy compressed obscure hyper-dimensional transformation of some kind doesn't mean it didn't store an illegal copy; otherwise it wouldn't have been able to output them. _Technical details do not protect from legal responsibilities (in general)_
You could (maybe should) add new laws which in some form treat things memorized by an LLM the same as if a human had memorized them, but currently LLMs have no special legal treatment when it comes to storing copies of things.
No, it’s specifically about (mostly) verbatim producing big chunks of lyrics in the output. The court PR specifically mentioned memorization, retaining training data, multiple times.
This assumes that tech companies can act above the law because they've got a new feature to jam down our throats. Have you considered that not everyone wants that? Or that it might not be the best thing?
> Have you considered that not everyone wants that? Or that it might not be the best thing?
Did I suggest either of those things?
Conversely, last week we had Spain being willing to cut off Cloudflare (!) to protect football match royalties.
> I don’t think a country’s government can justify no commercial LLMs to its populace.
Counter-argument: can any country's government justify allowing its population and businesses to become completely dependent on an overseas company which does not comply with its laws? (For Americans, think "China" in this case)
I'm curious why you think the rule of law is a bluff.
Probably pattern recognition
I come from the country with the world’s oldest continuous parliament, and they change the law all the time. Arguably that’s all the majority of politicians do.
German student performance will plateau, while all other countries slowly decline.
First, due to how the unified EU market works, they would have to cut it off from all of the EU, not just Germany.
Second, it would probably be good for the EU, and even the US, as it would de-monopolize the market a bit before that becomes fully impossible.
Claude and Gemini would become more popular.
> cut off ChatGPT in Germany
God I can only hope
> I don’t think a country’s government can justify no commercial LLMs to its populace
They're not saying no LLMs, they're saying no LLMs using lyrics without a license. OpenAI simply needs to pay for a license, or train an LLM without using lyrics.
But lyrics are just one example. Are you saying that training experiments must filter out all substrings from the training input that bear too close a resemblance to a substring of a copyrighted work?
Obviously there's a limit, reproducing a single sentence is unlikely to be copyright infringement just because there are only so many words in a language; but if reproducing some text would be copyright infringement if a human did it, I don't see why LLM companies should get a free pass.
If it's really essential that they train their models on song lyrics, or books, or movie scripts, or articles, or whatever, they should pay license fees.
At some point, use of the lyrics becomes de minimis
Oi, you got a loisense to read those words and then repeat them back to me when asked?
I take it you think copyright shouldn't exist at all, then?
That is a separate opinion, but with respect to the question at hand, the utilitarian value of being able to ask a computer "what are the lyrics to x" and having it produce them outweighs whatever small ideological sanctity the music labels assign to being able to gatekeep the written words of a composition to a small blessed few. It's not like chat gpt is serving up the mp3 file to you. So correct, it is insane to me that mere reproduction of just the lyrics is afforded such weighty copy protection.
(Vis a vis, I take it you write a certified letter to Universal before reproducing Happy Birthday in public? ;) That is actually a far more egregious violation indeed, as it is both a performance of the copyrighted work and in front of an audience - neither of which are the case for the chatbot - yet one we all seem to understand to be fair use.
This obviously applies to all copyrighted works. I could sue OpenAI when it reproduces my source code that I published on the Internet.
They already "filter" the code to prevent it from happening (reproducing exact works). My guess it is just superficially changing things around so it is harder to prove copyright violations.
Of course the models are not human, but if you consider this situation as if they are persons, then the question becomes: May a person read lyrics and tell it to someone when asked, and the court's ruling basically says no, this may not happen, which makes little sense.
I guess the main difference between the situation with language models and humans is one of scale.
I think the question should be viewed like this: if I as a corporation did the same thing but with humans, would it be legal or not? Imagine hiring a bunch of people, having them read a bunch of lyrics, and then having them answer questions about lyrics. If no law prohibits the hypothetical with people, then I don't see why it should be prohibited with language models, and if it is prohibited with people, then no AI-specific ruling should be needed.
All this being said, Europe is rapidly becoming even more irrelevant than it was, living off the largesse of the US and China; it's like some uncontacted tribe ruling that satellites can't take aerial photos of them. It's all good and well, just irrelevant. I guess Germany can always go the route of North Korea if they want.
> "May a person read lyrics and tell it to someone when asked"
If you sell tickets to an event where you read the lyrics aloud, it's commercial performance and you need to pay the author. (Usually a cover artist would be singing, but that's not a requirement.)
So it's not like a human can recite the lyrics anywhere freely either.
You don't even have to sell tickets: if it's a free concert, copyright is likely infringed. This is likely true in all jurisdictions.
If someone hires me as a secretary and they ask me what the lyrics of a song are, there is no law that prohibits me from telling them if I know, and I don't have to license the lyrics in order to do so.
If they hire me primarily to recite lyrics, then sure, that would probably be some manner of infringement if I don't license them. But I feel like the case with a language model is much more the former than the latter.
As soon as you take the LLM output and publicize it, it turns around and is a lot more akin to having your secretary read out the lyrics publicly. If you don't publicize it in any way, how would the copyright holder ever find out?
But the LLM is not advertised as a lyrics DB, and it in no way guarantees that it will reproduce the lyrics accurately, and similarly the copyright holder will never know that it's reproducing the lyrics unless they snoop on my conversations with it or go ask it directly.
But then, with the analogy, if I'm a secretary and the copyright holder of some lyrics calls me and asks if I know the lyrics of one of their songs, I don't think it's infringement to say yes and then repeat them back to them.
The LLM is not publicising anything; it's just doing what you ask it to do. It's the humans using it who publicise the output.
> May a person read lyrics and tell it to someone when asked, and the court's ruling basically says no, this may not happen, which makes little sense.
I think the difference here is that your example is what a search engine might do, whereas AI is taking the lyrics, using them to create new lyrics, and then passing them off as its own.
> whereas AI is taking the lyrics, using them to create new lyrics, and then passing them off as its own.
Is this not something every single creative person ever has done? Is this not what creating is? We take in the world, and then create something based on that.
I feel compelled to support banning AI from infringing on art, even though most pop songs are terrible.
"pop" music had its own avalanche of slop long before the advent of AI. Soulless reproductions and remixes of once-popular songs are everywhere.
These people would stream German schlager to every screen and speaker in Europe and charge for it 100 EUR monthly per breathing person, if they could. They are violent.
Guess even AI can’t resist singing along! But seriously, copyright laws don’t hit pause just because it’s “machine learning.” Time for AI to learn the lyrics and the legal notes.
Please stop posting LLM-generated comments to HN.
No need to leave a comment in reply to such generated text. Just email the mods directly with a link to the username, they zap such accounts daily.
With AI slop showing up everywhere, there’s a real danger that folks will just no longer be motivated to produce real original content.
With all major models now basically trained on nearly all available data, beyond the financial AI bubble about to burst there's also a big content bubble that's about exhausted, as folks are just pumping out slop vs. producing original creative human output. That may be the ultimate long-term tragedy of the present AI hype cycle. Expect "made by a human" to soon be a tag associated with premium brands and customer experiences.
It is of no cost to me when someone else writes a book, plays a song or draws a picture. It is also true that, basically whatever I ever do, someone else has done better. This does not stop me from doing those things because the value within them is in doing them.
We have cars, buses and planes, yet people do partake in pilgrimages. The process matters, even if only personally.
AI slop is like 90’s websites and desktop publishing - there’s a novelty for AI-newbie-creators driving them to churn out lazy crap, while being oblivious to how it lands with strangers.
Tastes will mature, society will more vocally mock this crap, and we’ll stop seeing the sloppier stuff come out of reputable locations.
You assume that the public recognizes AI slop for what it is. Across platforms now, people are readily engaging with blatant AI text posts and generated images as if they were bona fide. In fact, if you point out that the poster is a bot, you may well get some flak from the community.
People are already upset over that ‘walk my walk’ song on the country music charts
I will not stop writing music or drawing my furry bullshit, no matter the culture climate around me. Don't get your hopes up ;3
When you're the only one doing it, you'll have a large impact on model generation
> Expect “made by a human” to soon be a tag associated with premium brands and customer experiences.
I went to a grammar school and I write in mostly pretty high-quality sentences with a bit of British English colloquialism. I spell well, spend time thinking about what I am saying and try to speak clearly, etc.
I've always tried to be kind about people making errors, but I am currently retraining my mind to see spelling mistakes and grammar errors as inherent authenticity. Because one thing ChatGPT and its ilk cannot do (I guess architecturally) is act convincingly like those who misspell, accidentally coin new eggcorns, accidentally use malapropisms, or use novel but terrible grammar.
And you're right: IMO the rage against the cultural damage AI will do is only just beginning, and I don't think people have clocked on to the fact that economic havoc is built-in, success or failure.
The web/AI/software-tech industry will be loathed even more than it is now (and this loathing is increasingly justified)
> one thing ChatGPT and its ilk cannot do (I guess architecturally) is act convincingly like those who misspell, accidentally coin new eggcorns, accidentally use malapropisms, or use novel but terrible grammar
Just wait a few more years until the majority of ChatGPT training data is filled with misspellings, accidental eggcorns, malapropisms and terrible grammar.
That, and AI slop itself.
> folks will just no longer be motivated to produce real original content.
Honestly if your only motivation for creating art was “computers can’t do what I do” then… I don’t want to be too gatekeepy about it, but that doesn’t sound like you’re a ‘real’ artist to me. Real artists create art because they enjoy doing it, not because it’s the exclusive domain of humans.
You don’t need to be special, you don’t need to be the best, you don’t need to even be good or successful or recognized or appreciated (although of course all those things are nice) - you just have to be creating art.
> With AI slop showing up everywhere, there’s a real danger that folks will just no longer be motivated to produce real original content.
I think people would still produce original things as long as they have the means to do it. I guess we could say it is our nature. My fear is AI monopolizing the wealth that once would go to support people producing art.
This. I still produce original things and will continue to do so until I am incapable anymore. What's changed, though, is that I no longer put or discuss those things on the open internet because there's no realistic way to prevent it from getting used to train genAI models.
We already have this in the physical world.
Plastic/synthetics are the slop of the physical world. They're a side product of extracting oil and gas so they're extremely cheap.
Yet if you look at synthetics by volume, probably 99% of them are used just because they're cheaper than the natural alternative. Yes, some have characteristics that are novel, but by and large everything we do with plastics is ultimately based on "they're cheaper".
Plastics, unfortunately, aren't going away.
> With AI slop showing up everywhere, there’s a real danger that folks will just no longer be motivated to produce real original content.
BBC truly was ahead of times with their deletion of tv shows.
Edit: will this actually change OpenAI's behaviour to any meaningful extent?
Other countries are currently going through the same. KODA is running a similar lawsuit on behalf of the Danish musicians, they can now point to Germany as an example, making it much easier for them to win.
Does what a US court rules really matter?
Probably not for something like this honestly. I feel like it would just keep getting appealed up. But what do I know? I'm not an attorney.
It does in Germany? And quite likely in the rest of the EU?
I guess. But I doubt openai will change its behaviour due to this.
Do you think that the German courts will just shrug and accept noncompliance with a court order?
I just expect openai to suspend service to Germany such that Germans have to use a VPN.
There's a major risk to being the market leader in a new, controversial technology. Look what happened to Juul
Highly addictive nicotine formulations targeted at teens are not exactly "new technology".
I'm not sure what the problem is here; lyrics are public, you can search "$songname lyrics" and get the result on a website (or even on the search engine results page). What's the issue with an LLM producing those lyrics if you ask?
They aren't! They're subject to licensing!
https://www.digitaltrends.com/social-media/rap-genius-deserv... (2013)
Long ago the first site I remember to do this was lyrics.ch, which was long since shut down by litigation. I'm not endorsing the status quo here, but if the licensing system exists it is obviously unfair to exempt parties from it simply because they're too big to comply.
Just because you can find them freely online doesn't make them public in the legal sense. If that was the case music piracy would also be legal.
Member when music sites were suing YouTube for music videos, and now they are begging people to watch them there and YT view counts are a bragging topic?
Soon music industry will be begging OpenAI for exposure of their content, just like the media industry is begging Google for scraping.
That's exactly the difference between using with or without license.
YouTube pays the music owner. OpenAI can never pay, as even while stealing content they still manage to lose 5 dollars for every dollar they make.
However, the lyrics are shown because the user requested them; shouldn't the user be liable instead? The same way social networks are not liable for content uploaded by users? I think there is a somewhat double standard here.
Of course, maybe OpenAI et al. should have gotten a license before training on the lyrics, or avoided training on copyrighted content. But the first would be expensive and the latter would require them to develop actual intelligence.
Why should the user be liable? They didn't reproduce the copyrighted work and the machine is totally capable of denying output (like it already does for other categories of material).
At the very least, the users being liable instead of OpenAI makes no sense. Like arresting only drug users and not dealers.
There are countries where drug consumption/possession is penalized too. There is a similar example in another area: for instance, in Sweden, Norway and Belize, selling sex (aka prostitution) is legal, but buying it is not. So, your example actually exists in world legislation.
I'm just asking where are we going to put the line and why.
You had originally said the user should be liable instead of OpenAI being liable.
> However, the lyrics are shown because the user requested them; shouldn't the user be liable instead?
I would imagine the sociological rationale for allowing sex work would not map to a multi-billion-dollar company.
And to add, the social network example doesn't map because the user is producing the content and sharing it with the network. In OpenAI's case, they are creating and distributing copyrighted works.
No, the edited wording still conveys the same meaning. My edit was to fix another grammar typo.
The social networks are distributing such content AND benefiting from selling ads on them. Adding ads on top is a derivative work.
Personally I'm on the side of penalizing the side that provides the input, not the output:
- OpenAI training on copyrighted works.
- Users requesting custom works based on copyrighted IP.
That is my opinion on how it should be layered, that's it. I'm happy to discuss why it should be that way or why not. As I put in another comment, my concern is that mandating copyright filtering on each generative tool would end up propagating to every single digital tool, which as a society we don't really want.
I am curious why you are of the opinion that the user should be in trouble for requesting the copyright material and not the provider of the material. I feel like there is a distinction in something that was local-first compared to a SaaS. Like a local AI model that reproduced copyrighted works for your own use might not be problematic compared to a remote model reproducing a copyrighted work and distributing it over the internet to you. Most jurisdictions treat remote access across jurisdictional boundaries differently than completely local acts.
> However, the lyrics are shown because an action is the user so, shouldn't be the user be liable instead?
The same goes for websites where you can watch pirated streams. "The action is the user pressing play" sounds like it might win you an internet argument, but I'm 99% sure none of the courts will play those games; you as the operator who enabled whatever the user could do end up liable.
I think that is completely different. Piracy websites do only one thing. Chatbots are different.
My concern is where we are going to put the line: if I type a copyrighted song in Word, is Microsoft liable? If I upload a lyric to ChatGPT and ask it to analyze or translate it, is that a copyright violation?
I totally understand your line of thinking. However, the one I'm suggesting could be applied as well, and it has precedents in law (intellectual authors of crimes are punishable, not only the perpetrators).
> I think that is completely different. Piracy websites do only one thing. Chatbots are different.
Well... YouTube is liable for any copyrighted material on their site, and does "more than one thing".
Not really. YouTube is not liable as long as they remove the content after a copyright complaint, among other mechanisms.
The problem is that if OpenAI is liable for reproducing copyrighted content, so will be other products such as word processors, video editors and so on. So, as a society, where will we put the line?
Are we going to tolerate some copyright infringement in these tools, or are we going to pursue copyright infringement even in other tools, given we already have the tools to detect it?
We cannot have double standards; the law should be applied equally to everyone.
I do think that overall making OpenAI liable for output is a bad precedent, because of repercussions beyond AI tools. I'm all fine with making them liable for having trained on copyrighted content and so on...
How does OpenAI being liable for reproducing copyrighted material imply that a word processor should be as well? Last time I checked, word processors don't have a black box text generator trained on pre-existing works: a word processor only has the text that the user types into it.
> Not really. Youtube is not liable as long as they remove the content after a copyright complain and other mechanisms.
They have to take action precisely because they're liable for the material on their platform.
This is such a bad take.
If that were the case, then Google wouldn't receive DMCA takedowns of piracy links; instead, they would offer up the users searching for pirated content. The former is more prevalent than the latter because, one, the latter requires an invasion of privacy (you'd have to serve up everyone's search results) and, two, it requires understanding intent.
The same issue applies here. OpenAI would need to share all chats for courts to sift through, and second, how do you judge intent? If someone asks for a German pop song and OpenAI decides to output "Bochum", whose fault is that?