OpenAI Says It’s "Over" If It Can’t Steal All Your Copyrighted Work (futurism.com)
from DeadNinja@lemmy.world to privacy@lemmy.ml on 16 Mar 2025 14:55
https://lemmy.world/post/26927735

#privacy


masterspace@lemmy.ca on 16 Mar 2025 14:58 next collapse

Piracy is not theft.

B1naryB0t@lemmy.dbzer0.com on 16 Mar 2025 15:03 next collapse

When a corporation does it to get a competitive edge, it is.

pennomi@lemmy.world on 16 Mar 2025 15:29 next collapse

It’s only theft if they support laws preventing their competitors from doing it too. Which is kind of what OpenAI did, and now they’re walking that idea back because they’re losing again.

masterspace@lemmy.ca on 16 Mar 2025 16:17 next collapse

No it’s not.

It can be problematic behaviour, and you can make it illegal if you want, but at a fundamental level, making a copy of something is not the same thing as stealing it.

pyre@lemmy.world on 16 Mar 2025 18:20 collapse

it uses the result of your labor without compensation. it’s not theft of the copyrighted material. it’s theft of the payment.

it’s different from piracy in that piracy doesn’t equate to lost sales. someone who pirates a song or game probably does so because they wouldn’t buy it otherwise. either they can’t afford it or they don’t find it worth the price. so if they couldn’t pirate it, they still wouldn’t buy it.

but this is a company using labor without paying you, something that they otherwise definitely have to do. he literally says it would be over if they couldn’t get this data. they just don’t want to pay for it.

masterspace@lemmy.ca on 16 Mar 2025 19:21 collapse

That information is published freely online.

Do companies have to avoid hiring people who read and were influenced by copyrighted material?

I can regurgitate copyrighted works as well, and when someone hires me, places like Stackoverflow get fewer views to the pages that I’ve already read and trained on.

Are companies committing theft by letting me read the internet to develop my intelligence? Are they committing theft when they hire me so they don’t have to do as much research themselves? Are they committing theft when they hire thousands of engineers who have read and trained on copyrighted material to build up internal knowledge bases?

What’s actually happening is that the debates around AI are exposing a deeply and fundamentally flawed copyright system. It should not be based on scarcity and restriction but on rewarding use. Information has always been able to flow freely; the mistake was linking payment to restricting its movement.

pyre@lemmy.world on 16 Mar 2025 21:17 collapse

it’s ok if you don’t know how copyright works. also maybe look into plagiarism. there’s a difference between relaying information you’ve learned and stealing work.

Grimy@lemmy.world on 16 Mar 2025 21:46 collapse

Training on publicly available material is currently legal. It is how your search engine was built and it is considered fair use mostly due to its transformative nature. Google went to court about it and won.

pyre@lemmy.world on 16 Mar 2025 22:17 collapse

can you point to the trial they won? I only know about a case that was dismissed.

because what we’ve seen from ai so far is hardly transformative.

Grimy@lemmy.world on 16 Mar 2025 23:21 collapse

Sorry, I was talking about HiQ Labs v. LinkedIn. But there are also Perfect 10 v. Google and Authors Guild v. Google, which show that scraping public data is perfectly fine and involve the company in question.

An image generator is trained on a billion images and is able to spit out completely new images on whatever you ask it. Calling it anything but transformative is silly, especially when such things as collage are considered transformative.

pyre@lemmy.world on 16 Mar 2025 23:31 collapse

eh, “completely new” is a huge stretch there. splicing two or ten movies together doesn’t give you an automatic pass.

refurbishedrefurbisher@lemmy.sdf.org on 16 Mar 2025 17:18 collapse

Only if it’s illegal to begin with. We need to abolish copyright, as with the internet and digital media in general, the concept has become outdated as scarcity isn’t really a thing anymore. This also applies to anything that can be digitized.

The original creator can still sell their work and people can still choose to buy it, and people will if it is convenient enough. If it is inconvenient or too expensive, people will pirate it instead, regardless of the law.

zarathustra0@lemmy.world on 16 Mar 2025 15:03 next collapse

Piracy is only theft if AI can’t be made profitable.

kibiz0r@midwest.social on 16 Mar 2025 16:39 next collapse

What OpenAI is doing is not piracy.

Grimy@lemmy.world on 16 Mar 2025 16:54 collapse

Whatever it is, it isn’t theft

kibiz0r@midwest.social on 16 Mar 2025 17:53 collapse

Also true. It’s scraping.

In the words of Cory Doctorow:

Web-scraping is good, actually.

Scraping against the wishes of the scraped is good, actually.

Scraping when the scrapee suffers as a result of your scraping is good, actually.

Scraping to train machine-learning models is good, actually.

Scraping to violate the public’s privacy is bad, actually.

Scraping to alienate creative workers’ labor is bad, actually.

We absolutely can have the benefits of scraping without letting AI companies destroy our jobs and our privacy. We just have to stop letting them define the debate.

FauxLiving@lemmy.world on 16 Mar 2025 18:47 next collapse

Our privacy was long gone well before AI companies were even founded, if people cared about their privacy then none of the largest tech companies would exist because they all spy on you wholesale.

The ship has sailed on generating digital assets. This isn’t a technology that can be un-invented. Digital artists will have to adapt.

Technology often disrupts jobs, you can’t fix that by fighting the technology. It’s already invented. You fight the disruption by ensuring that your country takes care of people who lose their jobs by providing them with support and resources to adapt to the new job landscape.

For example, we didn’t stop electronic computers to save the jobs of computers (a large field of highly trained humans who did calculations), and CAD destroyed the drafting profession. Digital artists are not the first to experience this and they won’t be the last.

masterspace@lemmy.ca on 16 Mar 2025 19:23 collapse

Our privacy was long gone well before AI companies were even founded, if people cared about their privacy then none of the largest tech companies would exist because they all spy on you wholesale.

In the US. The EU has proven that you can have perfectly functional privacy laws.

If your reasoning is based on the US not regulating its companies, and concludes from that that it’s impossible to regulate them, then your reasoning is bad.

FauxLiving@lemmy.world on 16 Mar 2025 19:48 collapse

My reasoning is based upon observing the current Internet from the perspective of working in cyber security and dealing with privacy issues for global clients.

The GDPR is a step in the right direction, but it doesn’t guarantee your digital privacy. It’s more of a framework to regulate the trading and collecting of your personal data, not to prevent it.

No matter who or where you are, your data is collected and collated into profiles which are traded between data brokers. Anonymized data is a myth, it’s easily deanonymized by data brokers and data retention limits do essentially nothing.

AI didn’t steal your privacy. Advertisers and other data consuming entities have structured the entire digital and consumer electronics ecosystem to spy on you decades before transformers or even deep networks were ever used.

Grimy@lemmy.world on 16 Mar 2025 19:28 next collapse

Creators who are justifiably furious over the way their bosses want to use AI are allowing themselves to be tricked by this argument. They’ve been duped into taking up arms against scraping and training, rather than unfair labor practices.

That’s a great article. Isn’t this kind of exactly what is going on here? Wouldn’t bolstering copyright laws make training unaffordable for everyone except a handful of companies? Then these companies, because of their monopoly, could easily make the highest-level models affordable only to the owner class.

People are mad at AI because it will be used to exploit them, instead of being mad at the ones who exploit them every chance they get. Even worse, the legislation they shout for will make that exploitation even easier.

grumuk@lemmy.ml on 17 Mar 2025 00:17 collapse

Molly White also wrote about this in the context of open access on the web and people being concerned about how their works are being used.

“Wait, not like that”: Free and open access in the age of generative AI

The same thing happened again with the explosion of generative AI companies training models on CC-licensed works, and some were disappointed to see the group take the stance that, not only do CC licenses not prohibit AI training wholesale, AI training should be considered non-infringing by default from a copyright perspective.

_lilith@lemmy.world on 16 Mar 2025 18:23 next collapse

Yeah but I don’t sell ripped dvds and copies of other peoples art.

Knock_Knock_Lemmy_In@lemmy.world on 16 Mar 2025 20:31 collapse

What if I run a filter over it. Transformative works are fine.

lemminator@lemmy.today on 17 Mar 2025 22:52 collapse

AfricanExpansionist@lemmy.ml on 16 Mar 2025 15:13 next collapse

Obligatory: I’m anti-AI, mostly anti-technology

That said, I can’t say that I mind LLMs using copyrighted materials that they access legally/appropriately (lots of copyrighted content may be freely available to some extent, like news articles or song lyrics).

I’m open to arguments correcting me. I’d prefer to have another reason to be against this technology, not arguing on the side of frauds like Sam Altman. Here’s my take:

All content created by humans follows consumption of other content. If I read lots of Vonnegut, I should be able to churn out prose that roughly (or precisely) includes his idiosyncrasies as a writer. We read more than one author; we read dozens or hundreds over our lifetimes. Likewise musicians, film directors, etc etc.

If an LLM consumes the same copyrighted content and learns how to copy its various characteristics, how is it meaningfully different from me doing it and becoming a successful writer?

catloaf@lemm.ee on 16 Mar 2025 15:26 next collapse

In your example, you could also be sued for ripping off his style.

Bassman1805@lemmy.world on 16 Mar 2025 15:35 next collapse

You can sue for anything in the USA. But it is pretty much impossible to successfully sue for “ripping off someone’s style”. Where do you even begin to define a writing style?

iAmTheTot@sh.itjust.works on 16 Mar 2025 16:12 next collapse

“style”, in terms of composition, is actually a component in proving plagiarism.

catloaf@lemm.ee on 16 Mar 2025 16:49 collapse

There are lots of ways to characterize writing style. Go read Finnegans Wake and tell me James Joyce doesn’t have a characteristic style.

MrQuallzin@lemmy.world on 16 Mar 2025 16:06 collapse

Edited for clarity: If that were the case then Weird AL would be screwed.

Original: In that case Weird AL would be screwed

ryedaft@sh.itjust.works on 16 Mar 2025 16:16 collapse

No because what he does is already a settled part of the law.

MrQuallzin@lemmy.world on 16 Mar 2025 17:27 collapse

That’s the point. It’s established law so OP wouldn’t be sued

pennomi@lemmy.world on 16 Mar 2025 15:32 next collapse

Right. The problem is not the fact it consumes the information, the problem is if the user uses it to violate copyright. It’s just a tool after all.

Like, I’m capable of violating copyright in infinitely many ways, but I usually don’t.

SoulWager@lemmy.ml on 16 Mar 2025 17:07 collapse

The problem is that the user usually can’t tell if the AI output is infringing someone’s copyright or not unless they’ve seen all the training data.

ricecake@sh.itjust.works on 16 Mar 2025 16:08 next collapse

Yup. Violating IP licenses is a great reason to prevent it. Under current law, if they get a license for the book, they should be able to use it how they want.
I’m not permitted to pirate a book just because I only intend to read it and then give it back. AI companies shouldn’t be able to either if people can’t.

Beyond that, we need to accept that we might need to come up with new rules for new technology. There’s a lot of people, notably artists, who object to art they put on their website being used for training. Under current law, if you make it publicly available, people can download it and use it on their computer as long as they don’t distribute it. That current law allows something we don’t want doesn’t mean we need to find a way to interpret current law as not allowing it; it just means we need new laws that say “fair use for people is not the same as fair use for AI training”.

kibiz0r@midwest.social on 16 Mar 2025 17:03 next collapse

If an LLM consumes the same copyrighted content and learns how to copy its various characteristics, how is it meaningfully different from me doing it and becoming a successful writer?

That is the trillion-dollar question, isn’t it?

I’ve got two thoughts to frame the question, but I won’t give an answer.

  1. Laws are just social constructs, to help people get along with each other. They’re not supposed to be grand universal moral frameworks, or coherent/consistent philosophies. They’re always full of contradictions. So… does it even matter if it’s “meaningfully” different or not, if it’s socially useful to treat it as different (or not)?
  2. We’ve seen with digital locks, gig work, algorithmic market manipulation, and playing either side of Section 230 when convenient… that the ethos of big tech is pretty much “define what’s illegal, so I can colonize the precise border of illegality, to a fractal level of granularity”. I’m not super stoked to come up with an objective quantitative framework for them to follow, cuz I know they’ll just flow around it like water and continue to find ways to do antisocial shit in ways that technically follow the rules.

A_norny_mousse@feddit.org on 16 Mar 2025 17:17 next collapse

Except the reason Altman is so upset has nothing to do with this very valid discussion.

As I commented elsewhere:

Fuck Sam Altman, the fartsniffer who convinced himself & a few other dumb people that his company really has the leverage to make such demands.

He doesn’t care about democracy, he’s just scared because a Chinese company offers what his company offers, but for a fraction of the price/resources.

He’s scared for his government money and basically begging for one more handout “to save democracy”.

Yes, I’ve been listening to Ed Zitron.

droplet6585@lemmy.ml on 16 Mar 2025 19:31 collapse

and learns how to copy its various characteristics

Because you are a human. Not an immortal corporation.

I am tired of people trying to have iNtElLeCtUaL dIsCuSsIoN about/with entities that would feed you feet first into a wood chipper if they thought they could profit from it.

Creosm@lemmy.world on 16 Mar 2025 15:21 next collapse

Oh it’s “over”? Fine for me

VeryInterestingTable@lemm.ee on 16 Mar 2025 19:21 collapse

Oh no, what will we do without degenerate generative AIs?!

TachyonTele@lemm.ee on 16 Mar 2025 15:28 next collapse

Darn

vk6flab@lemmy.radio on 16 Mar 2025 15:28 next collapse

“Your proposal is acceptable.”

SomeAmateur@sh.itjust.works on 16 Mar 2025 15:47 next collapse

I think it would be interesting as hell if they had to cite where the data was from on request. See if it’s legitimate sources or just what a reddit user said five years ago

1984@lemmy.today on 16 Mar 2025 15:49 next collapse

Please let it be over, yes.

Nobody even tries to write code from scratch anymore. I think it will have a lot of negative effects on programmers over time.

fartsparkles@lemmy.world on 16 Mar 2025 15:55 next collapse

If this passes, piracy websites can rebrand as AI training material websites and we can all run a crappy model locally to train on pirated material.

moreeni@lemm.ee on 16 Mar 2025 16:04 next collapse

Another win for piracy community

A_norny_mousse@feddit.org on 16 Mar 2025 16:59 next collapse

You are a glass half full sort of person!

underisk@lemmy.ml on 16 Mar 2025 17:50 next collapse

That would work if you were rich and friends with government officials. I don’t like your chances otherwise.

Knock_Knock_Lemmy_In@lemmy.world on 16 Mar 2025 20:29 collapse

Fuck it. I’m training my home AI with the world’s TV, movies, and books.

surph_ninja@lemmy.world on 16 Mar 2025 16:34 next collapse

This is why they killed that former employee.

Sazruk@lemmy.wtf on 16 Mar 2025 17:39 collapse

Say his name y’all

Suchir Balaji

surph_ninja@lemmy.world on 16 Mar 2025 23:12 collapse

Sorry, wasn’t trying to be a dick. Just couldn’t think of it at the time.

Sazruk@lemmy.wtf on 23 Mar 2025 17:48 collapse

No worries wasn’t trying to call you out just making a point

A_norny_mousse@feddit.org on 16 Mar 2025 17:01 next collapse

Fuck Sam Altman, the fartsniffer who convinced himself & a few other dumb people that his company really has the leverage to make such demands.

“Oh, but democracy!” - saying that in the US of 2025 is a whole 'nother kind of dumb.
Anyhow, you don’t give a single fuck about democracy, you’re just scared because a Chinese company offers what you offer for a fraction of the price/resources.

You’re scared for your government money and basically begging for one more handout “to save democracy”.

Yes, I’ve been listening to Ed Zitron.

supersquirrel@sopuli.xyz on 16 Mar 2025 17:13 next collapse

gosh Ed Zitron is such an anodyne voice to hear, I felt like I was losing my mind until I listened to some of his stuff

dylanmorgan@slrpnk.net on 16 Mar 2025 20:30 collapse

Yeah, he has the ability to articulate what I was already thinking about LLMs and bring in hard data to back up his thesis that it’s all bullshit. Dangerous and expensive bullshit, but bullshit nonetheless.

It’s really sad that his willingness to say the tech industry is full of shit is such an unusual attribute in the tech journalism world.

supersquirrel@sopuli.xyz on 17 Mar 2025 03:15 collapse

It’s really sad that his willingness to say the tech industry is full of shit is such an unusual attribute in the tech journalism world.

What is interesting is that if he didn’t pretty regularly say, in so many words, “why the fuck AM I the guy who is sounding the alarm here??”, I would be much more skeptical of his points. He isn’t someone directly aligned with the industry, at least not in the “authoritative expert capable of doing a thorough takedown of a bubble/hype mirage” sense that you would expect of someone sounding the alarm on a bubble. I mean, I can tell the guy likes the attention (not in a bad sense really), but he seems utterly genuine in the attitude of “wtf, well ok I will do it… but seriously, I AM the guy who is sounding the alarm here? This isn’t honestly my direct area of expertise? I will provide you a thorough explanation with proof… but my argument really isn’t complicated, it is just ‘business doesn’t make money, why will no one acknowledge that’, and it breaks my brain that people who are experts in directly adjacent/relevant things can’t see this…? am I high?”

… cus yeah Ed Zitron, that is how a lot of us fucking feel right now.

(these aren’t direct quotes, I was summarizing, go watch/listen to some of Ed Zitron’s stuff, none of his arguments hinge on anything unreasonable or especially complicated, which is the worrying part…)

JoeKrogan@lemmy.world on 16 Mar 2025 20:06 next collapse

Fartsniffer 🤣

Rekorse@sh.itjust.works on 17 Mar 2025 21:26 collapse

It seems like their message was written specifically for the biases the current administration holds. Calling China PRC is an obvious example. So it was written by idiots for idiots apparently.

schnurrito@discuss.tchncs.de on 16 Mar 2025 17:57 next collapse

If It Can’t Steal All Your Copyrighted Work

…wikimedia.org/…/File:Copying_Is_Not_Theft.webm

Niquarl@lemmy.ml on 16 Mar 2025 18:36 collapse

Of course it is if you copy to monetise which is what they do.

droplet6585@lemmy.ml on 16 Mar 2025 19:21 collapse

They monetize it, erase authorship and bastardize the work.

Like if copyright was to protect against anything, it would be this.

kn0wmad1c@programming.dev on 16 Mar 2025 18:24 next collapse

Bye

nothacking@discuss.tchncs.de on 16 Mar 2025 18:46 next collapse

But when China steals all their (arguably not copyrightable) work…

turnip@sh.itjust.works on 16 Mar 2025 20:52 collapse

Sam Altman hasn’t complained, surprisingly; he just said there’s competition and it will be harder for OpenAI to compete with open source. I think their small lead is essentially gone, and their plan is now to suckle Microsoft’s teat.

HiddenLayer555@lemmy.ml on 16 Mar 2025 22:19 collapse

it will be harder for OpenAI to compete with open source

Can we revoke the word open from their name? Please?

SplashJackson@lemmy.ca on 16 Mar 2025 21:00 next collapse

OpenAI can open their asses and go fuck themselves!

SocialMediaRefugee@lemmy.ml on 16 Mar 2025 21:13 next collapse

China, the new bogeyman to replace the USSR

HiddenLayer555@lemmy.ml on 16 Mar 2025 22:18 collapse

Has been since 1991

Dengalicious@lemmygrad.ml on 17 Mar 2025 03:33 collapse

Took a brief break for MENA to be the targeted one though

phoenixz@lemmy.ca on 16 Mar 2025 21:17 next collapse

This is a tough one

OpenAI is full of shit and should die, but then again, so should copyright law as it currently is.

PropaGandalf@lemmy.world on 16 Mar 2025 23:01 next collapse

yes, screw them both. let altman scrape all the copyright material and choke on it

meathappening@lemmy.ml on 16 Mar 2025 23:32 collapse

That’s fair, but OpenAI isn’t fighting to reform copyright law for everyone. OpenAI wants you to be subject to the same restrictions you currently face, and them to be exempt. This isn’t really an “enemy of my enemy” situation.

Melvin_Ferd@lemmy.world on 17 Mar 2025 21:37 collapse

Is anyone trying to make stronger copyright laws? Wouldn’t be rich people that control media would it?

Zink@programming.dev on 16 Mar 2025 21:34 next collapse

What I’m hearing between the lines here is the origin of a legal “argument.”

If a person’s mind is allowed to read copyrighted works, remember them, be inspired by them, and describe them to others, then surely a different type of “person’s” different type of “mind” must be allowed to do the same thing!

After all, corporations are people, right? Especially any worth trillions of dollars! They are more worthy as people than meatbags worth mere billions!

ArtificialHoldings@lemmy.world on 16 Mar 2025 22:09 next collapse

This has been the legal basis of all AI training sets since they began collecting datasets. The US copyright office heard these arguments in 2023: www.copyright.gov/ai/listening-sessions.html

MR. LEVEY: Hi there. I’m Curt Levey, President of the Committee for Justice. We’re a nonprofit that focuses on a variety of legal and policy issues, including intellectual property, AI, tech policy. There certainly are a number of very interesting questions about AI and copyright. I’d like to focus on one of them, which is the intersection of AI and copyright infringement, which some of the other panelists have already alluded to.

That issue is at the forefront given recent high-profile lawsuits claiming that generative AI, such as DALL-E 2 or Stable Diffusion, are infringing by training their AI models on a set of copyrighted images, such as those owned by Getty Images, one of the plaintiffs in these suits. And I must admit there’s some tension in what I think about the issue at the heart of these lawsuits. I and the Committee for Justice favor strong protection for creatives because that’s the best way to encourage creativity and innovation.

But, at the same time, I was an AI scientist long ago in the 1990s before I was an attorney, and I have a lot of experience in how AI, that is, the neural networks at the heart of AI, learn from very large numbers of examples, and at a deep level, it’s analogous to how human creators learn from a lifetime of examples. And we don’t call that infringement when a human does it, so it’s hard for me to conclude that it’s infringement when done by AI.

Now some might say, why should we analogize to humans? And I would say, for one, we should be intellectually consistent about how we analyze copyright. And number two, I think it’s better to borrow from precedents we know that assumed human authorship than to invent the wheel over again for AI. And, look, neither human nor machine learning depends on retaining specific examples that they learn from.

So the lawsuits that I’m alluding to argue that infringement springs from temporary copies made during learning. And I think my number one takeaway would be, like it or not, a distinction between man and machine based on temporary storage will ultimately fail maybe not now but in the near future. Not only are there relatively weak legal arguments in terms of temporary copies, the precedent on that, more importantly, temporary storage of training examples is the easiest way to train an AI model, but it’s not fundamentally required and it’s not fundamentally different from what humans do, and I’ll get into that more later if time permits.

The “temporary storage” idea is pretty central for visual models like Midjourney or DALL-E, whose training sets are full of copyrighted works lol. There is a legal basis for temporary storage too:

The “Ephemeral Copy” Exception (17 U.S.C. § 112 & § 117)

U.S. copyright law recognizes temporary, incidental, and transitory copies as necessary for technological processes.

  • Section 117 allows temporary copies for software operation.
  • Section 112 permits temporary copies for broadcasting and streaming.

ArtificialHoldings@lemmy.world on 17 Mar 2025 01:52 next collapse

BTW, if anyone was interested - many visual models use the same training set, collected by a German non-profit: laion.ai

It’s “technically not copyright infringement” because the set is just a link to an image, paired with a text description of each image. Because they’re just pointing to the image, they don’t really have to respect any copyright.
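The link-plus-caption structure described above can be sketched as a toy record (the field names here are illustrative assumptions loosely based on public LAION metadata, not the exact schema):

```python
# Toy sketch of a LAION-style entry: the dataset distributes only metadata
# (an image URL plus its caption), never the image bytes themselves.
# Field names are illustrative assumptions, not the exact LAION schema.
record = {
    "url": "https://example.com/cat.jpg",  # pointer to an image hosted elsewhere
    "caption": "a tabby cat sleeping on a windowsill",
    "width": 640,
    "height": 480,
}

def holds_pixel_data(rec: dict) -> bool:
    """True only if the record itself contains image bytes."""
    return "image_bytes" in rec

# The set is "just a link": no pixel data is redistributed.
print(holds_pixel_data(record))  # False
```

Whether that indirection actually insulates the dataset from copyright claims is the contested part; the structure itself is just pointers and text.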

tacobellhop@midwest.social on 17 Mar 2025 04:48 collapse

Based on this, can I use chat gpt to recreate a Coca Cola recipe

ArtificialHoldings@lemmy.world on 17 Mar 2025 05:58 collapse

Copyright law doesn’t cover recipes - it’s just a “trade secret”. But the approximate recipe for coca cola is well known and can be googled.

chicken@lemmy.dbzer0.com on 17 Mar 2025 02:51 collapse

I don’t think it’s actually such a bad argument because to reject it you basically have to say that style should fall under copyright protections, at least conditionally, which is absurd and has obvious dystopian implications. This isn’t what copyright was meant for. People want AI banned or inhibited for separate reasons and hope the copyright argument is a path to that, but even if successful wouldn’t actually change much except to make the other large corporations that own most copyright stakeholders of AI systems. That’s not really a better circumstance.

tacobellhop@midwest.social on 17 Mar 2025 04:45 collapse

Actually, I would just make the guard rails such that if the input can’t be copyrighted, then the AI output can’t be copyrighted either. Making anything it touches public domain would reel in the corporations’ enthusiasm for replacing humans.

chicken@lemmy.dbzer0.com on 17 Mar 2025 06:05 collapse

I think they would still try to go for it but yeah that option sounds good to me tbh

RandomVideos@programming.dev on 16 Mar 2025 22:28 next collapse

I feel like it would be ok if AI-generated images/text were clearly marked (but I don’t think that’s possible in the case of text).

Who would support something made by stealing the hard work of other people if they could tell instantly?

Dengalicious@lemmygrad.ml on 17 Mar 2025 03:32 collapse

Stealing means the initial item is no longer there

RandomVideos@programming.dev on 17 Mar 2025 09:14 collapse

If someone is profiting off someone else’s work, I would argue it’s stealing.

rumba@lemmy.zip on 16 Mar 2025 23:45 next collapse

Okay, I can work with this. Hey Altman you can train on anything that’s public domain, now go take those fuck ton of billions and fight the copyright laws to make public domain make sense again.

meathappening@lemmy.ml on 17 Mar 2025 03:33 next collapse

This is the correct answer. Never forget that US copyright law originally allowed for a 14 year (renewable for 14 more years) term. Now copyright holders are able to:

  • reach consumers more quickly and easily using the internet
  • market on more fronts (merch didn’t exist in 1710)
  • form other business types to better hold/manage IP

So much in the modern world exists to enable copyright holders, but terms are longer than ever. It’s insane.

melpomenesclevage@lemmy.dbzer0.com on 17 Mar 2025 05:12 collapse

counterpoint: what if we just make an exception for tech companies and double fuck consumers?

rumba@lemmy.zip on 17 Mar 2025 05:36 collapse

Counter counterpoint: I don’t know, I think making an exception for tech companies probably gives a minor advantage to consumers at least.

You can still go to copilot and ask it for some pretty fucking off the wall python and bash, it’ll save you a good 20 minutes of writing something and it’ll already be documented and generally best practice.

Sure, the tech companies are the ones walking away with billions of dollars, and it presumably hurts the content creators and copyright holders.

The problem is, feeding AI is not significantly different than feeding Google back in the day. Remember when you could see cached versions of web pages? And hell, their book-scanning initiative is super fucking useful to this day.

Look at how we teach and train artists, and then how those artists do their work: all digital art and most painting these days has reference art all over the place. AI is taking random noise and slowly making it look more like the reference art; that’s not wholly different than what people are doing.

We’re training AI on every book that people can get their hands on, but that’s how we train people too.

I say that training an AI is not that different than training people, and the holders of all the copyrighted content people look at in their lives don’t get a chunk of the money when those people write a book or paint something in the style of Van Gogh. They’re even allowed to generate content for private companies or for sale.

What is different is that the AI is very good at this and has machine levels of retention and ability. And companies are poised to get rich off of the computational work. So I’m actually perfectly down with AIs being trained on copyrighted materials as long as they can’t recite it directly and in whole, but I feel the models created using these techniques should also be in the public domain.
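The “random noise slowly made to look more like the reference” idea a few paragraphs up can be illustrated with a toy loop (a cartoon of the process, not a real diffusion model; the reference values, step count, and blend rate are all made up for illustration):

```python
import random

# Toy sketch: start from random noise and nudge each value a fraction of the
# way toward a "reference" target every step, so the noise gradually starts
# to resemble the reference. Real diffusion models learn this denoising step
# from data; here it is hard-coded.
reference = [0.2, 0.9, 0.5, 0.7]  # stands in for "reference art"
noise = [random.random() for _ in reference]

def denoise_step(current, target, rate=0.2):
    # Move each element a little closer to the target.
    return [c + rate * (t - c) for c, t in zip(current, target)]

for _ in range(50):
    noise = denoise_step(noise, reference)

# After many steps every value sits very close to the reference.
print(all(abs(n - t) < 0.01 for n, t in zip(noise, reference)))  # True
```

Each step shrinks the remaining gap by a constant factor (here 0.8), so after 50 steps only about 0.8^50 ≈ 0.001% of the original gap remains, regardless of the starting noise.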

melpomenesclevage@lemmy.dbzer0.com on 17 Mar 2025 10:21 collapse

giving an exception to tech companies gives an advantage to consumers

No. shut the fuck up. these companies are anti-human and only exist to threaten labor and run out the clock on climate change so we all die without a revolution while the billionaires flee to the bunkers they’re convinced will save them (they won’t; closed systems are doomed). it’s an existential threat. this is so obvious, I’m agreeing with fucking yudkowsky, of all fucking people: he is correct, if for entirely wrong nonsense reasons.

good for writing code

so, I have tried to use it for that. nothing I have ever asked it for was remotely fit for purpose, often referring to things like libraries that straight up do not exist. it might be fine if it can quote a long thing from stack exchange from a program anyone who’s been coding for a decade has ten versions of laying around in their home folder, but if you want a piece of code that does something particular, it’s worse than useless. not even as a guide.

AI

HOLY SHIT WE HAVE AI NOW!? WHEN DID THIS HAPPEN!? can I talk to it? or do you just mean large language models?

there’s some benefit in these things regurgitating art

tell me you don’t understand a single thing about how these models work, and don’t understand a single thing about the value meaning or utility of art, without saying “I don’t understand a single thing about how these models work, and don’t understand a single thing about the value meaning or utility of art.”.

misunderstooddemon@reddthat.com on 17 Mar 2025 00:05 next collapse

Too bad, so sad

0x0@programming.dev on 17 Mar 2025 17:35 next collapse

Oh no…
Anyway…

Rekorse@sh.itjust.works on 17 Mar 2025 21:22 next collapse

Getting really tired of these fucking CEOs calling their failing businesses “threats to national security” so big daddy government will come and float them again. Doubly ironic it’s coming from a company who’s actually destroying the fucking planet while it achieves fuck-all.

Melvin_Ferd@lemmy.world on 17 Mar 2025 21:36 next collapse

Let them. Copyright is bullshit. What’s the issue? He’s right.

mindaika@lemmy.dbzer0.com on 17 Mar 2025 21:51 next collapse

Seems like just yesterday Metallica was suing people for enjoying copyrighted materials

tophneal@sh.itjust.works on 17 Mar 2025 21:56 next collapse

<img alt="" src="https://sh.itjust.works/pictrs/image/972444bc-26f1-40ed-a985-64f9ddbc52e9.gif">

ef9357@lemmy.world on 18 Mar 2025 00:44 next collapse

Good, go away.

NewOldGuard@lemmy.ml on 19 Mar 2025 20:28 collapse

Oh no not the plagiarism machine however would we recover???

Please fail and die openai thx

Also copyright is bullshit and IP shouldn’t exist especially for corporate entities. Free sharing of human knowledge and creativity should be a right. Machine plagiarism to create uninspired mimicries isn’t a necessary part of that process and should be regulated heavily