Piracy for me but not for thee!
And “piracy” is critical for cultural and data preservation.
Let me go one step further:
“Fair use and movies-and-music piracy exceptions” are “critical” to human development.
That would cut into their bottom line; they can’t have any of that!
Can the Internet Archive claim that it’s developing its own AI and should have rights to scan everything and serve it to “customers?”
You can read the IA’s take on these issues here: https://blog.archive.org/2023/11/02/internet-archive-submits-comments-on-copyright-and-artificial-intelligence/
Their take is quite different from the sentiment that seems to prevail in this community.
I keep getting error 429. Any chance you remember the basics and can share a summary?
The irony
What I don’t see is an explanation for why developing AI should be a priority. I don’t even think LLMs are really AI, so why should their poor plagiarism come for free? I haven’t seen any convincing evidence that LLMs are a step towards AGI.
I am really dreading the LLM crime prediction future. It will be like a really bad version of Minority Report. CRIMESTOPPER 3000 says you’re a minority, you’re poor, and you criticize our largest shareholder, therefore you are going to jail to prevent your inevitable crime.
We should be powering these energy hogs off before they make the majority of us poorer.
Okay, then people can download whatever they want; maybe they’re making their own AI too.
the federal government should embrace policy frameworks that preserve access to data for fair learning
Sounds like we’re good to download anything then, as long as we get the copyrighted material for that purpose.
You’ll never believe it, but I just invented a new type of AI a few seconds after reading your comment.
I call it OSIRGT: One-Shot Immediate Regurgitation Generative Transformer.
It starts out as an empty model of variable-count weights, each ranging from 0 to 255, between a linear sequence of parameters. Whenever you feed it training data, it uses the incoming stream of bytes to adjust the weight at position n to log2(2^k) * n^0, where k is the incoming byte. After a weight is updated, n is increased by 1 and the process repeats until all training data is consumed. To use the model, provide a finite stream of zeroes and it transforms each 0 into another number based on the weight between the current parameter and the next one.

You may be asking yourself, “isn’t that just an obtuse way to create a perfect copy of something?”
And to that, my good human, I say: shut up and use this open-source model training program with a built-in BitTorrent client.
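For anyone who wants to see the joke spelled out: since log2(2^k) * n^0 = k * 1 = k, “training” just stores each incoming byte verbatim, and “inference” plays them back. A minimal sketch (the class and method names are made up for illustration):

```python
import math

class OSIRGT:
    """One-Shot Immediate Regurgitation Generative Transformer (satire).

    The update rule log2(2**k) * n**0 simplifies to k, so each
    'weight' is just the training byte stored verbatim.
    """

    def __init__(self):
        self.weights = []  # one weight (0-255) between each pair of parameters

    def train(self, data: bytes) -> None:
        for k in data:
            # log2(2**k) is exact for integer k, and n**0 == 1
            self.weights.append(int(math.log2(2 ** k)) if k else 0)

    def generate(self, zeros) -> bytes:
        # each 0 in the input stream is "transformed" by the next weight
        return bytes(w for _, w in zip(zeros, self.weights))

model = OSIRGT()
model.train(b"hello")
print(model.generate([0] * 5))  # a perfect copy: b'hello'
```

In other words, the model is a byte-for-byte copier wearing a transformer costume, which is the whole point of the bit.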
im glad i don’t use google anymore