• ms.lane@lemmy.world · 18 hours ago

      Let me go one step further-

      “Fair use and movies-and-music piracy exceptions” are “critical” to human development.

  • digger@lemmy.ca · 1 day ago

    Can the Internet Archive claim that it’s developing its own AI and should have the right to scan everything and serve it to “customers”?

  • collapse_already@lemmy.ml · 19 hours ago

    What I don’t see is an explanation for why developing AI should be a priority. I don’t even think LLMs are really AI, so why should they get a free pass on plagiarism? I haven’t seen any convincing evidence that LLMs are a step towards AGI.

    I am really dreading the LLM crime prediction future. It will be like a really bad version of Minority Report. CRIMESTOPPER 3000 says you’re a minority, you’re poor, and you criticize our largest shareholder, therefore you are going to jail to prevent your inevitable crime.

    We should be powering these energy hogs off before they make the majority of us poorer.

  • warm@kbin.earth · 1 day ago

    Okay, then people can download whatever they want; maybe they’re making their own AI too.

    • overload@sopuli.xyz · edited · 24 hours ago

      the federal government should embrace policy frameworks that preserve access to data for fair learning

      Sounds like we’re good to download anything then, as long as we use the copyrighted material for that purpose.

      • pivot_root@lemmy.world · 22 hours ago

        You’ll never believe it, but I just invented a new type of AI a few seconds after reading your comment.

        I call it OSIRGT: One-Shot Immediate Regurgitation Generative Transformer.

        It starts out as an empty model of variable-count weights ranging from 0 to 255 between a linear sequence of parameters. Whenever you feed it training data, it uses the incoming stream of bytes to adjust the weight at position n to log2(2^k) * n^0 where k is the incoming byte. After a weight is updated, n is increased by 1 and the process repeats until all training data is consumed. To use the model, provide a finite stream of zeroes and it transforms the 0 into another number based on the weight between the current parameter and the next one.

        You may be asking yourself, “isn’t that just an obtuse way to create a perfect copy of something?”

        And to that, my good human, I say: shut up and use this open-source model training program with a built-in BitTorrent client.
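        For the curious, the satirical “model” above can be sketched in a few lines of Python (the `train`/`generate` names are my own, and note that log2(2^k) · n^0 simplifies to just k, i.e. the raw input byte):

        ```python
        import math

        def train(model, data):
            # Each incoming byte k sets the weight at position n to
            # log2(2**k) * n**0 -- which is exactly k, the byte itself.
            for k in data:
                model.append(math.log2(2 ** k) * 1)  # n**0 == 1 for every n
            return model

        def generate(model, zeros):
            # "Transform" each incoming 0 using the stored weight,
            # reproducing the training data byte-for-byte.
            return bytes(int(w) ^ z for w, z in zip(model, zeros))

        model = train([], b"hello")
        print(generate(model, [0] * 5))  # b'hello'
        ```

        In other words: the training step stores every byte verbatim, and inference plays it back, which is the whole joke.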