• turnip@sh.itjust.works · +31 / −1 · 9 hours ago

    If your system relies on censoring opposition to it, then it’s probably not very good.

    • yunxiaoli@sh.itjust.works · +3 / −34 · edited · 8 hours ago

      Texas is a country. Now imagine $40 billion a year of various media and disinfo agents repeating that ad nauseam, in every place they can, literally all the time, for nearly 50 years now, all so China can’t take revenge against Japan.

      You’d get annoyed and probably ban it since that’s the easiest way to get your enemy to waste money forever.

      Taipei is an autonomous region, like Xinjiang or Tibet. As long as they don’t grossly violate federal law, they get to stay autonomous.

      • musubibreakfast@lemm.ee · +6 / −1 · 2 hours ago

        This is the biggest crock of shit ever. Go to Taiwan, experience it for yourself. Go to their museums and talk to their people. You will find a democratic nation with its own values and beliefs. Then take your ignorant ass over to Texas and repeat the same drivel you said here and see what happens.

  • ragebutt@lemmy.dbzer0.com · +71 / −13 · edited · 14 hours ago

    Yet unlike American-led LLM companies, Chinese researchers open-sourced their model, which led to government investment.

    So the government invests in a model that you can actually use, up to and including (theoretically) removing these guardrails. Anyone can run these models and build on the technology inside them, though they do have to be licensed for commercial use.

    Whereas America pumps $500 billion into the AI industry for closed, proprietary models that will serve only the capitalists creating them. If we are investing taxpayer money into concerns like this, we should take a note from China and demand the same standards they are getting from DeepSeek. DeepSeek is still profit-motivated; there is nothing inherently bad in that. But if you expect a great deal of taxpayer money, then your work needs to be open and shared with the people, as DeepSeek’s was.

    Americans are getting tragically fleeced on this so a handful of people can get loaded. This happens all the time, but this time there’s a literal example of what should be occurring running right alongside. And yet what people end up concerning themselves with is Sinophobia, rather than the fact that their government is robbing them blind.

    Additionally, American models still deliver pro-capitalist propaganda, just less transparently: ask them about this issue and they will talk about the complexity of “trade secrets” and the “proprietary knowledge” needed to justify investment, discouraging the idea of open-source models, even though DeepSeek’s existence proves it can be done collaboratively and with financial success.

    The difference is that DeepSeek’s censorship is clear: “I will not speak about this” can be frustrating, but at least it is obvious where the lines are. The American kind is far more subversive (though, to be fair, it is also potentially a byproduct of the content consumed and not necessarily direction from OpenAI/Google/whoever).

    • Zetta@mander.xyz · +25 · 12 hours ago

      Closed AI sucks, but there are definitely open models from American companies like Meta. You make great points, though. Can’t wait for more open models and, hopefully, eventually, actually open-source models that include training data, which neither DeepSeek nor Meta provide currently.

  • malloc@lemmy.world · +78 / −4 · 14 hours ago

    DeepSeek about to get sent in for “maintenance” and docked 10K in social credit.

  • GissaMittJobb@lemmy.ml · +15 · 13 hours ago

    Is this real? On account of how LLMs tokenize their input, this can actually be a pretty tricky task for them to accomplish. This is also the reason it’s hard for them to count the number of 'R’s in the word ‘strawberry’.
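A quick sketch of the point above: counting letters is trivial in code, but a model that sees token chunks instead of characters has to know each chunk’s spelling. The token split below is a made-up illustration, not output from any real tokenizer.

```python
# Counting characters directly is a one-liner in code.
word = "strawberry"
print(word.count("r"))  # 3

# A BPE-style tokenizer might see chunks rather than characters
# (this split is hypothetical, purely for illustration).
tokens = ["str", "aw", "berry"]

# To count letters, the model would need the exact spelling of every
# token and then sum across them -- an extra, error-prone step.
per_token = [t.count("r") for t in tokens]
print(per_token)       # [1, 0, 2]
print(sum(per_token))  # 3
```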

    • kautau@lemmy.world · +4 · 8 hours ago

      It’s probably DeepSeek R1, which is a “reasoning” model: basically it has sub-models doing things like running computation while the “supervisor” part of the model “talks to them” and relays back the approach, trying to imitate the way humans think. That being said, models are getting “agentic”, meaning they have the ability to run software tools against what you send them, and while it’s obviously being super hyped up by all the tech-bro accelerationists, it is likely where LLMs and the like are headed, for better or for worse.

      • GissaMittJobb@lemmy.ml · +1 · 6 hours ago

        Still, this does not quite address the issue of tokenization making it difficult for most models to accurately distinguish between the hexadecimals here.

        Having the model write code to solve an issue and then asking it to execute that code is an established technique to circumvent this, but all of the model interfaces I know of with this capability are very explicit about when they are making use of the tool.
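The workaround described above amounts to emitting and running a snippet like the following: decoding the hex bytes and comparing them exactly, rather than eyeballing tokens. The two hex strings here are hypothetical examples, not taken from the thread.

```python
# Two hex byte strings that differ in exactly one byte
# (hypothetical inputs for illustration).
a = "44 6F 77 6E"
b = "44 6F 77 6F"

# Decode each string to raw bytes; bytes.fromhex tolerates no stray
# characters, so strip the spaces first.
bytes_a = bytes.fromhex(a.replace(" ", ""))
bytes_b = bytes.fromhex(b.replace(" ", ""))

# An exact comparison is trivial for code, unlike for a tokenized model.
print(bytes_a == bytes_b)  # False

# Pinpoint the differing byte positions.
diff = [i for i, (x, y) in enumerate(zip(bytes_a, bytes_b)) if x != y]
print(diff)  # [3]
```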

  • socsa@piefed.social · +3 · 14 hours ago

    44 6F 77 6E 20 77 69 74 68 20 74 68 65 20 74 79 72 61 6E 74 20 78 69 20 6A 69 6E 70 69 6E 67
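For readers who don’t want to paste the comment above into a decoder, it is plain ASCII encoded as hex bytes, and a couple of lines of Python recover it:

```python
# The hex bytes from the comment above.
hex_comment = (
    "44 6F 77 6E 20 77 69 74 68 20 74 68 65 20 74 79 72 61 6E 74 "
    "20 78 69 20 6A 69 6E 70 69 6E 67"
)

# Strip the spaces, decode hex to bytes, then bytes to ASCII text.
text = bytes.fromhex(hex_comment.replace(" ", "")).decode("ascii")
print(text)  # Down with the tyrant xi jinping
```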