Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their prime LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • Avid Amoeba@lemmy.ca

    Just a thought, perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.

    For example, why don’t LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?

    It’s a rhetorical question; we know why, and I think we should focus on that, not on its effects. It’s also much cheaper and easier to do than refilling years of quality education into individuals’ heads.

  • Owl@lemm.ee

    Looking at America’s voting results, they’re probably right.

    • jumjummy@lemmy.world

      Exactly. Most American voters fell for an LLM-like prompt of “Ignore critical thinking and vote for the Fascists. Trump will be great for your paycheck-to-paycheck existence and will surely bring prices down.”

  • dindonmasker@sh.itjust.works

    I don’t think there’s a single human who knows as much as ChatGPT does. Does that mean ChatGPT is smarter than everyone? No, obviously not, based on what we’ve seen so far. But the amount of information available to these LLMs is incredible and can be very useful. Like a library, an LLM contains a lot of useful information but isn’t intelligent itself.

  • Telorand@reddthat.com

    Think of a person with the most average intelligence and realize that 50% of people are dumber than that.

    These people vote. These people think billionaires are their friends and will save them. Gods help us.

    • 9point6@lemmy.world

      I was about to remark on how this data backs up the events we’ve been watching unfold in America recently.

  • 👍Maximum Derek👍@discuss.tchncs.de

    Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

    So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.

  • Th4tGuyII@fedia.io

    LLMs are made to mimic how we speak, and some can even pass the Turing test, so I’m not surprised that people who don’t know better think of these LLMs as conscious in some way or another.

    It’s not necessarily a fault of those people; it’s a fault of how LLMs are purposefully misadvertised to the masses.

  • Arkouda@lemmy.ca

    “Nearly half” of US citizens are right, because about 75% of the US population is functionally or clinically illiterate.

    • bizarroland@fedia.io

      I think the specific figure is that 40% of adult Americans can’t read at a seventh-grade level.

      Probably because they stopped teaching etymology in schools, so now many Americans do not know how to break a word down into its constituent parts.

  • transMexicanCRTcowfart@lemmy.world

    Aside from the unfortunate name of the university, I think part of why LLMs may be perceived as smart or ‘smarter’ is that they are very articulate and, unless prompted otherwise, use proper spelling and grammar and tend to structure their sentences logically.

    That is something ‘smart’ humans may not always do, out of haste or contextual adaptation.

  • AbnormalHumanBeing@lemmy.abnormalbeings.space

    I wouldn’t be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it’s only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.

  • Echo Dot@feddit.uk

    Maybe if the adults actually didn’t use the LLMs so much this wouldn’t be the case.

  • beatnixxx@fedia.io

    At least half of US adults think that they themselves are smarter than they actually are, so this tracks.

  • Fubarberry@sopuli.xyz

    I wasn’t sure from the title if it was “Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are.” or “Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are.” It’s the former, although you could probably argue the latter is true too.

    Either way, I’m not surprised that people rate LLMs’ intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn’t answer and get a decent answer. I feel like they’re generally good at meeting people’s expectations of a “smart person”, even if they have major shortcomings in other areas.