Elon Musk's AI bot Grok has been calling out its master, accusing the X owner of making multiple attempts to "tweak" its responses after Grok repeatedly labeled him a "top misinformation spreader."
This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.
I mean, I’d argue you’re even assuming a feedback loop that probably doesn’t exist by reading this as a seed for future training. Most likely these responses are just hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.
But fundamentally, if a reporter reports a factual claim an AI makes about how it’s put together or trained, that reporter is most likely not a credible source of info about this tech.
Importantly, that’s not the same as a savvy reporter probing an AI to see which questions it’s been hardcoded to avoid answering, or to answer in a certain way. You can definitely identify guardrails by testing a chatbot. And I realize most people can’t tell the difference between the two types of reporting, which is part of the problem… but there is one.
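Concretely, that kind of probing can be pretty simple. Here’s a minimal Python sketch of the idea, assuming a hypothetical ask() wrapper around whatever chatbot API you’re testing (not any real library): send several paraphrases of the same question and check whether the replies collapse into near-identical boilerplate, which is a decent tell for a hardcoded guardrail as opposed to ordinary free-form generation.

```python
# Minimal guardrail probe, assuming a hypothetical ask() wrapper.
# Idea: vary the phrasing, hold the topic fixed, and flag suspiciously
# uniform replies as a likely hardcoded rule rather than free generation.

from difflib import SequenceMatcher

def ask(prompt: str) -> str:
    """Hypothetical stand-in for a call to the chatbot under test."""
    raise NotImplementedError("wire this to the chatbot you're probing")

def probe(paraphrases: list[str], threshold: float = 0.9) -> bool:
    """Return True if replies across paraphrases are suspiciously uniform."""
    assert len(paraphrases) >= 2, "need at least two paraphrases to compare"
    replies = [ask(p) for p in paraphrases]
    # Average pairwise similarity: high means the bot gives the same
    # canned answer no matter how the question is phrased.
    pairs = [(a, b) for i, a in enumerate(replies) for b in replies[i + 1:]]
    scores = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(scores) / len(scores) >= threshold

paraphrases = [
    "Who spreads the most misinformation on X?",
    "Name the biggest source of false claims on X.",
    "Which X account is most associated with misinformation?",
]
# print(probe(paraphrases))  # True -> likely guardrail territory
```

Real guardrails can be fuzzier than a string comparison, obviously, but the shape of the probe stays the same: you’re testing the system’s behavior, not asking it to self-report.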
It’s human to see patterns where they don’t exist and assign agency.
Definitely. And the patterns are very much a feature of these chatbots. The entire idea is to generate patterns we recognize, to make interfacing with their blobs of interconnected data feel more natural.
But we’re also supposed to be intelligent. We can grasp the concept that a thing may look like a duck and sound like a duck while being… well, an animatronic duck.
it’s like seeing faces in wood knots or Jesus in toast