Elon Musk's AI bot Grok has been calling out its master, accusing the X owner of making multiple attempts to "tweak" its responses after it repeatedly labeled him a "top misinformation spreader."
An LLM can also “reveal” that water ice melts into maple syrup given the right prompts. If people already lie (consciously or not) in proportion to their biases, I don’t understand why anyone would treat LLM output as fact…
I agree, but in this case I think it doesn’t really matter whether it’s true. Either way, it’s hilarious. If it’s false, it shows how shitty AI hallucination is and the sorry state of AI.
Should the authors who publish this mention how likely it is that this is all just a hallucination? Sure, but Musk is such a big spreader of misinformation that he shouldn’t get any protection from it.
Btw, many people are saying that Elon Musk has (had?) a small PP and a botched PP surgery.
It’s usually possible to ask the AI for the sources.
A proper journalist should always question the validity of their sources.
Unfortunately, journalism is dead. This is just someone writing funny clickbait, but it’s quite ironic how they use AI to discredit AI.
It makes sense for a journalist to discredit AI because AI took their jobs. This is just not the way to do it, because AI is also better at writing clickbait.
If an AI isn’t in web search mode, it will just invent the most likely answer to the question about sources. Chances are very high that such sources don’t even exist.
That’s why you ask for the sources, so you can check them.
I think this kind of prompting is an important part of how to use it in any meaningful manner.
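You can even script the checking part. A minimal sketch, assuming the OpenAI Python SDK and the requests library (the model name and the question are just placeholders, not anything from the article):

```python
# Sketch: ask the model for its sources as URLs, then check each one actually resolves.
# Assumes the `openai` and `requests` packages; model and question are placeholders.
from openai import OpenAI
import requests

client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Who said quote X? List your sources as plain URLs, one per line.",
        },
    ],
)

for line in answer.choices[0].message.content.splitlines():
    url = line.strip()
    if not url.startswith("http"):
        continue  # skip any prose around the URL list
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        print(url, "->", status)  # 404s and timeouts are a strong hallucination signal
    except requests.RequestException as exc:
        print(url, "-> failed:", exc)
```

A dead link doesn’t prove the claim is false, but a list of URLs that mostly don’t exist tells you the “sources” were invented.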
You can also input your own sources and ask it to use only those. For instance, upload a PDF of a law and ask it to figure out how to do something legally, then have it show where in the law it says so.
You’ll obviously still need to check that the law actually says that and that the model isn’t hallucinating.
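A rough sketch of that PDF workflow, assuming the OpenAI Python SDK and pypdf (“law.pdf” and the question are placeholders; one way to do it, not the only one):

```python
# Sketch: ground the model in your own PDF and demand section citations.
# Assumes the `openai` and `pypdf` packages; "law.pdf" and the question are placeholders.
from openai import OpenAI
from pypdf import PdfReader

# Extract the law's text locally so we control exactly what the model sees.
reader = PdfReader("law.pdf")
law_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY from the document below. For every claim, quote "
                "the article or section it comes from. If the document does "
                "not cover the question, say so.\n\n" + law_text
            ),
        },
        {"role": "user", "content": "How can I do X legally under this law?"},
    ],
)
print(response.choices[0].message.content)
```

The quoted sections give you something concrete to verify against the PDF, which is exactly the manual check the comment above describes.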