Testing suggests Google's AI Overviews tell millions of lies per hour (arstechnica.com)
from madeindex@lemmy.world to degoogle@lemmy.ml on 07 Apr 23:18
https://lemmy.world/post/45310244

cross-posted from: lemmy.world/post/45309948

**Being right 90% of the time means being wrong 10% of the time, which is a huge deal when you handle billions of queries!**
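A back-of-envelope check of the "millions per hour" headline, using assumed numbers (the daily query volume and the share of queries showing an Overview are illustrative guesses, not figures from the article):

```python
# Rough estimate of wrong AI Overviews per hour.
# All inputs below are assumptions for illustration only.
searches_per_day = 8_500_000_000  # assumed total Google searches per day
overview_fraction = 0.2           # assumed share of searches showing an AI Overview
error_rate = 0.10                 # "90% right" implies 10% wrong

errors_per_hour = searches_per_day * overview_fraction * error_rate / 24
print(f"{errors_per_hour:,.0f} wrong overviews per hour")
```

Even with a conservative Overview share, a 10% error rate lands comfortably in the millions-per-hour range.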

#degoogle


Godnroc@lemmy.world on 08 Apr 00:35

You would think politicians would start fearing losing their jobs.

Dyskolos@lemmy.zip on 08 Apr 10:41

I’d say they lie more often than 1 in 10 times, but yes.

billybob@lemmy.zip on 08 Apr 00:36

It also uses Reddit as a reference. A lot.

finallymadeanaccount@lemmy.world on 08 Apr 13:15

So if people ask it a question, it just tells them their account has been suspended?

madeindex@lemmy.world on 08 Apr 17:09

Yeah, I also read that they heavily train it on Reddit; it used to be even more so last year, though.

calmblue75@lemmy.ml on 08 Apr 12:05

Finally, a competitor to Trump!

asdasd201@lemmygrad.ml on 08 Apr 12:42

What would you expect from the Western artificial “intelligences”? I hope Iran hits their slop factories.

burble@lemmy.dbzer0.com on 08 Apr 16:04

The number of times I’ve gone to look up a topic that I know something about and seen something wrong in the AI summary that I didn’t ask for…

QualifiedKitten@discuss.online on 08 Apr 18:11

I’ve turned off the AI summaries, but occasionally ask one of DDG’s AIs a question, and it almost always has blatant errors in the responses. Yesterday, I did a manual search first, then asked 2 of the AI models if Arm & Hammer currently sells any non-clumping clay litters. It gave me a couple products that it claimed were non-clumping, but when I pulled up product listings to buy them, they were all very clearly labeled as clumping.

Makes it really hard to trust AI for things I don’t know when they’re so often so obviously wrong about things I do know and can easily verify.

sakuraba@lemmy.ml on 08 Apr 21:58

Just the act of writing “non-clumping” to the LLM will trigger it to focus on “clumping” most of the time, and it will give you that type of result.

It is not intelligent at all lol

undefinedTruth@lemmy.zip on 08 Apr 20:49

You give it too much credit. In order to be able to lie, it would first need to be actually capable of understanding what it writes. LLMs are text prediction algorithms. They cannot think.
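The "text prediction" point can be illustrated with a toy bigram model, a drastically simplified sketch of the same idea: pick the statistically likely next word from training data, with no notion of truth anywhere in the process. (Real LLMs use neural networks over far longer contexts, but the objective is still next-token prediction.)

```python
# Toy next-word predictor: just pick the most frequent continuation
# seen in training. No understanding or truth-checking is involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: word -> Counter of next words
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation observed in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("the cat" appears twice)
```

If the training data says "the" is usually followed by "cat", the model says "cat", true or not. Scale that up and you get fluent text that is statistically plausible rather than verified.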

sakuraba@lemmy.ml on 08 Apr 21:57

AI can replace CEOs and politicians!