ChatGPT will now predict your age based on how you interact with it
from helloyanis@furries.club to privacy@lemmy.ml on 15 Feb 18:01
https://furries.club/users/helloyanis/statuses/116075716933077295


It seems #openai is now doing age estimation too, based on this email I received.

After Google, which built the “Age signals API” into Android phones, OpenAI will now “predict your age based on how you interact with our services”.

No, I don’t want #chatgpt to analyze my age, thank you very much (I always used the private chat mode anyway, whatever impact that actually has on the data they use), so I will now switch to an alternative that doesn’t do that: https://chat.mistral.ai/

Don’t misunderstand me, protecting children is a good idea, but if it means being analyzed by an #AI and having your experience changed based on that, I’m heavily against it.

Although it’s convenient, if it starts analyzing my behaviour as well, then I guess it will be a good time to start thinking a bit more on my own and only rely on AI as a last-resort solution…

Their article: https://help.openai.com/en/articles/12652064-age-prediction-in-chatgpt

@privacy@lemmy.ml @privacy@lemmy.world

#privacy

Email from OpenAI:

We wanted to let you know that we're updating our Privacy Policy to give you even more information about what data we collect, how we use it, and how you can control it.

Here’s what’s changing:

Finding friends on OpenAI services
You can now choose to sync your contacts to see who else is using our services. This is completely optional.

Age prediction & safeguards for teens
In the coming weeks, we’ll begin using age prediction across our services to help provide safer, more age-appropriate experiences for teens. Learn more.

What we’ve clarified:

New tools and features
We’ve added details about Atlas, parental controls for teen accounts, and other upcoming features such as Sora 2.

More transparency around data
We explain how long we keep data, your controls, and the legal bases we rely on when processing your personal data.

You can review and manage your data preferences anytime in your account settings.

Thanks,
The OpenAI Team



ageedizzle@piefed.ca on 15 Feb 18:39 next collapse

I recommend duck.ai as an alternative way of accessing ChatGPT. It’s not a perfect solution, but it’s better than accessing it straight from the OpenAI website.

helloyanis@furries.club on 15 Feb 18:41 next collapse

@ageedizzle I plan on using https://chat.mistral.ai since it's a completely different model that is open source and based in France, so it's regulated under the GDPR. If you still use ChatGPT, just from somewhere else, it kinda defeats the purpose of switching away from it, in my opinion.

ageedizzle@piefed.ca on 15 Feb 19:05 next collapse

It’s better from a privacy perspective. But yeah, you’d still ultimately be using ChatGPT at the end of the day, so another model might be preferable.

Bonesince1997@lemmy.world on 15 Feb 19:34 collapse

“Le Chat” lol idk why this looked funny to me. Thanks for the link!

growsomethinggood@reddthat.com on 15 Feb 19:50 collapse

Extra funny when you learn what ChatGPT sounds like in French (chat, j’ai pété: “cat, I farted”)

degen@midwest.social on 15 Feb 22:16 collapse

Of course it’s not weird that “chat” is “chat” in French, but it also means “cat”, which is silly

TachyonTele@piefed.social on 15 Feb 19:04 next collapse

I recommend just searching for what you want and reading the results yourself.

ageedizzle@piefed.ca on 15 Feb 19:10 next collapse

I do too. But if someone really does want to use these tools then there are better and worse ways of doing it

TachyonTele@piefed.social on 15 Feb 19:11 collapse

True, that’s a good point

akilou@sh.itjust.works on 15 Feb 21:37 collapse

Yes, but sometimes using AI like this is so much easier. I find it really useful for troubleshooting tech problems because you can tell it your specific setup and iterate on solutions until something works. So much easier than reading through 1,000 StackExchange threads that only approximate your problem.

TachyonTele@piefed.social on 15 Feb 21:40 collapse

Just make sure you practice safe searching, son. We love you.

akilou@sh.itjust.works on 15 Feb 21:35 next collapse

Similarly, Proton has Lumo

lumo.proton.me/

ageedizzle@piefed.ca on 15 Feb 22:57 collapse

That link appears to be broken on my end.

akilou@sh.itjust.works on 15 Feb 23:14 collapse

Looks like the hyperlink got messed up. Added the full URL.

ageedizzle@piefed.ca on 15 Feb 23:20 collapse

Okay, I see the URL now, thanks. I didn’t know Proton had an AI. Seems neat.

SuspciousCarrot78@lemmy.world on 16 Feb 14:02 collapse

For sure. And in that spirit -

Another option for people: pay $10 for an OpenRouter account and use providers that (1) don’t train on your data and (2) have ZDR (zero data retention).

openrouter.ai

You can use them directly there (a bit clunky) or via API with something like OpenWebUI.

github.com/open-webui/open-webui

Alternatively, if you have the hardware, self-host. Qwen3-4B-Instruct-2507 matches or exceeds GPT-4.1 nano and mini on almost all benchmarks… and the abliterated version is even better IMHO

huggingface.co/Qwen/Qwen3-4B-Instruct-2507

huggingface.co/…/Qwen3-4B-Hivemind-Instruct-NEO-M…

It should run acceptably even on 10-year-old hardware with no GPU.
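For anyone trying the OpenRouter route above, the privacy preferences ride along in the request body as a provider-routing block. This is a minimal sketch of building such a payload with only the standard library; the `provider` field names (`data_collection`, `allow_fallbacks`) are assumptions based on OpenRouter’s provider-routing options as I recall them, so verify against their docs before relying on them:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload with an OpenRouter 'provider'
    block asking to route only to providers that don't retain or train
    on prompts. The 'provider' keys below are assumptions -- check
    OpenRouter's provider-routing documentation."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # OpenRouter-specific routing preferences (assumed field names):
        "provider": {
            "data_collection": "deny",  # skip providers that may store prompts
            "allow_fallbacks": False,   # don't silently reroute elsewhere
        },
    }

payload = build_chat_request("qwen/qwen3-4b-instruct", "Hello!")
print(json.dumps(payload, indent=2))
```

You would POST this to the chat-completions endpoint with your API key; tools like OpenWebUI let you set the same preferences in their connection settings instead.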

DieserTypMatthias@lemmy.ml on 15 Feb 19:26 next collapse

Try any model in LM Studio with the DuckDuckGo search plugin added to it. It works better than you might think.

Fancy_Gecko@lemmy.ml on 15 Feb 19:38 next collapse

I hate living in the crypto, AI, and big social media era

TachyonTele@piefed.social on 15 Feb 19:43 next collapse

Makes it super easy to know what to avoid though

JonEFive@midwest.social on 15 Feb 19:48 collapse

It’s only getting worse, too. The entire US economy is being propped up by AI and crypto. It’s like the sub-prime mortgage craze of the early 2000s: lots of money going into a system that will never recoup the investment. Either they have to find a way to extract value, or the bottom’s gonna fall out. Just wait for the too-big-to-fail AI and tech bailouts.

pineapple@lemmy.ml on 15 Feb 22:05 collapse

So by worse you mean better? I can’t wait for the AI bubble to pop personally.

ToTheGraveMyLove@sh.itjust.works on 16 Feb 03:43 collapse

Did you skip the last sentence?

pineapple@lemmy.ml on 16 Feb 05:46 collapse

Sorry, I don’t get what you mean by “skip the last sentence”.

folaht@lemmy.ml on 16 Feb 08:34 collapse

The idea is that it gets worse until the bubble pops, which I personally don’t think it will for the US.

pineapple@lemmy.ml on 16 Feb 08:43 collapse

Do you mean the AI bubble won’t pop? Currently that seems very unlikely; AI is entirely propped up by investors, and they aren’t generating any real profit from consumers. Literally the definition of an unsustainable market.

folaht@lemmy.ml on 16 Feb 08:47 collapse

No, what I meant is that “as long as the AI bubble lasts, it will get worse, then it gets better”.
And to that I say “It won’t get any better after the bubble pops”.

There’s simply no incentive in the US from the left side to change anything that I think needs to change, and instead it has its priorities on things that I don’t think will help the underlying issues at all.

pineapple@lemmy.ml on 16 Feb 11:23 collapse

There’s simply no incentive in the US from the left side to change anything that I think needs to change, and instead it has its priorities on things that I don’t think will help the underlying issues at all.

Well welcome to US politics I guess.

Also, I think things will get better after the AI bubble pops: there will be less AI overhype and less AI shoehorning into everything. RAM prices will go down, GPU prices will go down, and therefore most consumer tech prices will also go down.

solrize@lemmy.ml on 15 Feb 20:07 next collapse

So if I talk about hemorrhoids enough, it will think I’m old and let me into the pr0n sites! Cool!!!

schnurrito@discuss.tchncs.de on 15 Feb 22:56 collapse

You would be surprised what health problems people under 18 can have.

catdog@lemmy.ml on 15 Feb 20:35 next collapse

Makes sense to me. Use the LLM itself to keep children from using it in ways they aren’t allowed to.

jlow@discuss.tchncs.de on 15 Feb 21:27 next collapse

Ha, I thought this was an email from FairEmail at first. That would have been wild. This coming from Scam Altman, not surprising 🤷‍♀️

sudoer777@lemmy.ml on 15 Feb 21:32 next collapse

Good, maybe people will stop using it and switch to an open-weight Chinese model

pineapple@lemmy.ml on 15 Feb 22:03 collapse

If you use the app, it’s still sending all of your data to the company that makes the model; I guess it depends on whether you prefer Chinese spyware or US spyware.

sudoer777@lemmy.ml on 15 Feb 22:09 next collapse

At least they use your data to make a model that’s open weight

Truscape@lemmy.blahaj.zone on 15 Feb 22:45 next collapse

Step 1 - don’t use apps on a phone and instead use an appropriate open source sandbox on an actual x86 machine.

The mobile-focused development cycle and its consequences have been a disaster for user agency.

grue@lemmy.world on 15 Feb 23:28 next collapse

The mobile-focused development cycle and its consequences have been a disaster for user agency.

Could you repeat that louder for the folks at the back?

(Also, it’s a statement that’s generally true in ways completely unrelated to AI.)

pineapple@lemmy.ml on 16 Feb 05:50 collapse

I don’t really see how using a sandbox protects you from data collection, aside from being able to deny access to things outside of the sandbox; it would still collect data from any way you interact with the app. And using web apps or websites is a viable substitute as well.

Truscape@lemmy.blahaj.zone on 16 Feb 06:01 collapse

(Self-hosting, i.e. only using a model hosted on your own hardware)

pineapple@lemmy.ml on 16 Feb 06:21 collapse

Oh then yeah for sure.

Lfrith@lemmy.ca on 16 Feb 01:31 collapse

And using either of them doesn’t guarantee your data won’t get out anyway, as the recent TikTok outcome showed, where a billion-dollar company would rather sell out than close down to protect users.

treatcover@lemmy.ml on 15 Feb 21:46 next collapse

I think if you really care about privacy, you should assume they do this, no matter what they say. By the way, great job on including the image transcript. :)

helloyanis@furries.club on 15 Feb 21:52 collapse

@treatcover Well, you can run Mistral locally but my laptop is not powerful enough haha
Also, yeah, a small copy paste goes a long way!
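Whether a laptop is “powerful enough” for a local model mostly comes down to RAM: weights alone take roughly parameter count × bits per weight ÷ 8 bytes, plus some headroom for the KV cache and runtime. A minimal back-of-envelope sketch, where the 1.2× overhead factor is a loose assumption rather than a measured value:

```python
def est_ram_gb(params_billion: float, bits_per_weight: int,
               overhead: float = 1.2) -> float:
    """Rough RAM needed to load a model: weights are
    params * bits/8 bytes; multiply by a loose overhead
    factor for KV cache and runtime buffers (assumption)."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return round(weight_gb * overhead, 1)

# A 7B model at 4-bit quantization needs only a few GB...
print(est_ram_gb(7, 4))   # ~4.2 GB
# ...while the same model at 16-bit needs far more.
print(est_ram_gb(7, 16))  # ~16.8 GB
```

By this rule of thumb, a quantized 7B model fits on most laptops with 8 GB of RAM, which is why 4-bit builds are the usual recommendation for modest hardware.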

PiraHxCx@lemmy.dbzer0.com on 15 Feb 22:45 next collapse

Guy using ChatGPT to ask about anime waifus, and the bot is confused about whether he’s 12 or 45.

schnurrito@discuss.tchncs.de on 15 Feb 23:00 next collapse

At some point in the 2000s I chatted with the SmarterChild chatbot on MSN Messenger for a while. I was around 11 or 12 years old at the time.

I remember it once answering “sorry, web search is only available to adult users”, to which I responded “how do you know I’m just a child?”. I don’t remember what came before or after that… it might have asked my DOB at some point, but to this day I’m not sure how it knew. Point is, none of this is new… 😁

otp@sh.itjust.works on 16 Feb 04:34 collapse

“how do you know I’m just a child?”

“You just told me, kiddo!”

inb4_FoundTheVegan@lemmy.world on 15 Feb 23:32 next collapse

So if I don’t use it I’m dead?

chicken@lemmy.dbzer0.com on 16 Feb 00:07 next collapse

The whole idea of software services where the output is not a function of the input, but rather a function of the input plus all the data the service has been able to harvest about you has always been awful. It was awful when google started doing it years ago and it’s awful when LLM frontends do it now. You should be able to know that what you are seeing is what others would see, and have some assurance that you aren’t being manipulated on a personal level.

calmblue75@lemmy.ml on 17 Feb 06:17 next collapse

It’s shocking how much this ‘personalized’ stuff is normalized.

FineCoatMummy@sh.itjust.works on 17 Feb 21:07 collapse

Absolutely. Couldn’t agree more.

I just run everything possible locally, which helps a lot. Nearly everything my friends do in “the cloud” I do on my own computer, even stuff like spreadsheets they want to do in the cloud. It flummoxes me how willing they are to share everything with big tech.

If you have a mid-range or better GPU, you can even run a local LLM. I have used one for language translation: I cannot speak German, but I was talking to a German speaker who did not speak English about a hobby. We could talk to each other despite not sharing a language. That’s practically sci-fi to me! I used a sandboxed LLM, disallowed from any network access, to do that.

Even so, I am skeptical about most of what I see people use LLMs for. I am afraid of what they will allow bad actors to do. I am afraid of even worse corruption of the information space. I doubt the horse will re-enter the barn tho.

Sir_Kevin@lemmy.dbzer0.com on 16 Feb 00:36 next collapse

If at all possible, anyone who cares about their privacy should be self-hosting.

lemmy.world/c/selfhosted

astutemural@midwest.social on 16 Feb 02:52 next collapse

This will last right up until all the right-wingers get pissed they’re being identified as teenagers based on their 5th grade writing level.

artyom@piefed.social on 16 Feb 04:32 next collapse

If you absolutely must use ChatGPT, you can do it with Duck.ai as a proxy to protect your personal info.

Maroon@lemmy.world on 16 Feb 12:04 collapse

I have DuckDuckGo as my default engine. But how do I set up this proxy to ChatGPT?

artyom@piefed.social on 16 Feb 16:09 collapse

No setup required, just go to duck.ai

hector@lemmy.today on 16 Feb 09:26 next collapse

So corporations will decide whether you should be forced to surrender commercially valuable personal data to them, that makes them more money. Great system guys! If we can’t trust mega corporations with political connections, who can we trust?

glitching@lemmy.ml on 16 Feb 09:59 next collapse

ai;dr

[deleted] on 16 Feb 09:59 next collapse

.

BaroqueW@lemmy.world on 16 Feb 17:36 collapse

Damn, I’m stealing this like I am training an LLM

herseycokguzelolacak@lemmy.ml on 16 Feb 11:35 next collapse

Just don’t use ChatGPT. I recommend DeepSeek or Qwen:

chat.qwen.ai

www.deepseek.com/en/

PhoenixDog@lemmy.world on 16 Feb 12:29 collapse

I recommend neither.

BennyTheExplorer@lemmy.world on 16 Feb 11:41 next collapse

Just don’t use AI and you’ll be fine

melsaskca@lemmy.ca on 16 Feb 14:15 next collapse

Finally I see a use for AI. In a carnival midway. “I will guess your age for $250”.

JoeMontayna@lemmy.ml on 16 Feb 16:24 next collapse

No shit

rizzothesmall@sh.itjust.works on 17 Feb 06:39 next collapse

Start asking it questions about what to do with your grandkids, and what sort of music the youth are into, and how you’re supposed to spend all your disposable retirement money.

DancingBear@midwest.social on 17 Feb 06:43 next collapse

ChatGPT, hello. This is the first time we have chatted. What can I give my wife for our 18th wedding anniversary?

Hello kids, start every conversation with ai like this and you’ll be fine

LeTak@feddit.org on 17 Feb 08:17 next collapse

Mhhhh, another FairEmail user, a person of culture. I only used it because it was the only OSS client with encryption and signature support.

peacefulpixel@lemmy.world on 17 Feb 20:02 collapse

the amount of problems i don’t have to deal with because i simply don’t use GenAI

drunkpostdisaster@lemmy.world on 18 Feb 19:07 collapse

Until your boss decides to use it.