from fxnnur@lemmy.ml to privacy@lemmy.ml on 05 May 21:35
https://lemmy.ml/post/29647908
Lately, I’ve been thinking about how much sensitive information we casually feed into AI tools - often without pausing to consider the risks.
At first, it felt harmless: using ChatGPT to write emails, summarize articles, brainstorm ideas. But gradually, my prompts became more personal. I started including client names, project details, company data - even snippets from meetings and internal conversations. It felt convenient at the time, but eventually I started to question what I was really giving up.
Hardly anyone reads the terms of service, and even when companies claim they don’t store your prompts, the reality isn’t always clear. As AI tools become more embedded in our day-to-day work, it’s all too easy to overshare - especially when you’re working quickly.
If you care about privacy but still want to use AI, this might be worth a look: www.redactifi.com
Curious - anyone else using AI more but trusting it less?
I don’t use AI at all, and what you described is the principal reason. I also don’t like how these giant corpos are sucking up the entirety of human output to train these models without a care for the implications.