Is there a simple way to severely impede web scraping and LLM data collection of my website?
from Maroon@lemmy.world to privacy@lemmy.ml on 19 Jun 2024 20:12
https://lemmy.world/post/16711051

I am working on a simple static website that gives visitors basic information about myself and the work I do. I want this as a way to introduce myself to potential clients, collaborators, etc., rather than relying solely on LinkedIn as my visiting card.

This may sound rather oxymoronic given that I am literally going to be placing (some relevant) details about myself and my work on the internet, but I want to limit the website's exposure to bots, web scraping and content collection for LLMs.

Is this a realistic expectation?

Also, any suggestions on privacy-respecting yet inexpensive domains that I can purchase in Europe would be of great help.

#privacy


Deckweiss@lemmy.world on 19 Jun 2024 20:19 next collapse

I did this a while back to block LLMs, and more methods are discussed in that thread's comments.

lemmy.world/post/14767952

TheAnonymouseJoker@lemmy.ml on 19 Jun 2024 21:34 collapse

Why can I not open your post?

Maeve@kbin.earth on 19 Jun 2024 21:42 next collapse

Works for me.

TheAnonymouseJoker@lemmy.ml on 19 Jun 2024 21:47 collapse

Funny that Jerboa did not open it on my account, but in a web browser it opened up.

Maeve@kbin.earth on 20 Jun 2024 04:15 collapse

Occasionally I run into glitches on various instances, but visiting the original post on the original instance works. Lemmy is new enough that I didn't mind seeking workarounds by asking or fiddling around. Best!

RvTV95XBeo@sh.itjust.works on 19 Jun 2024 22:44 collapse

Are you perhaps an LLM in disguise?

TheAnonymouseJoker@lemmy.ml on 20 Jun 2024 00:26 collapse

Jerboa did not open it for some reason; the web browser did. Also, check my account age.

otp@sh.itjust.works on 20 Jun 2024 01:26 collapse

That’s exactly what an LLM would say…

TheAnonymouseJoker@lemmy.ml on 20 Jun 2024 09:26 collapse

I did not know LLMs were moderators on Lemmy :D

ExtremeDullard@lemmy.sdf.org on 19 Jun 2024 20:30 next collapse

I’m a bit confused by your question: it sounds like you want to advertise yourself and your work, so why not let AI scrape your information? If I were you, I’d want a chatbot to spit out my details when someone asks it to name someone who does what I do.

I’m violently anti-AI, but this is the one use case where I would happily feed it information: to use it as an amplifier to spread public information I want to broadcast as far and wide as possible.

coolkicks@lemmy.world on 19 Jun 2024 21:22 collapse

If LLMs were accurate, I could support this. But at this point there’s too much outright incorrect information coming from LLMs.

“Letting AI scrape your website is the best way to amplify your personal brand, and you should avoid robots.txt or use agent filtering to effectively market yourself. -ExtremeDullard”

isn’t what you said, but is what an LLM will say you said.

RecallMadness@lemmy.nz on 19 Jun 2024 20:51 next collapse

Put each character in a span with a random class, intersperse other random characters all over the place (also with random classes), then make the unwanted characters hidden, as in the sketch below.

Bonus points if you use css to shuffle the order of letters too.

Accessibility? Pffffft.
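A rough sketch of the idea (the class names here are made up; a real build would randomize them per page):

```html
<style>
  /* decoy characters are present in the markup but hidden from humans */
  .x7f { display: none; }
</style>
<!-- a human sees "Hi"; a naive scraper extracts "Hqiz" -->
<span class="k2a">H</span><span class="x7f">q</span><span
  class="k2a">i</span><span class="x7f">z</span>
```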

lemann@lemmy.dbzer0.com on 19 Jun 2024 21:57 next collapse

Some websites I know actually do this. I usually end up getting around it by using selectors that match elements nested in a particular order, rather than class names. Nowhere near as reliable, though.

RecallMadness@lemmy.nz on 22 Jun 2024 12:30 collapse

Yep, this is taken straight from Facebook's advertisements circa 2018, maybe still today.

possiblylinux127@lemmy.zip on 20 Jun 2024 00:04 next collapse

That will break legitimate extensions

refalo@programming.dev on 21 Jun 2024 16:41 collapse

I think that’s such a small percentage of users that it doesn’t really matter

refalo@programming.dev on 21 Jun 2024 16:40 collapse

A headless browser printing to PDF, then extracting the text from the PDF, can automate getting around this easily. One way to harden things might be to use a canvas to draw text that is not selectable, but OCR can easily defeat that too.
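For reference, a minimal sketch of the canvas approach (the address is a placeholder); the text never appears in the DOM, but as noted, OCR still defeats it:

```html
<canvas id="contact" width="250" height="30"></canvas>
<script>
  // draw the text as pixels: nothing to select, nothing in the page source
  const ctx = document.getElementById("contact").getContext("2d");
  ctx.font = "16px sans-serif";
  ctx.fillText("me@example.com", 10, 20); // placeholder address
</script>
```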

habitualTartare@lemmy.world on 19 Jun 2024 21:00 next collapse

en.wikipedia.org/wiki/Robots.txt

robots.txt should cover any polite web crawler, but compliance is voluntary.

platform.openai.com/docs/gptbot

You might have to put it behind a captcha or some other challenge to severely limit automated access.

It’s not realistic to assume it won’t get scraped eventually, whether by someone paying people to bypass captchas or by web crawlers that simply don’t respect robots.txt. I also don’t know whether Google and Microsoft separate their AI data collection from search crawling, so opting out might also remove your site from web search.
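For example, a robots.txt that opts out of a couple of known AI crawlers while leaving ordinary search bots alone (these user-agent tokens are ones the vendors have published; check their current docs before relying on them):

```
# block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# block Common Crawl, a major source of LLM training data
User-agent: CCBot
Disallow: /

# everyone else may crawl as usual
User-agent: *
Allow: /
```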

corroded@lemmy.world on 19 Jun 2024 21:50 next collapse

Speaking from experience, be careful you don’t become over-zealous in your anti-scraping efforts.

I often buy parts and equipment from a particular online supplier. I also use custom inventory software to catalog my parts. In the past, I could use cURL to pull from their website, and my software would parse the website and save part specifications to my local database.

They have since enacted intense anti-scraping measures, to the point that cURL no longer works. I’ve had to resort to having the software launch Firefox to load the web page, then the software extracts the HTML from Firefox.

I doubt that their goal was to block customers from accessing data for items they purchased, but that’s exactly what they did in my case. I’ve bought thousands of dollars of product from them in the past, but this was enough of a pain in the ass to make me consider switching to a supplier with a decent API or at least a less restrictive website.

Simple rate limiting may have been a better choice.

IphtashuFitz@lemmy.world on 20 Jun 2024 12:36 next collapse

Try using “curl -A” to specify a User-Agent string that matches Chrome or Firefox.
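Something like this, with a User-Agent string copied from a real browser (the string and URL below are just examples):

```
curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:126.0) Gecko/20100101 Firefox/126.0" https://supplier.example.com/part/1234
```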

corroded@lemmy.world on 21 Jun 2024 00:34 collapse

I probably should have specified I’m using libcurl, but I did try the equivalent of what you suggested. I even tried setting a list of user agents and having it cycle through. None of them work. A lot of anti-scraping methods use much more complex schemes than just validating the user agent. In some cases, even a headless browser will be blocked.

bloodfart@lemmy.ml on 20 Jun 2024 15:25 collapse

Mouser?

possiblylinux127@lemmy.zip on 20 Jun 2024 00:03 next collapse

No, not really, as the best way would be making it totally private.

Edit: I see you edited the title. You might be able to slow down LLM training. However, your content is such a small percentage of the whole that I doubt it would matter.

The simplest way might be to add an artificial delay to the page load. You could create a simple loading page that is just long enough to cause bots to move on (see the sketch below). However, this will completely break search indexing, assuming the method works at all.
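A minimal sketch of such a loading page, assuming a plain meta refresh is enough to bore impatient bots (the target URL is hypothetical, and any crawler that follows HTML redirects will still get through):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- after 3 seconds, send the visitor on to the real content -->
    <meta http-equiv="refresh" content="3;url=/home.html">
  </head>
  <body><p>Loading…</p></body>
</html>
```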

lemmyvore@feddit.nl on 20 Jun 2024 01:27 next collapse

I’m just gonna address the domain question.

ccTLDs for countries that are members of the EU usually have pretty strong privacy protection, especially if you are buying as an individual.

.de (Germany) is probably the cheapest (3-4€) but if you’re not a resident you will need the registrar to arrange a mailing address for you for a small fee (another 3€ or so). Still going to be a pretty low price.

.nl is another cheap option, without any residency requirement.

The only issue with both is that you can only buy for one year at a time.

The owner’s details in the registry are never published; legitimate requests in case of abuse etc. need to go through the registry.

fubarx@lemmy.ml on 20 Jun 2024 04:45 next collapse

Scrape a bunch of Onion articles, link them together in an index, then post an invisible link from your home page that spiders will follow but humans can’t see (sketch below).

Write a script to randomize the words on all the articles and link them in too. Then change the image tags to point to random wikimedia files.

If there’s one thing we’ve learned, it’s that there’s very little quality control. Channel your inner Ken Kesey / Merry Prankster. Have fun.
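A sketch of the invisible honeypot link (the trap URL is made up): humans can’t see or tab to it, rel="nofollow" warns off polite crawlers, and greedy bots follow it into the generated junk:

```html
<!-- invisible to humans and screen readers; only misbehaving spiders follow it -->
<a href="/onion-maze/index.html" rel="nofollow" aria-hidden="true" tabindex="-1"
   style="position: absolute; left: -9999px;">archive</a>
```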

trickster@infosec.pub on 22 Jun 2024 05:49 collapse

You suggest luring them away? Did you implement this solution?

fubarx@lemmy.ml on 22 Jun 2024 06:32 collapse

I could, but I personally feel anyone foolish enough to use my blathering deserves the unfortunate consequences.

My idea was for people who felt strongly about keeping their stuff away from the big maws of AI.

FactualPerson@lemmy.world on 20 Jun 2024 11:17 next collapse

Why not add basic HTTP auth to the site? Most web hosts provide a simple way to protect a site or directory.

You can have a simple username and password for humans, and it will stop scrapers, as they won’t get past the auth challenge unless they know the details. I’m pretty sure you can even show the login details in the auth dialog, if you wanted to, rather than pre-sharing them.

Maroon@lemmy.world on 20 Jun 2024 17:22 next collapse

I don’t expect my potential collaborators and clients to make an account with username and passwords just to view my relevant details and works.

Or have I not understood your suggestion correctly?

quafeinum@lemmy.world on 20 Jun 2024 17:39 collapse

With .htaccess everyone can use the same credentials, and you can have a message in the popup like ‘use username: admin, password: what’s a duck? as the login’. The other option would be an actual captcha.
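A sketch of the .htaccess variant, assuming an Apache host (paths are placeholders). One caveat: the hint rides in the AuthName realm string, which many modern browsers no longer display in the dialog, so you may want to put the hint on a landing page instead:

```apacheconf
# .htaccess: password-protect this directory
# create the password file first with: htpasswd -c /path/to/.htpasswd admin
AuthType Basic
AuthName "username: admin / password: what's a duck?"
AuthUserFile /path/to/.htpasswd
Require valid-user
```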

refalo@programming.dev on 21 Jun 2024 16:37 collapse

as a user, if I saw this trying to visit a personal web page I would close the tab immediately

[deleted] on 20 Jun 2024 12:34 next collapse
.
MonkderDritte@feddit.de on 20 Jun 2024 13:01 next collapse

Wasn’t there some new robots.txt rule for this?

IphtashuFitz@lemmy.world on 20 Jun 2024 14:10 collapse

robots.txt is 100% honor-based. Well-known bots like Googlebot, Bingbot, etc. definitely honor it. But there are also plenty of bots that completely ignore it.

I would hope the bots used to collect LLM training data honor it, but there’s no way to know for certain. And all it really takes is one bot ignoring it for the content of your website to end up in a random set of training data…

istanbullu@lemmy.ml on 20 Jun 2024 18:46 next collapse

use javascript 🤣

GBU_28@lemm.ee on 20 Jun 2024 20:08 next collapse

Any attempts to mangle the body of the pages or obscure it in JS are moot. Most competent stealthy scrapers have vision AI as a fallback, so even if you reduce the ability to programmatically parse the page body, the bot can just snatch an image of the page and OCR the contents.

refalo@programming.dev on 21 Jun 2024 16:36 collapse

what scrapers actually go to such lengths? I’ve never heard of any.

GBU_28@lemm.ee on 21 Jun 2024 16:38 collapse

Not general purpose ones.

Flyswat@lemmy.ml on 21 Jun 2024 14:23 next collapse

Use a special/custom font where the letter shown differs from the character used.
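A sketch of how that might be wired up, assuming you’ve generated a font whose glyphs are deliberately remapped (the font file here is hypothetical):

```css
/* in scrambled.woff2, the glyph stored at "a" draws "k", "b" draws "e", etc.,
   so the HTML contains gibberish that only renders correctly in this font */
@font-face {
  font-family: "ScrambledSans";
  src: url("/fonts/scrambled.woff2") format("woff2");
}
.protected { font-family: "ScrambledSans"; }
```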

m4xie@slrpnk.net on 22 Jun 2024 04:07 collapse

This is a accessibility landmine.

refalo@programming.dev on 21 Jun 2024 16:30 next collapse

Blocking non-Mozilla user agents has eliminated 99% of scraping in my experience. I’ve seen a few larger sites do it as well but not many.
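In nginx that can be as simple as the snippet below; note it also throws away curl, wget, and most API clients along with the scrapers:

```nginx
# inside the server block: real browsers identify as "Mozilla/5.0 ...",
# so reject anything that doesn't
if ($http_user_agent !~ "^Mozilla") {
    return 403;
}
```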

utopiah@lemmy.ml on 21 Jun 2024 16:32 next collapse

Possibly; see github.com/ai-robots-txt/ai.robots.txt, but I just discovered it myself while looking for robots.txt blocklists à la CrowdSec/adblocking lists, so feedback appreciated!

utopiah@lemmy.ml on 21 Jun 2024 16:36 collapse

Actually darkvisitors.com/docs/robots-txt might be more direct.

0x520@slrpnk.net on 26 Jun 2024 04:51 collapse

Look in your access logs, e.g. /var/log/nginx/access.log, for user agents that indicate AI bots, such as GPTBot. Then add if ($http_user_agent ~* "useragent1|useragent2|…") { return 403; } to the server block of your website's config file in /etc/nginx/sites-enabled/ (spelled out below). You can also add a robots.txt that forbids scraping; ChatGPT generally checks and respects that… for now. This, paired with some of the stuff above, should work.
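Spelled out, that might look like the following (the user-agent list is illustrative; pull the real strings from your own logs):

```nginx
# /etc/nginx/sites-enabled/example.conf, inside the server { ... } block
if ($http_user_agent ~* "GPTBot|CCBot|Bytespider") {
    return 403;
}
```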