The machines learn by taking what they find. They scrape the web and feed on images. If you put your work online, it is there for them. You cannot stop them completely, but you can make it harder.
Outline
1. Glaze / Nightshade / PixelGuard
- Tools that alter the image in small, unseen ways
- Make it harder for AI systems to learn from your work
- Do not change how the picture looks to the eye
2. Metadata Rights Statements
- Embed copyright and usage terms in the file
- State that the image cannot be reused or used for training
- Works with systems that respect IPTC and XMP standards
3. Content Protection
- Makes enforcement easier if someone uses your images
- Strongest safeguard when combined with other steps
4. Crawlers
- Some follow your rules, some ignore everything
- Prepare for both when you put your work online
1. Glaze / Nightshade / PixelGuard
Glaze changes the surface of your pictures. To the eye, they look the same. To the machine, they look wrong. It cannot copy your style.
Nightshade goes further. It poisons the data. The machine sees a horse where there is a chair. It learns the wrong thing. Your work becomes a trap.
PixelGuard works in small ways. It shifts pixels that you do not notice. The machine notices. It cannot read the image cleanly. It stumbles.
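The idea behind all three tools can be shown in miniature. The sketch below is a toy only: it adds a tiny random pixel shift, nothing like the adversarially optimized perturbations Glaze, Nightshade, or PixelGuard actually compute against model feature extractors. It just shows the shape of the trick: change the pixels slightly, keep the picture looking the same.

```python
# Toy illustration only -- NOT what Glaze/Nightshade/PixelGuard do.
# Real tools optimize the perturbation against specific AI models;
# this just nudges each pixel by at most +/- `strength`.
import numpy as np

def perturb(image: np.ndarray, strength: int = 2, seed: int = 0) -> np.ndarray:
    """Add a small random pixel shift, clipped to the valid 0-255 range."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# A flat gray 4x4 "image": every output pixel is within 2 of the input,
# a difference the eye cannot see at real image sizes.
img = np.full((4, 4), 128, dtype=np.uint8)
out = perturb(img)
assert np.abs(out.astype(int) - img.astype(int)).max() <= 2
```

The eye averages over millions of pixels; a shift of two levels vanishes. A model reading exact values does not get that mercy.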
2. Metadata Rights Statements
Metadata rides inside the file. It goes wherever the picture goes. It holds your name, the year, and the terms. It is not decoration. It is your mark.
You set it before you export the image. Any program can write it. Lightroom, Photoshop, Capture One. Even simple tools can fill the IPTC and XMP fields. The software does not matter. The words do.
The parts you need are few: your name, the year, a clear copyright line, and a short rights statement. That is enough. It tells anyone who opens the file that the work is yours. Some systems read it. Some do not. But the mark stays with the picture all the same.
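If you script your exports, you can write the mark yourself. The sketch below uses Pillow and PNG text chunks; for full IPTC/XMP support you would normally reach for a dedicated tool such as exiftool. The field names and sample values here are illustrative, not a standard.

```python
# Minimal sketch: embed rights text in a PNG at export time.
# PNG text chunks are not IPTC/XMP proper, but the principle is the
# same -- the words travel inside the file.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def export_with_rights(image: Image.Image, path: str, name: str, year: int) -> None:
    info = PngInfo()
    info.add_text("Author", name)
    info.add_text("Copyright", f"(c) {year} {name}. All rights reserved.")
    info.add_text("Rights", "No reuse. No AI training.")
    image.save(path, pnginfo=info)

# Hypothetical name and file, for illustration.
img = Image.new("RGB", (8, 8))
export_with_rights(img, "photo.png", "Jane Doe", 2024)
assert Image.open("photo.png").text["Rights"] == "No reuse. No AI training."
```

Anyone who opens the file can read the record back. That is the whole point of the mark.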
3. Content Protection
Content Credentials mark the file. They show who made it and what was done to it. Lightroom can write this record when you export. It can also send a copy to Adobe’s authenticity cloud. It does not guard the picture, but it tells the truth about it.
You can give your work more ground by posting it in many places. Flickr, Behance, Instagram, and others. The more places it lives, the harder it is for anyone to claim it as their own.
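A real Content Credentials record needs C2PA tooling such as Lightroom's export option. A simpler habit with a similar aim, sketched below, is to keep a dated SHA-256 fingerprint of every file you publish: if a dispute comes, you can show you held this exact file on this date. This is an assumption-laden stand-in, not the C2PA format.

```python
# Simple provenance log -- NOT a C2PA / Content Credentials manifest.
# Records a cryptographic fingerprint and the date for each export.
import hashlib
import json
from datetime import date

def fingerprint(data: bytes, filename: str) -> dict:
    return {
        "file": filename,
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded": date.today().isoformat(),
    }

# Hypothetical file contents and name, for illustration.
record = fingerprint(b"image bytes here", "sunset.jpg")
print(json.dumps(record))
```

Change one byte of the file and the fingerprint changes completely. That is what makes it evidence.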
4. Crawlers
Crawlers move through the web and take what they find. Some give their name. Some don’t. The honest ones read your rules. If you say keep out, they keep out. The others ignore everything. They hide their name and take the work anyway. That’s the whole of it. One kind listens. The other doesn’t.
```
# robots.txt
# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# CCBot is Common Crawl's crawler; its datasets are often used for AI training
User-agent: CCBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

# Allow normal search engines (optional)
User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:
```
Place the file at /public_html/robots.txt. If it is in the right place, you should be able to see it at https://yourdomain.com/robots.txt.
- ✅ Blocks GPTBot
- ✅ Blocks Google-Extended
- ✅ Blocks CCBot (Common Crawl)
- ✅ Blocks OAI-SearchBot
- ✅ Allows Googlebot and Bingbot
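robots.txt handles the honest crawlers. For the rest, watch your server's access log. The sketch below scans log lines for known AI crawler user-agent strings; the log format and the bot list are assumptions, so adjust both for your own server.

```python
# Sketch: find requests from known AI crawlers in an access log.
# The bot list and log format here are illustrative assumptions.
AI_BOTS = ["GPTBot", "Google-Extended", "CCBot", "OAI-SearchBot", "ClaudeBot"]

def find_ai_hits(log_lines):
    """Return (bot, line) pairs for every request by a listed crawler."""
    hits = []
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits.append((bot, line))
    return hits

# Two made-up log lines: one honest bot, one ordinary browser.
log = [
    '1.2.3.4 - - [10/May/2024] "GET /art/ HTTP/1.1" 200 "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [10/May/2024] "GET /art/ HTTP/1.1" 200 "Mozilla/5.0 (Windows NT 10.0)"',
]
for bot, line in find_ai_hits(log):
    print(bot, "->", line.split()[0])  # prints: GPTBot -> 1.2.3.4
```

A bot that shows up in your log after you told it to keep out is not going to start listening. For those, block the user agent or the IP range at the server itself.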