As AI redraws the boundaries of ownership, originality, and creative control, the fight over digital content is escalating. Artists posting their work online are finding themselves in a silent but relentless tug-of-war with AI companies scraping the web to train their models. Tools like Nightshade, designed to “poison” datasets and sabotage unauthorized training, gave creators a new line of defense. But new research shows even these safeguards can be cracked, pushing the conflict into a new and more volatile phase.
Some companies see an opportunity to help content creators regain control of their work from AI bots. Cloudflare’s AI Audits, now called AI Crawl Control, lets site owners block or allow AI bots for free and provides analytics showing how those bots access their content. For sites signing agreements with model providers to license content for training and retrieval, Cloudflare’s analytics help owners audit metrics common in those contracts, such as the rate of crawling. The idea is to help website owners determine the compensation they believe they should receive from AI model providers for the right to scan their content.
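Cloudflare’s implementation is proprietary, but the basic mechanics of bot filtering plus crawl analytics are simple to illustrate. The sketch below shows one way a site could classify requests by user-agent and tally per-bot crawl counts; the token list is a small, illustrative sample (real services maintain far larger, continuously updated signatures), and `classify_request` is a hypothetical helper, not a Cloudflare API.

```python
from collections import Counter

# Illustrative sample of AI crawler user-agent tokens. These names are
# publicly documented by their operators, but this short list is only a
# stand-in for the much larger signature sets commercial services use.
AI_BOT_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "Google-Extended")

# Per-bot request tally, the raw material for "rate of crawling" analytics.
crawl_counts: Counter = Counter()

def classify_request(user_agent: str, allow_ai: bool = False) -> str:
    """Return 'allow' or 'block' for a request, recording AI-bot hits."""
    bot = next(
        (t for t in AI_BOT_TOKENS if t.lower() in user_agent.lower()), None
    )
    if bot is None:
        return "allow"  # ordinary traffic passes through untouched
    crawl_counts[bot] += 1  # count the visit even if we block it
    return "allow" if allow_ai else "block"
```

A site licensing its content could flip `allow_ai=True` for contracted bots while still keeping the counts, which is the auditing use case the contracts describe.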
“AI will dramatically change content online, and we must all decide together what its future will look like,” said Matthew Prince, co-founder and CEO, Cloudflare. “Content creators and website owners of all sizes deserve to own and have control over their content. If they don’t, the quality of online information will deteriorate or be locked exclusively behind paywalls. With Cloudflare’s scale and global infrastructure, we believe we can provide the tools and set the standards to give websites, publishers, and content creators control and fair compensation for their contribution to the Internet, while still enabling AI model providers to innovate.”
Last year, Google DeepMind released SynthID, a tool that watermarks AI-generated text so it can later be identified. The company applies the watermark to text generated by its Gemini models and has made the tool available for others to use as well.
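SynthID’s exact scheme is Google DeepMind’s own, but the general idea behind statistical text watermarking can be sketched with a toy “green list” construction: a private key pseudorandomly splits the vocabulary at each step, generation favors the “green” half, and a detector holding the key checks how often words land in it. Everything below (the tiny vocabulary, the key, the always-pick-green toy model) is an illustrative assumption, not SynthID’s algorithm.

```python
import hashlib
import random

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]

def green_list(prev_word: str, key: str = "secret-key") -> set:
    # Derive a pseudorandom "green" half of the vocabulary from the
    # previous word and a private key.
    seed = int(hashlib.sha256((key + prev_word).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(n_words: int, key: str = "secret-key") -> list:
    # Toy "model": always emit a word from the current green list.
    # A real generator would only bias sampling toward green words,
    # trading detection strength against text quality.
    words = ["start"]
    for _ in range(n_words):
        words.append(sorted(green_list(words[-1], key))[0])
    return words[1:]

def green_fraction(words: list, key: str = "secret-key") -> float:
    # Detector: with the key, count how often each word falls in the
    # green list seeded by its predecessor. Unwatermarked text should
    # score near 0.5; watermarked text scores much higher.
    hits = sum(w in green_list(prev, key) for prev, w in zip(["start"] + words, words))
    return hits / len(words)
```

Because the split looks random without the key, the watermark is invisible to readers but statistically detectable to anyone who holds it.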
To make it easier for artists to opt their work out of AI training, Adobe introduced its free Content Authenticity web app, which lets creators embed “do not train” tags and verifiable attribution metadata into their work.
Artists are also migrating to Cara, a social platform that, unlike Instagram and Facebook, opposes the use of members’ work for AI training.
The battle for control over online creativity is now unfolding on multiple fronts: technical, legal, and philosophical. With companies like Cloudflare offering AI bot filtering by default, platforms like Cara pledging to protect artists, and tools from Adobe and Google attempting to restore transparency, a new digital rights framework is taking shape.
But until the rules of engagement between creators and AI companies are clearly drawn and fairly enforced, the internet remains a live battlefield where every piece of content is both an asset and a target.