25 September 2025
Tech Culture

Toward a Fairer AI Economy

As artificial intelligence continues to evolve, it’s becoming clear that many of today’s most powerful models have been trained on the collective output of the internet: blogs, articles, forums, books, podcasts, and countless other forms of media — all without consent. Unlike traditional creative industries, where reuse often comes with licensing fees, attribution, or royalties, AI’s use of this material has so far operated in a grey area: one that assumes participation without permission and generates value without compensation.

Recent legal action in the US — notably the Bartz v. Anthropic ruling — has even suggested that books can be treated as fair game for training data so long as a physical copy, often second-hand, has been purchased, broken apart, and scanned. The result is that authors see none of the value created when their work is repurposed in this way. That doesn’t feel especially fair.

Some argue that all training on unlicensed material should simply be banned. But that horse may already have bolted. With governments dazzled by the economic promise of AI, it feels unlikely they’ll move to slow development — particularly when similar projects in China or elsewhere face no such restrictions.

So instead of treating this as a binary question of whether training data should be allowed or not, perhaps we need a more pragmatic, actionable approach — one that acknowledges the realities of AI’s momentum while still pushing for greater fairness.

Borrowing from the Web’s Playbook

We might begin by looking at how the web itself handles these kinds of questions. When a search engine spider visits a webpage, it assumes permission to capture and index the content for later use. But websites can use a simple mechanism — a robots.txt file — to signal which parts of the site, if any, crawlers may visit. It’s not perfect, and compliance is voluntary, but it offers a clear, machine-readable way to manage participation.
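To make the mechanism concrete, here is a minimal sketch using only Python’s standard library; the domain and the “ExampleBot” user-agent token are invented for illustration.

```python
# A site that wants to opt out of crawling serves a robots.txt such as:
#
#   User-agent: *
#   Disallow: /
#
# A well-behaved crawler checks those rules before fetching anything.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's rules

if parser.can_fetch("ExampleBot", "https://example.com/some-article"):
    print("ExampleBot may crawl this page")
else:
    print("example.com has opted out for ExampleBot")
```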

AI companies could be required to adopt a similar system. Rather than relying on sweeping assumptions of implied consent, model developers would need to respect a standardised “do-not-train” protocol — one that lets creators opt out, or opt in with conditions. It would be a small but meaningful step toward a more permission-aware ecosystem. And if AI training bots refuse to honour these conventions (as many currently do), there should be a straightforward process for holding them to account.
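As a sketch of what honouring such a protocol might look like on the crawler side, the function below checks both robots.txt and a per-page header before a URL is admitted to a training corpus. The “noai” X-Robots-Tag value is an emerging convention rather than a ratified standard, and “TrainingBot” is a hypothetical user-agent token; treat the whole flow as an assumption about how a standardised scheme could work.

```python
import urllib.request
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_train_on(url: str, agent: str = "TrainingBot") -> bool:
    """Return True only if neither robots.txt nor the page opts out."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))

    # First gate: does robots.txt disallow this agent for this path?
    robots = RobotFileParser()
    robots.set_url(root + "/robots.txt")
    robots.read()
    if not robots.can_fetch(agent, url):
        return False

    # Second gate: does the page itself carry a do-not-train signal?
    request = urllib.request.Request(url, headers={"User-Agent": agent})
    with urllib.request.urlopen(request) as response:
        tag = response.headers.get("X-Robots-Tag", "")
    return "noai" not in tag.lower()
```

Opting in with conditions could extend the same idea, with the header carrying licence terms rather than a simple yes or no.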

Consent Is Only Part of the Picture

Of course, consent alone doesn’t address the economic value generated by these systems. Some of the world’s largest AI companies are building transformative products — and attracting significant investment — based on models trained on public data. But many of the people whose work helped build that foundation receive no recognition or reward.

One argument is that tax could play a redistributive role here. If AI companies are effectively benefitting from the sum total of a country’s creative output — rather than a narrow subset of individuals — then a well-designed tax system could capture part of that value and distribute it across the population. In theory, a boom for AI companies should mean a broader social dividend.

The problem is that many of these firms have become experts at shifting profits through opaque tax structures — from transfer pricing to the legendary “double Irish with a Dutch sandwich.” In practice, the value created by AI often escapes the very societies that made it possible.

One solution might be to explore the kind of licensing frameworks that already exist in other industries. Podcasts, for example, could carry metadata indicating whether they can be indexed or trained on. Books could be licensed via publisher agreements, with royalties distributed accordingly — much as Spotify pays rights holders, or as UK libraries compensate authors through the Public Lending Right scheme.
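For podcasts, the signal could live in the feed itself. The sketch below parses a hypothetical consent element from an RSS feed; the ai:consent tag and its namespace do not exist in any current podcast specification and are assumptions for illustration.

```python
import xml.etree.ElementTree as ET

# A feed carrying a (hypothetical) machine-readable consent element.
FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:ai="https://example.org/ai-consent">
  <channel>
    <title>Example Podcast</title>
    <ai:consent indexing="allowed" training="denied"/>
  </channel>
</rss>"""

channel = ET.fromstring(FEED).find("channel")
consent = channel.find("{https://example.org/ai-consent}consent")
print("indexing:", consent.get("indexing"))  # -> allowed
print("training:", consent.get("training"))  # -> denied
```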

Learning from Collective Rights Models

I personally favour the approach that existing collective rights organisations use. Musicians in the UK rely on PRS to collect performance royalties; authors receive payments through ALCS; visual artists have DACS. These bodies act as intermediaries: negotiating with broadcasters, publishers, and venues on behalf of thousands of members, and then distributing the proceeds.

A similar approach could work for digital creators. Instead of every blogger, podcaster, or forum poster trying to cut their own deal with OpenAI, Google, or Meta, a third-party organisation could negotiate collectively. Creators would register by submitting their websites, newsletters, or social handles. In return, they’d receive a proportionate share of the funds gathered.

Crucially, such a body could also earmark a percentage for public-interest projects that make the internet richer for everyone — institutions like Wikipedia and the Internet Archive, home of the Wayback Machine. These are the digital commons that undergird the whole ecosystem, yet they’re often left scrambling for donations.
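As a toy sketch of the arithmetic such a body might run, assume a negotiated annual pool, a top-slice for the commons, and a per-creator usage weight; every figure and name below is invented.

```python
pool = 10_000_000.00   # hypothetical annual licence fees collected, in pounds
commons_share = 0.10   # assumed earmark for Wikipedia, the Internet Archive, etc.

commons_fund = pool * commons_share
creator_pool = pool - commons_fund

# Registered creators with a measured usage weight, e.g. how often their
# work appears in sampled training corpora (the measurement itself is the
# hard part, and is waved away here).
usage = {"blogger_a": 120.0, "podcaster_b": 450.0, "forum_poster_c": 30.0}
total = sum(usage.values())

print(f"commons fund: £{commons_fund:,.2f}")
for name, weight in usage.items():
    print(f"{name}: £{creator_pool * weight / total:,.2f}")
```

The open question, as with PRS or ALCS, is the usage measure: collecting societies solve it with sampling and reporting obligations, and an AI scheme would need something similar.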

Toward a Digital Dividend

But most online content doesn’t have a publisher. It’s created by individuals — sometimes professionals, often not — who post, write, or share as part of the everyday web. Their contributions may be small on their own, but collectively they form the backbone of the internet’s knowledge base. And in turn, they help make AI smarter.

AI model companies could be required to allocate a portion of their revenue — let’s call it a training dividend — into a national or international trust. These funds could then be distributed through a mixture of direct royalties (where possible) and broader support for creative or digital public goods.
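A back-of-envelope sketch, assuming a 2% levy and a 70/30 split between direct royalties and public goods; all of these numbers are placeholders, not proposals.

```python
annual_model_revenue = 5_000_000_000.00  # hypothetical, in pounds
levy_rate = 0.02                         # assumed training-dividend rate

trust_income = annual_model_revenue * levy_rate
royalties = trust_income * 0.70          # assumed share paid out directly
public_goods = trust_income * 0.30       # assumed share for digital commons

print(f"trust income: £{trust_income:,.0f}")
print(f"royalties:    £{royalties:,.0f}")
print(f"public goods: £{public_goods:,.0f}")
```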

Think of it as a kind of digital sovereign wealth fund. Much as Norway and other resource-rich countries have used oil and energy revenues to build national wealth funds that benefit every citizen, the shared value extracted from our collective digital output could be reinvested in society. In this framing, a boom in AI would not just enrich a handful of companies and investors but flow into a dividend for everyone who helped make the internet — and by extension, modern AI — possible.

A More Sustainable Model

This isn’t about penalising innovation or slowing progress. It’s about ensuring that the growth of AI happens on fair and sustainable terms — ones that recognise the contributions of individuals, respect their choices, and distribute value in a more balanced way.

In some ways, it borrows from the logic of universal basic income: if you’re part of a system that creates value, you should benefit from that system. Not in a utopian or unrealistic way, but in a way that makes economic and ethical sense over the long term.

As AI continues to develop, it’s worth remembering that intelligence doesn’t emerge from a vacuum. It builds on human effort, thought, and creativity. And while we don’t need to overhaul the system entirely, we do need mechanisms that ensure the benefits of this progress aren’t concentrated in too few hands.

That might look like consent protocols, licensing frameworks, or trust-based distribution schemes. But at heart, it’s about aligning the incentives of AI with the values of the society it serves.