vf42.com

Responsible Human Author

The EU AI Act, approved by the European Parliament yesterday, contains transparency requirements mandating that AI-generated content be labeled as such. While logical on the surface, this requirement deals only with first-order effects and doesn't address the main concerns about such content: the spread of misinformation, and genuine human authors becoming uncompetitive in a world flooded with generated content.

Since machine-generated content will inevitably spread into every possible area, this transparency requirement is likely to produce the same results as the infamous Cookie Law. Virtually every website has a cookie banner, and most of us just click "accept all", eager to get it over with and access the content. In the same way, we will end up in a world where most of what we read, see, or listen to is accompanied by an "AI has touched this" notification.

An Alternative Solution

I advocate for the opposite. Every piece of published content should clearly state the name of the human, or group of people, responsible for it. Ideally, that information should be verifiable and legally binding, similar to the disclosure of a company's ultimate beneficial owners. We can call this the "Responsible Human Author", or RHA.

At the end of the day, AI doesn't create art or any other works on its own. A human prompts the system, publishes the result, and benefits from it. A machine can't be liable for the consequences of its work; a human can. That's why the human should be required to have skin in the game.

The Spread of Misinformation

As content-generating machines get better, the spread of deliberate (deep)fakes and occasionally hallucinated content may become a serious threat. How do you know what's true in a sea of conflicting information?

If everything published on the internet had a human name on it, and there were a way to confirm the authenticity of that name, the RHA would have an incentive to ensure the quality and truthfulness of the information. Their personal reputation would be at stake.

Suppose you're a journalist who decides to use AI to help write articles and generate the pictures that accompany them. That's fine; you're free to use whatever tools you like. But by putting your name on the articles, you take responsibility for verifying the content these tools produce and guaranteeing that it isn't misleading.

The Threat to Authors

One of the issues raised during the Hollywood Writers Guild strike last year was the fear that ChatGPT and other generative tools would replace the writers. Eventually, the writers and the studios settled on a solution in line with the RHA idea: writers are free to use AI tools to assist their writing, but it's still the human author who is contracted by the studio to do the job.

That's the way to approach it: it's not the machine replacing the human author, it's the human author augmented by the machine. Of course, this will have side effects:

  • As good authors become more productive, there will be more competition and less demand. The threshold for staying relevant and competitive will rise.
  • On the other hand, as we can already see, there is a surge in low-quality content, and it will only get worse. Content consumers should be able to see who's responsible for a piece and judge the reputation of its source through the RHA.

As with any new technology, demand for some jobs will decrease and new ones will be created. Authors who embrace the changes will benefit from them; the rest will have to find a different line of work.