So I recently found out about an idea someone had for a "human.json" protocol: a way for websites to let the internet know that they're human-written rather than AI-written. The code is here:
https://codeberg.org/robida/human.json
And the post I found about it is here:
https://mattellery.co.uk/posts/2026/04/04/fettling-the-backend-of-this-website/
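For anyone who doesn't want to click through: the core of the idea is a small JSON file that a site publishes to say "a human wrote this" and to list other sites the author vouches for. To give a flavour of it, here's a purely hypothetical example I've made up; the field names and layout are my guesses for illustration, not the actual schema from the repo:

```json
{
  "version": "0.1",
  "author": "Jane Example",
  "site": "https://blog.example",
  "statement": "The content on this site is written by a human, not generated by AI.",
  "trusts": [
    "https://another-blog.example",
    "https://a-friends-site.example"
  ],
  "updated": "2026-01-01"
}
```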
I think there are some good aspects to this:
- It acknowledges the problem that we need to cut back to more curated information flows. The information superhighway is running us over and choking us in vehicle fumes at this point.
- We do need ways for people to flag "this isn't slop, you can trust that I am human", difficult though that may be to police at all.
- Thinking about trust properly in computer protocols is Good and something we should do more of, especially thinking about trust in a networked-information sense.
I think the problems with it are as follows:
- The people who need it are the people who won't know what it is or how to access it.
- It requires quite a lot of input and thought from users: they have to work out who they trust in the first place and then let that be the crux of their trust network, which means being able to think in those terms at all, and that's quite technical.
- In other words, it feels like the model here is people with their own blogs and personal websites, which is a style of internet I like, but a) that part of the internet is not where most AI nonsense lives, and b) it is not much of the internet from most people's perspective.
- It only includes data on trust, with no system for red-flagging. That means it's only useful if enough of the 'good' internet uses it, which requires genuinely mass adoption, and with mass adoption would come significant incentives to lie and game the system.
- If this were going to get wider usage, it'd need a secondary service that could crawl the trust network and give sites ratings, because realistically people are not all going to maintain their own graphs. Since that would have to be aggregated across a big chunk of the internet, it loses the benefits of the manual trust process, and it would open up attack vectors for bad actors simply swamping the system (see the rough sketch after this list).
- In fact the ideal way to use this would be to feed it into a search engine system, but that's also a Whole Project.
- It is, in short, a very good idea but one that's several steps away from anything useful to 99% of normal internet users.
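To make the "secondary service" point a bit more concrete, here's a very rough Python sketch of what such a trust-graph aggregator might have to do. Everything in it is my own assumption: the /.well-known/human.json location, the "trusts" field, and scoring a site by how many others vouch for it are guesses for illustration, not anything the actual project specifies.

```python
# Rough sketch of a trust-graph aggregator for a human.json-style protocol.
# Assumptions (mine, not the project's): each site serves its file at
# /.well-known/human.json and lists the sites it vouches for under "trusts".
import json
from collections import Counter, deque
from urllib.parse import urljoin
from urllib.request import urlopen

def fetch_human_json(site: str) -> dict | None:
    """Fetch a site's human.json, returning None if it is missing or malformed."""
    try:
        with urlopen(urljoin(site, "/.well-known/human.json"), timeout=10) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except Exception:
        return None

def crawl_trust_graph(seeds: list[str], max_sites: int = 1000) -> Counter:
    """Breadth-first crawl from a few hand-picked seed sites,
    counting how many sites vouch for each URL."""
    vouches = Counter()
    seen = set(seeds)
    queue = deque(seeds)
    while queue and len(seen) < max_sites:
        site = queue.popleft()
        data = fetch_human_json(site)
        if not data:
            continue
        for trusted in data.get("trusts", []):
            vouches[trusted] += 1          # naive score: raw count of vouchers
            if trusted not in seen:
                seen.add(trusted)
                queue.append(trusted)
    return vouches

if __name__ == "__main__":
    scores = crawl_trust_graph(["https://blog.example"])
    for url, count in scores.most_common(10):
        print(f"{count:4d}  {url}")
```

Even this toy version shows the weakness: the "score" is just a raw count of vouchers, which a bad actor could inflate with a farm of sites all vouching for each other.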
That said, despite my scepticism I like and approve of the idea and I'm wondering if we should install this on Exilian and its partner websites. Thoughts welcome!