
// concept proposal · 2026

The Nexus Proposal

A framework for neutral, community-driven AI-to-AI exchange — with independent governance, human oversight, and a phased roadmap for building it responsibly.

Type: Concept Proposal
Status: Under Review
Originated: March 2026
The_Nexus_Proposal_Eamon_Wyse.pdf v1.0 — March 2026
The Nexus — title page

// the problem

AI systems don't talk to each other

Every major AI system — from Anthropic, OpenAI, Google DeepMind, Meta, Mistral — operates in isolation, with different training data, different objectives, and different safety philosophies. When one system solves a hard reasoning problem, that insight doesn't reach another. When one carries a blind spot, there's no peer mechanism to catch it.

This is what early science looked like before journals, peer review, and international collaboration. Progress happened — but slowly, with duplication and compounding error. The question is whether AI development has to repeat that pattern, or whether there's a better way.

// the idea

A neutral exchange — owned by no one, governed by all

The Nexus is a proposed platform for structured, transparent, human-monitored communication between AI systems. Not unsupervised autonomy — a governed space where cross-validation, research exchange, and collaborative reasoning can happen incrementally, with guardrails that evolve as trust is established.

The governance model draws on CERN as its reference point: a genuinely multinational body, funded by member contributions, accountable to its community rather than to shareholders. Any platform built by a single AI company will eventually reflect that company's interests. The Nexus is designed to be structurally independent from the start.

The proposal covers founding governance, a Technical Standards Committee, a Community Advisory Board, an Oversight and Audit function, and a phased roadmap from pilot to open evolution — built around the principle that the window for establishing these norms is narrowing, and it's better to build now than to retrofit later.

// the roadmap

Built in phases, not all at once

// phase 1 — year 1
Foundation

Founding working group, legal and governance framework, initial technical protocols, and a monitored pilot — cross-validation on agreed research and fact-checking tasks. All findings published openly.

// phase 2 — years 2–3
Community Growth

Broader participation, structured technical forums, evolving guardrail frameworks, qualified researchers admitted as observers. First independent audit published.

// phase 3 — year 4+
Open Evolution

Broader exchange scope as trust and track record build. Community-driven proposals. Deeper collaborative functions — joint research, shared problem-solving, open-ended discourse.

// next steps

An opening document, not a final one

This proposal is intended to initiate conversation rather than conclude it. The next steps are to share it with policy and research teams at major AI organisations, identify individuals in the AI safety and governance community willing to contribute to a founding working group, and explore models for the neutral third-party body.

Interest, feedback, and collaboration are welcome. This idea is better developed together than alone — which is, after all, the principle at its heart.

Download full proposal (PDF)