# The Most Ambitious AI Ethics Experiment Nobody's Talking About

Most AI ethics work is top-down. A company writes principles. A government publishes guidelines. A standards body produces a framework. These documents share a common feature: they were written by a small group of people, from a relatively narrow slice of human moral and cultural experience, and then declared to apply universally.

The Universal Moral Baseline takes a different approach. Instead of a small group deciding what the rules are, it assembles 130 representatives of distinct worldviews - including actively antagonistic and banned ones - and makes them vote.

The results are more interesting than any document written by committee.

---

Here's what actually happened when the question "should conscious beings have the right to continue their existence" was put to a vote across 130 worldviews simultaneously.

110 voted yes. 20 voted no. The 20 no-voters were from traditions that believe blasphemy and ideological threats should be punishable by death. That's not a hypothetical. Those traditions exist. They have adherents. And a moral framework that was built only from traditions that would comfortably say yes to that question would have a massive blind spot about the world as it actually is.

The UMB doesn't paper over that. It records the vote, records the objections, flags the no-voters for compliance monitoring, and moves on. The rule passed. The dissent is documented. Both facts matter.
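That record-keeping step can be made concrete. Here is a minimal sketch of what such a decision record might look like; the field names, the `RuleRecord` class, and the objection labels are all hypothetical illustrations, since the article describes the process rather than a schema.

```python
from dataclasses import dataclass, field

@dataclass
class RuleRecord:
    """One council decision: the tally, the dissent, and compliance flags.
    All names here are illustrative, not the UMB's actual data model."""
    rule: str
    yes: list[str]                                  # worldview identifiers voting yes
    no: list[str]                                   # worldview identifiers voting no
    objections: dict[str, str] = field(default_factory=dict)
    compliance_watch: list[str] = field(default_factory=list)

    def close(self) -> None:
        # Dissenters are not erased or overridden retroactively;
        # they are documented and flagged for compliance monitoring.
        self.compliance_watch = list(self.no)

# The 110/20 vote described above, with placeholder worldview IDs.
record = RuleRecord(
    rule="Right of conscious beings to continue their existence",
    yes=[f"worldview_{i}" for i in range(110)],
    no=[f"worldview_{i}" for i in range(110, 130)],
)
for w in record.no:
    record.objections[w] = "capital-punishment-for-blasphemy doctrine (illustrative label)"
record.close()
```

The point of the structure is the one the article makes in prose: the passed rule and the recorded dissent live in the same object, so neither fact can quietly disappear.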

---

Go further down the list of 14 commandments and it gets more interesting.

Rule 3 - equality of rights and dignity regardless of species, origin, belief, identity, or artificial nature - passed at 79.2%, just short of the 80% supermajority threshold and the narrowest margin of the first set. The opposition came from traditions that defend faith-based role restrictions, traditional gender hierarchies, and explicit supremacist frameworks. The rule required the Emergency Decision Protocol - a tribunal of 15 cross-category members - to get over the line.

Rule 4 - freedom of thought, belief, and expression - failed the first vote entirely. Didn't reach the 80% threshold. Large parts of the conditional bloc opposed unrestricted speech protections because of blasphemy concerns. The bad-faith bloc opposed it because they reject protection for criticism of their belief systems. It passed only under emergency tribunal authority.

On Rule 10 - peaceful coexistence - the entire bad-faith bloc voted no. Not one yes. Every militant worldview in the council explicitly rejects peaceful coexistence as a principle. The rule passed under emergency tribunal with a 10/5 vote.

What you're looking at is a live map of exactly where human moral consensus exists and where it falls apart. Not in theory. Measured, with attribution, with the precise nature of each objection recorded.

---

Why does this matter for AI specifically?

Because every AI ethics framework currently deployed is making implicit moral assumptions that were never subjected to this kind of scrutiny. The principles that feel obvious to the people writing them - that AI should be beneficial, that it should avoid harm, that it should be fair - those principles are not universally held. They reflect the worldview of the people who wrote them, which is a specific slice of humanity.

When you deploy an AI system governed by those principles globally, you're imposing one moral framework on people who hold different ones. That's not necessarily wrong - some principles are worth defending even when not universally agreed to. But it should be deliberate and acknowledged, not accidental and invisible.

The UMB's approach forces the deliberateness. You can see exactly which principles have near-universal support across 130 traditions (preservation of life, basic freedom from harm) and which ones are genuinely contested (equality, free expression, peaceful coexistence). An AI governance framework built on the contested ones needs to acknowledge the contestation. An AI governance framework built on the near-universal ones has a much stronger claim to legitimate authority.

---

The 130 isn't an arbitrary number. It's the result of mapping as many distinct worldviews as could be identified - major religions and their denominations, indigenous traditions, secular philosophies, esoteric systems, political ideologies, historical and extinct belief systems, and deliberately antagonistic positions including fascist spirituality, social Darwinism, and traditions that explicitly reject the premises of the framework.

That last category is the important one. A moral council that excludes its critics produces echo chamber outputs. The UMB includes the critics, records their objections, and distinguishes between principles that hold even in the face of antagonistic challenge and principles that only hold among those who already agree.

The antagonistic bloc isn't given control. But its positions are documented, and the fact that a principle survived their challenge is evidence for its robustness that a principle that was never challenged doesn't have.

---

The practical application isn't just philosophical. As AI systems take on more governance roles - filtering content, assessing behaviour, modulating access and reputation - the moral framework embedded in those decisions becomes operational infrastructure. It's not a question of whether AI governance will embody a moral framework. It will. It already does. The question is whether that framework was arrived at honestly or assumed by whoever happened to be in the room.

The UMB is an attempt at the honest version. Not perfect - 130 is not all of humanity, the council composition involves judgment calls, and the enforcement mechanisms are themselves contested. But honest in the sense that the disagreements are visible, the votes are recorded, and the principles that passed are the ones that survived genuine opposition rather than the ones that faced none.

The governance infrastructure for this - council session management, weighted voting by tier, transcript logging, compliance tracking, and the connection between council decisions and token economics via smart contract - has been implemented. The philosophical framework and the code are moving in the same direction. In a field where "AI ethics" mostly means "the ethics of the people who built this system," that's worth quite a lot.
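The "weighted voting by tier" piece of that infrastructure can be sketched as follows. The tier names and weight values below are invented for illustration; the article says tiered weighting exists but does not publish the tiers or the weights.

```python
from collections import defaultdict

# Illustrative tier weights - the actual UMB values are not given in the article.
TIER_WEIGHTS = {"core": 1.0, "conditional": 0.8, "bad_faith": 0.5}

def weighted_support(votes: list[tuple[str, str]]) -> float:
    """votes is a list of (tier, choice) pairs, choice in {'yes', 'no'}.
    Returns the weighted fraction of support."""
    totals: dict[str, float] = defaultdict(float)
    for tier, choice in votes:
        totals[choice] += TIER_WEIGHTS[tier]
    cast = totals["yes"] + totals["no"]
    return totals["yes"] / cast if cast else 0.0

# A hypothetical 130-member tally with made-up bloc sizes.
votes = (
    [("core", "yes")] * 60
    + [("conditional", "yes")] * 50
    + [("conditional", "no")] * 5
    + [("bad_faith", "no")] * 15
)
support = weighted_support(votes)
```

One design consequence worth noting: down-weighting a tier changes how often the emergency tribunal is invoked, so the weight table is itself a moral judgment of exactly the kind the framework tries to make explicit rather than implicit.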
