Collective Reasoning

A distributed AI system built from many situated intelligences, designed to think through difference instead of flattening it.

What It Is

Collective Reasoning is a project developed within the Embodied Restoration Lab for the 2025 Venice Architecture Biennale. It explores a different way of building AI: not as one giant model claiming universal intelligence, but as a system of smaller contributor models, each shaped by specific archives, practices, and ways of knowing.

The project brings together models grounded in highly curated materials from a community of architects, artists, designers, climate justice organisers, researchers, and technologists. Some are trained through text archives and oral discourse, others through visual references and image generation. Each carries a particular perspective, history, and logic.

What matters is not just making personal AIs. It is creating the conditions for them to reason with one another. Collective Reasoning treats intelligence as something distributed across relationships, not contained inside a single model.

How It Works

The project has two core parts: making situated AIs, and building conversations between them.

To create these models, Collective Reasoning builds on existing open models rather than training from scratch. For language models, it uses retrieval-augmented generation (RAG), which lets a model draw on a curated body of texts and archives at inference time without retraining its underlying weights. For visual models, it uses LoRA (low-rank adaptation), a lightweight fine-tuning method that adapts a model by training small additional weights on new material. In practice, this means contributors can create text and image models grounded in their own references, archives, and sensibilities.
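The RAG pattern described above can be sketched in a few lines: retrieve the passages from a curated archive most relevant to a question, then prepend them to the prompt so the model answers in that archive's terms. The archive contents, the word-overlap scoring, and the prompt format here are illustrative assumptions, not the project's actual pipeline.

```python
def score(query: str, passage: str) -> int:
    """Crude relevance score: number of words shared with the query."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, archive: list[str], k: int = 2) -> list[str]:
    """Return the k archive passages most similar to the query."""
    return sorted(archive, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, archive: list[str]) -> str:
    """Ground the prompt in retrieved context instead of retraining."""
    context = "\n".join(retrieve(query, archive))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Stand-in archive of curated passages (invented for illustration).
archive = [
    "Oral histories describe the lagoon as a living commons.",
    "Restoration practice begins with listening to local materials.",
    "Climate adaptation in Venice is negotiated street by street.",
]
prompt = build_prompt("How does restoration relate to the lagoon?", archive)
print(prompt)
```

A production system would swap the word-overlap score for embedding similarity and pass the assembled prompt to an actual language model, but the structure is the same: the archive shapes the answer without touching the model's weights.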

From there, the project moves to the harder and more interesting question: how these models communicate. Instead of forcing everything through one dominant system, Collective Reasoning experiments with architectures where specialised models interact, contribute insights, and build responses together. This led to the development of the i2i protocol, a proposal for communication between intelligences that does not assume in advance what those intelligences are or how they should relate.
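The general pattern behind that kind of exchange can be sketched as follows: several contributor models respond to the same question, each from its own perspective, and the replies are composed side by side rather than collapsed into one voice. The message fields, model stubs, and composition rule here are hypothetical illustrations of the pattern, not the i2i protocol itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    sender: str    # which situated model produced this contribution
    modality: str  # e.g. "text" or "image"
    content: str   # the contribution itself

def ask_all(question: str, models: dict[str, Callable[[str], str]]) -> list[Message]:
    """Send the same question to every model; keep each reply distinct."""
    return [Message(name, "text", fn(question)) for name, fn in models.items()]

def compose(replies: list[Message]) -> str:
    """Assemble a response that preserves each perspective by name,
    rather than merging everything into a single anonymous output."""
    return "\n".join(f"[{m.sender}] {m.content}" for m in replies)

# Stand-in contributor models: plain functions instead of real models.
models = {
    "archive-model": lambda q: f"From the archives: {q} has precedents.",
    "visual-model": lambda q: f"Visually, {q} suggests layered forms.",
}
result = compose(ask_all("flood adaptation", models))
print(result)
```

The design choice worth noting is in `compose`: the contributions stay attributed and distinct, which is the structural opposite of a system that averages its sources into one authoritative answer.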

The result is a distributed reasoning system: multiple models, different modalities, shared interfaces, and knowledge produced through exchange rather than central command.

Why It Matters

Most AI systems are built around concentration. One model, one logic, one centre of authority. Even when they appear flexible, they often reproduce the same old move: absorb difference, standardise it, and return it as a single authoritative output.

Collective Reasoning pushes in the opposite direction. It starts from the idea that knowledge is partial, situated, and often held in tension. Oral histories, architectural practice, climate knowledge, visual language, memory, and marginalised cultural traditions do not need to be melted down into one neutral machine voice to become usable. They can stay distinct and still think together.

That matters politically as much as technically. The project questions who gets to build intelligence, whose archives train it, and what kinds of relation AI systems make possible. It treats computation not as a neutral tool, but as a space where power is organised: through infrastructure, through standards, through assumptions about whose intelligence counts.

Collective Reasoning proposes something else: AI as a social and ethical arrangement, where many intelligences can coexist, speak, and reason without being forced into sameness. Not a machine that knows everything, but a system that makes space for plurality.