Anticipatory Intelligence · Creative Intervention Design

Humans in conflict
are predictable.

Let's use that — before things break.

Presage AI is a proposal: pair AI-driven detection of conflict signals with creative intervention design, then connect each warning to the actors best placed to defuse it, including the ones traditional mediators overlook.

§ 01  Premise

Conflict rarely erupts. It accumulates. Rhetoric hardens before borders close. Trust frays before institutions break. Grievance narratives sharpen before the first shot.

Each escalation leaves a signature in language, sentiment, migration, economics, and the rhythm of public discourse, and most of those signatures rhyme with history. AI's strongest faculty is recognising rhyme. The leverage point is clear: douse the spark at the hearth, before it becomes a fire.

§ 02  Recognising rhyme

A small demonstration.

Two historical signatures are hidden beneath a synthetic present. Trace the timeline. Where do they rhyme?

Pattern · demonstration (interactive; illustrative, not real data). Move along the timeline: a thirty-sample window from the present is compared against two historical signatures. Most signatures rhyme with history.

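The window comparison in the demonstration can be sketched in a few lines. This is an illustrative assumption, not Presage AI's actual detector: a present-day series is scanned with a thirty-sample window, each window is normalised, and its correlation against each historical signature is the "rhyme" score. All names here are hypothetical.

```python
import math

def zscore(xs):
    """Normalise a window to zero mean and unit variance."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    sd = math.sqrt(var) or 1.0  # guard against a flat window
    return [(x - mean) / sd for x in xs]

def rhyme_score(window, signature):
    """Correlation between a present-day window and a historical
    signature of the same length; 1.0 is a perfect rhyme."""
    a, b = zscore(window), zscore(signature)
    return sum(x * y for x, y in zip(a, b)) / len(a)

def best_rhyme(series, signatures, width=30):
    """Slide a `width`-sample window over the series and return the
    (position, signature index, score) of the strongest rhyme.
    Assumes each signature is at least `width` samples long."""
    best = (None, None, -2.0)
    for t in range(len(series) - width + 1):
        window = series[t:t + width]
        for i, sig in enumerate(signatures):
            s = rhyme_score(window, sig[:width])
            if s > best[2]:
                best = (t, i, s)
    return best
```

The choice of plain correlation keeps the sketch honest about what "rhyme" means here: shape similarity after normalisation, not magnitude.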
§ 03  What Presage AI is

Three things, in sequence.

01

Pattern recognition

Presage AI detects the patterns that precede conflict — in language, sentiment, economic indicators, migration, and the rhythm of public discourse — across languages and sources, with human-in-the-loop validation.

02

Factor & actor analysis

Alongside every signal, Presage AI explains why: the pressure points, thickening grievances, influence networks, and the historical conflicts this one rhymes with.

03

Creative intervention design

Presage AI's distinctive contribution: risk patterns translated into concrete intervention options, including the non-obvious: commercial actors with a self-interest in stability, credible influencers, unexpected third parties, new forums.

§ 04  Responsible by design

Governance built in, not bolted on.

An AI system that operates on conflict signals carries real risks: surveillance misuse, algorithmic bias, narrative asymmetry, potential weaponisation. Presage AI's response is structural, not ornamental.

Cross-linguistic sentiment parity is not a luxury feature. Biased ears hear biased rhymes.
— principle
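The parity principle can be made operational as a simple audit metric. A hypothetical sketch, not Presage AI's actual audit procedure: score the same parallel corpus with each language's sentiment model, then flag the system when the largest gap in mean score between any two languages exceeds a tolerance. Function names and the tolerance value are assumptions.

```python
def parity_gap(scores_by_language):
    """Largest pairwise gap in mean sentiment across languages that
    scored the same parallel corpus. 0.0 means perfect parity."""
    means = {lang: sum(s) / len(s) for lang, s in scores_by_language.items()}
    vals = list(means.values())
    return max(vals) - min(vals)

def check_parity(scores_by_language, tolerance=0.05):
    """True when no language's model reads the shared texts markedly
    more negatively or positively than another's."""
    return parity_gap(scores_by_language) <= tolerance
```

A real audit would also compare score distributions, not just means, but even this minimal check catches the failure the principle names: one language's ear hearing systematically darker rhymes.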
§ 05  Roadmap 2026–2027

Foundation, sandbox, pilot.

Q2–Q3 2026

Foundation

Appoint Programme Lead. Engage pilot consortium. Secure seed funding. Define data protocols, ethical review framework, governance.

Q4 2026 – Q1 2027

Sandbox

Build a technical sandbox integrating VIEWS forecasts, ACLED events, selected sentiment streams. Prototype the intervention engine. First independent ethics and bias audit.
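One minimal shape for that integration step, with field names and the country-month key chosen purely for illustration: align VIEWS forecasts, ACLED event counts, and sentiment aggregates on a shared key before any pattern detection runs, keeping only the keys every feed covers.

```python
def join_by_key(views, acled, sentiment):
    """Fuse three feeds keyed by a shared (country, month) tuple.
    Keys missing from any feed are dropped, so every fused record
    is complete; each value becomes one record for the detector."""
    keys = set(views) & set(acled) & set(sentiment)
    return {
        k: {"forecast": views[k], "events": acled[k], "sentiment": sentiment[k]}
        for k in sorted(keys)
    }
```

Dropping incomplete keys is a deliberate sandbox simplification; a production pipeline would more likely keep partial records and mark the gaps.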

Q2–Q4 2027

Pilot operations

Deploy with 3–5 partner organisations. Monthly intervention briefings for the pilot context. Publish first transparency report. Secure operational funding for scale.

§ 06  About

Richard Wolfe — Amsterdam.

Richard Wolfe is an Amsterdam-based specialist in digital collaboration, personal productivity, and the practical application of AI. His career spans technology consulting, venture building, and executive education — including work with Nyenrode Business University on digital transformation.

Presage AI began from a single conviction: the AI capabilities now available to commercial enterprises should be directed, with equal rigour and greater urgency, toward preventing human suffering.

Presage AI is, at its heart, an invitation.

The tools exist. The data exists. What remains is the orchestration, and a number of careful people willing to build it alongside those already at work.