Insights & Perspectives

Rigorous research on the forces shaping the future.

Analysis and argument on AI decision-making, institutional risk, and the gap between what systems promise and what they actually do.

The Inverse Confidence Law
Risk Management

ChatGPT cited six fake cases in Mata v. Avianca with the same confidence it would have used for real ones. The verbal certainty of an LLM is roughly uncorrelated with whether the answer is true.

Read article
The Ancestor's Error
Artificial Intelligence

Shumailov's Nature paper proved the mechanism in a closed loop. Ahrefs found 74% of new web pages contained AI text. The thought experiment is no longer hypothetical.

Read article
The Responsibility Fog
Risk Management

Air Canada lost a case over its chatbot for $812.02 in February 2024. That small judgment was the leading indicator for everything California's AB 316 has now made law.

Read article
The Calibration Crisis
Decision Intelligence

A 2025 CHI paper showed that human confidence aligns with AI confidence, and the alignment outlasts the tool. The error rate stays. The calibration moves.

Read article
Institutional Invalidation
Enterprise AI

In February 2024 a Hong Kong firm wired $25 million to a synthetic CFO over a deepfake video call. Every protective layer in the firm's controls had quietly become invalid before the call rang.

Read article
The Taxonomy of Silence
Decision Intelligence

Epic's sepsis prediction model missed 67% of sepsis cases at Michigan Medicine. The audit methods we built for AI cannot catch what a system never said.

Read article
The Forklift in the Weight Room
Future of Work

Ted Chiang gave a metaphor in The New Yorker in 2024. I keep finding new ways to test it on the people I work with, and I keep failing to find a counterargument.

Read article
The Fidelity Trap
Human-AI Teaming

Air France 447 and a 2025 Polish endoscopy trial point at the same trap: the more reliable the system, the more catastrophic its absence becomes.

Read article
Legibility and Its Discontents
Enterprise AI

James Scott's argument from 1998, run at the speed of inference. The map is quietly rebuilding the territory inside every firm that runs an AI summarization layer.

Read article

Initiate Contact

Ready to transform your decision architecture?

Tell us about the decision you're trying to improve. We'll schedule a briefing with our principals to understand your environment and explore a potential fit.

Schedule a Briefing
Insights | Syntheos