Decision systems
that survive an audit.

Syntheos designs and ships working platforms for federal R&D funders, defense leaders, research institutions, and universities. Every conclusion traces back to its source. Our systems run today at DARPA program offices, the Andrew W. Marshall Foundation, and Georgetown.

Each engagement ends with a working system your team runs. Not a deck that ages out in ninety days.

0M+ Publications Indexed

Bibliometric coverage of the global scientific literature.

0+ Verified Claims

Every quantitative claim on every Syntheos site is backed by a reproducible source query. Open the ledger and the number reproduces live.

0+ Peer-Reviewed Publications

Authored by the Syntheos team across science mapping, decision intelligence, computer science, and research evaluation.

0 Black Boxes

Every output traces to source evidence through W3C PROV lineage. No opaque models. No silent fallbacks.

Trusted by leaders in defense, healthcare, research, and public policy

DARPA
Digital Science
RTI International
The Andrew W. Marshall Foundation
University of Michigan
The American Board of Anesthesiology
Center for Security and Emerging Technology
Noble Reach Foundation
Government & Defense

Built for federal acquisition.

Decision intelligence systems designed from the ground up for classified environments, Congressional oversight, and the DoD AI mandate. We speak your language because we've done this work.

View Defense Capabilities
SAM.gov Registered • NIST 800-171 In Progress • DFARS 7012 In Progress • Responsible AI by Design

Past Performance

DARPA, CSET, Marshall Foundation, Noble Reach

Mission Areas

Competitive Assessment, Technology Scouting, Decision Architectures

Acquisition Pathways

Direct Award, OTAs, BAAs, CSOs, Subcontractor on Major IDIQs

NAICS: 541611 • 541715 • 541512 • 541690
Request a briefing →

What we build

Four shapes of working system.

Different decisions need different infrastructure. These are the four shapes we build well, each one grounded in a system we've already shipped.

01

Program Analysis Products

Fixed-scope analytical sites where every quantitative claim ships with a reproducible query and a confidence level. Built for R&D funders who have to defend portfolio decisions on the record.

Proof: DARPA program offices

  • Deployed analytical site, one per program
  • YAML evidence ledger behind every claim
  • Reproducible SQL queries against your warehouse
  • Python refresh pipeline for standing data
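To make the claim-to-query pairing concrete, a ledger entry might look like the following. This is an illustrative sketch only; the field names, claim text, and table schema are invented for the example and are not the actual Syntheos ledger format.

```yaml
# Hypothetical evidence-ledger entry (illustrative schema, not the real one).
claim_id: program-output-trend
statement: "Program publication output grew year over year"
confidence: high
source_query: |
  SELECT EXTRACT(YEAR FROM published_at) AS yr,
         COUNT(*) AS pubs
  FROM publications
  WHERE program_id = 'example-program'
  GROUP BY yr
  ORDER BY yr;
last_verified: 2024-06-01
```

Re-running the query against the warehouse reproduces the number behind the claim, which is what lets the ledger be opened and checked live.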
02

Decision Platforms

A nine-stage assessment pipeline you run yourself. Every stage carries W3C PROV lineage, so any final recommendation traces back through the whole chain to the source evidence. Built for standing net-assessment and competitive-intelligence work.

Proof: Deepfield

  • Modular pipeline, nine stages, tuned to your domain
  • Knowledge graph, MCTS reasoning, wargaming
  • Adaptive compute budgets per stage
  • Full evidence lineage on every recommendation
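What stage-to-stage lineage buys you can be sketched in a few lines of Python. The example below is a minimal illustration, not the production pipeline; the artifact names are invented. Each artifact records what it was derived from (PROV's `wasDerivedFrom` relation), so tracing a recommendation is just a walk back to the leaf evidence.

```python
# Minimal PROV-style lineage sketch. Each artifact lists its upstream
# sources; artifacts with no upstream edges are the source evidence.
# All names are illustrative.
derived_from = {
    "recommendation":  ["assessment"],
    "assessment":      ["scenario-run", "claim-set"],
    "scenario-run":    ["knowledge-graph"],
    "claim-set":       ["knowledge-graph"],
    "knowledge-graph": ["source-doc-1", "source-doc-2"],
}

def trace_sources(artifact: str) -> set[str]:
    """Walk derivation edges back to leaf artifacts (source evidence)."""
    parents = derived_from.get(artifact, [])
    if not parents:          # no upstream edges: this is a source
        return {artifact}
    sources: set[str] = set()
    for parent in parents:
        sources |= trace_sources(parent)
    return sources

print(sorted(trace_sources("recommendation")))
# ['source-doc-1', 'source-doc-2']
```

The point of recording lineage at every stage is exactly this property: any final output can be resolved, mechanically, to the documents it ultimately rests on.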
03

Research Intelligence Tools

Knowledge-graph research consoles over specialist corpora. Hybrid retrieval, chat with citation badges, and a visual distinction between verified archive material and AI-generated inference. Built for institutions that want researchers to think with a corpus, not just read it.

Proof: Andrew W. Marshall Foundation

  • Knowledge graph built from your documents
  • Hybrid vector, full-text, and graph retrieval
  • Verified versus inferred node labeling
  • Saved investigative threads and filters
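One standard way to combine ranked results from separate vector, full-text, and graph retrievers is reciprocal rank fusion. The sketch below shows the idea; the document IDs and the `k` constant are assumptions for illustration, and this is not necessarily the fusion method in the deployed console.

```python
# Illustrative reciprocal rank fusion (RRF): each retriever contributes
# 1/(k + rank) per document, and documents are re-sorted by total score.
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits   = ["doc-a", "doc-b", "doc-c"]   # dense-embedding results
fulltext_hits = ["doc-b", "doc-a", "doc-d"]   # keyword-search results
graph_hits    = ["doc-b", "doc-c"]            # graph-traversal results

print(rrf_merge([vector_hits, fulltext_hits, graph_hits]))
# ['doc-b', 'doc-a', 'doc-c', 'doc-d']
```

A document that no single retriever ranks first can still win overall if it appears near the top of several lists, which is the behavior you want from hybrid retrieval.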
04

Human-AI Teaming Systems

Orchestrated AI assistance with human judgment in the loop. A delegation contract constrains what the AI is allowed to do, and database-level phase gates enforce it. Built for learning environments and decision-critical work where the AI must help but never decide.

Proof: Georgetown SEST

  • Agent orchestration across fast, deep, and QA tiers
  • Delegation contract constraining AI authority
  • Database-enforced phase gates
  • Voice and text surfaces, instructor tooling
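The delegation-contract idea can be sketched simply. In the real system the gate is enforced at the database layer; the Python below collapses that to a plain lookup, and the phase and action names are invented for illustration.

```python
# Hedged sketch of a delegation contract: the set of actions the AI may
# take depends on the current phase, and the gate runs before any action
# executes. Phase/action names are illustrative, not the real contract.
CONTRACT: dict[str, set[str]] = {
    "drafting": {"suggest_sources", "summarize"},
    "review":   {"summarize"},
    "decision": set(),   # the AI may assist earlier, but never decides
}

def gate(phase: str, action: str) -> bool:
    """Permit an action only if the contract allows it in this phase."""
    return action in CONTRACT.get(phase, set())

print(gate("drafting", "summarize"))   # True
print(gate("decision", "summarize"))   # False
```

Because the check keys on phase, the constraint tightens as work moves toward the decision itself, which is the property a teaming system in decision-critical work needs to guarantee.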

Different problems, different infrastructure. Each engagement ends with a working system your team runs, not a document your team files.

Industries

Service Areas

Tech Transfer | R&D Decisions

Portfolio decisions in tech transfer are inherently high-uncertainty. You can't predict which bets will pay off, but you can make them systematically, document your reasoning, and move faster than the backlog allows.

Explore capabilities

Defense & National Security

Competitive assessment demands synthesis across military, economic, and technological domains over long time horizons. Traceable reasoning. Explicit assumptions you can challenge, test, and refine.

Explore capabilities

Policy & Research Funding

Research outcomes are diffuse, long-term, and hard to attribute. Traceable pathways from funding to real-world impact, with evidence that holds up to auditors, oversight, and the press.

Explore capabilities

Infrastructure & Public Risk

You can't fix everything. The job is to prioritize rationally and document why, so that when regulators, elected officials, or the public ask questions, you have evidence and reasoning, not just good intentions.

Explore capabilities

Healthcare & Biomedical Research

Clinicians don't need more alerts or dashboards. They need to know what changed, why it matters, and where to verify. Delivered where care happens, not in another system to check.

Explore capabilities

Custom Decision Systems

Some decisions don't fit a playbook. When the stakes are high and off-the-shelf doesn't work, we build around the decision itself: who owns it, what evidence it requires, who will scrutinize the outcome. Human judgment stays in the loop. The reasoning stays visible. Your data stays yours.

Explore capabilities

“Every system we build has to survive the moment someone asks ‘how do you know?’ That's the whole business.”

Caleb Smith

Founder & CEO

Our Approach

From problem formulation to decision-ready intelligence: we transform strategic ambiguity into analytical clarity.

01

Frame the Contest

Most strategic failures begin with the wrong question. We start by structuring the problem. We identify what's actually decisive, surface hidden assumptions, and build analytical architectures tailored to the specific challenge.

02

Deploy Purpose-Built Systems

We've built platforms like Deepfield that combine knowledge graphs, MCTS reasoning, and bibliometric analysis, but every domain is different. We adapt our methods to the problem, not the problem to our methods. Sometimes that means original research, sometimes simulation, sometimes deep technical scouting.

03

Deliver Decision-Ready Intelligence

We don't stop at insights. Our systems deliver actionable courses of action with full evidence lineage, tested assumptions, and quantified confidence. This is the analytical infrastructure that enables decisive action in complex environments.

FAQ

Questions From Leaders Who've Been Here Before

These are the questions that come up when we talk to leaders who've seen enough vendor pitches to be skeptical. They're worth asking of anyone in this space, including us.

AI projects often fail because they're framed as technical projects when they're actually organizational change projects. Someone buys a platform, a team builds models, the models perform well on test data, and then nothing changes. The decisions get made the same way they always did, by the same people, using the same inputs. The AI sits on the side, technically functional and organizationally irrelevant.

The usual culprit is starting from the wrong end. "We have all this data, what can we do with it?" is a recipe for building impressive demos that don't matter. So is "leadership wants an AI initiative." These projects optimize for what's measurable by technologists: model accuracy, processing speed, data coverage. But the people making decisions don't care about F1 scores. They care about whether they can trust the output, explain it to their boss, and defend it when things go wrong.

We start from the other end: a specific decision that matters, made by specific people, who answer to specific stakeholders. What evidence do they need? What does defensible look like in their world? Where does their current process break down? The technical architecture follows from those answers, not the other way around.

This is less exciting than a general-purpose AI platform. It's slower to show results in a demo. But it's the difference between a system that gets used and a system that gets abandoned.

Initiate Contact

Ready to transform your decision architecture?

Tell us about the decision you're trying to improve. We'll schedule a briefing with our principals to understand your environment and explore a potential fit.

Schedule a Briefing