Program Analysis Product · DARPA program offices

DARPA program analysis sites: evidence you can audit

For program offices at DARPA, Syntheos delivers dedicated per-program analytical sites. Every claim on every page ties to a YAML evidence ledger that captures the exact database query, the expected result, and the confidence level — so the program office can answer any challenge with the underlying data in hand.


The situation

Federal R&D funders get asked "how do you know" on a regular basis. By program reviewers. By Congressional staff. By the IG. Sometimes by the press. The usual answer points at a consultant's memo or an analyst's deck. That's a claim without a receipt.

When the stakes are high and the review is hostile, a memo isn't enough. A program office needs to be able to open the source of a number in front of the person asking and watch it reproduce.

What we built

For program offices at DARPA, we built dedicated analytical sites. One site per program. Each site walks a reader through the program's global competition picture, research security exposure, citation depth and engagement patterns, commercial translation activity, field emergence, case studies, and methods.

Every quantitative sentence on every page is backed by an entry in a YAML evidence ledger colocated with the section component that displays it. Each entry captures:

  • The exact claim text as it appears in the UI.
  • A confidence level: HIGH, MODERATE, or LOW.
  • A reproducible source. This can be a SQL query against the warehouse, a path to a data file with the code to verify, or an external URL.
  • The expected results the verification should return.
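A ledger entry of this shape might look like the following sketch. The field names and values are illustrative assumptions, not the actual schema:

```yaml
# Hypothetical evidence-ledger entry — field names and values are
# illustrative, sketched from the description above.
- claim: "Program publications drew 312 highly influential citations through FY2024."
  confidence: HIGH
  source:
    type: sql
    query: |
      SELECT COUNT(*) FROM citations
      WHERE program_id = 'EXAMPLE'
        AND is_highly_influential;
  expected:
    count: 312
```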

Before anything ships, a verification skill walks the entire site, runs every query, checks every file path, re-executes every calculation, and compares the outputs to what the ledger expects. A discrepancy stops the build. Either the claim changes or the ledger gets corrected.
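The verification pass described above can be sketched in a few lines. This is a minimal illustration, not the actual tooling; the names (`run_source`, `verify_ledger`, the `"constant"` source type) are assumptions standing in for the real query runners:

```python
# Sketch of the pre-ship verification pass: re-execute every ledger
# entry's source and compare the result to what the ledger expects.
# A non-empty failure list is what "a discrepancy stops the build" means.

def run_source(source):
    # In the real pipeline this would run a SQL query, read and verify
    # a data file, or fetch an external URL. Here a stub "constant"
    # type stands in for all of those.
    if source["type"] == "constant":
        return source["value"]
    raise NotImplementedError(f"unsupported source type: {source['type']}")

def verify_ledger(entries):
    """Return (claim, expected, actual) tuples for every mismatch."""
    failures = []
    for entry in entries:
        actual = run_source(entry["source"])
        if actual != entry["expected"]:
            failures.append((entry["claim"], entry["expected"], actual))
    return failures

# Two illustrative entries: one that reproduces, one that is stale.
ledger = [
    {"claim": "Example claim", "confidence": "HIGH",
     "source": {"type": "constant", "value": 312}, "expected": 312},
    {"claim": "Stale claim", "confidence": "LOW",
     "source": {"type": "constant", "value": 7}, "expected": 9},
]

failures = verify_ledger(ledger)
```

Only the stale entry lands in `failures`; at that point either the claim changes or the ledger gets corrected, exactly as described above.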

How it's defensible

The defensibility isn't a marketing position. It's the mechanism.

A reviewer questions a number on page four. The program office opens the ledger entry next to it. The entry points at a table in the warehouse. The query runs in front of the reviewer. The number reproduces. If it doesn't, the claim comes off the site.

We also enforce rules about what specific metrics actually mean. A "highly influential citation" in Semantic Scholar's algorithm measures how substantively a citing paper engages with a cited source. It does not measure the cited paper's quality. It does not measure the citing institution's research impact. Mixing those up produces confident conclusions that are wrong in strategically important directions. The analysis skills we use on these sites enforce the correct interpretation and reject sentences that quietly flip them.

The same discipline applies to defense and intelligence terminology. "Threat," "risk," and "adversary" have formal meanings in the IC. We don't use them as generic labels for citation patterns. A bibliometric analysis observes what the data shows and calls that "a technology-transfer concern" or "a competition pattern," not "a risk assessment."

What it replaced

A PDF that's stale the day it's printed and can't be audited line by line. Or a live dashboard that produces numbers without telling you where they came from. Either way, the program office spends its credibility defending claims it can't independently reproduce.

What a similar engagement looks like

Per program, these run 10 to 16 weeks. From the client side, we need a set of seed publications, program metadata, and subject-matter-expert access. We deliver a deployed analytical site, the evidence ledger YAMLs, the data pipeline that refreshes the citation and bibliometric data, and a defined process for making claim changes so the site stays honest after the engagement ends.

It's a fit when you're a federal R&D program office (or equivalent) that has to defend program decisions to people who can and will challenge the underlying evidence. It's overkill if your work never leaves the inside of the program office.

For internal champions

Making the case inside your organization?

We've written a two-page business case for this engagement shape. Executive summary, problem statement, deliverables, risks, success metrics, investment range. Read it in the browser or print it to PDF and forward.

Read the business case

Initiate Contact

Ready to transform your decision architecture?

Tell us about the decision you're trying to improve. We'll schedule a briefing with our principals to understand your environment and explore a potential fit.

Schedule a Briefing