Deepfield: a modular assessment platform
An eleven-stage strategic assessment pipeline that turns a query into a defensible course of action with full evidence lineage.
See the engagement

Syntheos researches, designs, and builds decision systems for problems that don't yet have one. The methods are our own, grounded in decades of peer-reviewed work across the science of science, AI policy, research security, and computer security and privacy. The team comes from federal R&D, defense, intelligence, and biomedical research. The systems run today at DARPA, the Andrew W. Marshall Foundation, and Georgetown.
Each engagement ends with a working system your team runs. Not a deck that ages out in ninety days.
The bibliographic record behind our science-of-science work covers the global scientific literature.
Every deployed system ships with an evidence ledger. Each claim links to the source query that produced it, and the figure reruns live against the warehouse on demand. Auditors can pull any number at random. Regulators can verify the math without our help.
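The ledger mechanism described above can be sketched in a few lines. Everything here is illustrative (the table names, columns, and sample figures are hypothetical, not Syntheos's actual schema); the point is only the shape: a claim stores the query that produced it, and an audit reruns that query against the warehouse.

```python
# Minimal sketch of an evidence ledger: each published claim stores the
# query that produced it, so any figure can be rerun and checked on demand.
# All names and data here are illustrative, not Syntheos internals.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grants (year INTEGER, amount REAL);
    INSERT INTO grants VALUES (2022, 1.5), (2023, 2.0), (2023, 0.5);
    CREATE TABLE ledger (claim TEXT, source_query TEXT, recorded_value REAL);
""")

def record_claim(claim, query):
    """Publish a figure and log the exact query that produced it."""
    value = conn.execute(query).fetchone()[0]
    conn.execute("INSERT INTO ledger VALUES (?, ?, ?)", (claim, query, value))
    return value

def audit(claim):
    """Rerun the stored query and compare against the recorded figure."""
    query, recorded = conn.execute(
        "SELECT source_query, recorded_value FROM ledger WHERE claim = ?",
        (claim,)).fetchone()
    return conn.execute(query).fetchone()[0] == recorded

record_claim("2023 funding totaled $2.5M",
             "SELECT SUM(amount) FROM grants WHERE year = 2023")
print(audit("2023 funding totaled $2.5M"))  # True
```

An auditor who pulls a number at random is just calling `audit` on it: the stored query, not the vendor, is the authority for the figure.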
Authored by the Syntheos team across science mapping, AI policy, computer security and privacy, and research security — including work in top-tier venues like Nature, PNAS, USENIX Security, ICLR, FAccT, and Quantitative Science Studies.
Every output traces to source evidence through W3C PROV lineage. No opaque models. No silent fallbacks.

Every engagement ends with a working system, not a slide deck. Each of the four below solved a different shape of problem.
An eleven-stage strategic assessment pipeline that turns a query into a defensible course of action with full evidence lineage.
See the engagement

Per-program analytical sites where every quantitative claim links to a reproducible source and a stated confidence level.

See the engagement

A knowledge-graph research console that opens Andrew Marshall's Office of Net Assessment tradition to a new generation of strategists.

See the engagement

A deployed teaming platform where an AI orchestrator dispatches specialized agents, but phase gates and a delegation contract keep every real judgment in student hands.

See the engagement

Decision systems built to survive Congressional oversight and meet the standards the DoD AI office is setting.

View Defense Capabilities

DARPA, CSET, Marshall Foundation, Noble Reach
Competitive Assessment, Technology Scouting, Decision Architectures
Direct Award, OTAs, BAAs
Every Syntheos engagement is built for one decision, one institution, one threat model. The four below are not productized offerings; they are case studies of original systems we built for clients who couldn't find what they needed on the market. The shape your engagement takes will follow from the decision you have to make and defend, not from these four.
Fixed-scope analytical sites where every quantitative claim ships with a reproducible query and a confidence level. Built for R&D funders who have to defend portfolio decisions on the record.
Proof: DARPA program offices
An eleven-stage assessment pipeline you run yourself. Every stage carries W3C PROV lineage, so any final recommendation traces back through the whole chain to the source evidence. Built for standing net-assessment and competitive-intelligence work.
Proof: Deepfield
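The stage-by-stage lineage that card describes can be pictured with plain objects. This is a hand-rolled sketch in the spirit of W3C PROV's wasGeneratedBy/wasDerivedFrom relations, not Deepfield's implementation; the stage names and data are invented for illustration.

```python
# Sketch of stage-by-stage lineage in the spirit of W3C PROV: each stage's
# output records which activity generated it (prov:wasGeneratedBy) and which
# inputs it came from (prov:wasDerivedFrom), so a final recommendation can be
# walked back to source evidence. Stage names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Entity:
    value: object
    generated_by: str = "source"                       # prov:wasGeneratedBy
    derived_from: list = field(default_factory=list)   # prov:wasDerivedFrom

def run_stage(name, fn, *inputs):
    """Run one pipeline stage and attach its provenance to the output."""
    out = fn(*(e.value for e in inputs))
    return Entity(out, generated_by=name, derived_from=list(inputs))

def lineage(entity, depth=0):
    """Walk the derivation chain back to the source evidence."""
    lines = ["  " * depth + f"{entity.generated_by}: {entity.value!r}"]
    for parent in entity.derived_from:
        lines += lineage(parent, depth + 1)
    return lines

evidence = Entity("raw report text")
claims = run_stage("extract_claims", lambda t: ["claim A"], evidence)
rec = run_stage("recommend", lambda c: f"act on {c[0]}", claims)
print("\n".join(lineage(rec)))
```

With eleven such stages chained, `lineage` on the final recommendation prints the whole derivation down to the source documents, which is what makes the recommendation auditable.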
Knowledge-graph research consoles over specialist corpora. Hybrid retrieval, chat with citation badges, and a visual distinction between verified archive material and AI-generated inference. Built for institutions that want researchers to think with a corpus, not just read it.
Proof: Andrew W. Marshall Foundation
Orchestrated AI assistance with human judgment in the loop. A delegation contract constrains what the AI is allowed to do, and database-level phase gates enforce it. Built for learning environments and decision-critical work where the AI must help but never decide.
Proof: Georgetown SEST
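The delegation-contract idea can be sketched as a gate check. The deployed platform enforces its gates at the database level; this illustration is application-level Python with an invented contract, purely to show the shape of the constraint.

```python
# Sketch of a delegation contract enforced as a phase gate: the AI may act
# freely in early phases, but actions reserved for human judgment are
# rejected. Contract contents and names are hypothetical; the real system
# enforces this at the database level rather than in application code.
CONTRACT = {
    "research": {"ai_may": {"summarize", "retrieve"}},
    "decision": {"ai_may": {"summarize"}},   # judgment stays human
}

class GateError(Exception):
    pass

def ai_act(phase, action):
    """Allow the AI's action only if the contract permits it in this phase."""
    if action not in CONTRACT[phase]["ai_may"]:
        raise GateError(f"AI may not '{action}' during {phase}")
    return f"AI performed {action}"

ai_act("research", "retrieve")       # allowed
try:
    ai_act("decision", "retrieve")   # blocked by the gate
except GateError as e:
    print(e)
```

The design point is that the gate is declarative: what the AI may do is data that can be reviewed and audited, not logic buried in prompts.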
Different problems, different infrastructure. Each engagement ends with a working system your team runs, not a document your team files.
These are the high-risk decision domains the team came up through. The shape of the work changes from one to the next; the job doesn't: build the system that helps a particular decision get made well and survive scrutiny.
Federal funders, agency program offices, and the R&D portfolios that have to be defended to Congress, the press, and the next administration. Quantitative portfolio analysis, science mapping, growth-trajectory forecasting, and AI-assisted research-opportunity discovery brought to the call before it's made and to the audit that follows.
Explore capabilities

Long-horizon research investment, archive intelligence, and program design for foundations whose decisions have to outlast their staff and define a field. We've built the systems behind specialist corpora, mapped funding portfolios across decades, and modeled where a single grant moves a research community.

Explore capabilities

Translational research forecasting, research-funding portfolios, and clinical-evidence synthesis for medical schools, biomedical agencies, and clinical specialty boards. Built on years of running a research-intelligence operation inside an academic medical center.

Explore capabilities

AI research-trend analysis, research-security assessment, and long-horizon emerging-technology forecasting informed by years of work at the institutions that set this policy. The vocabulary, the methodology, and the relationships are ours, not borrowed.

Explore capabilities

Competitive assessment demands synthesis across military, economic, and technological domains over long time horizons. Traceable reasoning. Explicit assumptions you can challenge, test, and refine.

Explore capabilities

Some decisions don't fit a playbook. When the stakes are high and off-the-shelf doesn't work, we build around the decision itself: who owns it, what evidence it requires, who will scrutinize the outcome. Human judgment stays in the loop. The reasoning stays visible. Your data stays yours.

Explore capabilities

Syntheos is a small firm by design. Every engagement is staffed by principals: people who came here from the places where high-stakes decisions are actually made.
The Office of Naval Research. The Office of the Secretary of Defense. The MacArthur Foundation. Sandia. The Naval Research Laboratory. CSET. The University of Michigan Medical School. Between us we have decades of peer-reviewed work in science mapping, research evaluation, science policy, AI policy, computer security and privacy, and research security.
You don't get handed off to a junior team. You get the people whose names are on the papers.

Founder & CEO
Before architecture, the investigation. We study the decision your team has to make and defend, its evidence base, the failure modes it has historically been caught by, what the literature already settles and what it doesn't. The work product belongs to you.
The architecture follows from the research, not from a catalog. A research funder might need a bibliometric pipeline against the portfolio. An archive might need a knowledge graph over the corpus. A net-assessment team might need an eleven-stage workflow with provenance lineage end to end. We design the system that fits the decision, not the other way around.
We build it in your environment, on your data, against the security posture your auditors already recognize. Every claim the system produces traces back through W3C PROV lineage to the source evidence. The methods stay yours when we're done.
The engagement ends when your team is running the system without us. We document the pipelines, train your operators, and stay close enough by phone to fix anything broken in the first month. After that, the system is yours, the data is yours, and the decisions made on top of it are yours.
These are the questions that come up when we talk to leaders who've seen enough vendor pitches to be skeptical. They're worth asking of anyone in this space, including us.
AI projects often fail because they're framed as technical projects when they're actually organizational change projects. Someone buys a platform, a team builds models, the models perform well on test data, and then nothing changes. The decisions get made the same way they always did, by the same people, using the same inputs. The AI sits on the side, technically functional and organizationally irrelevant.
The usual culprit is starting from the wrong end. "We have all this data, what can we do with it?" is a recipe for building impressive demos that don't matter. So is "leadership wants an AI initiative." These projects optimize for what's measurable by technologists, like model accuracy, processing speed, and data coverage. Decision-makers care about whether they can trust an output, explain it to their boss, and defend it when something goes wrong. F1 scores are not on that list.
We start from the other end. A specific decision that matters, made by specific people, who answer to specific stakeholders. What evidence do they need? What does defensible look like in their world? Where does their current process break down? The technical architecture follows from those answers, not the other way around.
This is less exciting than a general-purpose AI platform. It's slower to show results in a demo. But it's the difference between a system that gets used and a system that gets abandoned.
Tell us about the decision you're trying to improve. We'll schedule a briefing with our principals to understand your environment and see whether the fit is right.
Schedule a briefing