ARPA-H and NIH SBIR both fund health innovation, but they operate on fundamentally different philosophies. ARPA-H asks "does this solution need to exist?" and funds 10x mechanism changes through program manager evaluation with $1M-5M awards. NIH SBIR asks "is this hypothesis scientifically sound?" and funds rigorous research through study section panels with awards up to $314K.
Choosing the wrong agency wastes 40-100+ hours of application effort and months of calendar time.
If you're a health tech founder trying to figure out which agency to target, this framework covers the differences that actually determine outcomes -- not the ones listed on agency websites.
What's the Real Difference Between ARPA-H and NIH SBIR?
The surface differences (award size, timeline, application format) matter less than the philosophical difference driving everything underneath.
ARPA-H operates on conviction. A single program manager reads your application and decides whether to encourage or discourage further engagement. That PM asks: "If this technology works, does it fundamentally change health outcomes for a specific population?" The evaluation maps to the Heilmeier Catechism -- 10 questions probing whether you've thought through the problem, the solution, the risks, and the impact. There's no committee vote. One person decides.
NIH SBIR operates on evidence. A study section of 15-20 scientists scores your application on five criteria (Significance, Investigator, Innovation, Approach, Environment) using a 1-9 scale where 1 is best. Two or three assigned reviewers present your application to the full panel. The panel discusses and votes. Applications in the top 20-30% of scores typically get funded.
This distinction drives everything: the language you use, how you structure your application, what evidence you lead with, and how you frame your innovation.
How ARPA-H Actually Evaluates Applications
ARPA-H organizes around four Mission Offices, each with a different health focus:
- Health Science Futures (HSF): New tools for biological discovery, molecular analysis, cellular engineering, AI applied to biology
- Proactive Health Office (PHO): Disease detection before symptoms, continuous health monitoring, early intervention
- Resilient Systems Office (RSO): Health system infrastructure -- manufacturing, supply chain, rural access, health IT
- Scalable Solutions Office (SSO): Access and equity at scale -- last-mile delivery, community health, behavioral tools
Your application routes to one Mission Office, and the PM in that office evaluates it. Submitting to the wrong Mission Office is like sending an NIH application to the wrong Institute -- technically possible, but your chances drop significantly because the PM's priorities won't align with your technology.
How NIH SBIR Actually Evaluates Applications
NIH has 27 Institutes and Centers (ICs) with SBIR programs. Your application targets a specific IC (like NCI for cancer, NHLBI for heart/lung, NIBIB for biomedical imaging) and gets assigned to a study section for review.
The study section assigns two or three reviewers who read your full application. They score independently, then present to the full panel of 15-20 scientists. The panel discusses and votes on an Overall Impact score.
Applications in the bottom half get "triaged" -- not discussed at all. The top 20-30% of discussed applications typically get funded, though paylines vary by IC. NCI's payline is typically tighter than NIBIB's, for example.
The key insight: your application needs to survive scrutiny from multiple scientists with different expertise areas. A fatal flaw in any one criterion can sink an otherwise strong application.
ARPA-H vs NIH SBIR: Side-by-Side Comparison
| Dimension | ARPA-H | NIH SBIR |
|---|---|---|
| Funding mechanism | Other Transaction (OT) -- milestone-based agreement | R43 Grant -- standard NIH grant |
| Award amount | $1M-5M base period (up to $25M+ with options) | Up to $314,363 (FY2025+ SBA cap) |
| Duration | 12-24 months base period | 6-12 months Phase I |
| Review process | Program manager evaluation | Study section panel (15-20 scientists) |
| Number of evaluators | 1 PM makes the call | 2-3 assigned reviewers + full panel vote |
| Evaluation framework | Heilmeier Catechism (10 questions) | 5 NIH criteria (Significance, Investigator, Innovation, Approach, Environment) |
| Scoring | Encourage / Discourage (binary) | 1-9 scale (1 = best) with overall impact score |
| Language culture | Conviction-based: "will demonstrate," "achieves," "non-incremental" | Hypothesis-driven: "we will test," "preliminary data suggests," "specific aims" |
| Innovation threshold | 10x mechanism change required -- incremental = automatic rejection | Innovation scored but not binary -- incremental work can score well on Approach |
| Preliminary data | Proof-of-concept helpful but not weighted as heavily | Technically optional but practically essential -- heavily weighted by reviewers |
| Health equity | Mandatory (Heilmeier Question 9) -- must address disparities | Not required in Research Strategy (may appear in Significance framing) |
| PI citizenship | Not required | Not required |
| Budget format | Milestone-based with Go/No-Go decisions | Detailed line-item (R&R) budget (up to $250K direct costs per year) |
| Submission system | ARPA-H Solutions Portal (solutions.arpa-h.gov) | eRA Commons / ASSIST |
| Resubmission | Rolling -- resubmit anytime with improvements | One A1 allowed (must address prior critique) |
| Key document | Solution Summary (6 pages) | Specific Aims (1 page) + Research Strategy (6 pages) |
| Team requirements | Technical + clinical + commercialization (three-pillar) | PI with relevant expertise + key personnel |
| Subcontracting cap | No statutory cap -- varies by OT agreement (10-30% typical range) | 33% maximum for SBIR |
How to Decide: ARPA-H or NIH SBIR for Your Health Startup?
Five questions determine which agency is the better fit. Answer honestly -- the wrong choice costs you 40-100+ hours of application effort.
Question 1: Is Your Innovation Incremental or 10x?
This is the most important question and the one founders get wrong most often.
ARPA-H funds mechanism changes -- new ways of doing something, not better versions of existing approaches. If your technology is a new modality for drug delivery, a fundamentally different sensing mechanism for early disease detection, or a new approach to clinical trial design, ARPA-H is interested.
NIH SBIR funds rigorous research including incremental improvements. If your technology is a better version of an existing diagnostic, an optimization of a known therapeutic approach, or a refinement of established methodology, NIH is the right fit.
The test: can you explain your innovation without saying "better, faster, cheaper"? If not, that's NIH territory. ARPA-H wants to hear what your technology does that nothing else can do at all.
Here's a concrete example. A company building an AI model that reads radiology scans 30% faster than existing software is improving an existing approach -- that's NIH territory. A company building a blood-based test that detects pancreatic cancer 3 years before imaging can see it is proposing a fundamentally new detection mechanism -- that's ARPA-H territory.
Question 2: How Strong Is Your Preliminary Data?
NIH SBIR reviewers weight preliminary data heavily. If you have published results, quantitative proof-of-concept data, or pilot study outcomes, NIH study sections will reward that. Applications without preliminary data rarely score well, even though it's technically not required.
ARPA-H cares more about the quality of your scientific reasoning than the volume of your data. A well-articulated mechanism with early proof-of-concept can compete against applications with years of preliminary data, because ARPA-H explicitly funds earlier-stage, higher-risk work.
Bottom line: If you have strong data, you can go either way. If you're earlier-stage with a strong scientific rationale but limited data, ARPA-H gives you a better shot.
Question 3: Is Your Primary Framing Health Outcomes or Scientific Hypothesis?
Read your own pitch. Do you naturally describe your work as "this will reduce hospital readmissions by 40% for post-surgical patients" (health outcome) or "we will test the hypothesis that biomarker X predicts complication Y in population Z" (scientific hypothesis)?
ARPA-H wants health outcomes. Impact measured in lives saved, diseases prevented, QALYs gained, or health disparities reduced. Revenue and market size are secondary.
NIH SBIR wants scientific rigor. Your central hypothesis must be testable, your aims must be independent, and your approach must address biological variables, sample sizes, and statistical methods.
Most founders have a natural tendency toward one framing. Go with the agency that matches how you already think about your work -- you'll write a stronger application.
Question 4: Do You Need More Than $314K for Phase I?
Simple math. NIH SBIR Phase I caps at $314,363. ARPA-H base period awards typically range from $1M to $5M.
If your Phase I work requires expensive equipment, large patient cohorts, multi-site coordination, or significant subcontracting, NIH's budget may not cover it. ARPA-H's milestone-based funding can support larger scopes.
If your feasibility study fits within $250K-300K in direct costs, NIH SBIR is sufficient and often faster to award.
Question 5: Do You Have a Multi-Disciplinary Team?
ARPA-H expects three-pillar coverage: technical expertise, clinical expertise, and commercialization/adoption expertise. If your team is purely scientific without a clinician or someone focused on deployment, ARPA-H reviewers will flag that gap.
NIH SBIR primarily evaluates the PI's scientific credentials and the research team's ability to execute the proposed aims. Commercial potential is evaluated in a separate Commercialization Plan document (up to 12 pages), not in the Research Strategy.
If you have a balanced team with clinical and business members, ARPA-H values that composition more directly. If your strength is scientific depth with a strong PI, NIH rewards that.
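The five questions above can be condensed into a rough screening script. This is a hypothetical sketch, not an official tool from either agency -- the function name, input fields, hard filters, and two-point threshold are all illustrative assumptions layered on top of the framework in this article.

```python
# Hypothetical screening sketch for the five-question framework above.
# All field names and thresholds are illustrative assumptions, not
# official criteria from ARPA-H or NIH.

def recommend_agency(
    is_10x_mechanism: bool,        # Q1: new mechanism vs. incremental improvement
    has_strong_prelim_data: bool,  # Q2: published or quantitative pilot data
    outcome_framed: bool,          # Q3: health-outcome vs. hypothesis framing
    phase1_budget_usd: int,        # Q4: estimated Phase I / base period cost
    has_three_pillar_team: bool,   # Q5: technical + clinical + commercialization
) -> str:
    # Hard filters first: incremental work is an automatic "no" at ARPA-H,
    # and NIH SBIR cannot fund above the cap (~$314K total cost).
    if not is_10x_mechanism:
        return "NIH SBIR"
    if phase1_budget_usd > 314_363:
        return "ARPA-H"

    # Soft signals: tally which answers lean toward ARPA-H. Limited
    # preliminary data counts for ARPA-H, which funds earlier-stage work.
    arpa_points = sum([outcome_framed,
                       has_three_pillar_team,
                       not has_strong_prelim_data])
    return "ARPA-H" if arpa_points >= 2 else "either (consider dual submission)"
```

For example, a 10x mechanism with a $2.8M budget returns `"ARPA-H"` regardless of the soft signals, while an incremental improvement returns `"NIH SBIR"` immediately. Treat the output as a starting point for the honest self-assessment the questions demand, not a verdict.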
How to Write for ARPA-H vs NIH: The Language That Wins
The fastest way to get rejected by either agency is to use the other agency's language. Here are the specific terms and patterns that matter.
ARPA-H Language
- Say "performer," not "grantee" or "investigator"
- Say "program manager," not "program officer"
- Say "base period," not "Phase I" (unless explicitly SBIR context)
- Say "will demonstrate," not "will explore" or "will investigate"
- Say "non-incremental," not just "innovative"
- Say "health impact," not "market opportunity" as primary framing
- Say "Go/No-Go," not just "success criteria"
Never say in an ARPA-H application: "hypothesis-driven," "specific aims," "exploratory study," "pilot study," "preliminary data suggests" (use "demonstrates"), "iterative improvement," "optimization"
NIH Language
- Say "we will test the hypothesis that," not "we will demonstrate"
- Say "specific aims," not "concept summary"
- Say "scientific premise," not "current landscape"
- Say "preliminary data support," not "the data proves"
- Say "rigor and reproducibility," not just "quality assurance"
Never say in an NIH application: "performer," "base period," "Go/No-Go milestones," commercial language in the Research Strategy (save it for the Commercialization Plan)
The Tonal Difference
ARPA-H wants conviction. "This technology reduces sepsis mortality from 25% to under 5% by detecting cytokine storms 6 hours before clinical symptoms." Direct assertions. Quantified outcomes. No hedging.
NIH wants measured confidence. "Based on our preliminary data demonstrating 73% sensitivity in a 200-patient cohort (PI et al., Journal, 2025), we hypothesize that biomarker panel X will achieve clinically significant detection of early sepsis." Evidence-backed claims. Properly cited. Appropriately cautious where uncertainty exists.
The difference is subtle but critical. ARPA-H PMs interpret hedging language ("could potentially," "aims to explore") as a lack of conviction about your own technology. NIH reviewers interpret overly assertive language ("will prove," "guarantees") as a lack of scientific rigor.
Writing for both agencies from the same draft is not possible -- the tonal requirements are fundamentally opposed.
Application Structure: What Goes Where
ARPA-H Solution Summary (6 pages total):
- Concept Summary (~0.5 pages) -- what you're doing and who benefits
- Innovation and Impact (~1.5-2 pages) -- why current approaches fail, what's new, health equity
- Proposed Work (~2.5-3 pages) -- technical approach, milestones with Go/No-Go, risk mitigation, misuse considerations
- Team (~0.75 pages) -- technical + clinical + commercialization leads
- Basis of Estimate (~0.75 pages) -- milestone-based cost breakdown
NIH SBIR (Specific Aims 1 page + Research Strategy 6 pages):
- Specific Aims (1 page) -- central hypothesis, 2-3 aims, expected outcomes
- Significance (~1.5 pages) -- health burden, gaps in knowledge, scientific premise
- Innovation (~1 page) -- what's new conceptually, technically, or methodologically
- Approach (~3.5 pages) -- detailed experimental design per aim, preliminary data, potential problems + alternatives
- Plus: Commercialization Plan (separate, up to 12 pages)
Can You Apply to Both ARPA-H and NIH? The Dual-Submission Strategy
Yes. There's no rule preventing simultaneous applications to both agencies. They're separate federal agencies with independent review processes, though standard federal rules apply: you must disclose substantially similar proposals submitted elsewhere, and you cannot accept duplicate funding for the same work.
But the applications must be fundamentally different documents -- not the same proposal reformatted.
How to Scope the Same Technology for Both Agencies
The key is scoping different questions for each application while using the same underlying technology.
Consider a fictional company building a novel biosensor platform for early sepsis detection. Here's how the same technology becomes two different applications:
NIH SBIR scope: "We will test the hypothesis that our cytokine panel achieves greater than 85% sensitivity for sepsis onset in a 200-patient validation cohort, using our preliminary data from a 50-patient pilot (PI et al., 2025) as the basis for sample size calculations." Budget: $290K over 12 months. Focus: validating the analytical performance of the sensing mechanism. One clear hypothesis, two or three specific aims.
ARPA-H scope: "This platform will demonstrate real-time sepsis prediction 6 hours before clinical presentation, reducing ICU mortality by 40% in the target population. The base period will achieve three milestones: sensor validation (month 6), clinical workflow integration pilot (month 12), and health equity assessment across 3 underserved hospital systems (month 18)." Budget: $2.8M over 18 months. Focus: proving the technology changes patient outcomes at scale.
Same technology, different questions. The NIH application asks "does the sensor work?" The ARPA-H application asks "does early detection change outcomes?"
When Dual-Submission Makes Sense
- Your technology genuinely fits both agencies' mandates
- You have bandwidth to write two distinct applications (not a reformatting job -- expect 80-120 hours total across both)
- The work can be scoped at different levels (feasibility for NIH, impact demonstration for ARPA-H)
- You have team members who can lead each application's narrative (science-focused PI for NIH, outcome-focused lead for ARPA-H)
When Dual-Submission Is Wasted Effort
- Your innovation is clearly incremental (don't bother with ARPA-H)
- You need >$1M and NIH's cap won't cover the work (focus on ARPA-H)
- You don't have clinical team members (ARPA-H will flag this)
- Your timeline is tight -- writing two genuinely different applications takes 2-3x the effort of one
3 Mistakes Health Startups Make When Choosing Between ARPA-H and NIH
Mistake 1: Using NIH Language in an ARPA-H Application
The most common mistake. Founders who've written NIH grants (or worked with consultants who have) default to academic hedging: "we aim to explore," "preliminary data suggests," "this study will investigate." ARPA-H PMs read that as lack of conviction. They fund people who say "this will work because" -- not people who say "we'd like to find out if maybe."
Mistake 2: Applying to ARPA-H with an Incremental Improvement
ARPA-H explicitly rejects incremental framing. If your pitch is "our diagnostic is 2x faster than the current standard," that's NIH territory. ARPA-H wants to hear "our diagnostic detects disease X three years before symptoms appear using a mechanism that doesn't exist in clinical practice." The distinction: are you making something better, or making something possible that wasn't before?
Mistake 3: Ignoring the Budget Mismatch
Asking ARPA-H for $275K signals you don't understand the agency. Their base period awards start around $1M. Conversely, scoping an NIH Phase I at $500K exceeds the SBA cap and your application won't even be reviewed. Match your budget to the agency's expectations, not the other way around.
A related mistake: structuring an ARPA-H budget like an NIH line-item budget (personnel + supplies + travel). ARPA-H expects milestone-based budgets where funding releases are tied to deliverables. If your budget doesn't have 3-6 milestones with specific Go/No-Go criteria, the PM will question whether you understand how ARPA-H funding works.
Frequently Asked Questions: ARPA-H vs NIH SBIR
Does my PI need to be a US citizen for ARPA-H or NIH SBIR? No. Neither agency requires US citizenship for the PI. The PI must be primarily employed (51%+) by the small business for NIH SBIR, but citizenship is not a factor for either agency.
Can a for-profit startup apply to ARPA-H? Yes. ARPA-H explicitly invites small business participation. Unlike some agencies that favor academic institutions, ARPA-H's Other Transaction mechanism is designed for a mix of for-profit, nonprofit, and academic performers.
How long does ARPA-H review take compared to NIH? NIH has a predictable timeline: submit on a standard receipt date (April 5, September 5, or January 5), receive a summary statement 4-5 months later, and expect funding 9-12 months from submission. ARPA-H review timelines vary by program and Mission Office. Some Solution Summaries get responses within weeks, others take months. The agency is still establishing consistent timelines as it scales.
What's the success rate for ARPA-H vs NIH SBIR? NIH SBIR success rates are roughly 20% for first-time applicants, though this varies significantly by IC and study section. ARPA-H success rates are not yet publicly available in a meaningful way -- the agency launched in 2022 and is still ramping. Frankly, anyone claiming to know ARPA-H success rates precisely is guessing.
Can I apply to both simultaneously? Yes. Simultaneous applications to different agencies are allowed, provided you disclose overlapping proposals and don't accept duplicate funding for the same work. The applications must be genuinely different documents -- same technology, different framing, different scope, different budget. See the dual-submission section above for how to scope this.
What if my technology fits both agencies but I only have bandwidth for one application? Start with the agency where your current materials are strongest. If you have published preliminary data and a testable hypothesis, NIH is the faster path. If you have a strong vision for population-level health impact but limited data, ARPA-H is more forgiving of early-stage work. You can always apply to the second agency in the next cycle.
Not Sure Which Agency Fits Your Health Startup?
The decision framework above covers the major factors, but the details matter. Your technology readiness level, team composition, competitive landscape, and funding timeline all affect which agency gives you the best shot at a funded award.
Cada has written applications for both ARPA-H and NIH SBIR. We've seen what wins at each agency and -- just as importantly -- what gets rejected at each. Our 86% success rate comes from matching companies to programs where they're genuinely competitive, not applying everywhere and hoping.
If you're not sure whether ARPA-H or NIH is the better fit, we offer a free 15-minute agency-fit assessment. No pitch, no obligation -- just a straight answer on which agency matches your technology and where to focus your time.