
How to Win an SBIR Grant: The Cross-Agency Tactical Guide

Nalin · Last updated: March 31, 2026

The five things that most influence SBIR award decisions: (1) a specific problem with a clear gap in current solutions, (2) a detailed, testable technical approach, (3) a credible commercialization path, (4) a qualified team with domain experience, and (5) alignment with the funding agency's mission. That's based on patterns across 500+ proposals. But the weight each agency puts on these factors differs significantly -- and that's where most founders lose.

Most SBIR Phase I proposals get rejected -- NSF funds about 20-25%, NIH ranges 13-24% by Institute, DoD averages around 15%. The proposals that win aren't necessarily backed by better technology. They're framed for what reviewers actually score.

How each agency scores your proposal

Your technology domain drives the agency decision: NSF for broad science and deep tech, NIH for biomedical and health, DoD for defense or dual-use. Once you know your lane, everything below applies. (For help choosing, see our SBIR guide for startups.)

Before writing a word, understand what you're being graded on. Each agency publishes its review criteria. Most applicants skim them. Winners reverse-engineer their proposals from them.

How the three agencies compare:

  • Review method -- NSF: panel of 3+ experts. NIH: study section (3+ reviewers), scored on a 1-9 scale where 1 is best. DoD: technical evaluators plus program staff.
  • Top criterion -- NSF: Intellectual Merit and Broader Impacts (weighted equally). NIH: Approach (the strongest predictor of funding). DoD: technical merit (most important).
  • Commercialization weight -- NSF: high, scored from Phase I. NIH: lower in Phase I, higher in Phase II. DoD: medium; a 1-page strategy, tied for second.
  • Preliminary data -- NSF: helpful, not required. NIH: critical for competitiveness. DoD: helpful, less expected at Phase I.
  • First-time friendliness -- NSF: moderate; the Project Pitch screens early. NIH: lower; a data-heavy field favors experienced applicants. DoD: higher for open topics (AFWERX especially).
  • Resubmission -- NSF: new proposal, next cycle. NIH: formal A1 with an introduction page. DoD: topics rotate and may not reappear.
  • Key differentiator -- NSF: Broader Impacts matter. NIH: the Approach score predicts everything. DoD: mission alignment is table stakes.

If you're applying to NSF and you don't address Broader Impacts, you lose points. If you're applying to NIH and your Approach section is vague, you're dead. If you're applying to DoD and you can't articulate mission alignment in the first paragraph, the evaluator moves on.

10 strategies that actually matter

These aren't generic tips. They're patterns from writing 500+ proposals across 30+ agencies -- what separates the 17% that get funded from the 83% that don't.

1. Frame the innovation gap, not your product

Every agency funds research into unsolved problems. Not products. The #1 mistake founders make is describing what they've built instead of what's unknown.

  • NSF: "This scientific question has never been answered" (Intellectual Merit requires advancing knowledge)
  • NIH: "This clinical need has no adequate solution because of X barrier" (drives Significance score)
  • DoD: "This capability gap affects warfighter readiness and no current solution addresses it" (mission alignment)

Write the first paragraph of your technical section as a problem statement with citations. Reviewers decide within the first page whether your proposal is research or a product pitch.

2. Contact the program officer before you submit

This is the single highest-ROI pre-submission activity. Program officers (POs) will tell you whether your technology fits their program -- saving you weeks of work on a mismatched proposal.

What to share: a 1-paragraph non-confidential summary of your proposed innovation and research question.

What to ask: "Is this a fit for your program?" "Are there specific subtopics or solicitation areas I should target?"

What NOT to do: ask them what to write, pester with multiple follow-ups, or ask questions answered in the solicitation.

At NIH, send your Specific Aims page. POs can redirect you to a better-matched Institute. At NSF, the Project Pitch is your built-in PO interaction. At DoD, contact the Topic Point of Contact (TPOC) listed for each topic before the Q&A deadline.

3. Write your Approach section like it's the only section that matters

At NIH, the Approach score is empirically the strongest predictor of whether you get funded. At NSF and DoD, technical detail in the methodology section carries similar weight under different names.

What reviewers want to see:

  • Milestone-based work plan with specific go/no-go decision points
  • Measurable success criteria for each aim or objective (quantified, not vague)
  • Risk mitigation -- acknowledge what could go wrong and explain your backup plans
  • Feasibility arguments supported by preliminary data, published literature, or computational models

What kills proposals: "We will develop a prototype and test it with users." That's a product development plan. Reviewers want research methodology.

4. Calibrate your preliminary data to the agency

The "Goldilocks zone" for preliminary data varies by agency:

  • NIH: Preliminary data is close to a requirement. Proposals without it are at a serious disadvantage. Pilot study results, benchmarks, clinical observations -- something tangible.
  • NSF: Helpful but not required for Phase I. Demonstrating awareness of the technical landscape matters more than having lab results.
  • DoD: Comfortable funding earlier-stage concepts if the technical approach is sound. Prototypes or demo results help but aren't expected.

If you lack data: published literature supporting your approach, computational models, analogous results from adjacent fields, or third-party test results on related technology can partially substitute. Frame them honestly -- don't overstate what they prove.

5. Nail the commercialization plan

Agencies increasingly evaluate whether your technology will reach the market. The weight varies, but no agency ignores it.

  • NSF: Commercialization is scored from Phase I. Include market size, customer validation, revenue model, and competitive landscape. At least one letter of support from a potential customer is expected.
  • DoD: Dual-use framing (military + commercial applications) is strongest. A letter from a military end-user confirming the capability gap ("Our unit currently has no solution for X and would evaluate this technology for operational deployment") carries serious weight.
  • NIH: Less weighted in Phase I, but showing a bench-to-bedside pathway still matters for your overall impact score.

The strongest signal in any commercialization section: evidence that real people will pay for this. Named customers, LOIs, pilot commitments. Three pages of TAM analysis can't substitute for one sentence from a buyer.

6. Present your team to counter academic dominance

Reviewers subconsciously benchmark your team against university-affiliated competitors with PhDs and publication records. Startups need to present qualifications strategically.

If your founders don't have PhDs: lead with technical accomplishments. Patents, prototypes, production systems, industry experience. Frame commercial experience as an advantage for technology translation -- this is what program officers actually want to fund.

The "window dressing" trap: don't add famous advisors who won't do real work on the project. Reviewers see through this, and it can actively hurt your credibility.

STTR option: STTR is SBIR's sister program that requires a formal university or research institution partner. If you genuinely need lab access or academic collaboration, STTR lets the research institution perform up to 40% of the work. This can strengthen your team while bringing in real expertise.

7. Build a realistic budget

Budget misalignment is a silent killer. Reviewers check whether your proposed work actually costs what you say it does.

  • Know the caps: NSF $305K, NIH ~$314K, DoD varies ($50K-$250K by branch)
  • DoD contracts allow a profit fee (typically up to 7%) -- take it. Many first-time applicants leave it on the table. NSF and NIH awards are grants, not contracts, so the fee doesn't apply there -- instead, make sure you're capturing your full indirect cost rate.
  • Align hours to tasks: every budget line should trace back to a specific aim. Unallocated labor hours or vague "consulting" items are red flags.
  • Indirect cost rates: these cover overhead (rent, utilities, admin). Without a government-negotiated rate, most agencies accept a flat "de minimis" rate of 10% of your direct costs. If your actual overhead is higher, consider negotiating a rate with your cognizant agency.
  • Common disqualifiers: marketing costs, entertainment, non-US travel without justification, exceeding the cap by even $1

For a detailed walkthrough, see our post on structuring your SBIR budget.
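The budget math above can be sanity-checked with a short script before you fill in agency forms. This is a minimal sketch using illustrative, assumed numbers -- a $250K direct-cost base, the 10% de minimis indirect rate, and a 7% fee computed on directs plus indirects. Actual fee bases and caps vary by agency and solicitation, so treat this as a rough check, not official guidance:

```python
# Hypothetical Phase I budget sanity check. All figures below are
# illustrative assumptions, not official agency numbers.
def phase1_budget(direct_costs, indirect_rate=0.10, fee_rate=0.0, cap=305_000):
    """Total = directs + indirects (rate applied to directs)
    + fee (here assumed to apply to directs + indirects)."""
    indirect = direct_costs * indirect_rate
    fee = (direct_costs + indirect) * fee_rate
    total = direct_costs + indirect + fee
    return {
        "direct": direct_costs,
        "indirect": round(indirect, 2),
        "fee": round(fee, 2),
        "total": round(total, 2),
        "within_cap": total <= cap,
    }

# Example: $250K directs, 10% de minimis indirect, 7% DoD-style fee
budget = phase1_budget(250_000, indirect_rate=0.10, fee_rate=0.07)
print(budget)
```

Running the example shows the total landing under a $305K cap with room to spare; if `within_cap` comes back False, trim direct costs rather than shaving the indirect rate you're entitled to.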

8. Secure 3-5 strategic letters of support

Letters of support validate your market, team, and approach simultaneously. They're the most underrated component of a strong proposal.

Strongest letter writers, ranked:

  1. Potential customers or end-users who confirm the market need
  2. Strategic partners for distribution or co-development
  3. Domain experts or key opinion leaders
  4. Investors who signal commercial viability

One specific letter from a potential customer beats five generic letters from advisors. "We would integrate this into our clinical workflow across 12 sites pending successful Phase I results" is worth more than "I have known Dr. Smith for 15 years and believe in her work."

Reach out 3-4 weeks before the deadline. Provide a draft they can edit and sign -- don't make them write from scratch.

9. Write for tired reviewers

SBIR reviewers read 8-15 proposals per cycle. They're often cross-disciplinary. Your proposal competes against fatigue.

  • Anchor sentences: open each section with a clear statement that could stand alone as a summary
  • 10th-grade reading level: avoid jargon walls. Reviewers come from multiple disciplines.
  • Visual aids: one good figure can replace a page of text. Diagrams, workflow charts, and comparison tables break up walls of prose.
  • Bold your key claims. Reviewers skim. Make your strongest points visually findable.
  • Spend 25% of your time editing. Content clarity, structure, and grammar. Poor writing ruins good science.

The executive summary is where most reviewers form their initial impression. Problem, innovation, approach, team, market -- all on one page. If it's weak, nothing else saves you.

10. Plan for resubmission before you submit

With a ~17% success rate, planning for resubmission isn't pessimism. It's strategy.

  • NIH: you get one formal resubmission (A1). Include a 1-page Introduction to Revised Application addressing every reviewer concern. Consult your PO after receiving the summary statement.
  • NSF: no formal resubmission mechanism. Submit a new proposal to the next cycle incorporating panel feedback.
  • DoD: topics rotate. If yours doesn't reappear, look for adjacent topics at the same agency or a different DoD component.

The #1 resubmission mistake: submitting the same proposal with surface edits. Reviewers notice. Successful resubmissions involve structural rework -- rethinking the approach, adding data, reframing the problem -- not line-by-line responses.

For more on interpreting feedback and resubmitting, see our post on what to do after an SBIR rejection.

Common rejection reasons

  • Vague problem statement, no innovation gap (very common). Fix: open with a cited, quantified problem, not your solution.
  • No clear innovation over existing solutions (very common). Fix: include an explicit comparison of the current state vs. your approach.
  • Weak commercialization plan (common). Fix: customer letters + market data + revenue model.
  • Approach lacks detail or feasibility (common). Fix: milestone-based work plan with go/no-go decision points.
  • Budget doesn't match work plan (common). Fix: 1:1 mapping between budget lines and proposal tasks.
  • Team qualifications not demonstrated (moderate). Fix: lead with relevant experience and role assignments, not degrees.
  • Scope too ambitious for Phase I (moderate). Fix: Phase I = minimum viable proof of feasibility; save the rest for Phase II.
  • Poor writing quality (moderate). Fix: 25% of your time on editing, anchor sentences, visual aids.
  • Generic or missing support letters (moderate). Fix: 3 specific letters from customers, partners, or domain experts.
  • Administrative non-compliance (rare but fatal). Fix: pre-submission checklist against every solicitation requirement.

The meta-strategy

If there's one thing that separates consistently funded companies from first-time applicants, it's this: treat the proposal as a document written for the reviewer, not for yourself.

You know your technology is good. The proposal's job isn't to explain your technology to people who already believe in it. It's to convince tired, skeptical, cross-disciplinary reviewers in about 30 minutes of reading time that your problem is real, your approach is sound, your team can execute, and the result will matter commercially.

Everything else -- formatting, structure, data, letters -- serves that core objective.

Want a second opinion before you submit?

We've written 500+ proposals across 30+ agencies and helped secure $1.6B+ in non-dilutive funding. If you want to pressure-test your proposal framing, identify the weak points reviewers will flag, or figure out which agency gives you the best shot, our Strategy Review is where to start.

Frequently Asked Questions

What is the SBIR Phase I success rate?
About 17% for Phase I across all agencies. NSF is the highest (~20-25% through their two-step pitch process). NIH varies by Institute (13-24%). DoD averages about 15%. First-time applicants face even lower odds -- roughly 75-80% of first submissions are rejected.

How long does it take to write an SBIR proposal?
A competitive Phase I proposal takes 100-300 hours depending on agency and experience level. Start at least 8-12 weeks before the deadline. First-time applicants need to add 2-6 weeks for system registrations (SAM.gov, Grants.gov, login.gov) before they can even submit.

Do I need preliminary data to win?
It depends on the agency. NIH strongly expects it -- proposals without preliminary data are at a serious disadvantage. NSF and DoD are more forgiving for Phase I, though feasibility evidence always helps. If you lack lab data, published literature, computational models, or analogous results from adjacent fields can partially substitute.

Can I apply to more than one agency?
Yes, but each proposal must be tailored to the specific agency's solicitation topics and review criteria. You cannot submit the same proposal to multiple agencies. Many companies apply to 2-3 agencies simultaneously with distinct proposals targeting different aspects of their technology.

Why do most SBIR proposals get rejected?
The top reasons: vague problem statements with no clear innovation gap, weak commercialization plans, approach sections that lack specificity and measurable milestones, budget misalignment with the work plan, and poor writing quality. Administrative non-compliance (missed deadlines, wrong format, exceeded page limits) kills proposals before they're even reviewed.

Should I contact the program officer before submitting?
Yes. This is the highest-ROI pre-submission activity. Share a 1-paragraph non-confidential summary of your technology and ask about fit. For NIH, send your Specific Aims page. NSF's Project Pitch process serves as a built-in program officer interaction. Do not ask them what to write.

What happens if my proposal is rejected?
You receive reviewer feedback (summary statement at NIH, panel summary at NSF). Read it thematically, not defensively. At NIH, you get one formal resubmission (A1) with a 1-page introduction addressing reviewer concerns. At NSF, submit a new proposal next cycle. At DoD, topics rotate -- look for adjacent topics.

How does each agency score proposals?
NSF scores on Intellectual Merit, Broader Impacts, and Commercial Potential (panel of 3+ reviewers). NIH uses 5 criteria -- Significance, Innovation, Approach, Investigators, Environment -- on a 1-9 scale where 1 is best; Approach is the strongest predictor. DoD evaluates Technical Merit (most important), Personnel, and Commercialization (tied for second).

Ready to explore your funding options?

We'll map your technology to the most relevant programs and tell you where to start. 15 minutes, no obligation.

Book Strategy Review