IN-PERSON INTENSIVE — 2-DAY FORMAT

Federal Grant
Writing Course

Intermediate Level  |  6 Modules  |  GPC-Aligned

A practitioner-developed curriculum for grant professionals who already have foundational experience and need to perform at a higher level — particularly in federal grants.

6 Modules  |  ~90 Min Each  |  2 Days  |  6 Work Products

What This Course Is

  • Intermediate-level. Designed for practitioners who already write grants and need to sharpen their federal skills.
  • Practitioner-developed. Every concept connects directly to real federal programs — Head Start, CSBG, LIHEAP, Weatherization, and more.
  • Work-product focused. Every module ends with a tangible deliverable you can use the next day.

Who This Is For

  • Grant writers at community action agencies, Head Start grantees, and nonprofits with federal funding portfolios.
  • Staff advancing from foundational to competitive federal grant writing.
  • Organizations managing HHS, DOE, USDA, or HUD grants.

G1VE Advisory Federal Grant Writing Course

A 6-module intermediate curriculum aligned with Grant Professional Certified (GPC) competencies. Each module produces a tangible deliverable the participant can immediately apply to their work.

6 Modules  |  GPC-Aligned  |  ~90 Min / Module  |  6 Work Product Deliverables
MOD. | TITLE | KEY SKILL | DELIVERABLE | DAY
1 | Decoding the Federal NOFO | Analyzing federal funding opportunities, scoring criteria interpretation, go/no-go frameworks | NOFO analysis worksheet with reviewer scoring map | Day 1
2 | Logic Models & Theory of Change | Building logic models that earn maximum reviewer scores, connecting inputs to measurable outcomes | Complete, reviewer-ready logic model | Day 1
3 | Budget Development & Justification | SF-424A format, indirect cost rates (NICRA vs. de minimis), matching requirements, 2 CFR 200 cost principles | Draft federal grant budget with complete budget narrative | Day 1
4 | Evaluation Design That Scores | Federal evidence hierarchy, formative vs. summative design, measurable indicators, data collection planning | Evaluation plan with methodology and indicators | Day 2
5 | Writing the Competitive Narrative | Competitive narrative structure, reviewer psychology, writing for the federal reviewer | Scored narrative outline / draft narrative section | Day 2
6 | Post-Award Management & Compliance | SF-425, PIR, 2 CFR 200 compliance, subrecipient monitoring, audit readiness | Post-award compliance checklist and reporting calendar | Day 2

Two Full Days. Six Modules. Six Work Products.


Day One

Modules 1–3  |  ~5.5 hours of instruction with breaks
  • Module 1 — Decoding the Federal NOFO
  • Module 2 — Logic Models & Theory of Change
  • Module 3 — Budget Development & Justification

Day Two

Modules 4–6  |  ~5.5 hours of instruction with breaks
  • Module 4 — Evaluation Design That Scores
  • Module 5 — Writing the Competitive Narrative
  • Module 6 — Post-Award Management & Compliance

Everything Included in the In-Person Intensive

Printed Course Materials

Worksheets and templates provided for every module — yours to keep and use after training.

Real-Time Feedback

Live feedback on actual NOFOs, budgets, and proposals during hands-on exercises.

On-Demand Recordings

Post-training recordings for review and reference at your own pace.

Email Q&A Support

90-day access with a 48-hour response commitment for follow-up questions.

Post-Training Follow-Up Session

1-hour virtual session within 30 days to review progress and answer questions.

Certificate of Completion

Issued upon completion of both training days.

Developed & Delivered by a Federal Grants Practitioner

Anthony Bammer
Managing Partner — G1VE Technologies & G1VE Advisory
Atlanta, Georgia  |  Grant Professionals Association Member
Anthony Bammer's background is in post-award grant management, nonprofit strategy, and federal funding advisory. He has managed and advised on grants from HHS, DOL, DOT, USDA, HUD, EPA, and the Department of Education. He is a member of the Grant Professionals Association and the developer of AFIS (Advanced Policy & Funding Intelligence System), a real-time federal funding intelligence platform.
$750
Per organization / per training engagement
Full payment due upfront  ·  Agreement already executed

Includes instructor travel from Atlanta. Lodging and meals not included — if overnight stay is required, client provides accommodations or a $150/day per diem applies.

Training can begin within 5 business days of payment.
MODULE 1 — DAY 1

Decoding the Federal NOFO

Key Skill
Analyzing federal funding opportunities, scoring criteria interpretation, go/no-go frameworks
Deliverable
NOFO analysis worksheet with reviewer scoring map  Work Product

What This Module Is About

This module builds the analytical foundation for everything that follows. You will learn to read a federal Notice of Funding Opportunity (NOFO) the way a competitive grant writer does — not cover to cover, but strategically. By the end of this session, you will know how to extract funder intent, decode scoring criteria, and make a disciplined go/no-go decision before investing a single hour of writing time.

By the End of This Module, You Will Be Able To:

  1. Identify the five structural sections of a federal NOFO (Purpose, Background, Eligibility, Application Requirements, Review Criteria) and explain the function of each.
  2. Analyze the Purpose and Background sections to articulate the funder's stated problem, preferred solution approach, and program priorities in your own words.
  3. Reverse-engineer reviewer scoring criteria to identify the specific program design, data, and narrative elements that earn maximum points in each scored section.
  4. Apply a structured go/no-go framework to evaluate organizational fit, capacity, and competitive positioning before committing to a proposal.
  5. Construct a NOFO analysis worksheet and reviewer scoring map that will serve as the strategic blueprint for the full proposal.

Session Agenda

SEGMENT | TIME | FOCUS
Why Most Grant Writers Read NOFOs Wrong | 15 min | Common mistakes, strategic reading mindset
Anatomy of a Federal NOFO | 20 min | Five structural sections and their functions
Reading Scoring Criteria as a Reverse Blueprint | 20 min | Extracting funder intent, mapping point values
The Go/No-Go Framework | 15 min | Decision matrix, competitive positioning
Hands-On Exercise: NOFO Analysis Worksheet | 20 min | Build your scoring map and go/no-go decision

Key Teaching Points & Concepts

Why Most Grant Writers Read NOFOs Wrong

  • Reading cover to cover wastes time and buries the strategic signal in administrative noise
  • The Review Criteria section is the most important section — start there
  • Funder intent is embedded in the Purpose and Background — read these second
  • Eligibility and application requirements are last — they are filters, not strategy

Anatomy of a Federal NOFO

  • Purpose: the funder's problem statement and theory of change
  • Background: legislative authority, program history, and current priorities
  • Eligibility: who can apply and what qualifications are required
  • Application Requirements: what to submit, format, page limits, attachments
  • Review Criteria: how applications are scored — the competitive blueprint

Reading Scoring Criteria as a Reverse Blueprint

  • Each scored section tells you exactly what the reviewer is looking for — treat it as a checklist
  • Point values signal priority: higher points = more writing investment required
  • Sub-criteria within each section are the specific proof points you must address
  • Federal programs like Head Start (45 CFR Part 1302) and CSBG have defined performance standards that should align with your narrative

The Go/No-Go Framework

  • Five dimensions: Eligibility, Capacity, Competitive Positioning, Burden vs. Value, Timeline
  • Score each dimension 1–5; total score guides the decision
  • 20–25 = Strong go  |  12–19 = Conditional go (identify gaps)  |  Below 12 = No-go
  • Document no-go decisions — they protect organizational capacity and credibility
Hands-On Exercise — NOFO Analysis Worksheet & Reviewer Scoring Map
Work Product
  1. Open the federal NOFO provided (or bring your own active NOFO).
  2. Column A — List every scored section from the Review Criteria.
  3. Column B — Record the point value for each section.
  4. Column C — Write 2–3 sentences summarizing exactly what the reviewer is looking for in that section, in plain language.
  5. Column D — List the specific program data, design elements, or narrative proof points your organization can bring to that section.
  6. Complete the Go/No-Go Decision Matrix: score your organization on Eligibility, Capacity, Competitive Positioning, Burden vs. Value, and Timeline (1–5 scale each).
  7. Total your score: 20–25 = Strong go. 12–19 = Conditional go (identify gaps). Below 12 = No-go — document your reasoning.
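The scoring arithmetic in steps 6–7 can be sketched as a small Python function. This is a minimal illustration of the decision matrix; the dimension scores below are hypothetical:

```python
# Go/no-go decision matrix: five dimensions, each scored 1-5.
# Thresholds from the framework: 20-25 strong go, 12-19 conditional go, <12 no-go.

def go_no_go(scores: dict[str, int]) -> tuple[int, str]:
    """Total the five dimension scores and map the total to a decision."""
    assert set(scores) == {
        "Eligibility", "Capacity", "Competitive Positioning",
        "Burden vs. Value", "Timeline",
    }
    assert all(1 <= s <= 5 for s in scores.values())
    total = sum(scores.values())
    if total >= 20:
        decision = "Strong go"
    elif total >= 12:
        decision = "Conditional go (identify gaps)"
    else:
        decision = "No-go (document your reasoning)"
    return total, decision

# Hypothetical scoring for an example organization:
total, decision = go_no_go({
    "Eligibility": 5,
    "Capacity": 3,
    "Competitive Positioning": 4,
    "Burden vs. Value": 3,
    "Timeline": 2,
})
print(total, "->", decision)  # 17 -> Conditional go (identify gaps)
```

The same five-row matrix works on paper; the point is that the decision follows mechanically from the dimension scores, which keeps go/no-go debates focused on the scores rather than the conclusion.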

In-Person Intensive — Facilitator Notes

  • Opens Day 1 at 9:00 AM. Distribute printed NOFO analysis worksheet template and a sample NOFO (or the participant's live NOFO if provided in advance).
  • Emphasize that every module that follows builds directly on the scoring map they create here.
  • Allow 5 minutes at the end for participants to share their go/no-go decision and reasoning.
  • Alternative format note: In a virtual live format, this module is delivered via Zoom with screen-shared NOFO examples and a shared Google Sheet for the worksheet exercise.
Live NOFO Examples for This Module
Teaching Resource

All six NOFOs on the NOFO Reference page are appropriate for Module 1 exercises. The three recommended for in-class walkthrough are:

Head Start Competitive (CFDA 93.600 | HHS-2025-ACF-OHS-CH-0085): The most structurally complete federal NOFO for this exercise — scoring criteria are named, weighted, and organized by section. Walk through Section V.1 (Review Criteria) as a class.
AmeriCorps State and National (CFDA 94.006): Compare criteria structure to Head Start. AmeriCorps is more prescriptive on the evidence tier requirement — a teaching point about how different federal agencies weight evidence differently.
HUD Continuum of Care (CFDA 14.267 | FR-6800-N-25): Use as a go/no-go exercise focused on organizational eligibility and risk — participants must be part of a local CoC to apply. Teaches that eligibility includes both legal qualifications and strategic positioning.
View all 6 NOFOs on the NOFO Reference page →
MODULE 2 — DAY 1

Logic Models & Theory of Change

Key Skill
Building logic models that earn maximum reviewer scores, connecting inputs to measurable outcomes
Deliverable
Complete, reviewer-ready logic model  Work Product

What This Module Is About

A logic model is not a box-and-arrow diagram — it is the intellectual architecture of your entire proposal. This module teaches you to build a logic model that does what federal reviewers actually reward: connects real community need to specific activities to measurable, credible outcomes in a way that makes the program design feel inevitable.

By the End of This Module, You Will Be Able To:

  1. Distinguish between a logic model as an internal planning tool vs. a reviewer-facing proposal instrument, and explain why the distinction matters competitively.
  2. Construct a five-column logic model (Inputs → Activities → Outputs → Short-Term Outcomes → Long-Term Outcomes) populated with data and language drawn directly from a federal NOFO.
  3. Write a theory of change statement that articulates the causal pathway from program inputs to anti-poverty outcomes in two to three sentences.
  4. Identify and correct the three most common logic model failures that reduce reviewer scores (generic outcomes, missing outputs, and disconnected inputs).
  5. Produce a complete, reviewer-ready logic model mapped to the scoring criteria of a specific federal program.

Session Agenda

SEGMENT | TIME | FOCUS
What Reviewers Actually Want From a Logic Model | 15 min | Internal tool vs. reviewer instrument; competitive distinction
The Five Columns: Building Left to Right | 20 min | Inputs, Activities, Outputs, Short-Term Outcomes, Long-Term Outcomes
Theory of Change: Writing the Causal Argument | 20 min | Causal pathway, anti-poverty framing, NOFO language alignment
Common Failures and How to Fix Them | 15 min | Generic outcomes, missing outputs, disconnected inputs
Hands-On Exercise: Build Your Logic Model | 20 min | Complete five-column logic model using your program

Key Teaching Points & Concepts

What Reviewers Actually Want From a Logic Model

  • Reviewers want to see that your program design is coherent — that inputs actually produce activities that actually produce outcomes
  • A logic model submitted as a proposal attachment must be more precise and evidence-grounded than an internal planning document
  • Federal programs like Head Start require logic models that connect to 45 CFR Part 1302 school readiness goals

The Five Columns: Building Left to Right

  • Start with Long-Term Outcomes — anchor everything in the funder's stated program purpose
  • Short-Term Outcomes must be measurable within the grant period with a named instrument (ASQ-3, HSES, pre/post assessment)
  • Outputs are quantities: number of children enrolled, households served, training hours, homes weatherized
  • Activities must be specific: "Weekly home visiting sessions" not "Services"
  • Inputs include staff, funding, facilities, partnerships — be specific about what you are bringing

Theory of Change: Writing the Causal Argument

  • A theory of change statement answers: IF we provide [inputs/activities], THEN [outputs] will occur, LEADING TO [outcomes] because [rationale]
  • The rationale should reference evidence — program models, research, or prior performance data
  • For CSBG: connect to ROMA (Results-Oriented Management and Accountability) National Performance Indicators

Common Failures and How to Fix Them

  • Generic outcomes: "Participants will improve their lives" → Fix: "85% of enrolled Head Start children will demonstrate age-appropriate development in 4 of 6 domains as measured by Teaching Strategies GOLD"
  • Missing outputs: jumping from activities directly to outcomes without quantifying what is produced
  • Disconnected inputs: listing staff or funding that does not connect to any specific activity
Hands-On Exercise — Complete, Reviewer-Ready Logic Model
Work Product
  1. Start with Column 5 (Long-Term Outcomes) — write the 1–2 long-term anti-poverty outcomes your program is designed to achieve. Anchor these in your NOFO's stated program purpose.
  2. Move to Column 4 (Short-Term Outcomes) — write 3–4 measurable outcomes participants will achieve within the grant period. These must be measurable with a specific instrument or data source (e.g., ASQ-3 for child development, pre/post assessment for GED, HSES data for Head Start).
  3. Fill Column 3 (Outputs) — quantify what you will produce: number of children enrolled, households served, training hours delivered, homes weatherized.
  4. Fill Column 2 (Activities) — list the specific program activities that produce those outputs. Be specific: "Weekly home visiting sessions," not "Services."
  5. Fill Column 1 (Inputs) — list the staff, funding, facilities, community partners, and data systems your organization brings to this program. Be specific: "1.0 FTE Family Services Coordinator," not "Staff."
  6. Write your Theory of Change statement: 2–3 sentences connecting your inputs to your long-term outcomes. Use the IF/THEN/BECAUSE structure.
  7. Review your completed logic model against the scoring criteria from Module 1. Does every scored element have a corresponding row in your logic model?

In-Person Intensive — Facilitator Notes

  • Second module of Day 1 (approximately 10:45 AM after a break). Distribute printed five-column logic model template.
  • Walk through a completed sample logic model for a Head Start or CSBG program before the exercise.
  • For Coastal Plain specifically, reference the Head Start school readiness goals and CSBG ROMA National Performance Indicators as the outcome anchors.
  • Remind participants that the logic model they build here will be used directly in Module 4 (Evaluation Design) and Module 5 (Narrative Writing).
  • Alternative format note: In a virtual live format, participants complete the logic model in a shared Google Slides template with real-time facilitator feedback.
Live NOFO Examples for This Module
Teaching Resource
Head Start Competitive (CFDA 93.600): The logic model must trace: Inputs (teachers, facilities, curriculum, CACFP meals, partners) → Activities (classroom instruction, home visits, family engagement, health screenings) → Outputs (enrollment, attendance, screenings completed) → Short-Term Outcomes (developmental milestones per Teaching Strategies GOLD) → Long-Term Outcome (children enter kindergarten ready to succeed). The Head Start Program Performance Standards (HSPPS) define exactly what reviewers expect to see.
OCS AHSS Demonstration (CFDA 93.569): The NOFO requires applicants to select at least 2 of 12 service categories — mapping these to logic model activities is a clean, bounded exercise. Use it when the participant’s work is CSBG-adjacent rather than early childhood.
View full NOFO details on the NOFO Reference page →
MODULE 3 — DAY 1

Budget Development & Justification

Key Skill
SF-424A format, indirect cost rates (NICRA vs. de minimis), matching requirements, 2 CFR 200 cost principles
Deliverable
Draft federal grant budget with complete budget narrative  Work Product

What This Module Is About

Federal grant budgets are not spreadsheets — they are arguments. This module walks you through SF-424A line by line, explains the cost principles under 2 CFR 200 that determine what is allowable and what is not, and teaches you to write a budget narrative that makes every cost feel not just justified but necessary.

By the End of This Module, You Will Be Able To:

  1. Complete an SF-424A correctly, including all budget object class categories and the non-federal share column.
  2. Apply 2 CFR 200 cost principles — allowable, allocable, and reasonable — to distinguish compliant from non-compliant budget line items.
  3. Differentiate between a Negotiated Indirect Cost Rate Agreement (NICRA) and the de minimis 10% indirect cost rate, and determine which applies to a given grant situation.
  4. Calculate and document matching requirements (cash and in-kind), with specific attention to Head Start's 20% non-federal share requirement.
  5. Write a budget narrative that justifies each line item using the language and logic of the scoring criteria — not just accounting descriptions.

Session Agenda

SEGMENT | TIME | FOCUS
The SF-424A Line by Line | 20 min | Budget object class categories, non-federal share column
2 CFR 200 Cost Principles in Plain Language | 20 min | Allowable, allocable, reasonable — with real examples
Indirect Costs: NICRA vs. De Minimis | 15 min | MTDC calculation, when each rate applies
Match and In-Kind: Getting It Right | 15 min | Head Start 20% match, cash vs. in-kind documentation
Hands-On Exercise: Draft Budget + Budget Narrative | 20 min | Complete SF-424A and write justification narrative

Key Teaching Points & Concepts

The SF-424A Line by Line

  • Section A (Budget Summary): total federal and non-federal share by program year
  • Section B (Budget Categories): Personnel, Fringe, Travel, Equipment, Supplies, Contractual, Other, Indirect
  • The non-federal share column is not optional — it is a compliance requirement for most federal programs
  • Equipment threshold under 2 CFR 200: items over $5,000 per unit are equipment; below is supplies

2 CFR 200 Cost Principles in Plain Language

  • Allowable: the cost type is permitted under the program's authorizing statute and 2 CFR 200
  • Allocable: the cost benefits the grant program in proportion to the amount charged
  • Reasonable: the cost reflects what a prudent person would pay under similar circumstances
  • All three tests must be met — a cost that is allowable but not allocable is still non-compliant

Indirect Costs: NICRA vs. De Minimis

  • NICRA: a negotiated rate agreement with your cognizant federal agency — apply the negotiated rate to the approved base
  • De minimis: 10% of Modified Total Direct Costs (MTDC) — available to organizations that have never had a NICRA. Note: the 2024 revision of 2 CFR 200 raised the de minimis rate to 15% of MTDC and extends it to any organization without a current negotiated rate; confirm which version governs your award.
  • MTDC excludes: equipment, capital expenditures, patient care, rental costs, tuition, and subawards over $25,000
  • For LIHEAP and Weatherization: check program-specific indirect cost limitations in the NOFO
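To make the MTDC arithmetic concrete, here is a minimal Python sketch. Only the exclusions named above are modeled, and all budget figures are hypothetical:

```python
# De minimis indirect cost sketch: apply the rate to Modified Total Direct
# Costs (MTDC). Only the exclusions named in this module are modeled here;
# 2 CFR 200 lists more (patient care, rental costs, tuition).
# All budget figures below are hypothetical.

DE_MINIMIS_RATE = 0.10   # as taught in this module; the 2024 revision of
                         # 2 CFR 200 raised the de minimis rate to 15%
SUBAWARD_CAP = 25_000    # only the first $25,000 of each subaward counts

def mtdc(direct_costs: dict[str, float], subawards: list[float]) -> float:
    """MTDC base: excluded categories removed, subawards capped at $25K each."""
    excluded = {"Equipment", "Capital Expenditures"}
    base = sum(v for k, v in direct_costs.items() if k not in excluded)
    return base + sum(min(s, SUBAWARD_CAP) for s in subawards)

direct = {"Personnel": 200_000, "Fringe": 50_000, "Travel": 10_000,
          "Equipment": 30_000, "Supplies": 15_000}
subawards = [40_000, 20_000]  # two hypothetical subawards

base = mtdc(direct, subawards)        # 275,000 + 25,000 + 20,000 = 320,000
indirect = base * DE_MINIMIS_RATE     # 32,000 at the 10% rate
print(f"MTDC base: ${base:,.0f} | indirect: ${indirect:,.0f}")
```

Note what the cap does: the $40,000 subaward contributes only $25,000 to the base, while the $20,000 subaward counts in full. Applying the rate to total direct costs instead of MTDC is a common and costly error.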

Match and In-Kind: Getting It Right

  • Head Start minimum: 20% of total project cost must come from non-federal sources
  • In-kind match must be documented with the same rigor as cash: volunteer hours at fair market value, donated space at appraised rate
  • Match cannot be from other federal funds (with limited exceptions)
  • BIL Weatherization: per-unit cost cap applies — know the current cap before budgeting
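The Head Start match calculation trips people up because the 20% applies to total project cost, not to the federal award. A minimal sketch, with a hypothetical award amount:

```python
# Head Start non-federal share sketch: the 20% minimum applies to TOTAL
# project cost. With the federal award fixed, the required match is
# award / 0.80 * 0.20, i.e., 25% of the award amount.

def required_match(federal_award: float, match_pct_of_total: float = 0.20) -> float:
    """Minimum non-federal share when the federal award amount is known."""
    federal_pct = 1.0 - match_pct_of_total
    total_project_cost = federal_award / federal_pct
    return total_project_cost * match_pct_of_total

award = 1_000_000  # hypothetical federal award
match = required_match(award)
print(f"Federal award ${award:,.0f} -> minimum match ${match:,.0f}")
# A common error is computing 20% of the award ($200,000); the correct
# minimum here is $250,000, bringing total project cost to $1,250,000.
```

The $50,000 gap between the two calculations is exactly the kind of shortfall that surfaces at audit, which is why this module insists on documenting the match against total project cost.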
Hands-On Exercise — Draft Federal Grant Budget with Complete Budget Narrative
Work Product
  1. Open the SF-424A template provided. Fill in Section A (Budget Summary) for a hypothetical or real grant program — use your organization's actual program structure if possible.
  2. Section B (Budget Categories): For each object class (Personnel, Fringe, Travel, Equipment, Supplies, Contractual, Other, Indirect), enter a line item with a dollar amount.
  3. For Personnel: list each position, FTE percentage, annual salary, and grant-funded portion. Example: "Head Start Teacher, 1.0 FTE, $38,000/yr, 100% grant-funded = $38,000."
  4. For Indirect Costs: identify whether your organization has a NICRA on file. If yes, apply the negotiated rate to the correct base. If no NICRA, apply the de minimis 10% rate to Modified Total Direct Costs (MTDC — which excludes equipment, capital expenditures, and subawards over $25K).
  5. Non-Federal Share column: calculate and enter your match. For Head Start: minimum 20% of total project cost. Document whether match is cash or in-kind.
  6. Budget Narrative: for each line item, write 2–4 sentences that justify the cost: what it is, why it is necessary for the program, how the amount was calculated, and which 2 CFR 200 cost principle makes it allowable. Do not just restate the number — make the argument.

In-Person Intensive — Facilitator Notes

  • Final module of Day 1 (approximately 1:30 PM after lunch break). Have SF-424A templates pre-printed.
  • Walk through a completed sample budget before the exercise.
  • For Coastal Plain specifically, reference the Head Start 20% match, LIHEAP's annual draw structure, and the BIL Weatherization per-unit cost cap as real-world examples.
  • End Day 1 with a 10-minute recap: what does the participant take home tonight, and what should they review before Day 2?
  • Alternative format note: In a virtual live format, the SF-424A walkthrough uses a shared screen with a pre-populated sample budget that participants can copy and modify.
Live NOFO Examples for This Module
Teaching Resource
Head Start Competitive (CFDA 93.600): The primary budget exercise baseline. Use the SF-424A format with the 20% non-federal share column populated. Reference the approved budget structure from an existing Performance Agreement as the real-world model.
AmeriCorps State and National (CFDA 94.006): Match calculation uses an MSY (Member Service Year) cost formula — a different approach from Head Start’s percentage-of-total-cost method. Strong teaching contrast for the match and indirect cost section.
USDA Community Facilities (CFDA 10.766): The income-based match calculation — where the grant percentage is higher in lower-income service areas — illustrates that match is not always a fixed rate. Rural counties below state median income often qualify for the most favorable grant-to-match ratio.
View full NOFO details on the NOFO Reference page →
MODULE 4 — DAY 2

Evaluation Design That Scores

Key Skill
Federal evidence hierarchy, formative vs. summative design, measurable indicators, data collection planning
Deliverable
Evaluation plan with methodology and indicators  Work Product

What This Module Is About

Evaluation is the section where most intermediate grant writers leave points on the table. This module teaches you how the federal evidence hierarchy works, how to design an evaluation plan that satisfies both formative and summative requirements, and how to write measurable indicators that reviewers score at the top of the scale.

By the End of This Module, You Will Be Able To:

  1. Explain the federal evidence hierarchy (strong evidence, moderate evidence, promising evidence, and theory) and identify what level of evidence each major federal program — including Head Start and CSBG — requires applicants to reference.
  2. Design a formative evaluation component that measures implementation fidelity during the grant period.
  3. Design a summative evaluation component that measures participant outcomes at program completion and connects to the long-term outcomes in the logic model.
  4. Write measurable performance indicators using the SMART framework (Specific, Measurable, Achievable, Relevant, Time-bound) for at least three program outcomes.
  5. Produce a complete evaluation plan that includes methodology, data collection instruments, data sources, reporting frequency, and the role of an internal or external evaluator.

Session Agenda

SEGMENT | TIME | FOCUS
How Federal Reviewers Score Evaluation Sections | 15 min | Common point-loss patterns, what top-scoring plans include
The Federal Evidence Hierarchy | 20 min | Strong, moderate, promising, theory — program-specific requirements
Formative vs. Summative Design | 20 min | Implementation fidelity, outcome measurement, baseline comparison
Writing Indicators That Score | 15 min | SMART framework applied to federal grant outcomes
Hands-On Exercise: Build Your Evaluation Plan | 20 min | Complete evaluation plan with methodology and indicators

Key Teaching Points & Concepts

How Federal Reviewers Score Evaluation Sections

  • Most intermediate writers describe what they will measure but not how — reviewers deduct points for missing methodology
  • Top-scoring evaluation plans name the instrument, the data source, the collection frequency, and the responsible party
  • Connecting evaluation to the logic model (Module 2) demonstrates coherent program design

The Federal Evidence Hierarchy

  • Strong evidence: randomized controlled trials (RCTs) — What Works Clearinghouse Tier 1
  • Moderate evidence: quasi-experimental designs — WWC Tier 2
  • Promising evidence: correlational studies with controls — WWC Tier 3
  • Theory: logic model and rationale without experimental evidence
  • Head Start requires applicants to reference evidence-based program models; CSBG references ROMA NPI data

Formative vs. Summative Design

  • Formative: measures implementation during the grant period — are you doing what you said you would do?
  • Summative: measures outcomes at the end — did participants achieve what you said they would achieve?
  • Both are required in most federal evaluation sections — do not submit only one
  • For Head Start: PIR data serves as the primary summative data source; monthly program data reviews are formative

Writing Indicators That Score

  • SMART: Specific, Measurable, Achievable, Relevant, Time-bound
  • Example: "By September 30, 2027, 85% of enrolled Head Start children will demonstrate age-appropriate development in at least 4 of 6 domains as measured by Teaching Strategies GOLD."
  • Avoid: "Participants will improve their outcomes" — this is not measurable and will not score
  • For CSBG: align indicators to ROMA National Performance Indicators (NPI) — reviewers recognize and reward this alignment
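A well-written SMART indicator is directly checkable against program data. A minimal sketch of checking the example indicator above; the assessment records are hypothetical:

```python
# Check a SMART indicator against assessment data: the share of children
# demonstrating age-appropriate development in at least 4 of 6 domains.
# Each record is a hypothetical count of domains met out of 6.

TARGET = 0.85
DOMAINS_REQUIRED = 4

def indicator_met(domains_met: list[int]) -> tuple[float, bool]:
    """Return (observed rate, whether the 85% target is met)."""
    meeting = sum(1 for d in domains_met if d >= DOMAINS_REQUIRED)
    rate = meeting / len(domains_met)
    return rate, rate >= TARGET

rate, met = indicator_met([6, 5, 4, 4, 3, 6, 5, 4, 2, 5])
print(f"{rate:.0%} meeting criterion; target met: {met}")  # 80%; False
```

The vague version ("participants will improve") cannot be checked this way at all, which is precisely why it does not score: a reviewer has no way to verify whether the program succeeded.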
Hands-On Exercise — Evaluation Plan with Methodology and Indicators
Work Product
  1. Open your logic model from Module 2. Identify 3 short-term outcomes you will evaluate.
  2. For each outcome, write one SMART indicator. Example: "By September 30, 2027, 85% of enrolled Head Start children will demonstrate age-appropriate development in at least 4 of 6 domains as measured by Teaching Strategies GOLD."
  3. For each indicator, identify: the data collection instrument or source (e.g., HSES, Teaching Strategies GOLD, ASQ-3, pre/post survey, case files), who collects the data, how often data is collected, and who analyzes and reports results.
  4. Design your formative evaluation: describe one process you will use to monitor implementation during the grant period (e.g., monthly program data review, quarterly site visits, fidelity checklists).
  5. Design your summative evaluation: describe how you will measure outcomes at the end of the grant period and how you will compare results to your baseline.
  6. Identify the evaluator role: will evaluation be conducted internally (name the position) or by an external evaluator? If external, note what qualifications are required.
  7. Write a 1-paragraph evaluation methodology statement suitable for a proposal narrative section.

In-Person Intensive — Facilitator Notes

  • Opens Day 2 at 9:00 AM. Reference the What Works Clearinghouse and the HHS evidence standards as context for the evidence hierarchy discussion.
  • For Coastal Plain, connect to the Head Start Program Information Report (PIR) and CSBG ROMA National Performance Indicators as existing data infrastructure they can leverage.
  • Distribute the evaluation plan template pre-printed.
  • Remind participants that the evaluation plan connects directly to the logic model from Module 2 — the short-term outcomes in Column 4 are the outcomes they are evaluating here.
  • Alternative format note: In a virtual live format, participants complete the evaluation plan in a shared template with breakout room time for small group review.
Live NOFO Examples for This Module
Teaching Resource
AmeriCorps State and National (CFDA 94.006): The best live example of prescribed federal performance measures. AmeriCorps National Performance Measures (NPMs) are required — applicants select aligned output and outcome measures from a defined menu. Walk through selecting an NPM for a hypothetical job skills training program as the SMART indicator exercise.
EPA Brownfields Job Training (CFDA 66.815): EPA explicitly scores on outcome measurement — job placement rates, wage levels, certifications achieved. These are measurable, bounded indicators that transfer well as a parallel example alongside AmeriCorps NPMs. Also illustrates that evaluation expectations vary by agency mission.
View full NOFO details on the NOFO Reference page →
MODULE 5 — DAY 2

Writing the Competitive Narrative

Key Skill
Competitive narrative structure, reviewer psychology, writing for the federal reviewer
Deliverable
Scored narrative outline / draft narrative section  Work Product

What This Module Is About

A technically sound proposal that is poorly written will not score at the top — because federal reviewers are human beings reading dozens of applications under time pressure. This module teaches you the structure, sequencing, and writing techniques that separate competitive narratives from complete ones, and gives you a practical system for drafting sections that are specific, evidence-backed, and reviewer-friendly.

By the End of This Module, You Will Be Able To:

  1. Apply the "reviewer's reading sequence" — understanding how a reviewer moves through a proposal and what they are looking for in the first paragraph of each section.
  2. Structure a narrative section using the PEAR framework (Problem → Evidence → Approach → Results) to answer the scored criteria completely and in the right order.
  3. Use the scoring criteria as a checklist while drafting, ensuring every point available is addressed with specificity and evidence.
  4. Avoid the five narrative mistakes that most commonly reduce scores: vague language, unsupported claims, buried program design, missing population data, and answering the wrong question.
  5. Produce a scored narrative outline for a complete proposal section, with draft language for the opening paragraph of at least two scored sections.

Session Agenda

SEGMENT | TIME | FOCUS
How Reviewers Actually Read Your Proposal | 15 min | Reviewer psychology, reading sequence, first-paragraph stakes
The PEAR Framework for Narrative Structure | 20 min | Problem → Evidence → Approach → Results applied to scored sections
Writing to the Criteria, Not the Prompt | 20 min | Scoring criteria as a checklist, sub-criterion mapping
The Five Narrative Killers | 15 min | Vague language, unsupported claims, buried design, missing data, wrong question
Hands-On Exercise: Scored Narrative Outline | 20 min | Build outline and draft opening paragraph using PEAR

Key Teaching Points & Concepts

How Reviewers Actually Read Your Proposal

  • Reviewers read under time pressure — they are looking for proof points, not prose
  • The first paragraph of each section sets the reviewer's expectation for the entire section — lead with your strongest content
  • Reviewers use the scoring criteria as a checklist — if a sub-criterion is not addressed, it is not scored
  • Clarity and specificity are competitive advantages — a reviewer who has to re-read a sentence has already lost confidence

The PEAR Framework for Narrative Structure

  • Problem: ground the section in community need data — specific, local, sourced
  • Evidence: cite the research or program model that supports your approach
  • Approach: describe your specific program design — what you will do, for whom, how often, delivered by whom
  • Results: preview the measurable outcomes you will achieve — connect to your logic model and evaluation plan

Writing to the Criteria, Not the Prompt

  • The application instructions tell you what to submit; the scoring criteria tell you what earns points — these are not the same
  • Map every sub-criterion to a paragraph or section of your narrative before you write a single word
  • Sequence your narrative in the order the reviewer will look for the information — highest-stakes content first

The Five Narrative Killers

  • Vague language: "we will work to improve outcomes" → replace with specific, measurable commitments
  • Unsupported claims: assertions without data, citations, or evidence
  • Buried program design: describing your approach in the needs section instead of the program design section
  • Missing population data: failing to quantify who you serve, where, and with what documented need
  • Response to the wrong question: answering what you want to say instead of what the criterion asks
Hands-On Exercise — Scored Narrative Outline / Draft Narrative Section
Work Product
  1. Select one scored section from your NOFO analysis worksheet (Module 1). Choose a section worth 20 or more points.
  2. Using your scoring map from Module 1, list every sub-criterion the reviewer will score within this section.
  3. For each sub-criterion, write 1–2 sentences describing exactly how your program addresses it. This is your outline.
  4. Sequence your outline in the order the reviewer will look for the information — lead with the highest-stakes content.
  5. Draft the opening paragraph of this section using the PEAR framework: start with the Problem (grounded in your community needs assessment data), add Evidence (cite a data source), introduce your Approach (your program design), and preview your Results (the outcome you will achieve).
  6. Review your draft: Does the first sentence tell the reviewer what this section is about? Does every sub-criterion appear? Is there any vague language ("we will work to," "we hope to," "we strive to")? If so, replace it with specific, measurable commitments.
  7. Trade your draft with the facilitator for real-time feedback.

In-Person Intensive — Facilitator Notes

  • Mid-Day 2 module (approximately 10:45 AM after a break). This module works best when the participant has a real, active NOFO.
  • If Coastal Plain has an upcoming Head Start continuation or discretionary grant, use it.
  • Emphasize that this module is where the work from Modules 1–4 comes together. The NOFO analysis (Module 1), logic model (Module 2), budget (Module 3), and evaluation plan (Module 4) are the raw materials — this module is about assembling them into a proposal.
  • Alternative format note: In a virtual live format, participants share their draft opening paragraph in the chat for group feedback and facilitator markup.
Live NOFO Examples for This Module
Teaching Resource
Head Start Competitive (CFDA 93.600): The gold standard for teaching the PEAR narrative framework. The Project Description section of the Head Start NOFO maps directly: Problem (community need data from CNA), Evidence (research base for Head Start model), Approach (program design tied to HSPPS), Results (school readiness outcomes per Teaching Strategies GOLD). Every scored narrative section has a named criterion and a point value — use the scoring map from Module 1 as the drafting checklist.
View the Head Start NOFO on the NOFO Reference page →
MODULE 6 — DAY 2

Post-Award Management & Compliance

Key Skill
SF-425, PIR, 2 CFR 200 compliance, subrecipient monitoring, audit readiness
Deliverable
Post-award compliance checklist and reporting calendar  Work Product

What This Module Is About

Winning the grant is only half the job. This module covers what happens after the award letter arrives — the reporting requirements, compliance obligations, and audit readiness practices that protect your funding, your organization's reputation, and your ability to compete for future grants.

By the End of This Module, You Will Be Able To:

  1. Identify the standard federal post-award reporting requirements for the grant programs Coastal Plain manages — including the SF-425 Federal Financial Report, the Head Start Program Information Report (PIR), and CSBG performance reports — and explain when each is due and who is responsible.
  2. Construct a grant reporting calendar that maps all reporting deadlines for an active grant portfolio.
  3. Apply 2 CFR 200 Subpart D requirements to post-award grant management — including procurement standards, property management, and subrecipient monitoring.
  4. Identify the conditions that trigger a Single Audit and explain how the Schedule of Expenditures of Federal Awards (SEFA) is prepared.
  5. Produce a post-award compliance checklist and reporting calendar that can be immediately applied to an active grant.

Session Agenda

SEGMENT | TIME | FOCUS
The Post-Award Landscape: What Grantees Are Required to Do | 15 min | Overview of post-award obligations, common compliance failures
Federal Reporting Requirements by Program | 20 min | SF-425, Head Start PIR, CSBG annual report, LIHEAP performance data
2 CFR 200 Subpart D in Practice | 20 min | Procurement, property management, subrecipient monitoring
Single Audit and SEFA Basics | 15 min | $750K threshold, SEFA preparation, audit readiness
Hands-On Exercise: Compliance Checklist + Reporting Calendar | 20 min | Build your 12-month reporting calendar and compliance checklist

Key Teaching Points & Concepts

The Post-Award Landscape

  • Post-award compliance is not optional — missed reports and compliance failures can result in repayment demands, award suspension, or debarment
  • The award document (Notice of Award) is the controlling document — read it before the NOFO for compliance purposes
  • Most federal programs have both financial reporting (SF-425) and programmatic reporting requirements — neither is optional, and each has its own deadline

Federal Reporting Requirements by Program

  • SF-425 Federal Financial Report: typically due 30–90 days after each reporting period; covers expenditures, unliquidated obligations, and program income
  • Head Start PIR: annual report due in July; covers enrollment, staff qualifications, health services, and family outcomes
  • CSBG Annual Report: covers ROMA NPI data, agency activities, and community impact
  • LIHEAP: annual performance data report due to HHS; covers households served, benefits provided, and leveraged resources

2 CFR 200 Subpart D in Practice

  • Procurement: must follow your organization's written procurement policy; federal minimums apply for competitive bidding thresholds
  • Property management: equipment purchased with federal funds must be inventoried, tagged, and tracked; disposition rules apply at closeout
  • Subrecipient monitoring: if you pass federal funds to another organization, you are responsible for their compliance — written agreements and monitoring are required
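The procurement point above is ultimately a decision rule. A minimal sketch, assuming the federal default thresholds in 2 CFR 200.320 (micro-purchase at $10,000 and simplified acquisition at $250,000 — your organization's written policy may set lower limits, and the stricter rule always governs):

```python
# Sketch: classify a purchase against the federal default thresholds in
# 2 CFR 200.320. An organization's written procurement policy may set
# LOWER limits; always apply the stricter of the two.
MICRO_PURCHASE_THRESHOLD = 10_000           # no competitive quotes required
SIMPLIFIED_ACQUISITION_THRESHOLD = 250_000  # informal "small purchase" quotes

def procurement_method(amount):
    """Return the minimum procurement method for a purchase of this size."""
    if amount <= MICRO_PURCHASE_THRESHOLD:
        return "micro-purchase (no quotes required if the price is reasonable)"
    if amount <= SIMPLIFIED_ACQUISITION_THRESHOLD:
        return "small purchase (informal price or rate quotes from several sources)"
    return "sealed bids or competitive proposals (formal procurement)"

print(procurement_method(4_500))
print(procurement_method(85_000))
print(procurement_method(600_000))
```

The point of the sketch is the structure, not the numbers: participants should substitute their own agency's policy thresholds, which may be stricter than the federal defaults.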

Single Audit and SEFA Basics

  • Single Audit threshold: organizations that expend $750,000 or more in federal awards in a fiscal year must have a Single Audit (note: the 2024 revision of 2 CFR 200 raises this threshold to $1 million for fiscal years beginning on or after October 1, 2024 — confirm which threshold applies to your audit year)
  • SEFA (Schedule of Expenditures of Federal Awards): lists all federal awards expended during the year by CFDA number, program name, and amount
  • Audit readiness: maintain organized grant files, document all cost allocations, and reconcile financial reports to general ledger monthly
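The SEFA and threshold logic above is simple arithmetic, which is worth seeing concretely. A minimal sketch — the award amounts below are hypothetical placeholders, not any real portfolio — that totals federal expenditures by CFDA number and applies the $750,000 threshold:

```python
# Sketch: a SEFA-style summary and Single Audit threshold check.
# All award data below is hypothetical/illustrative.
SINGLE_AUDIT_THRESHOLD = 750_000  # 2 CFR 200.501 threshold used in this course

# (CFDA number, program name, federal expenditures this fiscal year)
awards = [
    ("93.600", "Head Start", 2_400_000),
    ("93.569", "CSBG", 310_000),
    ("93.568", "LIHEAP", 580_000),
]

def sefa_summary(awards):
    """Total federal expenditures and whether a Single Audit is triggered."""
    total = sum(amount for _, _, amount in awards)
    return total, total >= SINGLE_AUDIT_THRESHOLD

total, needs_single_audit = sefa_summary(awards)
for cfda, name, amount in sorted(awards):
    print(f"{cfda}  {name:<12} ${amount:>12,.2f}")
print(f"Total federal expenditures: ${total:,.2f}")
print(f"Single Audit required: {needs_single_audit}")
```

Note the threshold is applied to total federal *expenditures* across all awards in the fiscal year, not to any single grant — an agency with three mid-sized grants can cross it even when no individual award does.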
Hands-On Exercise — Post-Award Compliance Checklist & Reporting Calendar
Work Product
  1. Select one active federal grant your organization manages (e.g., Head Start, CSBG, or LIHEAP).
  2. List all reporting requirements for that grant: financial reports (SF-425), performance/program reports (PIR, CSBG annual report, LIHEAP performance data), audit requirements, and any funder-specific reports.
  3. For each report: record the due date, the responsible staff position, the data sources required to complete it, and the consequence of a missed or inaccurate submission.
  4. Build a 12-month reporting calendar: map every deadline on a month-by-month grid. Identify months with multiple overlapping deadlines — these are your high-risk windows.
  5. Compliance checklist: for your selected grant, check off 10 standard 2 CFR 200 compliance items: approved budget on file, cost allocation plan documented, procurement policy followed, property inventory current, subrecipient agreements executed, indirect cost rate documented, matching contributions tracked, program income reported, closeout timeline confirmed, SEFA entry prepared.
  6. Identify the top 3 compliance risks for your organization based on what you learned today. Write one action item for each.
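Step 4's month-by-month grid can be built in a spreadsheet, or prototyped in a few lines of code. A minimal sketch — the report names and due months below are hypothetical placeholders, not authoritative deadlines; replace them with the dates from your own Notices of Award — that groups deadlines by month and flags the overlapping "high-risk windows":

```python
from collections import defaultdict

# Hypothetical reporting deadlines: (report name, due month 1-12).
deadlines = [
    ("SF-425 Q1", 1), ("SF-425 Q2", 4), ("SF-425 Q3", 7), ("SF-425 Q4", 10),
    ("Head Start PIR", 7),
    ("CSBG Annual Report", 3),
    ("LIHEAP Performance Report", 10),
]

def build_calendar(deadlines):
    """Group deadlines by month; months with 2+ reports are high-risk windows."""
    calendar = defaultdict(list)
    for report, month in deadlines:
        calendar[month].append(report)
    high_risk = sorted(m for m, reports in calendar.items() if len(reports) >= 2)
    return dict(calendar), high_risk

calendar, high_risk = build_calendar(deadlines)
for month in range(1, 13):
    print(f"Month {month:>2}: {', '.join(calendar.get(month, [])) or '-'}")
print("High-risk months (overlapping deadlines):", high_risk)
```

In this illustrative data, July (quarterly SF-425 plus the PIR) and October (quarterly SF-425 plus the LIHEAP report) surface as the high-risk windows — exactly the pattern step 4 asks participants to find in their own portfolio.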

In-Person Intensive — Facilitator Notes

  • Closes Day 2 (approximately 1:30 PM after lunch). This is the capstone module — connect it back to every prior module.
  • A strong NOFO (Module 1) sets up a realistic scope of work. A clear logic model (Module 2) makes performance reporting easier. A well-constructed budget (Module 3) reduces audit risk. An evaluation plan (Module 4) provides the data structure for progress reports.
  • End with a full course recap: what are the participant's top 3 action items from the 2 days? Have them write these down before receiving their Certificate of Completion.
  • Alternative format note: In a virtual live format, the course closes with a live Q&A and a shared action planning document that participants complete before logging off.
Live NOFO Examples for This Module
Teaching Resource
HUD Continuum of Care (CFDA 14.267 | FR-6800-N-25): The CoC’s consortium structure is the best live example for teaching subrecipient monitoring under 2 CFR 200.332 — the lead applicant is legally responsible for monitoring all member organizations. Walk through what a subrecipient monitoring plan looks like in practice.
AmeriCorps State and National (CFDA 94.006): AmeriCorps uses a specific post-award reporting system (eGrants) and requires quarterly progress reports — a good comparison to the Head Start PIR and CSBG annual report. Use it to show participants that post-award reporting systems are program-specific, not universal.
Formula programs (Head Start PIR, CSBG ROMA, LIHEAP, Weatherization DOE SWS, CACFP): Use the Formula Program Reference table from the NOFO Reference page to walk through the full post-award reporting landscape for a community action agency. This grounds the compliance checklist exercise in the participant’s actual grant portfolio.
View full NOFO details and the Formula Program Reference table on the NOFO Reference page →
COURSE TEACHING RESOURCE

Live Federal NOFO Reference

Six real federal funding opportunities used as teaching instruments throughout the course. Each is directly relevant to community action agencies, Head Start grantees, and nonprofits with federal funding portfolios — the same kinds of organizations course participants work in.

Module 1: Any NOFO below | Module 2: NOFOs 1, 3 | Module 3: NOFOs 1, 2, 5 | Module 4: NOFOs 2, 4 | Module 5: NOFO 1 | Module 6: NOFOs 2, 6
CFDA 93.600
MOD 1 MOD 2 MOD 3 MOD 5
HHS / Administration for Children and Families / Office of Head Start
Head Start / Early Head Start Competitive Grant
HHS-2025-ACF-OHS-CH-0085
Award: Up to $20,276,444 (~$70.7M total, ~9 awards) | Match: 20% non-federal share | Cadence: 5-year recompetition

Head Start is the gold standard teaching NOFO in this course — scoring criteria are explicit, weighted, and published. The 20% match requirement, SF-424A format, and PEAR-ready narrative structure make it a direct teaching instrument for four modules. For organizations like Coastal Plain EOA, this is also the highest-stakes grant in their portfolio.

View NOFO / Program Page →
CFDA 94.006
MOD 1 MOD 4 MOD 6
AmeriCorps (formerly CNCS)
AmeriCorps State and National Competitive Grants
Annual cycle — see AmeriCorps NOFO portal
Award: $150,000–$1.5M; MSY cost capped | Match: 24–50% sliding scale | Cadence: Annual

AmeriCorps makes the federal evidence hierarchy explicit and consequential — unlike most NOFOs where “evidence-based” is a preference, AmeriCorps programs that cannot demonstrate an evidence tier face reduced scoring. National Performance Measures (NPMs) are prescribed, making it the best live example for teaching SMART indicators in Module 4.

View NOFO / Program Page →
CFDA 93.569
MOD 1 MOD 2
HHS / Administration for Children and Families / Office of Community Services
Affordable Housing and Supportive Services Demonstration (AHSS)
Periodic discretionary — monitor ACF OCS funding page
Award: 18-month cooperative agreement (amount varies) | Match: 10% of total program costs | Cadence: Periodic discretionary

The AHSS sits directly in the community action lane — CAAs and CSBG-funded organizations only. The eligibility requirement (applicants must own affordable housing units) makes it a textbook go/no-go teaching exercise: read eligibility before touching program design.

View NOFO / Program Page →
CFDA 66.815
MOD 1 MOD 4
U.S. Environmental Protection Agency (EPA)
Brownfields Job Training Cooperative Agreements
Annual — FY 2027 NOFO expected mid-2026
Award: $200,000–$800,000 | Match: No formal match required | Cadence: Annual

EPA evaluates grants differently than HHS/ACF — comparing scoring criteria across agencies teaches participants that NOFO analysis skills are transferable. The clean logic chain (recruit → train in environmental remediation → place in employment) makes it an ideal logic model and evaluation design exercise for a workforce-adjacent CAA.

View NOFO / Program Page →
CFDA 10.766
MOD 1 MOD 3
USDA Rural Development
Community Facilities Direct Loan and Grant Program
Rolling / no fixed NOFO cycle
Award: Varies; grant % based on community MHI | Match: Varies by community income level | Cadence: Rolling / continuous

USDA Rural Development is an underused funding source for HHS-focused organizations. The income-based match calculation — where communities below median income qualify for a higher grant percentage — is a concrete Module 3 teaching point that goes beyond fixed-rate match formulas. Rural counties in a CAA’s service area often qualify for the most favorable terms.

View NOFO / Program Page →
CFDA 14.267
MOD 1 MOD 6
HUD / Office of Community Planning and Development
FY 2024 and FY 2025 Continuum of Care Competition
FR-6800-N-25
Award: ~$3.9B nationally; individual awards vary | Match: 25% non-federal match (most project types) | Cadence: Annual

The CoC’s consortium application structure is the best live example for teaching subrecipient monitoring under 2 CFR 200.332 — the lead applicant is responsible for monitoring all member organizations. The 25% match requirement offers a direct comparison to Head Start’s 20% and the AHSS’s 10%, reinforcing that match requirements are program-specific, not universal.

View NOFO / Program Page →
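The 25% / 20% / 10% comparison above hides a trap worth making explicit: a match percentage can be defined against the *total project cost* (Head Start's 20% non-federal share works this way) or against the *federal award*, and the dollar difference is material. A minimal sketch of both definitions — the $1,000,000 award is a hypothetical figure, and which formula applies to any given program must be confirmed in the NOFO and its governing regulations:

```python
# Sketch: two common definitions of a non-federal "match" percentage.
# Which one applies is program-specific -- always confirm in the NOFO.

def match_of_total(federal_award, match_pct):
    """Match as a share of TOTAL project cost (e.g., Head Start's 20%
    non-federal share): the federal award covers (1 - match_pct) of total."""
    total = federal_award / (1 - match_pct)
    return total - federal_award

def match_of_federal(federal_award, match_pct):
    """Match as a percentage of the FEDERAL award itself."""
    return federal_award * match_pct

award = 1_000_000  # hypothetical federal award
print(f"20% of total project cost: ${match_of_total(award, 0.20):,.0f}")
print(f"20% of federal award:      ${match_of_federal(award, 0.20):,.0f}")
```

On a $1M award, the two readings of "20% match" differ by $50,000 — a gap large enough to sink a budget at negotiation if the wrong formula was assumed.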

Formula Program Reference — Post-Award Teaching Tools

These are not competitive NOFOs. They are the formula and entitlement grants that most community action agencies already manage. Use them in Module 6 as post-award management and compliance teaching examples.

Program | Agency | CFDA | Key Reporting Requirement | Core Compliance Theme
Head Start (base grant) | HHS/ACF/OHS | 93.600 | Annual PIR via HSES; SF-425 quarterly | 20% non-federal share; 45 CFR Part 1302; CLASS monitoring
CSBG | HHS/ACF/OCS → State agency | 93.569 | Annual CSBG report; ROMA NPIs; Community Action Plan | Organizational Standards; SEFA entry; Community Needs Assessment required
LIHEAP | HHS/ACF/OCS → State agency | 93.568 | Annual LIHEAP performance report | Eligibility documentation; drawdown pacing; priority targeting
Weatherization (DOE) | DOE → State energy office | 81.042 | Annual production reports; DOE SWS audit | Per-unit cost caps; quality control inspections; SWS compliance
CACFP | USDA/FNS → State agency | 10.558 | Monthly claims via CACFP system | Meal pattern compliance; site monitoring
PREPARATION

Getting the Most Out of Your 2 Days

Before your in-person training, complete these three preparation items. Arriving prepared will allow you to use your own real grants — not hypothetical examples — throughout the exercises.

1

Bring a Live NOFO

Identify one federal NOFO your organization is likely to pursue — either currently open or one you anticipate will be released in the next 6 months. Print it or download it. We will use it as a working document throughout Day 1. If you are unsure which NOFO to bring, contact Anthony Bammer at [email protected] before the training date.

2

Pull an Approved Budget + Prior Narrative

Locate one approved federal grant budget from your current portfolio — your most recent Head Start Performance Agreement budget or CSBG contract budget works well. Bring the SF-424A and, if available, the approved budget narrative. We will use it as a real baseline for Module 3.

3

Print Your Current Logic Model (If One Exists)

If your organization has an existing logic model for Head Start, CSBG, or any other federal program, bring a printed copy. If you do not have one, that is fine — we will build one from scratch in Module 2. If you have a community needs assessment or program design document, bring that instead.

Questions Before Training?

Contact Anthony Bammer directly at [email protected] with any questions about what to bring or how to prepare. He will respond within 48 hours.

ENROLLMENT

Request This Training

Reach out to confirm your option selection. An engagement letter will be sent for signature within one business day.

Four Steps to Get Started

1

Select Your Option

In-Person Intensive — 2-day, 6-module format delivered at your facility.

2

Confirm via Email or Phone

Contact Anthony Bammer directly to confirm your training date and location.

3

W-9 & Payment

W-9 provided upon request. 50% deposit secures your training date.

4

Training Begins

Training begins within 5 business days of executed agreement and payment.

Anthony Bammer

Title Managing Partner, G1VE Technologies & G1VE Advisory
Location Atlanta, Georgia

In-Person Intensive

Training Fee $750
Payment Schedule Full payment upfront
Accepted Methods ACH transfer, check, credit card (via Stripe)
W-9 Provided upon request
Travel Instructor travel from Atlanta included. Lodging/meals not included — client provides or $150/day per diem applies if overnight stay required.
✓  Printed course materials, worksheets, and templates
✓  Real-time feedback on actual NOFOs, budgets, and proposals
✓  On-demand recordings for post-training review
✓  Email Q&A support — 90-day access (48-hr response)
✓  Post-training follow-up session (1 hr, virtual, within 30 days)
✓  Certificate of Completion