Grant writing is one of those workflows where 80% of the work is structurally the same and 20% is funder-specific, and the structurally-the-same 80% somehow takes the most time anyway. The organizational background paragraph. The theory of change. The budget narrative. The evaluation framework. The outcomes-to-date section. Each application asks for these in a slightly different shape, and the small reshaping consumes most of the application week.

A working notes setup doesn't write the grant for you. It holds the source material — your organizational record, your funder histories, your past applications, your evidence files — in one place where the agent can read across all of it and draft the next application grounded in your actual material instead of your end-of-Friday reconstruction. The fundraising-side cousin of this workflow lives in [AI for fundraising: draft decks, grants, and donor letters](/blog/ai-for-fundraising/), and the deeper donor-management loop is in [AI notes for fundraising and donor management](/guides/sales-accounts/ai-notes-fundraising-donor-management/).

## A vault that mirrors how applications actually flow

The shape that holds up across a grant calendar is one top-level page per active application — `Foundation X, Capacity Building 2026` — with sub-pages for the funder profile, the application requirements, the draft, and the evidence pack. A separate top-level `library` page holds the reusable organizational material: org background, theory of change, evaluation framework, outcomes data, board bios, key staff bios, prior wins.
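
As a sketch, the tree might look like this (the names are illustrative, not a required layout):

```
Foundation X, Capacity Building 2026
├── funder profile
├── application requirements
├── draft
└── evidence pack
library
├── org background
├── theory of change
├── evaluation framework
├── outcomes data
├── board bios
├── key staff bios
└── prior wins
```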

Capy supports unlimited page nesting, so a multi-year initiative with multiple funders, multiple sub-grants, and multiple reporting cycles can fan out without forcing you to flatten anything. Plain markdown matters because the agent can read across the library and the active application in one query when you ask it to draft the next answer.

## A library that's actually used

Every grant-writing operation has some version of a library — a shared drive of past applications, an "org background" document that gets pasted into everything, a folder of outcome data. The classic problem is that the library decays the moment it gets written. The org background is two years old. The outcomes are pre-pandemic. The theory of change has shifted but the library doc hasn't.

A working library is small, current, and actually read. Each top-level item is a single markdown page with a "last updated" line at the top. The org-background page is one page, not one folder. The theory-of-change page links to the actual program documents but is itself a 400-word distillation. The outcomes page has the latest evaluation numbers and a paragraph of context. (For the broader fundraising surface this slots into, see our writeup of [AI for fundraising](/blog/ai-for-fundraising/).)
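
A minimal sketch of one library page, with placeholder fields rather than a prescribed format:

```markdown
# theory of change

last updated: <date>

<~400-word distillation of the current change model>

source docs: [program design], [latest evaluation]
```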

When you start a new application, ask Capy to read across the library and draft the standard sections in the funder's preferred length and tone. The agent pulls from current source material. You edit, customize for the funder, and move on.

## Funder histories that compound

Each major funder has a personality — the questions they care about, the framing they respond to, the past applications that landed and the ones that didn't, the program officers' specific interests. Most grant writers carry this in their head. The cost is real: every program-officer turnover or grant-writer transition restarts the funder relationship from zero.

A `funder profile` page per major funder holds the basics: program areas, typical grant size, application cycles, recent funded projects, the program officer's name and what they care about. Drop public material on the page — the funder's annual report, recent press releases, strategic documents; PDFs auto-convert to markdown via docstrange, so all of it is searchable. Add your private notes: the conversations you've had with the program officer, the questions they asked on the last site visit, the framing they responded to in your last successful application.
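
A funder-profile page can stay this compact; the fields below are illustrative:

```markdown
# funder profile: Foundation X

last updated: <date>

- program areas: <...>
- typical grant size: <...>
- application cycles: <deadlines>
- recent funded projects: <...>
- program officer: <name>, <what they care about>

## private notes

- <site-visit questions, conversation notes, framings that landed>
```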

Before drafting a new application to that funder, ask Capy to read the funder profile and tell you what the funder is likely to care about most given the project you're proposing. The shape of the prep changes from "let me re-read the RFP" to "let me start with what I know about how they read."

## Past applications as raw material

The past applications you've written — funded or not — are the most valuable training corpus for the next one. They're already in the right voice, the right structure, and the right shape for funders. The standard problem is that they're scattered across Google Docs and shared drives.

Park each past application as a sub-page under the funder it went to. Note in the page header whether it was funded, along with any program-officer feedback you received. When you start a new application, ask Capy to read across the past applications to similar funders and surface the language that's worked, the framings that recur, and the outcomes data you've cited most often. The next draft starts from a structured assembly of your prior work, not from a blank page.
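
A few header lines per past application give the agent enough to weight funded work over declined work; something like:

```markdown
# Capacity Building 2024: application

- outcome: funded | declined
- decision date: <date>
- program officer feedback: <verbatim notes, if any>
```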

This is the kind of compounding that solo grant writers and small development teams rarely achieve because the capture cost feels too high. A vault makes it the cheaper option.

## An evidence pack the agent helps you keep current

Most applications require an evidence pack: outcomes data, evaluation reports, financial statements, organizational chart, board roster, recent program reports. Each piece lives somewhere different and the assembly cost per application is real.

A `current evidence pack` page in the library holds the latest version of each piece. PDFs of evaluation reports drop in and auto-convert to markdown via docstrange so the agent can quote from them. Financial statements stay as PDFs but become searchable. Outcomes data sits as a markdown table that updates quarterly. The page header tracks "last updated" per item.
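
One way to shape that page, assuming a simple table carries the per-item dates:

```markdown
# current evidence pack

| item                    | last updated |
| ----------------------- | ------------ |
| evaluation report (PDF) | <date>       |
| financial statements    | <date>       |
| org chart               | <date>       |
| board roster            | <date>       |

## outcomes (updated quarterly)

| metric        | FY24  | FY25  |
| ------------- | ----- | ----- |
| <metric name> | <...> | <...> |
```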

When the next application requires the evaluation report's executive summary, ask Capy to pull the relevant paragraphs and adapt them for the application's word limit. When it requires the latest outcomes data in narrative form, the agent reads the table and writes the paragraph. The evidence pack stops being the bottleneck of the week.

## Drafting the application from the library and the funder profile

The actual draft is the part everyone dreads, but it's worth inverting: the draft isn't where the work lives. The work lives in keeping the library and the funder profile current. Once those are right, the draft is the agent doing assembly and you doing the judgment edits.

Tell Capy: "draft the project narrative for this application using our current theory-of-change page, the outcomes data from the evidence pack, the prior application to this funder for the structural cues, and the funder profile to weight which framings to lead with." You get a first draft that's grounded in your actual material and shaped for the actual funder. You edit the parts that need your judgment — the project-specific details, the budget rationale, the tone — and ship. The agent acting on the document directly is the [Cursor-for-documents](/blog/claude-code-for-documents/) idea applied to the application doc.

Applications that historically took a week take a fraction of that, not because the writing got faster but because the assembly stopped happening from scratch.

## Track applications across the calendar

A grant calendar with 15–30 active applications a year is hard to hold in your head. Which ones are in draft, which are submitted, which are in review, which got declined and why, which got funded and what the reporting cycle is.

An `applications` inline database in the library page captures this, with rows for funder, project, amount requested, submission date, status, decision date, and notes on the outcome. The database lives directly in the markdown page via the `:::database:::` directive — alongside the prose context, not in a separate tracker.
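
The exact body syntax of the `:::database:::` directive isn't spelled out here, so treat this as an illustrative sketch of the columns rather than copy-paste markup:

```markdown
:::database
funder | project | amount requested | submission date | status | decision date | outcome notes
Foundation X | capacity building | <amount> | <date> | in review | <date> | <notes>
:::
```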

Ask Capy to read across the database periodically and surface patterns: the funders most likely to fund our work, the project types that get funded most often, the rejection reasons that recur. Some patterns are actionable (this funder consistently declines our request size, so try a smaller one), some are diagnostic (we're getting declined for the same reason across three funders, so the framing needs work).

## Reporting cycles that don't catch you off guard

Funded grants come with reporting requirements, and the memory of them decays between cycles. The mid-year report comes due, you can't remember exactly what the funder asked for last time, and you spend a day reconstructing the format.

A `reporting` sub-page per funded grant holds the reporting requirements, the past reports, and the dates of the next ones. Capy's `web_search` tool can pull the funder's published reporting guidance into the page when you need to update it. Before each report, ask the agent to draft the next one using the prior format, the latest outcomes data, and the program updates from the relevant operating pages.
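
A reporting sub-page can be as small as this (structure illustrative):

```markdown
# reporting: Foundation X, Capacity Building 2026

- next report due: <date>
- format: <funder's template or published guidance>

## requirements

<guidance pulled in via web_search>

## past reports

- <mid-year report> (sub-page)
- <annual report> (sub-page)
```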

The report stops being a sprint and starts being a continuous output of the grant work the team is doing anyway.

## What this isn't

Capy isn't a grants-management database, isn't a funder discovery service, and isn't a substitute for the relationships with program officers that make grants actually happen. The vault holds the *writing and tracking* part of the grant cycle — the library, the funder histories, the past applications, the evidence pack, the active draft, the application calendar — which is the part currently spread across Google Drive, a shared inbox, and the executive director's memory.

It's also single-user by design. The grant writer (or executive director who's writing) owns the vault. Outputs ship to funders as applications and reports through the channels they already use; internal review happens through whatever document-sharing system the org already runs.

## A small first test

The cheapest way to see whether this fits is to pick one upcoming application — even one you've already started — and try it in this shape. Drop the funder's RFP onto a Capy page (PDF auto-converts to searchable markdown), pull your current org background and a relevant past application from the library, and ask Capy to draft the project narrative section. If the draft is closer to ship-ready than what you'd produce in the same time from a blank page, you've got a sense of what running the rest of the grant calendar in this shape would do.

[Try Docapybara free](/accounts/signup/). Load one funder's history and your current evidence pack, and see what the agent does with the next application.