A regulatory submission is a slow-motion writing project layered on top of a fast-moving operation. The operation has been running for months — generating the data, the documents, the customer interactions, the safety reports, whatever the regulator wants to see. The submission has to assemble all of that into a coherent narrative under deadline pressure, in a format the regulator expects, with citations back to the underlying record. Most teams write the submission the month before the deadline and discover, painfully, how much of the source material is sprawled across systems and people.
A vault of plain markdown notes with an integrated agent isn't a substitute for the regulatory work itself — that's still a careful exercise that depends on people who know the rules. But it does make the source-material side tractable: holding the operational record, the prior submissions, the regulator correspondence, and the open questions in one place where the agent can read across all of it. The same shape underlies AI notes for compliance and audit preparation and the rationale discipline of contract negotiation, both of which intersect with regulatory work in most operating businesses.
A vault shaped around the submission cycle
The shape that holds up across submission types is roughly: one top-level page for the regulatory function, with sub-pages for each active submission, the prior-submissions archive, the regulator correspondence threads, the standard operating procedures referenced in submissions, and the open questions database. Each active submission sub-page holds the running outline, the source-material bucket, the per-section drafts, and the review log.
Capy supports unlimited page nesting, so a complex multi-volume submission can fan out by section, exhibit, or jurisdiction, without forcing that structure on a single-form filing. The whole vault is plain markdown. That matters because when you sit down to draft a section, you ask the agent to read the relevant operational notes, the prior submission's parallel section, and any new regulator guidance, and to draft a v1. You're editing, not writing from a blank page.
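As a sketch, the page tree for that shape might look like this (the names are illustrative, not a required convention):

```
Regulatory/
├── Submissions/
│   ├── 2025 Annual Report/    (active: outline, source material,
│   │                           per-section drafts, review log)
│   └── Archive/               (prior submissions, converted to markdown)
├── Correspondence/            (one sub-page per regulator relationship)
├── SOPs/                      (procedures referenced in submissions)
└── Open questions             (inline database)
```

A single-form filing collapses to one sub-page; a multi-volume submission fans the drafts folder out by section or exhibit.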
Operational source material captured the moment it lands
The painful part of submission work isn't the writing — it's the source-material assembly. The customer complaint that should have been logged in week one. The protocol deviation that the operations lead mentioned in passing in March. The supplier qualification that the procurement team handled six months ago. By the time you're drafting the submission, those events are scattered across emails, ticketing systems, and people's memories.
A working setup: a "source material" page per submission category where the operations team drops short entries as events happen. Two minutes per entry. Tag the date, the type, and a one-line description. Over a quarter, the page becomes a primary-source archive of the operational reality the submission has to describe. (The capture-while-it's-fresh habit is the same one behind AI notes for documenting lessons learned after every project — different domain, identical mechanic.)
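A minimal entry shape, assuming a date/type/description convention (the fields here are illustrative, not a Capy requirement):

```markdown
## 2025-03-14 · protocol deviation
Missed calibration check on line 2, caught same day and corrected.
QA notified; corrective action logged in the ticketing system.
```

Anything this short gets written; anything longer gets deferred and then forgotten.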
When you draft the submission, ask the agent to read the source material for the relevant period and propose how each event maps into the submission's required sections. Edit. The submission is grounded in what actually happened, not in what people remember happening.
Prior submissions that teach the next one what to be
Most regulated businesses have a stack of prior submissions sitting in Drive or a document-management system. Each one was written by someone slightly different, reviewed by someone different, and followed the format that was current at the time. The institutional memory of "how we write our submissions" lives in three or four people's heads.
Drop the prior submissions on the prior-submissions archive page. They auto-convert to markdown via docstrange, which means the agent can read them as searchable text the same as any other note. When you draft the next submission, ask the agent to read the last three submissions of the same type and follow the structure, voice, and level of detail that the regulator has accepted. The agent isn't generating from generic web data; it's generating in your team's actual voice with your team's actual structure.
You edit. The first draft lands in the format the reviewer expects, which means the internal review cycle gets shorter. (The same drafting-from-prior-work habit makes the investor-relations and board updates writable in a sitting.)
Regulator correspondence held alongside the file
Most regulator interactions get filed as PDF attachments to emails. The cover letter, the request for information, the deficiency notice, the response, the follow-up. By the time you're preparing the next submission to the same regulator, the prior correspondence is scattered across inboxes and you can't quite remember what was outstanding.
A working setup: a sub-page per regulator relationship with the running thread of meaningful exchanges. Drop the correspondence as it lands — the regulator's letters auto-convert to markdown the same way. Two minutes to add the entry. Over time, the page becomes the primary-source archive of the relationship.
Before drafting the next submission, ask the agent to read the regulator's correspondence and surface anything that's still outstanding from the last cycle, anything they've signaled they care about, anything that suggests their current focus. Your submission addresses those things proactively instead of being surprised by the deficiency notice three months later.
A decisions log for the rationale behind each submission choice
Submissions force a steady stream of judgment calls — what to include, what to disclose, what to cite, how to characterize a borderline event. Each call gets made in a working session and forgotten by the next submission cycle. When the regulator pushes back, or when the next cycle requires a similar call, the rationale is gone.
The fix is mechanical: an inline decisions database in the submission page via the :::database::: directive, with rows for date, decision, rationale, who made the call, and what triggered it. The database lives directly in the page, not in a separate tab.
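Rendered as a sketch, an entry might look like this (the directive name comes from the product; the exact syntax, columns, and row are illustrative):

```markdown
:::database
| Date       | Decision                         | Rationale                               | Who  | Trigger                      |
|------------|----------------------------------|-----------------------------------------|------|------------------------------|
| 2025-04-02 | Report the Q1 deviation as minor | Below threshold in current guidance     | R.M. | QA review of March incidents |
:::
```

The point is less the format than the location: the rationale sits next to the draft it justifies, where the agent can read both.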
After every working session on the submission, ask the agent to read the recap and propose entries. You confirm or edit; the rationale lands in writing while it's still fresh. When the regulator asks why you characterized a particular event a particular way, you have the actual reasoning, not a reconstructed memory.
PDFs from regulators and consultants, finally readable
Regulatory work generates a steady stream of long documents you should read carefully and probably won't finish carefully. The new guidance document from the regulator. The consultant's gap analysis. The competitor's publicly-filed submission you're trying to learn from. Most of these get skimmed once and lost.
Drop the PDFs on the relevant page. They auto-convert to markdown via docstrange, same as the prior submissions, so the agent can read them as searchable text. Ask the agent to summarize the new guidance in plain English with the changes from the prior version called out separately. Or to read a competitor's publicly-filed submission and tell you how their structure differs from yours. Or to compare two consultants' gap analyses and tell you where they agree and disagree.
The conversion runs once per upload and the document stays searchable from then on. This is what makes "chat with the regulatory guidance" actually work — the agent isn't running OCR on every query; it's reading text it already has.
The same agent helps the internal review without replacing the human reviewer. Before the human review pass, ask the agent to compare the submission against the prior submission of the same type and surface anything structurally different — sections that are shorter, sections that omit a category previously included, citations that point to outdated sources. The agent isn't catching the substantive errors that need an expert eye; it's catching the structural slips that distract the reviewer from the real work. The reviewer then spends their time on the substantive parts. (The same review-augmentation pattern shows up in our writeup of contract negotiation with AI notes — different artifact, same mechanic.)
What this isn't
Capy doesn't claim regulatory certifications and isn't a substitute for the validated systems your regulators expect. The system of record for the actual submission, the validated document-management system, the audit trail your regulator will inspect — those still live in the tools your industry's compliance regime requires. Capy is for the unstructured side: the source-material capture, the prior-submission reference, the regulator-correspondence thread, the decision rationale, the draft work. That's the part that's currently sprawled across email, Drive, and people's memories.
It's also single-user by design. One regulatory lead, one vault. If your team needs a multi-user shared workspace where the submission team edits the same artifacts with role-based permissions, that isn't this product. The shape that fits is the regulatory lead running the personal connective layer alongside the team's validated tools. Pricing tiers are on the pricing page.
A note on responsibility: the regulatory determinations themselves — what to include, how to characterize, whether the submission is complete and accurate — remain the regulated entity's responsibility under whatever regime applies. The agent is a writing and reading assistant. It is not a regulatory consultant.
A small first test
Take the most recent submission you sent. Drop it on a Capy page along with one or two months of operational notes from the period it covers, and one piece of regulator correspondence from the cycle. Ask the agent to read across all of it and draft a one-page summary of: what's likely to come up in the next cycle, what's still outstanding from the regulator, and what operational events from the period weren't captured in the submission. If the summary catches something you'd otherwise have walked into the next cycle without addressing, that's the agent doing the kind of anticipation regulatory work demands and rarely gets.
Try Docapybara free. Load one prior submission and one quarter of operational notes and see what falls out.