The auditor's email arrives on a Tuesday: site visit in three weeks, here's the document request list. You scan the list and your stomach tightens. Half the policies they're asking for are in a Drive folder that nobody has touched since the last audit. The training records are split across the LMS exports, the HR system, and a spreadsheet someone updates manually. The vendor risk assessments exist but the most recent one is from fourteen months ago and the original author left the company in October. You have three weeks.
This is how most audit prep goes — three weeks of frantic stitching after a year of nothing. The audit is rarely the problem. The lack of a working evidence trail in the eleven months between audits is the problem.
This post is about keeping the evidence trail in one searchable place all year, so audit prep is a confirmation pass rather than a reconstruction project. A scope note: a notes app is a place for your own structured private notes — policies, evidence, working trackers, audit-prep material. It is not itself a regulatory or compliance platform. Whether your evidence handling meets the obligations of a particular framework is a judgment you make with your compliance and legal advisors. We're a notes app.
Why audit prep is usually painful
The honest reason audits feel like fire drills is that compliance documentation is invisible work. Nobody notices when the vendor assessment is current. Everybody notices when it's missing. So policies get written, used a few times, and then drift behind whatever the team is actually doing.
A few cross-cutting patterns make it worse:
- Evidence is scattered across systems. Policies in Drive. Training records in the LMS. Vendor assessments in a shared drive nobody opens. Incident logs in a spreadsheet. Risk register in a different spreadsheet. Pulling all of it together for an auditor is a project in itself.
- Drift is invisible. A process changes in March. The policy doc that describes the process doesn't get updated. By the time of the next audit in November, the gap is real but nobody noticed.
- Cross-references are silent. Your access-control policy references the offboarding procedure. The offboarding procedure references the equipment-return checklist. The checklist references a tool you stopped using. Nobody re-reads twelve docs in one sitting to catch the chain.
The fix isn't a more expensive compliance tool. The fix is keeping the working trail in one place where an AI assistant can actually read across all of it. The same shape applies to adjacent operational documentation — see Standard Operating Procedures, Without the Wiki Maintenance Tax and Build a Company Wiki from Casual Notes.
The shape: one vault, one structure, all year
In Docapybara, the audit-prep vault is a single nested workspace where every piece of compliance evidence lives as a markdown page. A typical structure:
- Compliance → Policies (one page per policy, kept current)
- Compliance → Evidence (organized by control area: Access Control, Change Management, Vendor Risk, Incident Response, etc.)
- Compliance → Trackers (live databases for things like vendor reviews, training completion, access reviews)
- Compliance → Audits → 2026-Q2 (the working folder for an active audit, with the document request list and your status against each item)
- Compliance → Reference (the framework requirements themselves — SOC 2 trust criteria, ISO 27001 controls, whatever you map against)
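Laid out as a folder tree (the nesting depth and control-area subfolders are illustrative, not prescribed):

```
Compliance/
├── Policies/
├── Evidence/
│   ├── Access Control/
│   ├── Change Management/
│   ├── Vendor Risk/
│   └── Incident Response/
├── Trackers/
├── Audits/
│   └── 2026-Q2/
└── Reference/
```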
Page nesting goes as deep as needed. Plain markdown means the pages are searchable, copyable, and exportable. When the auditor asks for the access-review evidence, you can copy out the relevant page as text. When a regulator changes a control description, you can paste the new text in place.
A live database of every control and its evidence
This is where audit prep stops being a frantic stitching project and becomes a steady-state activity. A :::database::: directive embeds a live database directly inside any markdown page. So your control-tracking page can include a table with one row per control, columns for control ID, description, owner, last-reviewed date, evidence location (link to the relevant evidence page), status (Current, Needs Review, Out of Date, Gap), and notes.
Six column types are available, which covers most compliance-tracking shapes. Sort by last-reviewed date and you see what's stale. Filter by status and you see the gaps. Filter by owner and you know who to nudge.
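As a sketch, a control-tracking page might look something like the following. The table-under-directive shape is an assumption about how `:::database:::` is written, not documented syntax; the columns and statuses are the ones described above.

```markdown
## Control tracker

:::database:::
| Control ID | Description            | Owner  | Last reviewed | Evidence                     | Status       |
| ---------- | ---------------------- | ------ | ------------- | ---------------------------- | ------------ |
| CC6.2      | Access provisioning    | J. Kim | 2026-03-14    | [Q1 access review](link)     | Current      |
| CC6.3      | Logical access removal | J. Kim | 2025-11-02    | [Q4 access review](link)     | Needs Review |
```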
When the assistant updates a row — "mark the access-review control as completed for Q2, attach the evidence page, and set next-review for Q3" — the change is live across the page. When you ask "what controls are due for review in the next 30 days?", it reads the database and tells you.
For vendor risk specifically, a similar database tracks vendors with columns for vendor name, last assessment date, risk tier, contract end, owner, status. When you ask "which vendors haven't been reassessed in over 12 months?", the answer is one query away.
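Under the hood, that question is just a date filter over the vendor table. A minimal Python sketch of what "not reassessed in over 12 months" means (the vendor rows and the fixed "today" are made up for illustration):

```python
from datetime import date, timedelta

# Hypothetical vendor rows, shaped like the tracker described above
vendors = [
    {"name": "Acme Hosting", "last_assessment": date(2025, 1, 10)},
    {"name": "PayFlow",      "last_assessment": date(2026, 3, 2)},
    {"name": "MailBlast",    "last_assessment": date(2024, 11, 20)},
]

today = date(2026, 4, 1)              # fixed so the example is reproducible
cutoff = today - timedelta(days=365)  # roughly 12 months back

# Vendors whose last assessment falls before the cutoff are overdue
stale = [v["name"] for v in vendors if v["last_assessment"] < cutoff]
print(stale)
```

The point isn't that you'd write this yourself — it's that the query is simple enough to be answered instantly once the dates live in one structured table instead of a shared drive.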
The agent reads across your evidence and finds gaps
Capy, the assistant inside Docapybara, has 27 tools and reads across your entire vault when you ask. The kinds of questions that become answerable in seconds:
- "For SOC 2 control CC6.3 (logical access removal), pull the evidence we've collected this year and tell me whether anything's missing." The agent finds the relevant evidence pages, summarizes what's there, and flags what looks incomplete.
- "My access control policy says we review user access quarterly. Do my evidence pages show four reviews completed this year?" It cross-references the policy text against the evidence trail and reports.
- "My change management policy references a CAB approval step. Is that consistent with how the engineering team's recent change log describes the actual process?" It reads both pages and surfaces the contradiction.
That third one is the part that pays for the rest of the setup. Auditors find drift between what the policy says and what the evidence shows. Catching that drift before the auditor does is how you pass cleanly. Wiki tools assume each page is self-contained. Real compliance docs overlap — the access policy touches offboarding touches change management touches incident response. When the agent can read them all at once, it spots the inconsistencies humans miss. The agent-acts-on-docs idea behind that is described in Claude Code for Documents, and the related incident-response thread shows up in Running an Incident with AI Notes.
PDFs of policies, framework requirements, and external evidence
Most compliance work involves PDFs — the framework requirements themselves (SOC 2 trust criteria PDF, ISO 27001 controls), past audit reports, signed vendor security questionnaires, attestations from third parties, scanned signed policies. They're useful as reference and a hassle to actually use, because their text sits locked in a format that your notes, your search, and your assistant can't read across.
Drop the PDFs into Docapybara and the conversion pipeline turns each one into markdown the agent can read. So when you ask "what does our prior SOC 2 report say about the audit findings on logical access?", the agent pulls the relevant section as text. When you ask "which of our vendors' SIG questionnaires confirm they have an SDLC?", it scans across the questionnaire PDFs and gives you the list.
The original PDF stays one click away when you need to point the auditor at the actual signed document. The text version makes the content searchable. The institutional record stops being a closed paper file.
Recording the walkthrough so the policy reflects reality
Some of the most valuable evidence for an audit is the walkthrough — the conversation with the control owner about how the process actually runs. Auditors do these in person. Internal teams should be doing them too, before the audit, to catch drift.
Docapybara records audio inside the workspace and transcribes with speaker labels — so when the security engineer walks you through how the access review actually happens, the conversation is captured. You can ask the assistant "turn this transcript into a process narrative I can compare against the written policy." What comes back is a structured first draft that lets you see, side by side, where the written policy and the actual process have diverged.
This matters more for tacit operational knowledge — the steps the team does without thinking about it. Writing those steps cold is hard because they live in the doer's hands, not their head. Talking through them while the conversation is captured surfaces the details that matter, including the ones that would never have made it into a policy doc.
Audit working folders that don't decay
When an audit is active, you usually create a working folder with the auditor's request list, your status against each item, and the evidence you're sending. In most companies that folder lives somewhere different every audit and gets archived to a place nobody opens.
In Docapybara, the audit working folder is just a nested set of markdown pages — Audits → 2026-Q2 — with one page per request item, a top-level status database, and links to the evidence pages it draws from. When the audit closes, the folder doesn't get archived to obscurity. It stays in the vault, searchable, and the next audit's working folder starts by asking the assistant: "Based on the 2026-Q2 audit folder, draft a request-tracking page for the new audit cycle, copying over the controls that are likely to be re-evaluated."
The institutional learning compounds. By the third audit, the prep cycle is a confirmation pass against an already-current evidence trail, not a reconstruction project. The institutional-knowledge-walking-out-the-door version of this shape is covered in How to Document Institutional Knowledge Before People Walk Out the Door.
Try Docapybara free
The smallest useful test: open Docapybara, pick one control area (say, access reviews), paste in the relevant policy, and create a short evidence page for the most recent quarter's review. Then ask the assistant "based on the policy and this evidence, are there any gaps I'd want to close before an external auditor looked at this?" Five minutes of setup, and you'll know whether having the trail in one place — with an assistant that reads across it — changes the audit-prep math.
Try Docapybara free — bring your most recent audit request list, the policies that took the most heat last time, and a folder of legacy compliance PDFs. See how the workspace handles them.