You're a founder hiring your fifth or fifteenth person. Each role is supposedly different — different team, different stack, different seniority — and yet the work of running the loop is depressingly similar each time. You write a JD that's basically the last JD with the title swapped. You source candidates by re-pinging the same people. You run intros, run technicals, debrief the team in a Slack thread that loses the thread by Friday. By the time you're at the offer stage, the early candidate signals have decayed and you're going partly on vibes.
The unglamorous truth about hiring well at small scale is that it's mostly documentation work. You're not under-staffed for hiring; you're under-instrumented. AI notes earn their keep here when they hold the loop's connective tissue — JDs, scorecards, debriefs, references — in one place where the agent can read across all of it and surface what matters when it matters. The same documentation habit underwrites annual planning and goal setting, and the way first-time founders move faster — hiring just makes the cost of bad documentation more visible.
The hiring vault, in plain English
A working setup looks roughly like this. One top-level page per open role, with sub-pages for the JD, the scorecard, candidate notes, and the offer history. One cross-cutting page for past hires, where you write a short retrospective on each person you've hired — what the loop saw vs. what showed up in the first ninety days. One inline candidate database that lives across all open roles, with columns for name, role, stage, source, last touch, and a link to their notes page.
The database lives directly in your pipeline page via the :::database::: directive, so you're not switching tabs to a separate ATS-lite tracker. Candidate notes are plain markdown the agent can read across in one query.
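Concretely, a pipeline page might look something like the sketch below. This is illustrative only: the column names come from the setup described above, but the exact block syntax around :::database::: and the sample row are assumptions, not documented Capy markup.

```
# Hiring pipeline

:::database:::
| Name      | Role        | Stage     | Source   | Last touch | Notes         |
| --------- | ----------- | --------- | -------- | ---------- | ------------- |
| J. Rivera | Sr. Backend | Technical | Referral | last week  | [[J. Rivera]] |
:::
```

The point of the shape is that the tracker and the notes live in the same vault, so "read across all candidates at the technical stage" is one query instead of a tab-switch.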
Capy supports unlimited page nesting, so you can let a complex senior search sprawl into per-stage sub-pages while keeping a contractor search as a single page. You don't have to commit to a structure in advance.
Job descriptions that don't sound like job descriptions
Most JDs sound the same because most JDs are the same. A founder writes one once, copies it for the next role, and quietly drifts toward boilerplate over time. Candidates skim, the wrong people apply, and the loop fills with pattern-matching against a generic template instead of your actual bar.
The faster path is also the more honest one. Tell the agent: read my last three JDs, my retrospective on the most recent hire, and the scorecard for this role, and draft a JD that's specific to what we actually need. The agent pulls forward the structure you already use and grounds the specifics in the actual scorecard rather than inventing soft requirements. The draft is rarely perfect; it's usually a much better starting point than the empty page or the recycled JD.
A small thing that pays off: keep one short page called "things we actually look for" with three or four sentences in your own voice — the trait, the anti-pattern, the thing that's a hard no. Have the agent reference it when drafting JDs. The JDs will quietly get less generic.
Recording intros and screens with speaker labels
The hardest part of running a loop while doing the rest of your job is remembering what the candidate actually said in the intro call when you're sitting in the technical four days later. Most founders solve this by writing notes during the call, which means they're taking notes instead of actually listening.
Record the call inside Capy. The transcript comes back with speaker diarization — labels like "Speaker 1: …" so you can tell who said what. Park the recording on the candidate's notes page. Ask the agent to draft a one-paragraph summary, pull every concrete answer the candidate gave to a behavioral question, and flag any answer that contradicts something on the resume.
You're not over-engineering this. The summary makes the technical interview smarter because you walk in remembering what was said. The transcript is the source of truth if a debrief later turns into "wait, did they say two years or four years on that project."
Debriefs that aren't a Slack thread
Interview debriefs decay fast. The standard pattern — interviewer drops a hot take in Slack, two more chime in, the principal pings everyone for a vote at noon — produces decisions that no one can reconstruct three weeks later. If the candidate is borderline and you punt, you've lost the option to compare them honestly to the next candidate in the same loop.
A working setup: keep an inline database called "interview debriefs" inside the role's page, with columns for interviewer, dimension scored, score, supporting evidence, and follow-up question. Each interviewer fills in their row right after the interview. Then you ask the agent: read all four debrief rows, summarize the pattern across them, surface any dimension where the scores diverge by more than one point, and propose the three follow-ups that would resolve the disagreement. (The decisions-database mechanic also underwrites AI notes for co-founders: alignment, decisions, and accountability.)
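The divergence check the agent is doing here is mechanically simple, which is part of why it's reliable. A minimal sketch, assuming each debrief row boils down to an (interviewer, dimension, score) tuple on a 1–5 scale — the data shape, names, and one-point threshold are illustrative, not Capy's internals:

```python
from collections import defaultdict

# Hypothetical debrief rows: (interviewer, dimension, score on a 1-5 scale).
debriefs = [
    ("alice", "technical depth", 4),
    ("bob",   "technical depth", 4),
    ("carol", "communication",   2),
    ("dave",  "communication",   4),
]

def divergent_dimensions(rows, threshold=1):
    """Return dimensions where interviewer scores spread by more than `threshold`."""
    by_dim = defaultdict(list)
    for _, dimension, score in rows:
        by_dim[dimension].append(score)
    return {
        dim: scores
        for dim, scores in by_dim.items()
        if max(scores) - min(scores) > threshold
    }

print(divergent_dimensions(debriefs))
# "communication" spreads by 2 points; "technical depth" doesn't.
```

A spread-based check like this names the disagreement instead of averaging it away — a 2 and a 4 on communication is a question to resolve, not a 3.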
You walk into the debrief meeting with a structured read instead of a bunch of vibes. The decision is faster because the disagreement is already named. And six months later, when you do the post-hire retrospective, the debrief is still legible.
Reference checks that produce actual signal
Reference calls are wasted on most founders because the whole conversation gets reduced to "yeah, they were great." That's a sign you're asking the wrong questions, not that the references didn't have useful information.
Record the reference call. Get the transcript with speaker labels. Then ask the agent to compare what the reference said to what the candidate said in their own intro call about the same project. If the candidate claimed they led X and the reference says they contributed to X, that's worth knowing before the offer goes out, not after.
The transcripts also let you batch references for a candidate against each other. If three references all hedge on the same dimension — "she's great, just sometimes a bit…" — the pattern is the answer. The agent can surface that pattern in a way that scrolling through three call transcripts doesn't.
Closing offers from your past closes
Offer conversations, especially with senior candidates who have other offers, are partly a writing exercise. You're trying to make a case that this is the job they should take. Most founders write that case from scratch every time, which means it's never as good as the time you really had to fight for someone and the message landed.
Drop the candidate's notes page, the scorecard, and the offer details on a fresh page. Tell the agent: read my last three close-emails for senior hires, read this candidate's notes, draft a v1 close-email that emphasizes the parts of the role that matched what they care about most. The draft is yours to edit, but it descends from your own past closes — not a generic SaaS-founder template. The pattern is the same as drafting from your past briefs as an agency owner: your prior best work is the training set.
Post-hire retrospectives, the small habit that compounds
The discipline that separates founders who get better at hiring from founders who don't is the post-hire retrospective. Ninety days in, write a short note: what did the loop see, what did the loop miss, what would you change about the scorecard for next time. Keep it short — three or four paragraphs — and keep it honest.
When you start the next loop, the agent reads the retrospectives across your last several hires and surfaces the patterns. "You consistently overweight slide quality in the technical and underweight written communication." Patterns in your own past judgment are some of the highest-signal information you have, and they're the kind of signal that's almost impossible to see without a notes layer that holds them.
What this isn't
Capy isn't an ATS. It doesn't post to job boards, schedule interviews, or send templated emails. If you need a real applicant tracking system at scale, you'll still want one. The hiring shape Capy fits is the small, high-stakes loop a founder runs personally — five to fifteen open roles a year, a handful of finalists per role, and every offer a real decision.
It's also single-user by design. One founder, one vault. If you want a team workspace where every interviewer has their own seat with shared candidate visibility, that isn't this. The shape that holds up is a vault that lives with the person doing the actual hiring.
A small first test
Take the role you're actively hiring for right now. Load the JD, the scorecard, and any debriefs you have so far on a single page in Capy. Ask the agent to write a one-paragraph summary of where the loop stands and what the highest-leverage next step is. If the summary names a tension you'd been carrying around without saying out loud, you've got a preview of what the agent can do once you have a few months of loops in the vault.
Try Docapybara free. Run your next loop with the notes in one place and see what the agent surfaces.