<!-- release-blog: covers 5454554b, 28b87f7c, 88364654, 792f0059, e85e2940, 3e6bce87, 7754ff92, 79632f04, 59088f07, 1b93f929, c1a185dd, 182b43a8 -->

We wanted product analytics, error tracking, LLM analytics, and logs all in the same tool. PostHog covers all four. Wiring it up took three pieces — a standard frontend integration, a Caddy reverse-proxy with bundle renames to survive ad-blockers, and OTLP logs from Django so Capy errors land next to user events.

This is the engineering write-up. If you're integrating PostHog and hitting one or more of the same walls — events not arriving, recorder failing to load, logs sitting in a different tool — the patterns below should map onto your stack.

## What we wanted

Four signals, one place to look at them:

1. **Product events** — page views, feature use, funnel drop-offs.
2. **Session replays** — for the "why did this user bounce" debugging that events can't answer alone.
3. **LLM analytics** — token counts, latency, success rate per Capy turn.
4. **Application logs** — Django + worker errors, with enough context to correlate against events.

PostHog bundles all four (events, replay, LLM analytics, logs). That's the whole reason we use it; we're not making a category claim against Mixpanel or Datadog or Sentry. Different products fit different shapes, and this one fit ours.

## The standard integration

The basic wiring is straightforward and you can lift this directly from the [PostHog docs](https://posthog.com/docs):

- Frontend snippet in the SPA's HTML head, initializing on app boot.
- `posthog.identify()` on the allauth login signal, so logged-in events tie to the user.
- Error boundary integration: when a React error boundary catches a render error, forward it to `captureException` so it shows up in PostHog Error Tracking, not just the browser console.

One non-obvious setting we ended up flipping: `cross_subdomain_cookie: false` in prod. With it on, PostHog sets a cookie scoped to `.docapybara.com`, which then triggers a `dmn_chk` cookie warning every page load because the SDK is checking domain compatibility against an unrelated subdomain. Disabling it stopped the warning and didn't affect identification because we don't run multiple Docapybara subdomains.

For LLM analytics we use PostHog's OpenAI wrapper around our Pydantic AI agent — every Capy turn sends `{model, input_tokens, output_tokens, latency_ms, status}` to PostHog. That part needs no special handling beyond pointing the wrapper at the same PostHog instance.
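The wrapper does this for us, but the shape is easy to sketch without any SDK. A hypothetical helper — `timed_llm_turn`, the `capy_llm_turn` event name, and the `capture` callable are all illustrative, not PostHog's API — that times one turn and reports the same fields:

```python
import time
from typing import Any, Callable

def timed_llm_turn(
    run_agent: Callable[[str], Any],
    prompt: str,
    capture: Callable[[str, dict], None],
    model: str,
) -> Any:
    # Hypothetical sketch, not PostHog's wrapper API: time one Capy turn
    # and report the fields named above. Token counts come from the
    # provider's usage object in the real integration, omitted here.
    start = time.monotonic()
    status = "success"
    result = None
    try:
        result = run_agent(prompt)
    except Exception:
        status = "error"
    capture("capy_llm_turn", {
        "model": model,
        "latency_ms": round((time.monotonic() - start) * 1000),
        "status": status,
    })
    return result
```

The useful property is that the capture happens in `finally`-like position: even a failed turn produces a data point with `status: "error"`, which is exactly what the error-rate chart needs.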

So far, so docs-shaped. The interesting parts come next.

## The ad-blocker problem

If you serve PostHog's SDK from `*.i.posthog.com`, ad-blockers eat your events. EasyList, EasyPrivacy, uBlock Origin's defaults — they all match `posthog` URLs and drop the requests. Some users see your site fine; PostHog never gets a peep about them. Sample sizes shrink, funnels lie, and you find out the hard way.

The fix is three layers, applied in this order:

### Layer 1: reverse-proxy through Caddy

We proxy PostHog through our own domain, so the requests look like first-party calls to `docapybara.com` instead of third-party calls to `posthog.com`. The pattern is in PostHog's docs ([PostHog reverse proxy via Caddy](https://posthog.com/docs/advanced/proxy/caddy)):

```caddyfile
handle_path /relay-aXq3* {
    rewrite * {path}
    reverse_proxy https://us.i.posthog.com:443 {
        header_up Host us.i.posthog.com
        header_down -Access-Control-Allow-Origin
    }
}
```

`handle_path` strips the prefix, leaving the path PostHog expects; `rewrite * {path}` is then effectively a pass-through that makes that intent explicit. `header_up Host` rewrites the Host header so PostHog's edge accepts the request, and `header_down -Access-Control-Allow-Origin` strips the upstream CORS header so our same-origin response needs no CORS handling at all.

Why `/relay-aXq3` and not the obvious `/ingest`? See layer 2.

### Layer 2: randomize the proxy prefix

We started with `/ingest` as the prefix — clean, readable, mnemonic. But EasyList includes URL-pattern rules that match `*posthog*` *and* common analytics path names like `*/ingest/*`, `*/track/*`, `*/collect/*`. The proxy worked technically, yet ad-blockers still dropped the requests because the path itself looked like analytics.

So the prefix became `/relay-aXq3` — opaque, non-obvious, not on any rule list. The path no longer screams "analytics" and the requests pass through.
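If you'd rather generate a fresh prefix than hand-pick one, a few lines of Python will do it. The `relay` label and the length are arbitrary choices here, not anything PostHog requires:

```python
import secrets

def opaque_prefix(label: str = "relay") -> str:
    # 3 random bytes -> exactly 4 URL-safe characters, e.g. "/relay-aXq3".
    # Short enough to type, random enough to miss every pattern list.
    return f"/{label}-{secrets.token_urlsafe(3)}"
```

One caveat: treat the prefix as fixed once chosen. If it changes between deploys, cached frontend bundles keep requesting the old one and their events silently 404.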

The lesson: don't just hide the destination, hide the shape of the URL too. If the path pattern matches an analytics convention, blockers don't need to know the destination.

### Layer 3: rename the recorder bundle filenames

The session recorder is loaded as a separate JavaScript bundle, which PostHog's SDK requests by name: `posthog-recorder.js`. EasyPrivacy has a rule that matches that filename; same for `dead-clicks-autocapture.js`.

Two changes fix it:

**In Caddy**, route opaque filenames to the real upstream filenames:

```caddyfile
handle_path /relay-aXq3/static* {
    @recorder path /r-Mv7q.js
    handle @recorder {
        rewrite * /static/posthog-recorder.js
        reverse_proxy https://us-assets.i.posthog.com:443 { ... }
    }
    @deadclicks path /d-Mq3p.js
    handle @deadclicks {
        rewrite * /static/dead-clicks-autocapture.js
        reverse_proxy https://us-assets.i.posthog.com:443 { ... }
    }
    handle {
        rewrite * /static{path}
        reverse_proxy https://us-assets.i.posthog.com:443 { ... }
    }
}
```

**In the frontend**, wrap PostHog's `loadExternalDependency` so the SDK requests the renamed names instead of the real ones:

```js
// Wrap the loader so any request for posthog-recorder.js
// goes to /r-Mv7q.js instead, and dead-clicks-autocapture.js
// goes to /d-Mq3p.js. Caddy maps it back upstream.
const originalLoad = posthog.loadExternalDependency
posthog.loadExternalDependency = (host, dep, callback) => {
    const renamed = {
        'recorder': 'r-Mv7q',
        'dead-clicks-autocapture': 'd-Mq3p',
    }[dep] ?? dep
    return originalLoad.call(posthog, host, renamed, callback)
}
```

The point of the renames: nothing about the request — neither path nor filename — matches a known analytics pattern. The request looks like a JS bundle named after a build hash, served from your own domain.

This isn't bulletproof. Aggressive blocker setups (uBlock Origin with extra lists, custom DNS-level blockers, browser extensions that hash-match scripts) can still drop the bundle. We're not trying to defeat every blocker; we're trying to recover the common case where someone has a default-config blocker installed and wants the site to work. That common case now works.

## Logs over OTLP

Events tell you *what* happened. Logs tell you *why*. Putting them in the same tool means you can pivot from "user A bounced from this page" to "what was happening server-side at that exact moment" without leaving PostHog.

PostHog Logs accepts OTLP/HTTP. We added an OTel handler to Django's logging config:

```python
# webapp/util/posthog_logs.py
import logging

from django.conf import settings
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

# role is "django" or "worker", computed per-process at startup
resource = Resource.create({
    "service.name": f"{settings.DOMAIN}-{role}",   # e.g. docapybara-django
    "service.version": settings.GIT_SHA,
})
provider = LoggerProvider(resource=resource)
provider.add_log_record_processor(
    BatchLogRecordProcessor(
        OTLPLogExporter(endpoint=settings.POSTHOG_OTLP_ENDPOINT)
    )
)
handler = LoggingHandler(level=logging.INFO, logger_provider=provider)
logging.getLogger().addHandler(handler)
```

`role` is `"django"` for the web container and `"worker"` for the queue worker, so the same image labels itself differently depending on what it's doing. `service.name` built from `settings.DOMAIN` means logs from `harpb.com` and `docapybara.com` (same image, different domains) show up as different services in PostHog, not co-mingled.

Two bugs we hit along the way:

**`propagate=False` loggers were invisible.** Adding the handler to the root logger looked like enough. It wasn't — anywhere we'd configured `logger.propagate = False` (to keep noisy modules out of the root logger), the OTel handler never saw those records. The fix was to also attach the handler to those specific loggers, not just the root.
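A minimal sketch of the fix, stdlib only — the logger names are illustrative; substitute whichever modules you've silenced:

```python
import logging

def attach_log_handler(handler: logging.Handler, quiet_loggers: list[str]) -> None:
    # Root catches every record that propagates up the logger tree...
    logging.getLogger().addHandler(handler)
    # ...but a logger with propagate=False never forwards its records,
    # so the handler has to be attached to it directly as well.
    for name in quiet_loggers:
        logging.getLogger(name).addHandler(handler)
```

The subtle part is that `propagate = False` is set to keep noise *out of the root handlers*, which is exactly why a root-only handler never sees those records — the two configurations fight each other unless the shipping handler is attached in both places.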

**The worker container was labelled wrong.** The first version of the OTel config was built once at module import and shared across all roles, so the worker reported itself as `docapybara-django`. The fix is computing `service.name` per-process at startup, so the worker actually says `docapybara-worker`.
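The per-process computation is small. One way to do it, assuming a hypothetical `ROLE` environment variable set per container (the variable name is illustrative, not an OTel convention):

```python
import os

def service_name(domain: str) -> str:
    # Hypothetical ROLE env var, set per container at startup:
    # "django" for the web process, "worker" for the queue worker.
    # Computed in each process rather than shared from one module-level
    # constant, so each container labels itself correctly.
    role = os.environ.get("ROLE", "django")
    return f"{domain}-{role}"
```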

## What broke along the way

A short honest list of things that surprised us, in case you trip over them too:

- **Caddy's `handle_path` strips the prefix *before* `rewrite` runs**, so `rewrite * /static{path}` sees `{path}` *without* the `/relay-aXq3` part. Once we figured this out, the rewrite became `/static{path}` rather than `/relay-aXq3/static{path}`; one of the fix commits captures the wrong-shape attempt.
- **`captureException` from React error boundaries** doesn't fire automatically — you have to import `posthog` and call it inside `componentDidCatch` or the equivalent hook. Without that, all caught errors are invisible to PostHog Error Tracking.
- **Cross-subdomain cookie warnings** show up as `dmn_chk` in PostHog. Disabling `cross_subdomain_cookie` is the cheap fix when you don't run multiple subdomains.
- **OTLP exporter latency** matters less than you'd think — the `BatchLogRecordProcessor` buffers and flushes on a timer (a few seconds at most by default), which is fine for log shipping. We didn't need to tune it.

## What it gives us

The dashboard now shows:

- A user's page-view path → which feature they tried → what Capy answered → what the Django logs said at the same minute → whether they came back.
- Errors from React, from Django, from Capy — all in one Error Tracking view.
- Session replay for any user who hit an unexpected state.
- LLM analytics: token cost, latency p95, error rate per model, broken down by feature.

Most of that we'd otherwise pay three separate vendors for. PostHog's pricing is generous enough at our scale that the single-tool consolidation is worth it.

If you're integrating PostHog and the events aren't arriving — try the proxy. If they're arriving but the recorder isn't loading — try the bundle renames. If your logs are still in another tool — the OTLP handler is fifteen lines.

The other engineering write-up from this week is [zero-downtime Django deploys](/guides/developers-builders/zero-downtime-django/), if you're running PostHog on similar infra and wondering how we deploy without dropping events. The [MCP server post](/guides/developers-builders/docapybara-mcp/) covers the agent-readiness side of the same site.