ADAComplianceDocs

Methodology · v1.0

How a website becomes a documented good-faith ADA compliance record.

Public methodology, written so plaintiff counsel, accessibility experts, and judges can verify each step. Not the source code. The methodology. Hidden methodologies are where shortcuts hide.

Version 1.0 · Last updated 2026-04-25 · Operates under The Integrity Framework v1.0

What we sell, and what we do not

ADAComplianceDocs sells documentation tooling. Customers use the platform to log policies, statements, audits, remediation, training, vendors, feedback, incidents, and changes. The platform exports a court-ready PDF of the good-faith effort record.

We do not sell ADA certification. We do not stamp a website “compliant.” The conformance statement a customer publishes is theirs, not ours. The audit findings come from mechanical scanning (axe-core, WAVE, Lighthouse) or are imported from third-party human audits. We are the documentation chain. Verification stays with the customer and with their counsel.

ADA compliance is the most failure-prone trust category in software. Overlay vendors (accessiBe et al.) saw their credibility collapse because they sold stamps. We are built on the inverse premise.

The scan path

Every audit follows the same path. Each step is described below.

  1. Customer triggers a scan against a target URL or set of URLs.
  2. The platform launches a headless browser and runs axe-core against the rendered page.
  3. Findings return with WCAG criterion references, severity (critical / serious / moderate / minor), and the offending DOM selector.
  4. Each finding is persisted to Platform_AuditFindings with a Source field set to axe.
  5. The audit row references the source URL, the axe-core version, and the customer org.
  6. New findings default to status open. They never auto-resolve.

Reordering changes the documentation posture. The order itself is part of the methodology.

Step detail

Step 01

Scan trigger.

A customer, a scheduled cron, or an integration triggers a scan against an org-owned URL. The platform validates that the URL belongs to the org (DNS verification or admin-set ownership). Cross-tenant scans are structurally prevented at the API layer.
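A minimal sketch, in a Next.js-style route handler, of where that guard sits. The helper names (requireOrg, urlBelongsToOrg, enqueueScan) are assumptions for illustration, not platform identifiers:

```ts
// Hypothetical sketch of the API-layer ownership guard. Helper names
// are illustrative; the invariant is that a scan is never enqueued
// for a URL the caller's org does not own.
import { NextRequest, NextResponse } from "next/server";

declare function requireOrg(req: NextRequest): Promise<{ id: string }>;
declare function urlBelongsToOrg(orgId: string, hostname: string): Promise<boolean>;
declare function enqueueScan(orgId: string, targetUrl: string): Promise<string>;

export async function POST(req: NextRequest) {
  const { targetUrl } = (await req.json()) as { targetUrl: string };
  const org = await requireOrg(req); // resolve the caller's org from the session

  // Ownership is checked against org-verified domains (DNS verification
  // or admin-set ownership), never against user input alone.
  if (!(await urlBelongsToOrg(org.id, new URL(targetUrl).hostname))) {
    return NextResponse.json({ error: "URL not owned by this org" }, { status: 403 });
  }
  return NextResponse.json({ scanId: await enqueueScan(org.id, targetUrl) });
}
```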
Step 02

Headless render.

The platform spins up headless Chrome via Puppeteer. The page renders fully (including JavaScript-driven content) before axe-core injects. Static-only scanners catch only a fraction of real findings (the platform's own fallback label puts it at about 30 percent of what a full render surfaces); the headless render is what makes the audit credible.
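A minimal sketch of the render-then-inject sequence, using puppeteer and @axe-core/puppeteer. The wait strategy and timeout are assumptions; the invariant is that axe-core analyzes the fully rendered DOM:

```ts
// Sketch of the render-then-inject sequence. Launch flags, wait
// strategy, and timeout are assumptions; the invariant is that axe-core
// analyzes the rendered DOM, JavaScript-driven content included.
import puppeteer from "puppeteer";
import { AxePuppeteer } from "@axe-core/puppeteer";
import type { AxeResults } from "axe-core";

export async function scan(url: string): Promise<AxeResults> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // networkidle0: the page has stopped making network requests, so
    // client-rendered content is in the DOM before analysis begins.
    await page.goto(url, { waitUntil: "networkidle0", timeout: 30_000 });
    return await new AxePuppeteer(page).analyze();
  } finally {
    await browser.close();
  }
}
```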
Step 03

Mechanical evaluation.

axe-core runs against the rendered page. axe-core is the industry-reference accessibility ruleset, maintained by Deque; its version is pinned via the platform's lockfile. Each finding maps to a specific WCAG 2.1 success criterion and an impact level. We do not extend or override axe-core's rule logic.
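The result shape below is axe-core's real output surface (violations, impact, tags, nodes, target); the Finding type is an illustrative stand-in for a Platform_AuditFindings row, not the actual schema:

```ts
// Deriving finding rows from axe-core results. The axe-side fields are
// real; Finding is an illustrative stand-in for Platform_AuditFindings.
import type { AxeResults, Result } from "axe-core";

interface Finding {
  ruleId: string;     // e.g. "color-contrast"
  wcagTags: string[]; // e.g. ["wcag2aa", "wcag143"] -> WCAG 1.4.3
  impact: string;     // "critical" | "serious" | "moderate" | "minor"
  selector: string;   // offending DOM selector
  status: "open";     // new findings always start open
}

export function toFindings(results: AxeResults): Finding[] {
  return results.violations.flatMap((v: Result) =>
    v.nodes.map((node) => ({
      ruleId: v.id,
      wcagTags: v.tags.filter((t) => t.startsWith("wcag")),
      impact: v.impact ?? "minor", // impact can be null; default conservatively here
      selector: node.target.join(" "),
      status: "open" as const,
    }))
  );
}
```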
Step 04

Findings persist with traceability.

Each finding lands on Platform_AuditFindings referencing the parent Platform_Audits row. The Source field distinguishes between axe, wave, lighthouse, and manual. Manual findings are imported from third-party human audits and labeled accordingly.
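Roughly what those rows carry, with illustrative names (the real schemas are internal):

```ts
// Illustrative row shapes. Actual schemas are not published; the point
// is the traceability: every finding references its parent audit, and
// the Source field says where it came from.
type FindingSource = "axe" | "wave" | "lighthouse" | "manual";

interface AuditRow {
  id: string;
  orgId: string;          // customer org
  sourceUrl: string;      // what was scanned
  axeCoreVersion: string; // recorded per audit, pinned via lockfile
}

interface AuditFindingRow {
  id: string;
  auditId: string;        // references the parent Platform_Audits row
  source: FindingSource;  // "manual" = imported third-party human audit
  status: "open" | "resolved";
}
```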
Step 05

Default to open, never resolved.

A finding moves from open to resolved only via explicit user action with a stored remediation note. The remediation note is part of the documentation chain. The platform never auto-resolves a finding because a re-scan returned different results; the original finding stays in history.
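A sketch of the only path from open to resolved, under assumed storage helpers:

```ts
// Sketch of the only open -> resolved path. Function and storage names
// are hypothetical; the constraints (explicit actor, mandatory note,
// history preserved) are the documented behavior.
declare const db: {
  update(table: string, id: string, patch: Record<string, unknown>): Promise<void>;
};

export async function resolveFinding(findingId: string, userId: string, note: string) {
  if (!note.trim()) {
    throw new Error("A remediation note is required to resolve a finding");
  }
  // The row is updated, never deleted; the note and actor become part
  // of the documentation chain.
  await db.update("Platform_AuditFindings", findingId, {
    status: "resolved",
    resolvedBy: userId,
    remediationNote: note,
    resolvedAt: new Date().toISOString(),
  });
  // Deliberately absent: no call site for this function exists in the
  // scan pipeline. A re-scan that no longer reproduces a finding
  // changes nothing here.
}
```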
Step 06

Quickscan fallback labeled.

When the headless render fails (target site blocks Puppeteer, JS framework crashes, network timeout), the platform falls back to a static HTML scan. The fallback is explicitly labeled in the UI as “catches about 30 percent of what axe-core would.” The findings still flow into the documentation chain, but the label travels with them. No silent degradation.
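A sketch of how the fallback and its label could travel together; quickscanStaticHtml is a hypothetical stand-in for the static path:

```ts
// Sketch of the labeled fallback. quickscanStaticHtml is hypothetical;
// the invariant is that the degraded mode is recorded alongside the
// findings rather than silently swallowed.
import type { AxeResults } from "axe-core";

declare function scan(url: string): Promise<AxeResults>;                 // headless path
declare function quickscanStaticHtml(url: string): Promise<AxeResults>;  // static fallback

export async function runAudit(url: string) {
  try {
    return { mode: "headless" as const, results: await scan(url) };
  } catch {
    // Render failed (blocked Puppeteer, JS crash, timeout): fall back,
    // but the label travels with the findings into the record.
    return {
      mode: "quickscan" as const,
      label: "catches about 30 percent of what axe-core would",
      results: await quickscanStaticHtml(url),
    };
  }
}
```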

The statement publish flow

A conformance statement is a customer-published declaration, in plain text, of what their site claims about its accessibility. It takes one of three values: partial, full, or non-conformant.

Step 01

Customer drafts the statement.

The customer writes their own conformance statement. The platform provides a template and prompts for the standard fields (date, scope, conformance level, exclusions). The customer is the author.
Step 02

Pre-publish guard on full conformance.

When a customer attempts to publish with conformanceStatus='full' while critical or serious axe-core findings remain open on the in-scope URLs, the platform surfaces those findings and warns hard. CI rule CRIT-SV-CONFORMANCE-CLAIM-GUARD enforces that the publish path references the open-finding check. The customer can override with explicit acknowledgement, captured in the audit log.
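A sketch of the guard, with illustrative identifiers; what CRIT-SV-CONFORMANCE-CLAIM-GUARD actually enforces is that a check like this sits on the publish path:

```ts
// Sketch of the pre-publish guard. Identifiers are illustrative.
type ConformanceStatus = "partial" | "full" | "non-conformant";

interface StatementDraft {
  conformanceStatus: ConformanceStatus;
  scopeUrls: string[];
}

declare function openFindings(urls: string[], impacts: string[]): Promise<string[]>;
declare function auditLog(event: string, detail: unknown): Promise<void>;
declare function publish(draft: StatementDraft): Promise<string>;

export async function publishStatement(draft: StatementDraft, acknowledgedOverride = false) {
  if (draft.conformanceStatus === "full") {
    const blocking = await openFindings(draft.scopeUrls, ["critical", "serious"]);
    if (blocking.length > 0) {
      if (!acknowledgedOverride) {
        // Surface the findings and warn hard; publishing stops here.
        return { published: false as const, blocking };
      }
      // The explicit override is itself part of the record.
      await auditLog("conformance-claim-override", { findingIds: blocking });
    }
  }
  return { published: true as const, snapshotId: await publish(draft) };
}
```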
Step 03

Customer-attested isolation.

Customer-attested fields (training records, vendor policies, the conformance statement itself) are stored separately from system-verified fields (mechanical audit findings). Both render in the dashboard; the source is always visible.
Step 04

Statement publication is a snapshot.

When a statement publishes, a snapshot of the statement text, the open findings at that moment, and the timestamp lands on the documentation record. Subsequent edits to the live site do not retroactively alter the published statement.
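An illustrative snapshot shape (field names assumed):

```ts
// Illustrative snapshot shape. The point is immutability: the statement
// is frozen with whatever was open at the instant of publication.
interface StatementSnapshot {
  statementText: string;    // verbatim text as published
  openFindingIds: string[]; // findings open on in-scope URLs at publish time
  publishedAt: string;      // ISO timestamp
  // Later edits to the live site never mutate this record; a new
  // publication produces a new snapshot.
}
```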

AI accountability

AI is narrowly scoped to prose polish of customer-drafted incident response letters. The polish endpoint at src/app/api/incidents/[id]/polish/route.ts takes a customer draft, returns a polished version, and asks the customer to review before sending.

  • System prompt explicitly forbids adding facts, concessions, admissions, or commitments not in the customer draft. CI rule CRIT-SV-AI-REVIEW-GATE blocks edits that loosen this prompt.
  • Customer reviews and edits the polished output before sending. The platform records the original draft, the polished version, and the customer's final version (sketched after this list).
  • AI is not in the audit, statement, or conformance path. The audit findings come from axe-core; the statement is customer-authored; the conformance status is customer-set. AI never produces a customer-facing compliance verdict.
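A sketch of that three-version record, with assumed field names:

```ts
// Sketch of the three-version polish record. Field names are
// assumptions; keeping all three versions is the documented behavior.
interface PolishRecord {
  incidentId: string;
  originalDraft: string;   // exactly what the customer wrote
  polishedVersion: string; // what the model returned
  finalSent: string;       // what the customer sent after review and edits
  reviewedBy: string;      // the human who approved it
}
```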

Anthropic is the AI provider for the polish feature. Disclosed on the privacy page.

Retention

  • Audits, findings, statements, remediation log: retained per the documented retention schedule. Customer offboarding does not hard-delete the documentation chain. A future court inquiry can still reach the record (see the sketch after this list).
  • Customer-attested vs system-verified state: preserved through soft-delete and audit log.
  • Polish history: retained with the original draft, polished output, and final submitted version.
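A sketch of offboarding under this posture, with illustrative table and column names:

```ts
// Sketch of soft-delete semantics on offboarding. Table and column
// names are illustrative.
declare const db: {
  update(table: string, id: string, patch: Record<string, unknown>): Promise<void>;
};

export async function offboardOrg(orgId: string) {
  // Access is revoked, but the documentation chain (audits, findings,
  // statements, remediation log, polish history) is never hard-deleted.
  await db.update("Platform_Orgs", orgId, {
    deletedAt: new Date().toISOString(), // soft-delete marker
  });
  // Deliberately absent: no DELETE is ever issued against
  // Platform_Audits or Platform_AuditFindings.
}
```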

Failure modes

When something in the pipeline fails, the failure is visible.

  • Headless render timeout: scan reports the failure and falls back to quickscan with the label visible.
  • axe-core crash: scan halts, no findings persisted; the user is shown the failure and prompted to retry. We never persist a partial finding set as if it were complete (see the sketch after this list).
  • Conformance statement guard failure: publish is blocked unless the user explicitly acknowledges. The acknowledgement lands on the audit log.
  • Polish endpoint failure: customer is shown the failure; manual drafting remains available. We do not present a non-existent polish as if it had been generated.
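A sketch of the all-or-nothing persist behind the axe-core crash rule; the transaction API and helper names are illustrative:

```ts
// Sketch of the all-or-nothing persist for findings. The transaction
// API is illustrative; the invariant is that a crashed evaluation
// writes nothing rather than a partial finding set.
interface Finding {
  ruleId: string;
  selector: string;
}

declare const db: {
  transaction(
    fn: (tx: { insert(table: string, row: unknown): Promise<void> }) => Promise<void>
  ): Promise<void>;
};
declare function markAuditFailed(auditId: string, reason: string): Promise<void>;

export async function persistAudit(auditId: string, evaluate: () => Promise<Finding[]>) {
  let findings: Finding[];
  try {
    findings = await evaluate(); // the axe-core run
  } catch (err) {
    // Halt: nothing is persisted; the user sees the failure and can retry.
    await markAuditFailed(auditId, String(err));
    throw err;
  }
  // All findings land together or not at all.
  await db.transaction(async (tx) => {
    for (const f of findings) {
      await tx.insert("Platform_AuditFindings", { auditId, ...f });
    }
  });
}
```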

Version

Current version: v1.0

Last updated: 2026-04-25

Methodology changes ship with a version bump and a paired entry in the changelog below. CI rule HIGH-SV-METHODOLOGY-VERSIONED blocks merges that update this page without a Version and Changelog header.

Changelog

  • v1.0 (2026-04-25). Initial publication. Closes Layer 3 “Public methodology page” gap on INTEGRITY.md. Documents the scan path, statement publish flow, AI accountability, retention, and failure modes.
