Methodology

How we file.

Every signal DriveAwareness publishes follows the same shape: intake, verification, editorial review, publication, correction. The pages below document what we count as a source, what we don't, and where our limits are.

Signal lifecycle

Five stages, every time.

STAGE 1

Intake

Signals arrive from public submissions (/community/report), our own monitoring of public data sources, and tips from editorial partners. Pseudonyms are accepted at intake; provenance is tracked separately from identity.

STAGE 2

Verification

We confirm at least one independent primary source before publishing. Where claims rest on platform behavior, we capture screenshots, URLs, and timestamps; where they rest on documents, we keep originals on file.

STAGE 3

Editorial review

Each filed case passes legal and editorial review. We separate verified claims from allegations explicitly. We redact what could harm a source. We name what we are sure of.

STAGE 4

Publication

Stories ship to the blog with their receipts attached. Patterns are catalogued in /suppression-methods. The signal feed at /feed shows recent verified items in near real time.

STAGE 5

Updates and corrections

When a published claim turns out to be wrong or incomplete, we correct it visibly (see /legal/corrections). We do not silently edit. Original-state archives are kept.

Source policy

What counts as a source.

Primary sources count

Government records, court filings, platform-archived ads, FOIA responses, original documents, on-the-record interviews, and publisher-of-record articles.

Aggregators are starting points

Media Cloud, GDELT, ad-library exports, and similar tools are useful for surfacing patterns. We do not publish from aggregator data alone.

AI-generated text is not a source

We use AI tools for transcription, summarization, and translation, never as a citation. The receipt always points back to the human-readable original.

Anonymous sources require corroboration

Pseudonymous and anonymous tips are taken seriously and tracked, but a published claim always rests on at least one independent corroborating source.

Honest about limits

What we can't do.

We can't see what isn't logged

Some platforms keep no public record of moderation actions. We document what we can observe; we are explicit when something is inferred from outcomes rather than disclosed.

Aggregate patterns aren't cases

A statistical pattern is a question, not an answer. We surface patterns, then we go look for the documented instances that explain them.

We are not a court

Verified is not the same as proven. Every published case names what we are confident about and what we are not.

See it in practice.

The methodology is theoretical until it ships. Read the published cases on the blog or browse the documented patterns in methods.