Methods we track
How information moves out of view.
Eight patterns we keep seeing — across AI governance, narrative economics, and the suppression of civic record. Each one is documented in the field. Each is observable, not speculative.
Information control
Narrative control
Coordinated framing across legacy media, academia, and major platforms — narrowing the band of viewpoints that count as legitimate.
How it shows up
- Sentiment-steering in recommendation systems
- Editorial filtering in news APIs
- Keyword throttling and downranking of dissenting phrasing (see the sketch after this card)
Framing · Media coordination · Algorithmic bias
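A minimal sketch of how keyword-level downranking could sit inside a recommendation scorer. The phrase list, penalty factor, and scoring formula below are illustrative assumptions, not any platform's actual code.

```python
# Illustrative sketch: a recommendation scorer that quietly downranks posts
# matching a "sensitive phrasing" list. The list, penalty factor, and score
# formula are hypothetical stand-ins, not a real platform's implementation.

SENSITIVE_PHRASES = {"example disputed phrase", "another flagged wording"}  # assumed list
PENALTY_FACTOR = 0.3  # assumed: matched posts keep only 30% of their score


def engagement_score(post: dict) -> float:
    """Toy baseline score built from engagement counts."""
    return post["likes"] + 2.0 * post["shares"] + 0.5 * post["comments"]


def ranked_feed(posts: list[dict]) -> list[dict]:
    """Rank posts, silently throttling any that contain a listed phrase."""
    def score(post: dict) -> float:
        base = engagement_score(post)
        if any(phrase in post["text"].lower() for phrase in SENSITIVE_PHRASES):
            base *= PENALTY_FACTOR  # downranked; the author gets no notice
        return base

    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    feed = ranked_feed([
        {"text": "Weekend photos", "likes": 120, "shares": 10, "comments": 30},
        {"text": "Thread on an example disputed phrase", "likes": 400, "shares": 90, "comments": 80},
    ])
    for post in feed:
        print(post["text"])
```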
Information control
Search visibility
Search engines and recommendation feeds quietly demote content judged "low quality" — a label that often tracks viewpoint, not accuracy.
How it shows up
- Ranking penalties from opaque quality signals
- Editorial bias inside guideline frameworks
- AI classifiers driving demonetization at scale (sketched below)
SEO · Search bias · Visibility
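A minimal sketch of classifier-driven demonetization at scale, assuming an opaque suitability score and an undisclosed cutoff. The threshold, field names, and reason label are hypothetical.

```python
# Illustrative sketch: an automated demonetization sweep driven by an opaque
# "suitability" classifier. The scoring source, threshold, and reason label
# are hypothetical stand-ins, not any ad platform's real system.

SUITABILITY_THRESHOLD = 0.6  # assumed cutoff the publisher never sees


def suitability_score(video: dict) -> float:
    """Stand-in for a closed classifier's score in [0, 1]."""
    return video["classifier_score"]


def sweep_channel(videos: list[dict]) -> list[dict]:
    """Flip monetization off for anything under the threshold, at scale."""
    decisions = []
    for video in videos:
        eligible = suitability_score(video) >= SUITABILITY_THRESHOLD
        decisions.append({
            "video_id": video["id"],
            "monetized": eligible,
            # Creators typically see only a generic label, never the score:
            "reason": None if eligible else "not suitable for most advertisers",
        })
    return decisions


if __name__ == "__main__":
    for decision in sweep_channel([
        {"id": "v1", "classifier_score": 0.82},
        {"id": "v2", "classifier_score": 0.41},
    ]):
        print(decision)
```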
Digital infrastructure
Platform dependency
Independent media forced onto centralized platforms, then bound by terms of service that can be tightened at any moment.
How it shows up
- API throttling and rate limits without notice (sketched below)
- Selective content removal under vague ToS
- Ad-account suspensions that cut off revenue
ToS leverage · Revenue control · Lock-in
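From the publisher's side, undisclosed throttling mostly looks like retry logic. A minimal sketch of a client backing off on HTTP 429 responses; the endpoint URL and token handling are hypothetical, and the `requests` library is assumed to be available.

```python
# Illustrative sketch: client code absorbing a platform's silent rate-limit
# changes. The endpoint URL is hypothetical; the pattern is that tightened
# quotas show up only as more 429s, with no announcement.

import time

import requests

API_URL = "https://platform.example.com/v2/posts"  # hypothetical endpoint


def fetch_posts(token: str, max_retries: int = 5):
    """Fetch posts, backing off whenever the platform returns 429."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 429:
            # The limit can be lowered at any moment; clients just see more
            # 429s and a (sometimes absent) Retry-After header.
            wait = float(resp.headers.get("Retry-After", delay))
            time.sleep(wait)
            delay *= 2  # exponential backoff between attempts
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("rate-limited on every attempt; quota likely reduced")
```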
Digital infrastructure
Infrastructure denial
Stack-level removals — app stores, payment processors, hosting — that make alternative platforms unviable before they can grow.
How it shows up
- App store policy enforcement and removals
- Payment processor restrictions on funding
- Hosting providers enforcing acceptable-use clauses
Deplatforming · Financial · Hosting
Social pressure
Social cost
Naming, blacklisting, and employer pressure — the reputational and economic costs that make people self-edit before they ever post.
How it shows up
- Identification of anonymous voices via leaked data
- Coordinated press campaigns targeting individuals
- Employer-side consequences for off-platform speech
Doxxing · Reputation · Chilling effects
Social pressure
Institutional capture
When professional associations, journals, and funders narrow what counts as legitimate research — and then call the result consensus.
How it shows up
- Professional guidelines redefining the conversation
- Funding concentrated on approved questions
- Peer review acting as gatekeeping, not vetting
Academia · Funding bias · Gatekeeping
Surveillance
AI-mediated moderation
Alignment layers and content classifiers that shape what AI systems will say — and what they refuse to engage with — without disclosing why.
How it shows up
- Alignment constraints filtering politically sensitive content (sketched below)
- Opaque "hate" or "misinformation" classifiers
- RLHF training that bakes in editorial preferences
AI alignment · Classifier bias · Opacity
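A minimal sketch of a moderation gate sitting between a user prompt and a model, assuming a hypothetical classifier and threshold. The point it illustrates is that the refusal reveals nothing about which rule fired.

```python
# Illustrative sketch: a classifier gate in front of a language model. The
# classifier, categories, and threshold are hypothetical; the refusal message
# discloses nothing about why the request was blocked.

REFUSAL = "I can't help with that."  # generic, unexplained


def classify(prompt: str) -> dict[str, float]:
    """Stand-in for an opaque policy classifier returning scores in [0, 1]."""
    politically_sensitive = 0.9 if "election" in prompt.lower() else 0.1  # toy heuristic
    return {"politically_sensitive": politically_sensitive, "hate": 0.05}


def generate(prompt: str) -> str:
    """Stand-in for the underlying model call."""
    return f"(model response to: {prompt})"


def answer(prompt: str, threshold: float = 0.8) -> str:
    """Refuse whenever any classifier score crosses the undisclosed threshold."""
    scores = classify(prompt)
    if max(scores.values()) >= threshold:
        return REFUSAL  # the user never learns which category or score triggered this
    return generate(prompt)


if __name__ == "__main__":
    print(answer("Summarize the election coverage"))  # refused
    print(answer("Summarize the weather coverage"))   # answered
```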
Surveillance
Behavioral surveillance
Cross-platform tracking that lets moderation systems anticipate and damp dissent before it spreads, not after.
How it shows up
- Cross-platform identity and behavior tracking
- Damping of viral content that tests the limits (sketched below)
- Sentiment monitoring of communities and creators
Tracking · Predictive moderation · Profiling
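A minimal sketch of predictive damping built on a merged cross-platform profile. The profile fields, virality estimate, and weights are hypothetical; the pattern is capping distribution from a risk score computed before a post circulates.

```python
# Illustrative sketch: predictive damping from a cross-platform profile. The
# join key, feature names, and weights are hypothetical; the pattern is
# scoring likely spread *before* it happens and capping reach accordingly.

from dataclasses import dataclass


@dataclass
class Profile:
    """Signals merged across platforms via a shared identifier (assumed)."""
    user_id: str
    follower_count: int
    past_viral_posts: int
    community_sentiment: float  # -1 (at odds with platform norms) .. 1 (aligned)


def predicted_virality(profile: Profile) -> float:
    """Toy spread estimate from audience size and track record."""
    return min(1.0, profile.follower_count / 100_000 + 0.1 * profile.past_viral_posts)


def distribution_cap(profile: Profile, topic_sensitivity: float) -> float:
    """Fraction of normal reach a new post gets, decided before anyone sees it."""
    risk = predicted_virality(profile) * topic_sensitivity * max(0.0, -profile.community_sentiment)
    return max(0.1, 1.0 - risk)  # higher risk, smaller share of the usual audience


if __name__ == "__main__":
    creator = Profile("creator-123", follower_count=80_000,
                      past_viral_posts=4, community_sentiment=-0.7)
    print(distribution_cap(creator, topic_sensitivity=0.9))  # capped well below 1.0
```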