## The Four Tiers
- 10+ primary sources, clean on every QA gate, cross-corroborated claims.
- 5–9 primary sources, meets all editorial standards.
- Breaking coverage with fewer sources available. Expect updates.
- Short-form dispatch. Cited but not yet fully corroborated.
## How the Score Is Computed
The Evidence score combines two equally weighted components:
- Sources (50%) — primary and corroborating links, capped per article type (see table below).
- QA Score (50%) — average across our six editorial gates.
The source cap is type-aware because the effort bar is type-aware. A breaking news dispatch saturates at 8 well-chosen primary sources; a deep-dive investigation needs 18 before we consider it fully sourced. That keeps the score fair across formats: a tight, well-reported breaking piece can earn WELL SOURCED, and a long investigation earns HEAVILY SOURCED only when it actually triangulates across the field.
| Article type | Source cap | Source minimum |
|---|---|---|
| Breaking | 8 | 5 |
| News | 10 | 5 |
| Listicle | 12 | 8 |
| Standard | 12 | 10 |
| Thought piece | 10 | 8 |
| Deep dive | 18 | 15 |
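As a minimal sketch of how the two components combine, assuming each of the six gates is scored 0–100 (the per-gate scale is an assumption, and this is not the production formula):

```python
# Hypothetical sketch of the Evidence score. Source caps come from the
# table above; the 0-100 per-gate scale is an assumed convention.
SOURCE_CAPS = {
    "breaking": 8, "news": 10, "listicle": 12,
    "standard": 12, "thought_piece": 10, "deep_dive": 18,
}

def evidence_score(article_type: str, source_count: int,
                   gate_scores: list[float]) -> float:
    """Combine the source component (50%) and the QA component (50%)."""
    cap = SOURCE_CAPS[article_type]
    # Sources saturate at the type's cap, so extra links past it earn nothing.
    source_component = min(source_count, cap) / cap
    # QA component is the average of the six editorial gate scores.
    qa_component = sum(gate_scores) / (100 * len(gate_scores))
    return 50 * source_component + 50 * qa_component
```

Under this sketch, a breaking dispatch with 8 sources and strong gate scores can reach the same score as a deep dive with 18, which is the fairness property the caps are designed to give.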
Word count is deliberately not a factor. A well-sourced short dispatch should score the same as a well-sourced deep dive; length follows naturally from source count, and scoring it separately would double-count the same signal.
## What Counts as a Primary Source
We count the following as primary sources and link them inline:
- Government filings, agency statements, court documents, and press releases.
- Original reporting from wire services and established newsrooms (Reuters, AP, AFP, Bloomberg, the New York Times, the Washington Post, the Wall Street Journal, and equivalent outlets with on-record correspondents).
- Company SEC filings, corporate statements, and official investor communications.
- Academic papers, research institute reports, and named-analyst briefings from recognized think tanks.
- Data dashboards and datasets (shipping AIS feeds, flight tracking, public registries).
We do not count: anonymous Telegram or X posts, republishing blogs with no original reporting, state propaganda outlets without corroboration, or search-result pages dressed up to look like sources.
## The Six QA Gates
Every article passes through six automated editorial gates before publishing. If any gate fails, the article goes back for revision until every gate passes.
- Truth. Source count meets the minimum for the article type, claims have attribution within six paragraphs, and numeric stats are either sourced inline or placeholdered. Internal contradictions are flagged.
- Writing. No banned phrases (hype, crypto slang, AI tells, narrator voice), no em dashes, no "Day N" headlines, no headline lifted from a source, no "At a Glance" bullets over 20 words or containing hedging / credentials / analysis. Editorial quality (lede, voice, compression, relevance) is scored by an LLM.
- Integrity. Competing perspectives are represented and attributed. Opposing views from named sources count. Pure one-sided framing fails.
- Images. Hero image present, body images sourced and attributed.
- SEO. Frontmatter complete, single H1, "At a Glance" section present, key term in the first 100 words.
- Originality. Walk-through-the-math requirement — the article must demonstrate at least one analytical contribution (a named tradeoff, a calculation combining two sources, a forced choice, a structural insight, or a reframe) rather than just reciting facts.
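The revise-until-pass loop can be sketched as follows; the gate names come from this section, while the check functions, the `revise` step, and the round limit are hypothetical stand-ins:

```python
from typing import Callable

GATES = ["truth", "writing", "integrity", "images", "seo", "originality"]

def failing_gates(article: dict,
                  checks: dict[str, Callable[[dict], bool]]) -> list[str]:
    """Run every gate; return the names of the ones that fail."""
    return [gate for gate in GATES if not checks[gate](article)]

def publish_loop(article: dict, checks, revise, max_rounds: int = 5) -> dict:
    """Send the article back for revision until all six gates pass."""
    for _ in range(max_rounds):
        failed = failing_gates(article, checks)
        if not failed:
            return article  # all gates clean: publishable
        article = revise(article, failed)  # targeted revision pass
    raise RuntimeError(f"still failing after {max_rounds} rounds: {failed}")
```

The key design point is that gates are all-or-nothing for publication: a single failing gate blocks the article, regardless of how the other five score.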
## Why We Publish This
In 2026 every reader assumes someone is trying to influence them. We think the only sustainable response is to show our work. The Evidence score is a commitment, not a marketing metric: we would rather publish fewer pieces at 80+ than flood the zone with BRIEF dispatches.
If you find a claim we cite that doesn't hold up, email corrections@nbn.fm and we will update the article or retract.