Guides & Reviews
4/11/2026

How to Tell What’s Real Online: Tools, Checks, and Trade-offs

Start with provenance and corroboration, not vibes. This guide shows how to verify images, video, audio, posts, and data—what to use first, when to escalate, and which tools to trust.

If you need to know quickly whether an image, video, quote, or “breaking” post is real, start with provenance and corroboration. In under two minutes: look for a content credential label (C2PA/Content Credentials), run a reverse image/video search, extract basic metadata, and cross-check the claim against at least two independent sources. If any piece fails—no provenance, reused media, missing metadata when it should exist, or conflicting reports—treat it as unverified.

When the stakes are high (politics, markets, safety, reputation), escalate: compare multiple frames or versions, consult reputable fact-checkers, triangulate time and place with open data (maps, weather, flight or ship logs), and, if needed, use a paid authenticity or deepfake-detection service. Prioritize workflows that confirm how a piece of content was made (provenance) over attempts to guess if it’s AI (detection)—and document your steps.

Who this guide is for

  • Journalists, researchers, and OSINT practitioners who need a disciplined workflow
  • Communications, brand safety, and security teams managing reputational or physical risk
  • Educators teaching media literacy beyond “spot the fake” tips
  • Everyday users who want a fast, reliable triage before sharing

What changed—and why your BS detector feels broken

  • Generative media at scale: Anyone can create photorealistic images, voice clones, and lip-synced videos in minutes, often with minimal artifacts. Detection models improve, but so do generators—an arms race with no perfect referee.
  • Weaker platform trust signals: “Verification” checkmarks on some platforms no longer map to identity or expertise. Engagement-optimized feeds amplify novelty and outrage over accuracy.
  • Metadata loss and spoofing: Many platforms strip or rewrite EXIF/IPTC metadata. Simple forensic tests like error level analysis (ELA) are easy to game and yield false positives.
  • Harder access to ground truth: APIs, historical search, and social analytics tools have become restricted or paywalled. Even satellite and aerial imagery with definitive answers can be gated, delayed, or low resolution.
  • Partial, uneven provenance adoption: Standards like C2PA (Content Provenance and Authenticity) and Adobe Content Credentials exist and are growing, but coverage is patchy and not every platform preserves labels.

Bottom line: Rely less on a single “detector” and more on layered verification that blends provenance, corroboration, and context.

The 60‑second triage workflow (good enough for most shares)

  1. Check for provenance labels
  • Look for “Content Credentials” or C2PA badges. If present, open the details: who captured/created it, with what device/software, and what edits were made.
  2. Run a quick visual search
  • Paste the image in Google Lens or Bing Visual Search; for video, screenshot 3–5 frames and reverse-search each. You’re looking for older or identical versions.
  3. Pull lightweight metadata
  • Download the file and view metadata with exif.tools (web) or ExifTool (desktop). Missing metadata isn’t proof of fakery, but unexpected or contradictory fields are red flags.
  4. Cross-check the claim
  • Search the main claim in quotes plus a couple of unique nouns. Scan results for reliable outlets and official statements. If it’s big news, there should be multiple independent sources.

Decision: Share only if at least two steps produce positive corroboration and none produce contradictions. Otherwise, label it “unverified” or don’t share.
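The decision rule above is easy to encode as a small checklist function; the step names and the `TriageResult` shape below are hypothetical, purely to make the logic explicit:

```python
# Sketch of the triage decision rule: share only if at least two steps
# produce positive corroboration and none produce contradictions.
# TriageResult and the step names are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class TriageResult:
    step: str            # e.g. "provenance", "reverse_search", "metadata"
    corroborates: bool   # this step produced positive corroboration
    contradicts: bool    # this step surfaced a contradiction

def triage_decision(results: list[TriageResult]) -> str:
    """Apply the 60-second triage rule to a list of step results."""
    if any(r.contradicts for r in results):
        return "do not share"
    if sum(r.corroborates for r in results) >= 2:
        return "share"
    return "label unverified"
```

The point of writing it down is that “two positives, zero contradictions” is a rule a whole team can apply consistently under time pressure.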

The professional workflow (when accuracy is nonnegotiable)

  • Triangulate time and place
    • Geolocate landmarks with Google Maps, OpenStreetMap, or Mapillary. Match shadows to local time and weather using SunCalc and historical weather data.
  • Audit source and channel
    • Check account age, handle history, past posts, and whether they’ve published verifiably correct info before. Beware newly created, single-issue accounts.
  • Compare versions and frames
    • Extract keyframes (ffmpeg or InVID/WeVerify plugin) and compare with reverse searches. Look for mismatched reflections, hands, text, or physics.
  • Use open data when possible
    • Flights: ADS-B Exchange or FlightRadar24. Ships: MarineTraffic. Earth observation: Sentinel Hub EO Browser (Copernicus), NASA Worldview, USGS EarthExplorer (Landsat). Cross-reference movement, events, and environmental conditions.
  • Consult fact-checkers and archives
    • Google Fact Check Explorer, AP/Reuters/AFP Fact Check, Snopes, PolitiFact. Use the Internet Archive’s Wayback Machine for historical snapshots.
  • Escalate to enterprise detection/provenance
    • Consider vendors such as Reality Defender, Sensity AI, and Hive Moderation for deepfake signals; Truepic for capture-time provenance; NewsGuard for source-level assessments. Treat scores as inputs, not verdicts.
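For the shadow check, a rough sense of the plausible sun altitude can be computed from latitude, date, and solar time. This is a crude textbook approximation (SunCalc and similar tools are more precise); it is only useful for order-of-magnitude sanity checks, not forensic conclusions:

```python
import math

def solar_elevation_deg(day_of_year: int, latitude_deg: float,
                        local_solar_hour: float) -> float:
    """Approximate sun altitude in degrees, for shadow sanity checks.

    Uses a standard textbook approximation: solar declination from the
    day of year, hour angle from local *solar* time (not clock time).
    Accurate to within roughly a degree or two, which is enough to ask
    "could shadows this long exist at the claimed time and place?"
    """
    # Declination: ~ -23.44 deg at the December solstice, 0 at equinoxes.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour away from local solar noon.
    hour_angle = 15.0 * (local_solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_alt))
```

A video claimed to be shot at noon near the equator should show a sun nearly overhead; long dramatic shadows in that setting mean the time, place, or footage is wrong.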

The tools that actually help (and what they’re best for)

Image authenticity and context

  • Free/low-friction
    • Google Lens, Bing Visual Search: Fast discovery of prior appearances.
    • TinEye: Durable index for older or altered images.
    • InVID & WeVerify browser plugin: Keyframe extraction, reverse-search helpers, magnifier, and metadata reading.
    • ExifTool (desktop), exif.tools (web): Precise metadata parsing.
  • Pro/enterprise
    • Truepic (capture/provenance): C2PA-backed capture to preserve cryptographic provenance.
    • Reality Defender, Sensity, Hive: Model-based AI image detection at scale.

Video verification

  • Free/low-friction
    • InVID & WeVerify: Keyframe extraction for reverse searching.
    • FFmpeg: Extract frames and audio, check encodes.
    • Amnesty’s YouTube DataViewer: Thumbnails and upload time checks.
  • Pro/enterprise
    • Reality Defender, Sensity: Deepfake and manipulation indicators.
    • Google/DeepMind SynthID detectors (where available): Watermark checks for supported AI generators.
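Comparing extracted keyframes for near-duplicates is often done with perceptual hashes. The sketch below implements a difference hash (dHash) over an already-downscaled 9×8 grayscale matrix; a real pipeline would first decode and resize frames with an imaging library such as Pillow, which is assumed away here:

```python
def dhash(gray: list[list[int]]) -> int:
    """Difference hash over a tiny grayscale matrix (rows of 0-255 values).

    Each bit records whether a pixel is brighter than its right neighbor,
    so the hash survives re-encoding, resizing, and mild color shifts.
    Assumes the frame has already been downscaled to 9 columns x 8 rows.
    """
    bits = 0
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Bits that differ between two hashes; small distance = likely match."""
    return bin(a ^ b).count("1")
```

Two frames with a Hamming distance near zero are almost certainly the same shot, which is how recycled footage gets caught even after compression and cropping.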

Audio and voice clones

  • Free habits
    • Always confirm via a known channel; ask for a code word or specific shared memory. Beware urgency.
  • Pro/enterprise
    • Pindrop and similar voice security vendors: Liveness and spoof detection for call centers.
    • Some model providers offer brand-specific detectors for their own voices; use when applicable.

Text claims and documents

  • Free/low-friction
    • Google Fact Check Explorer: Aggregates ClaimReview-tagged fact checks.
    • The Wayback Machine: Compare document versions and deletions.
    • PDF hash/metadata checks: Compute a SHA-256 hash; compare with the supposed source’s published hash.
  • Pro/enterprise
    • NewsGuard (extension/subscription): Source-level nutrition labels.
    • Commercial media monitoring for claim origination and spread patterns.
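Computing a document hash is worth standardizing across a team. The helper below streams the file so large PDFs never need to fit in memory; the function name is ours, not from any tool mentioned above:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks.

    The resulting hex digest can be compared against the hash a source
    publishes; any single-byte change in the file changes the digest.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

If the digest you compute does not match the published one, you are looking at a different document, full stop; the hash says nothing, however, about which copy is the authentic one.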

Source and domain vetting

  • Free/low-friction
    • ICANN Lookup (RDAP/WHOIS): Domain age, registrar, contacts (often privacy-shielded).
    • SecurityTrails, DNSlytics (freemium): Passive DNS, subdomains, historical records.
    • Company registries and NGO databases: Verify real-world entities.
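RDAP responses are plain JSON, so pulling a domain’s registration date out of one is straightforward. The payload below is a hypothetical, truncated example of the shape such lookups return, showing only the fields used here:

```python
import json
from datetime import datetime
from typing import Optional

# Hypothetical, truncated RDAP-style response; real lookups return
# many more fields. Only "events" is used below.
sample_rdap = json.loads("""
{
  "ldhName": "example-outlet.com",
  "events": [
    {"eventAction": "registration", "eventDate": "2026-03-30T00:00:00Z"},
    {"eventAction": "last changed", "eventDate": "2026-04-01T12:00:00Z"}
  ]
}
""")

def registration_date(rdap: dict) -> Optional[datetime]:
    """Find the registration event in an RDAP 'events' array, if any."""
    for event in rdap.get("events", []):
        if event.get("eventAction") == "registration":
            return datetime.fromisoformat(
                event["eventDate"].replace("Z", "+00:00"))
    return None

reg = registration_date(sample_rdap)
```

A domain registered days before it starts publishing “breaking news” is exactly the kind of red flag this check surfaces.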

Maps and environmental corroboration

  • Free/low-friction
    • Sentinel Hub EO Browser, NASA Worldview: Weather, smoke, fire, and land changes.
    • OpenStreetMap + Street View/Mapillary: Geometry and signage matching.
  • Limits and caveats
    • High-res commercial imagery (Maxar, Planet) is often paywalled or delayed. Public data may be cloudy, coarse, or off-schedule.

Browser add-ons worth having

  • InVID & WeVerify: Swiss Army knife for visual OSINT.
  • NewsGuard (if your org approves): Source reliability cues at a glance.
  • uBlock Origin: Remove engagement bait and deceptive overlays that nudge fast sharing.

How to judge AI “detectors” (and not get burned)

  • Prefer provenance over detection: A signed, cryptographically verifiable provenance trail (C2PA) can tell you how media was made and edited. Detectors cannot prove authenticity—at best they estimate the likelihood of synthesis.
  • Look for per-model competence: A detector may work well on content from Generator A but not B. Vendors that publish benchmarks per generator and update frequently are safer bets.
  • Demand transparency: Confidence scores with error bars, decision-time explainability, and validation datasets matter. Avoid anyone selling 99.9% accuracy without peer-reviewed, independent testing.
  • Use consensus, not a single score: If three different detectors, plus provenance checks and OSINT corroboration, all align, confidence increases. If they disagree, downgrade certainty.
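A consensus rule can be as simple as requiring all signals to agree before confidence rises; the function below is a toy version of that idea, and its thresholds are illustrative, not calibrated:

```python
def consensus_confidence(detector_scores: list[float],
                         provenance_verified: bool,
                         osint_corroborated: bool) -> str:
    """Toy consensus rule: raise confidence only when signals align.

    Each score is a detector's estimated probability of synthesis (0-1).
    Disagreement among detectors always downgrades certainty, and
    verified provenance outweighs model scores either way.
    """
    if not detector_scores:
        return "insufficient signal"
    agree_synthetic = all(s >= 0.7 for s in detector_scores)
    agree_real = all(s <= 0.3 for s in detector_scores)
    if not (agree_synthetic or agree_real):
        return "detectors disagree: downgrade certainty"
    if agree_real and provenance_verified and osint_corroborated:
        return "likely authentic"
    if agree_synthetic and not provenance_verified:
        return "likely synthetic"
    return "mixed signals: hold"
```

Note the asymmetry: one dissenting detector is enough to force a downgrade, which is the point of consensus over single scores.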

A practical trust model you can implement tomorrow

  • Green (publish/share): Verifiable provenance or strong multi-source corroboration; no contradictions.
  • Yellow (hold/label): Partial corroboration; unclear or missing provenance; minor anomalies that don’t falsify the core claim.
  • Red (do not share): Clear contradictions, recycled or context-flipped media, or unresolved high-impact uncertainty.

Document the path to green. In teams, make the checklist part of your editorial or incident-response workflow; require two-person review for red-to-yellow escalations during breaking news.
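As a starting point, the traffic-light model can be encoded directly in the editorial tooling; the inputs and thresholds below are illustrative and should be tuned to your own risk tolerance:

```python
from enum import Enum

class Light(Enum):
    GREEN = "publish/share"
    YELLOW = "hold/label"
    RED = "do not share"

def trust_light(has_provenance: bool, independent_sources: int,
                contradictions: bool, high_impact_uncertainty: bool) -> Light:
    """Illustrative green/yellow/red mapping; tune thresholds locally.

    Contradictions and unresolved high-impact uncertainty trump
    everything; provenance or multi-source corroboration earns green;
    anything in between defaults to yellow (hold and label).
    """
    if contradictions or high_impact_uncertainty:
        return Light.RED
    if has_provenance or independent_sources >= 2:
        return Light.GREEN
    return Light.YELLOW
```

Defaulting the middle ground to yellow, rather than forcing a binary call, is what makes the model usable during breaking news.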

Playbooks for common scenarios

Breaking “caught on camera” posts

  • Reverse-search within minutes; if an older version appears, it’s context laundering.
  • Check weather, shadows, and visible signage; verify the language and fonts on signs.
  • Scan replies for OSINT leads. Credible users often post earlier sources.

Leaked memos and PDFs

  • Verify hashes and compare fonts, styles, and internal links with past official documents from the same entity.
  • Cross-check with reputable reporters; real leaks often come with corroborating details or off-the-record confirmations.

Too-good-to-be-true satellite/war footage

  • Check tasking feasibility: Is the satellite capable of that resolution? Does the orbit match the claimed time? If not, it’s likely repurposed or simulated.
  • Use public fire/flood/haze layers to test plausibility before assuming high-res imagery is current.

Voicemails and urgent phone calls

  • Treat unexpected “I’m in danger/send money” calls as compromised until verified via a second channel.
  • Ask for a prearranged code phrase or a detail only the real person would know, then hang up and call back using a saved number.

Political claims during elections

  • Default to yellow. Search for official statements, court filings, and trusted local outlets. Use Fact Check Explorer.
  • Be wary of cropped screenshots with no link. Track down the source page; compare to archived versions.

Trade-offs you should accept upfront

  • Speed vs certainty: The faster you need an answer, the more you must rely on heuristics. State your confidence level.
  • Privacy vs transparency: Some tools require uploading sensitive media; use local/offline tools when content is sensitive.
  • Cost vs coverage: Public tools get you far, but niche or high-stakes cases often need paid imagery, data, or enterprise detection.
  • False positives vs false negatives: For safety-critical contexts, bias toward catching fakes (accept more false positives). For reputational contexts, bias toward avoiding false accusations (accept more false negatives).

Building an organizational verification stack

  • People: Train a small team in OSINT basics and escalation procedures. Keep a roster for time-zone coverage.
  • Process: Write a one-page playbook (triage, escalation, documentation). Run drills ahead of high-risk events (elections, product launches).
  • Technology: Standardize on a toolkit (listed above). Maintain accounts for paid imagery/detection you actually use. Track vendor updates quarterly.
  • Governance: Define who can greenlight publication, and how to issue corrections. Keep an audit trail of your verification steps.

Key takeaways

  • Start with provenance and corroboration. Detection is a supplement, not a verdict.
  • Use layered checks: provenance label → reverse search → metadata → cross-source corroboration.
  • Expect incomplete data. State your confidence and update as new evidence appears.
  • For teams, codify the workflow. For individuals, slow down before sharing.

FAQ

Q: Can AI detectors reliably tell me if something is fake?
A: No single detector is reliable across all generators and edits. Use them as one signal among many and prefer provenance and corroboration when available.

Q: What is C2PA/Content Credentials and how do I use it?
A: C2PA is a standard for attaching cryptographic provenance to media. When you see a “Content Credentials” badge, click through to view capture and edit history. Treat it like a tamper-evident seal: if the signature verifies, the recorded history has not been altered after signing, though it is only as trustworthy as whoever signed it.

Q: Are reverse image searches still useful in the AI era?
A: Yes—especially for catching recirculated or context-flipped media. They won’t tell you if a brand-new AI image is synthetic, but they’re excellent at spotting reuse.

Q: Should I trust platform verification checkmarks?
A: Treat them as account-level hints, not proof of identity or expertise. Confirm identity via official sites or cross-platform links.

Q: What if I can’t verify but I must act?
A: Label the content as unverified, describe the uncertainties, and avoid definitive language. Establish a monitoring plan to update or retract based on new evidence.

Source & original reading: https://www.wired.com/story/how-the-internet-broke-everyones-bullshit-detectors/