For Researchers

A framework built for
academic rigor.

The Clear-Sight Analytical Framework (CSAF) is a ten-dimension model for evaluating article construction through observable language patterns. We welcome interdisciplinary collaboration, independent validation, and critical examination of the methodology.

Aligned to UNESCO MIL · Ofcom MIL · ACRL · AASL
Methodology: 10 dimensions · 6 patterns · observable signals
Open to Validation · Replication · Collaboration

Why a new framework for news analysis?

Existing approaches to media evaluation tend to rely on credibility ratings, political placement, or subjective judgment. These are useful but difficult to standardize, difficult to teach, and difficult to reproduce across contexts.

CSAF takes a different approach. Each of the ten signals evaluates a distinct, observable dimension of article construction — grounded in language patterns rather than editorial intent or source reputation. The goal is a framework where two independent evaluators, given the same article, arrive at the same analysis.

Methodology

The CSAF design principles

01

Observable Signals

Every dimension is grounded in language patterns that can be identified without inferring authorial intent. Word choice, sourcing structure, framing architecture, evidence density — patterns that are either present in the text or not.

02

Ten Independent Dimensions

Balance, Logic, Autonomy, Evidence, Sourcing, Specificity, Consistency, Nuance, Context, and Claims. Each evaluates a distinct aspect of construction. Cross-correlation analysis shows acceptable independence between dimensions, with pairwise correlation coefficients below 0.5 for most dimension pairs.
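As an illustration of the kind of cross-correlation check described above, here is a minimal sketch in Python. The score matrix is randomly generated placeholder data, not real CSAF output, and the 0.5 threshold simply mirrors the figure quoted here; nothing in this snippet reflects Clear-Sight's actual scoring pipeline.

```python
import numpy as np

# The ten CSAF dimensions, as listed in the framework.
DIMENSIONS = ["Balance", "Logic", "Autonomy", "Evidence", "Sourcing",
              "Specificity", "Consistency", "Nuance", "Context", "Claims"]

# Placeholder scores: rows = articles, columns = dimensions (illustrative only).
rng = np.random.default_rng(0)
scores = rng.uniform(0, 100, size=(200, len(DIMENSIONS)))

# 10x10 Pearson correlation matrix across dimensions.
corr = np.corrcoef(scores, rowvar=False)

# Flag dimension pairs with |r| >= 0.5 as candidates for redundancy.
flagged = [(DIMENSIONS[i], DIMENSIONS[j], round(corr[i, j], 2))
           for i in range(len(DIMENSIONS))
           for j in range(i + 1, len(DIMENSIONS))
           if abs(corr[i, j]) >= 0.5]
print(flagged)
```

Run on a real scored corpus, any pair that appears in `flagged` would warrant a closer look at whether the two dimensions are truly measuring distinct aspects of construction.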

03

An Open Pattern Library

The ten signals combine into recognizable construction patterns that describe how articles function holistically. We've named six so far — the Credibility Illusion, the Emotional Trap, the Echo Chamber, the Attribution Trap, the AI-Generated Article, and Breaking News (good vs. bad). The library is open-ended; new patterns will be named as they emerge from the data and from collaborative research.

04

Reproducibility by Design

The framework is designed for inter-rater reliability. Because signals are language-based and observable, independent evaluators should converge on the same analysis — a property that distinguishes CSAF from opinion-based evaluation.
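One standard way to quantify the inter-rater reliability property described above is Cohen's kappa, a chance-corrected measure of agreement between two evaluators. The sketch below applies it to hypothetical present/absent labels for a single signal across ten articles; kappa is a generic statistic used here for illustration, not Clear-Sight's documented scoring procedure.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[lab] * counts_b[lab]
                   for lab in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical evaluators judging whether one signal is present in 10 articles.
a = ["present", "present", "absent", "present", "absent",
     "absent", "present", "absent", "present", "present"]
b = ["present", "present", "absent", "present", "present",
     "absent", "present", "absent", "absent", "present"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

A validation study would report this kind of statistic per dimension across an expert panel; values near 1.0 indicate the convergence the framework is designed for.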

Open Questions

Areas for collaboration

We are actively seeking academic partnerships across these research areas. If your work intersects with any of these questions, we would welcome the conversation.

Validation & Inter-Rater Reliability

Independent validation of CSAF scoring against human expert panels. Does the framework produce consistent results across evaluators? Where do the dimensions diverge from expert consensus, and what does that reveal?

Pedagogical Efficacy

Pre/post studies measuring the impact of CSAF-based instruction on student critical reading outcomes. Does teaching with observable signals outperform traditional media literacy instruction? At what grade levels?

Cross-Cultural Applicability

How well do the ten dimensions transfer across languages, media ecosystems, and cultural contexts? Which signals are universal and which require cultural calibration?

Longitudinal Reader Impact

How does sustained use of CSAF affect reading behavior, source diversity, and information diet over time? Does the framework produce durable changes in how people engage with news?

Pattern Discovery & Computational Linguistics

The current pattern library names six recurring constructions, but it's open-ended by design. What new patterns emerge from large-scale CSAF-scored corpora? CSAF's language-based approach intersects with NLP research on framing detection, sentiment analysis, and discourse structure — and we welcome collaboration on surfacing emergent patterns from the data.

Journalism & Newsroom Impact

How does framework-aware self-evaluation affect editorial decisions? Does pre-publication CSAF analysis change the construction choices journalists make?

White Paper ~12 min read

News Literacy as Observation,
Not Judgment.

A Methodological Argument

The methodological case for shifting news literacy education from credibility verdicts to observable signals. It argues that the dominant judgment-based pedagogy is unscalable and unassessable, and presents the CSAF as a worked example of what observation-based instruction looks like in practice.

Mark Kallback · Founder & CEO, Clear-Sight

Cite as: Kallback, M. (2026). News Literacy as Observation, Not Judgment: A Methodological Argument. Clear-Sight white paper.

Documentation & IP

Where to go deeper

Framework Specification

The full CSAF specification — ten signals, the current pattern library, scoring methodology, and design principles — is publicly documented on our Framework page. We believe transparency strengthens the methodology.

Patent Filing

The CSAF methodology and its application to article construction analysis are the subject of a pending patent application. Academic use, citation, and research collaboration are welcomed and encouraged.

Data & API Access

We're exploring research data partnerships for qualified academic institutions. If your research would benefit from access to anonymized CSAF scoring data or programmatic API access, we're open to that conversation.

FAQ

Common questions
from research collaborators.

How does CSAF differ from existing media-evaluation approaches?

Existing approaches (AllSides, Ad Fontes, NewsGuard, etc.) score outlets, not articles, and most rely on rater judgment about credibility, bias, or political placement. CSAF scores articles, not outlets, and grounds every dimension in observable language patterns rather than reputation.

The methodological argument is laid out in detail in the white paper — the short version: judgment-based scoring is hard to scale, hard to teach, and hard to assess; observation-based scoring is designed for all three.

How reliable is CSAF scoring?

CSAF is designed for inter-rater reliability; that is the central methodological commitment. Internal cross-correlation analysis shows acceptable independence between dimensions, with pairwise correlations below 0.5 for most pairs.

Can we validate the framework independently?

Independent validation is exactly what we are inviting. If you're working on rater-reliability studies of media-evaluation frameworks, we'll provide the framework documentation, scoring rubric, and a sample article corpus to test against expert panels. Get in touch via the form below.

Can we use and cite the framework in academic work?

Yes — academic use is welcomed and encouraged. The framework documentation is publicly available, and citation is the only request we make for academic use.

Suggested citation: Kallback, M. (2026). News Literacy as Observation, Not Judgment: A Methodological Argument. Clear-Sight white paper. Retrieved from https://clear-sight.ai/whitepaper.html

For commercial derivative work, the patent filing applies; reach out and we'll scope licensing.

Is programmatic or data access available for research?

Programmatic access for qualified academic research is available on request. Use cases we're already supporting include validation studies, computational linguistics work on framing detection, longitudinal corpora of CSAF-scored articles, and journalism / newsroom self-evaluation studies.

Tell us about your project in the form below and we'll scope appropriate access.

What about IRB review for studies involving human participants?

For studies involving human participants (e.g. pre/post pedagogical efficacy work, rater-reliability panels), IRB review is your institution's process. We support that work by providing a methodology paper, a data privacy statement, and a security questionnaire response that can be attached to your protocol.

Clear-Sight collects no learner PII beyond account email, and any participant-level data we provide for studies is anonymized at source.

Does CSAF work in languages other than English?

The framework's ten dimensions are designed to be language-agnostic in principle — emotional intensity, attribution, structural balance, and contextual completeness are observable across languages. The current production scoring is calibrated for English-language news; cross-cultural and cross-language transfer is one of the open research questions we are most actively interested in collaborating on.

Let's advance the field together.

The framework is open for examination, citation, and rigorous critique. Whether your interest is validation, pedagogical efficacy, NLP applications, or longitudinal reader impact, we welcome the conversation.