Educators teach it. Journalists practice it. Readers depend on it. But for decades, media literacy has meant something different depending on who you ask and where you sit. A journalism professor and a high school teacher can walk out of the same conference and describe critical reading in completely different terms. Without a shared quantitative basis, media literacy is hard to teach, harder to measure, and nearly impossible to build on consistently. Clear-Sight was built to change that. Ten metrics. One neutral, consistent framework. A starting point for the conversation the field has needed to have.
Clear-Sight evaluates article construction and framing only. It does not step outside the article to fact-check. This is a strength, not a limitation. It means the framework works on any article regardless of subject matter.
Does the article present the subject fairly or does it favor one perspective?
Balance measures the structural choices an article makes -- how it frames the argument, whose voice gets space, and whether the reasoning is sound. It is not about word count. It is about whether the construction of the story gives the reader an honest view of the landscape.
Does the article inform or does it activate?
Logic measures the emotional intensity of the language used and the effect it is designed to have on the reader. A high score means the article delivers information in a measured tone. A low score means emotional language is doing more work than the facts are.
Does the article let you reach your own conclusion?
Autonomy measures the pressure an article applies to steer the reader toward a specific outcome. A high score means the article trusts you to decide. A low score means the structure of the story is designed to leave you with only one acceptable conclusion.
How much of this article is verifiable fact versus opinion or assertion?
Evidence measures the ratio of checkable, concrete facts to unsubstantiated claims. It does not evaluate whether those facts are correct -- it measures how much of the article's weight is carried by verifiable information versus stated opinion.
Who is speaking in this article and how transparent is their role?
Sourcing measures the quality, diversity, and transparency of the voices used to build the story. A high score means sources are named, credible, and represent a range of relevant perspectives. A low score means the article leans on anonymous sources, a single voice, or sources whose interests are not disclosed.
How precise is the language?
Specificity measures whether the article uses concrete, verifiable detail or relies on vague language that sounds authoritative but resists scrutiny. Precision is a marker of rigor. Vagueness is often a marker of something else.
Does the article hold together?
Consistency measures whether the article's claims, framing, and tone remain coherent from beginning to end. A high score means the story does not contradict itself. A low score means the article shifts in ways that undermine its own argument or presents information that does not align across sections.
Does the article treat its subject with the complexity it deserves?
Nuance measures whether the story acknowledges competing interests, legitimate uncertainty, and the inherent difficulty of most important issues -- or whether it reduces them to something simpler than they actually are.
Does the article give you what you need to understand the story?
Context measures whether sufficient background, history, and surrounding circumstances are provided for the reader to fully evaluate what they are reading -- or whether the story assumes knowledge the reader may not have.
Are the numbers and statistics in this article used honestly?
Claims measures how the article handles quantitative information -- whether data is presented with appropriate framing, whether statistics are contextualized, and whether numbers are used to illuminate or to persuade. Clear-Sight does not fact-check claims against outside sources. It evaluates how claims are constructed and used within the article itself.
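Taken together, the ten metrics form a profile of a single article. The framework does not specify how scores are represented; as an illustration only, a profile could be sketched as a simple structure with an assumed 0-100 scale per metric:

```python
from dataclasses import dataclass, fields

# Hypothetical representation of a Clear-Sight profile. The ten
# metric names come from the framework itself; the 0-100 scale and
# this data structure are assumptions made for illustration.
@dataclass
class Profile:
    balance: int
    logic: int
    autonomy: int
    evidence: int
    sourcing: int
    specificity: int
    consistency: int
    nuance: int
    context: int
    claims: int

    def __post_init__(self):
        # Reject scores outside the assumed 0-100 range.
        for f in fields(self):
            value = getattr(self, f.name)
            if not 0 <= value <= 100:
                raise ValueError(f"{f.name} must be 0-100, got {value}")
```

Holding all ten scores together, rather than reading any one in isolation, is what makes the pattern analysis below possible.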
These six patterns are the curriculum. Each one teaches something distinct about how media construction works. They are designed to be used in classrooms, newsrooms, conference talks, lesson plans, and academic papers.
What it teaches: Low Sourcing is not always a red flag. Context across the full profile tells the real story.
Both good and bad breaking news have low Evidence and low Sourcing. But everything else diverges. The bad article compensates for thin sourcing with emotional activation, directional framing, false precision, and selective context. The good article compensates with measured tone, honest uncertainty, and internal coherence.
In breaking news, judge the article not by what it knows but by how it handles what it does not know.
What it teaches: Precision is not the same as proof. Specific-sounding language is one of the most effective credibility signals an article can manufacture.
The signature is high Specificity paired with low Evidence. Numbers, names, and precise language create an impression of rigor. But underneath, the claims lack verifiable grounding. Consistency is high because the story is constructed, not reported.
The more specific an article sounds, the more carefully you should ask whether that specificity is grounded. Precision is easy to manufacture. Verification is not.
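The signature described above reduces to a two-metric check. A minimal sketch, assuming 0-100 scores; the 70/40 cut-offs are illustrative thresholds chosen for this example, not values defined by Clear-Sight:

```python
# Illustrative detector for the precision-without-proof signature:
# high Specificity paired with low Evidence. The default thresholds
# are assumptions for this sketch, not part of the framework.
def precision_without_proof(specificity: int, evidence: int,
                            high: int = 70, low: int = 40) -> bool:
    return specificity >= high and evidence <= low
```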
What it teaches: Emotional activation and directional pressure together are the signature of content designed to move people, not inform them.
Logic and Autonomy are both low. Emotional language dominates and only one acceptable conclusion is presented. Facts may be present but in service of the emotional narrative. Nuance is absent because complexity would interrupt the emotional momentum.
When Logic and Autonomy are both low, ask what the article wants you to do. Because it wants you to do something.
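As with the other signatures, this one can be expressed as a simple check on two metrics. A sketch with an assumed 0-100 scale and an illustrative threshold:

```python
# Illustrative detector for the activation signature: Logic and
# Autonomy both low. The cut-off of 40 is an assumed threshold
# for this sketch, not a value defined by Clear-Sight.
def activation_signature(logic: int, autonomy: int, low: int = 40) -> bool:
    return logic <= low and autonomy <= low
```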
What it teaches: A perfectly coherent article can still be a narrow one. Consistency is not a substitute for completeness.
This is the pattern that feeds algorithmic reinforcement. It feels credible because it never contradicts itself, but only because it never introduced anything that would. One perspective dominates throughout. Tone is often measured. Direction is embedded in the framing, not the tone.
Ask not whether the article holds together but whether it ever introduced anything that could complicate it. An article that never challenges its own premise is not rigorous. It is selective.
What it teaches: Well-attributed quotes from a single perspective are still one-sided journalism. Attribution is not balance.
This is the most sophisticated pattern because it passes every basic credibility check a reader would run without a framework. Sourcing is high -- named, credentialed sources. The article looks rigorous. But Balance is low. Only one side's complexity is explored. Only one side's sources are sought.
Before you trust an article because it is well-sourced, ask whose sources they are. Attribution tells you where the information came from. Balance tells you whose information was sought.
What it teaches: Clear-Sight does not detect AI authorship. But AI-generated content tends to exhibit a specific construction pattern -- high surface credibility with low depth underneath.
The signature is a cluster of high surface signals -- Specificity, Consistency, Logic -- combined with low depth signals -- Nuance, Context, Evidence. The article feels complete. It reads smoothly. Nothing jars. But when you look for genuine complexity, genuine tension, genuine depth of context, it is not there.
The question is not whether an article was written by a machine. The question is whether the article has the depth that genuine reporting produces. High surface credibility combined with low depth is the pattern worth learning to recognize.
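The surface/depth split this pattern describes lends itself to an aggregate check across the two clusters of metrics. A sketch, again with assumed 0-100 scores and illustrative thresholds:

```python
# Illustrative check for the high-surface / low-depth signature:
# Specificity, Consistency, and Logic high while Nuance, Context,
# and Evidence are low. The cluster averages and the 70/40
# thresholds are assumptions for this sketch, not part of the
# framework.
SURFACE = ("specificity", "consistency", "logic")
DEPTH = ("nuance", "context", "evidence")

def surface_over_depth(scores: dict, high: int = 70, low: int = 40) -> bool:
    surface_avg = sum(scores[m] for m in SURFACE) / len(SURFACE)
    depth_avg = sum(scores[m] for m in DEPTH) / len(DEPTH)
    return surface_avg >= high and depth_avg <= low
```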
A journalism professor assigns Pattern 3 alongside a political op-ed. Students identify the emotional activation and the directional pressure before they even read the byline.
A high school teacher uses Pattern 4 to show students why two articles about the same event can feel completely different. One challenges its own premise. One never does.
A media literacy director uses the ten metrics as a shared vocabulary across a network of partner organizations -- so a trainer in one city and a facilitator in another are teaching the same framework.
A newsroom editor runs a draft through Clear-Sight before publication. Pattern 5 surfaces in the sourcing profile. The story is well-attributed. But whose sources are they?
For decades, educators and journalists have been working toward the same goal from different directions. Educators teaching students how to read critically. Journalists trying to produce work worth reading carefully. But without a shared framework, those two worlds rarely spoke the same language.
Clear-Sight is where those worlds meet.
When a journalism professor and a high school teacher and a newsroom editor are all working from the same ten metrics, something changes. The student who learned to recognize the Attribution Trap in a classroom becomes the reader who notices it in the wild. The editor who ran a draft through Clear-Sight before publication is speaking the same language as the media literacy director who trained the community around it.
That is not a product feature. That is infrastructure for a field that has never had it.
The goal was never just better reading. It was always what better reading makes possible -- more honest conversations, more grounded engagement, a different way of showing up with each other.
That is what this framework is for.
Whether you are building a curriculum, running a newsroom, leading a media literacy program, or simply ready to read differently -- Clear-Sight is built for you. Start with the tool. Build with the framework.