Welcome to Critical Thinking: Evaluate and Use Evidence
Hello! In this chapter, we tackle one of the most vital skills in Critical Thinking: learning how to judge information properly. Whether you are reading a newspaper, watching the news, or researching for a project, you are constantly bombarded with claims and data. This section teaches you how to separate the solid facts from shaky opinions, ensuring your arguments are built on strong foundations.
Why is this important? In Papers 2 and 4, you will be presented with sources and asked to evaluate the quality of the evidence, draw conclusions, and build arguments. Mastering this topic is key to scoring highly in the Critical Thinking section!
5.1 Evaluating Evidence: The Quality Check
Evaluating evidence means asking: "How much should I believe this?" There are two main ways to approach this: evaluating the source (Credibility) and evaluating the data itself (Representativeness).
Assessing the Credibility of Evidence (Source Reliability)
The credibility of evidence depends entirely on the person or source providing it. If the source is unreliable, the evidence they provide becomes questionable.
Key Factors Affecting Source Reliability:
- Reputation: Is the source generally known for being accurate and trustworthy (e.g., a reputable university researcher) or is it known for sensationalism (e.g., a tabloid newspaper)?
- Ability to See/Hear (etc.): Did the source have the physical capacity to observe the event accurately? (e.g., a witness who claims to have seen a car crash clearly from 500 meters away at dusk has a low ability to see.)
- Expertise: Does the source have specialized knowledge relevant to the claim? (e.g., A plumber is an expert on water pipes, but not on space travel.)
- Vested Interest: Does the source stand to gain (money, status, power) or lose something if their claim is accepted? Vested interest is a powerful motive to lie, exaggerate, or select evidence to support a specific outcome.
- Neutrality/Bias: Is the source impartial, or do they lean heavily towards one side? Bias often shows up through the selection of evidence—they might only present facts that support their pre-existing viewpoint.
💡 Quick Trick: The VAREN Check
When assessing a source, remember the acronym VAREN to quickly check the key reliability factors:
- Vested Interest
- Ability (to observe)
- Reputation
- Expertise
- Neutrality (Bias)
Assessing Plausibility
Plausibility refers to the intrinsic likelihood of the claim being true, regardless of who is making it. Does the claim make sense within our existing knowledge of the world?
- Example: If a farmer claims their cows are producing twice as much milk as they did last year, this is plausible (perhaps due to better feed). If the farmer claims their cows started speaking French, this has low plausibility.
Important Note: A highly reliable source (like a leading scientist) making an implausible claim (like finding a purple dinosaur) still requires intense scrutiny, but the initial evaluation of the source itself is separate from the evaluation of the claim's likelihood.
Corroboration and Consistency
When dealing with multiple pieces of evidence, you must compare them:
- Corroboration: Two pieces of evidence support each other, making both claims individually more likely to be true. (Source A says the road was wet; Source B provides a weather report showing heavy rain.)
- Consistency: Two pieces of evidence can both be true, but they don't necessarily prove each other. They simply don't contradict. (Source A says the cat is black; Source B says the dog is white.)
- Inconsistency: The two pieces of evidence cannot both be true. They contradict one another directly. (Witness 1 says the car was red; Witness 2, who was standing right next to Witness 1, says the car was blue.)
Key Takeaway for Credibility: Evidence is strongest when it comes from a neutral, expert source who has the ability to observe, possesses a good reputation, has no vested interest, and is corroborated by other sources.
Assessing Representativeness of Evidence (Samples)
Evidence often comes from samples (surveys, trials, statistics). We need to judge if the small sample accurately reflects the larger group (the population) we are making a claim about.
Factors Weakening Representativeness:
1. Number (Sample Size)
- A sample must be large enough to be statistically meaningful. If a sample is too small (e.g., surveying only 5 people to gauge national opinion), it is an inadequate basis for a valid conclusion.
2. Selectivity (How the sample was chosen)
- Self-Selected Samples: If people choose to participate (e.g., an online poll), the results are often skewed towards those who feel very strongly about the subject (either positively or negatively). This creates selection bias.
- Availability Limitations: If the selection is random but unintentionally limited, it weakens the result. (e.g., A survey conducted during working hours only samples people who are unemployed, retired, or working shifts, excluding the majority of full-time workers.)
- Paid Participants: If people are paid to participate, they may be inclined to give the answers they think the researcher wants, leading to inaccurate data.
3. Representativeness Compared to the Claim
If everyone in the sample shares a characteristic that could influence the results, the sample may not be representative of the wider population the claim is about.
- Example: A study claims that a new energy drink improves focus for all teenagers. However, the study only tested teenagers who are members of an Olympic sports team. Since elite athletes might have a naturally higher baseline for focus, the results are unrepresentative of the average teenager.
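The sample-size point above can be made concrete with the standard margin-of-error formula for a surveyed proportion. This formula comes from basic statistics, not from this chapter, and the sample sizes below are chosen purely for illustration:

```python
import math

def margin_of_error(n: int, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a surveyed proportion.

    Uses the most conservative case p = 0.5, giving
    MoE = z * sqrt(p * (1 - p) / n), which is roughly 1 / sqrt(n).
    """
    return z * math.sqrt(0.25 / n)

for n in (5, 100, 1000):
    print(f"n = {n:4d}: margin of error ~ +/-{margin_of_error(n):.1%}")
```

Surveying only 5 people leaves a margin of error of roughly ±44 percentage points, which is why tiny samples cannot support conclusions about a whole nation.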
Assessing Presentation of Data
The way data is presented can sometimes misrepresent the statistics, making a trend look more significant (or less significant) than it truly is.
- Manipulating the Y-Axis: If the vertical (Y) axis of a graph starts above zero, small differences can look huge. If the Y-axis has a massive range, huge differences can look tiny.
- Irregular Intervals: Using inconsistent gaps or skips on the X- or Y-axes to distort trends.
- Relative Size of Symbols: Using pictorial symbols (like pictures of money bags) scaled in height to show quantity. Because the symbol's area grows with the square of its height, a doubled value looks four times bigger, creating a misleading visual exaggeration.
Don't worry if this seems tricky at first! Practice identifying these distortions by looking critically at graphs and charts in the news. Always check the labels and axes!
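The y-axis trick described above can even be quantified: compare how much bigger one bar looks with how much bigger the value really is. A small illustrative sketch (all the numbers are invented):

```python
def visual_exaggeration(a: float, b: float, axis_start: float) -> float:
    """How many times more dramatic the a-to-b difference *looks*
    when the y-axis starts at axis_start instead of zero."""
    apparent_ratio = (b - axis_start) / (a - axis_start)
    true_ratio = b / a
    return apparent_ratio / true_ratio

# Sales of 100 vs 104 units: only a 4% real difference.
print(visual_exaggeration(100, 104, axis_start=0))   # honest axis: 1.0
print(visual_exaggeration(100, 104, axis_start=98))  # truncated axis: ~2.9x exaggerated

# The symbol-size trick: doubling a money bag's height (and its width,
# to keep the picture in proportion) multiplies its area by 4.
print(2 ** 2)
```

Starting the axis at 98 makes a 4% difference look like a 200% difference, which is exactly the distortion to watch for.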
5.2 Using Evidence: Explaining and Inferring
Once you have evaluated how strong the evidence is, you must then use it—either to draw a conclusion (an inference) or to explain why the evidence exists.
Assessing Explanations for Evidence
Sometimes, a claim is presented as the explanation for a piece of evidence. You need to assess how strong that explanation is.
An explanation is weak if it:
- Fails to account for all of the evidence: If the explanation fits only part of the data, it is incomplete and weak.
- Relies on speculative information/unstated assumptions: If the explanation requires adding information that isn't supported by the sources ("Perhaps they were just tired"), it is based on guesswork.
- Ignores equally plausible alternatives: If there are two or more good reasons why the evidence might exist, assuming that only one of them is correct weakens the reasoning.
Example: Evidence shows ice cream sales increase dramatically every June.
Weak Explanation: "This must be because June is when everyone starts their diet." (Implausible, and it fails to account for the evidence: dieting would be expected to lower sales, while the actual pattern is that sales rise with the heat.)
Stronger Explanation: "This is due to the start of summer and rising temperatures."
Suggesting Explanation for Evidence
This is the reverse task: given a piece of evidence, you suggest the most likely reasons for it.
- Focus on motives (What caused the source to behave that way?).
- Focus on the basis for correlation (just because two things, A and B, happen together doesn't mean A causes B; is there a third factor C that causes both A and B?).
Did you know? The number of pirates in the world has declined while global warming has increased. The two are correlated, but declining piracy does not cause rising temperatures; they are simply unrelated trends.
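The third-factor point can be demonstrated with a toy simulation (all figures invented): a hidden variable drives two others, which then correlate strongly even though neither influences the other.

```python
import random

random.seed(42)

# Hypothetical hidden factor C (daily temperature) drives both
# A (ice cream sales) and B (swimming accidents). A and B never
# influence one another directly.
temperature = [random.uniform(0, 30) for _ in range(200)]
ice_cream_sales = [2.0 * t + random.gauss(0, 5) for t in temperature]
swim_accidents = [0.5 * t + random.gauss(0, 2) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, even though neither causes the other.
print(round(pearson(ice_cream_sales, swim_accidents), 2))
```

Banning ice cream would not make swimming safer: the correlation exists only because both variables follow the temperature.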
Assessing and Suggesting Inference from Evidence
An inference is a conclusion drawn from the evidence.
- Assessing Inference: Judge to what extent the evidence supports a stated claim. Identify factors that weaken the support (e.g., if the evidence is unreliable or unrepresentative).
- Suggesting Inference: Draw a conclusion based on the evidence's relevance, significance, or usefulness. (If Source A shows traffic is jammed, you can infer that arriving on time will be difficult.)
Forming a Judgement Based on Multiple Sources
This is the culmination of your evaluation skills. You must synthesize the information to reach a final, balanced judgement.
Step-by-Step Judgement Formulation:
- Evaluate all sources: Check the credibility and representativeness of each one (using VAREN and sample checks).
- Identify key inferences: What is the most important conclusion you can draw from Source A, Source B, etc.?
- Synthesize and weigh: Compare the inferences. If three strong, reliable sources point one way and one weak, biased source points the other way, your judgement should favor the stronger evidence.
- Form a final, balanced judgement: Your conclusion must reflect the overall weight and quality of the available evidence.
Common Mistake to Avoid: Don't just summarize the sources. You must evaluate and compare them to form a true judgement.
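One way to picture the "synthesize and weigh" step is a toy scoring model. The sources, scores, and weights below are entirely invented, and nothing like this appears in the exam, but it shows why several strong sources on one side outweigh a single weak source on the other:

```python
# Each source gets a reliability score (0-1, informed by the VAREN
# checks) and a direction: +1 if it supports the claim, -1 if it
# opposes it. The sign of the weighted sum shows which way the
# balance of evidence points.
sources = [
    {"name": "University study",    "reliability": 0.9, "direction": +1},
    {"name": "Official statistics", "reliability": 0.8, "direction": +1},
    {"name": "Expert interview",    "reliability": 0.7, "direction": +1},
    {"name": "Tabloid article",     "reliability": 0.2, "direction": -1},
]

balance = sum(s["reliability"] * s["direction"] for s in sources)
print(f"Weighted balance: {balance:+.1f}")  # positive => evidence favours the claim
```

A real judgement is qualitative, not a sum, but the principle is the same: weight each inference by the quality of its source before deciding.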
Quick Review: Evaluating Evidence
Evidence Evaluation checks:
1. Credibility: Is the source reliable? (VAREN)
2. Plausibility: Is the claim believable?
3. Representativeness: Is the sample size appropriate and selection fair?
4. Presentation: Is the data being manipulated (e.g., tricky graph scales)?