Hello Future Psychologist! Understanding How We Know What We Know

Welcome to one of the most fundamental chapters in Psychology: Approaches to Researching Behaviour.

This section isn't about *what* we find, but *how* we find it. Think of researchers as detectives; this chapter teaches you the tools and rules they use to gather evidence about the human mind and behaviour. Mastering research methods is essential because it allows you to critically evaluate all the studies you encounter throughout the course.

Don't worry if this seems technical! We will break down complex terms like validity and sampling into simple, understandable concepts using real-world analogies.

1. The Two Pillars of Psychological Research: Quantitative vs. Qualitative

All research methods fall into two main categories, based on the type of data they collect.

1.1 Quantitative Research (The Numbers Game)

Quantitative research aims to collect data that can be measured, counted, and expressed numerically. It seeks to establish generalizable facts and relationships (cause and effect or correlation).

  • Goal: To measure and test hypotheses.
  • Data Type: Quantitative data (numbers, statistics, means, standard deviations).
  • Key Strength: High reliability; results can often be replicated and statistical tests can be applied.
  • Methods Used: Experiments and Correlational studies.

1.2 Qualitative Research (The Depth and Meaning Game)

Qualitative research aims to gain an in-depth understanding of complex phenomena, personal experiences, and meanings. It focuses on descriptions, context, and rich detail.

  • Goal: To explore, describe, and understand context; hypotheses often emerge *from* the data.
  • Data Type: Qualitative data (transcripts, narratives, themes, observations).
  • Key Strength: High validity (especially ecological validity) because it captures real-world, natural behaviour.
  • Methods Used: Interviews, Observations, and Case Studies.

Quick Review: Quantitative vs. Qualitative

Analogy: If you want to know how many people bought coffee today, that’s Quantitative. If you want to know why they chose that specific coffee shop and how they feel about the atmosphere, that’s Qualitative.

2. Quantitative Research Methods

2.1 Experiments: Establishing Cause and Effect

The experiment is the only method that can establish a clear cause-and-effect relationship, because the researcher directly manipulates the suspected cause while holding other factors constant.

Key Concepts in Experimental Design
  • Independent Variable (IV): The factor the researcher manipulates or changes. (The cause)
  • Dependent Variable (DV): The factor that is measured. It is expected to change as a result of the IV manipulation. (The effect)
  • Control Condition: A condition where the IV is absent or neutral. This provides a baseline for comparison.

Example: A study tests if caffeine improves memory.
IV: Amount of caffeine given (e.g., high dose vs. low dose).
DV: Score on a memory test.
Control Group: Participants given a placebo (a pill with no caffeine).
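The logic of this design can be sketched in a few lines of Python. This is purely illustrative: the participants, group sizes, and memory-test scores are all invented, and the key idea is simply that participants are allocated to conditions at random before the group means on the DV are compared.

```python
import random
from statistics import mean

# Hypothetical pool of 30 participants for the caffeine example above
participants = [f"P{i}" for i in range(1, 31)]
conditions = ["high_dose", "low_dose", "placebo"]

random.seed(42)               # fixed seed so the sketch is reproducible
random.shuffle(participants)  # random assignment removes selection bias

# Equal-sized groups: 10 participants per condition
groups = {cond: participants[i * 10:(i + 1) * 10]
          for i, cond in enumerate(conditions)}

# After testing, the DV (memory scores) is compared across conditions.
# These scores are invented placeholders, not real data.
scores = {"high_dose": [14, 15, 13], "low_dose": [12, 11, 13], "placebo": [10, 9, 11]}
for cond, s in scores.items():
    print(cond, mean(s))
```

Random assignment is what gives the experiment its power: if the groups start out equivalent on average, a difference in the DV can be attributed to the IV.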

Types of Experiments
  1. Laboratory Experiments: Conducted in a highly controlled environment.
    • Strength: High Internal Validity (researchers control extraneous variables, making it easier to confirm the IV caused the DV).
    • Weakness: Low Ecological Validity (the artificial setting may not reflect real life).
  2. Field Experiments: Conducted in a natural setting (e.g., a school, a street), but the researchers still manipulate the IV.
    • Strength: Higher Ecological Validity.
    • Weakness: Less control over extraneous variables.
  3. Natural Experiments (Quasi-Experiments): The researcher studies an IV that occurs naturally (e.g., a natural disaster) or is a pre-existing characteristic (e.g., gender); the IV cannot be manipulated or randomly assigned by the researcher.
    • Note: Because the researcher cannot control who is in which condition, strict cause-and-effect conclusions are difficult.

2.2 Correlational Studies: Looking for Relationships

A correlational study measures the relationship between two or more variables, but without manipulating any of them.

  • Correlation Coefficient (r): A number between -1 and +1 that describes the strength and direction of the relationship.
  • Positive Correlation: As one variable increases, the other increases (e.g., hours spent studying and exam scores).
  • Negative Correlation: As one variable increases, the other decreases (e.g., hours spent partying and sleep quality).

!!! Common Mistake to Avoid !!!
Correlation does NOT equal Causation. Just because two things happen together doesn't mean one causes the other. There might be a hidden third variable (a confounding variable) responsible for both.

Did you know? Research has shown a strong positive correlation between ice cream sales and drowning incidents. Does eating ice cream cause drowning? No. The third variable is high temperature—hot weather leads to both more ice cream sales and more swimming accidents.

3. Qualitative Research Methods

3.1 Interviews

Interviews allow researchers to collect rich, detailed, first-hand accounts.

  • Structured Interview: Uses a fixed list of questions asked in a set order (like a survey read aloud). High reliability, low depth.
  • Unstructured Interview: Uses general topics, allowing the conversation to flow naturally. High depth, less reliable.
  • Semi-structured Interview: The most common type. Uses an interview guide with core questions, but the interviewer can follow up with probes based on the participant's answers.

Important Consideration: Establishing Rapport

In qualitative interviews, the relationship (rapport) between the interviewer and participant is critical. If participants feel comfortable and safe, they are more likely to give honest, detailed answers. If rapport is poor, the data quality suffers.

3.2 Observations

Observations involve watching and recording behaviour as it naturally occurs.

  • Naturalistic Observation: Observing behaviour in the environment where it naturally happens (e.g., observing children playing in a park).
  • Controlled Observation: Behaviour is observed in a structured, manipulated environment (e.g., a "strange situation" experiment room).
  • Participant Observation: The researcher becomes part of the group being studied.
  • Non-participant Observation: The researcher observes from a distance or behind a screen.

Observation Bias: Reactivity

People behave differently when they know they are being watched, which is a major challenge for observers. This change in behaviour is called reactivity or participant bias (sometimes linked to demand characteristics, where participants try to guess the study's aim and act accordingly).

3.3 Case Studies

A case study is an in-depth investigation of an individual, a small group, or an organization. They often combine various methods (interviews, medical records, observations).

  • Strength: Excellent for exploring rare phenomena or complex, unique situations (e.g., severe brain damage, unique phobias). They can provide rich insight that challenges existing theories.
  • Weakness: The results are usually hard to generalize to the larger population because the sample size is so small and unique.

Key Takeaway (Methods)

If you need statistical proof of cause-and-effect, use experiments. If you need rich, contextualized understanding of human experience, use qualitative methods (interviews, case studies).

4. Sampling Techniques: Choosing Participants

Sampling is the process of selecting participants for a study. The quality of your sample determines how well you can generalize your findings to the rest of the target population.

Don't worry if this seems tricky at first—the goal is to find a group that truly represents the whole population!

4.1 Key Sampling Methods

  1. Random Sampling: Every member of the target population has an equal chance of being selected.
    • Strength: Minimizes sampling bias and leads to the highest generalizability.
    • Weakness: Often impractical (requires a full list of the population).
  2. Convenience (Opportunity) Sampling: Participants are selected simply because they are easily available to the researcher (e.g., psychology students on campus).
    • Strength: Fast and cost-effective.
    • Weakness: Highly prone to sampling bias, leading to low generalizability.
  3. Stratified Sampling (HL focus): The researcher identifies key subgroups (strata) in the population (e.g., age, gender, ethnicity) and ensures the sample reflects the proportion of these groups in the overall population.
    • Strength: Guarantees that key subgroups appear in the sample in their true proportions, often making it more representative than simple random sampling.
    • Weakness: Time-consuming; requires detailed knowledge of the population's composition.
  4. Purposive Sampling (Used heavily in Qualitative Research): Participants are chosen because they possess specific characteristics relevant to the study (e.g., interviewing only parents who adopted children internationally).
    • Strength: Ideal for in-depth qualitative studies focused on a specific demographic or experience.
    • Weakness: Findings are specific to the chosen group and cannot be generalized to the wider population.
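The first three sampling methods can be contrasted in a short Python sketch. The population here is an invented toy example (100 people, 30% teens and 70% adults), chosen so the bias of convenience sampling is visible:

```python
import random

random.seed(1)  # reproducible for this sketch

# Invented toy population: 100 people tagged by age group
population = [{"id": i, "age_group": "teen" if i < 30 else "adult"}
              for i in range(100)]  # 30% teens, 70% adults

# 1. Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, 10)

# 2. Convenience sampling: whoever is nearest (here, the first 10)
convenience_sample = population[:10]  # all teens -> heavily biased!

# 3. Stratified sampling: keep each subgroup's true proportion (3 teens, 7 adults)
teens  = [p for p in population if p["age_group"] == "teen"]
adults = [p for p in population if p["age_group"] == "adult"]
stratified_sample = random.sample(teens, 3) + random.sample(adults, 7)
```

Notice that the convenience sample contains only teens, while the stratified sample mirrors the 30/70 split of the population exactly.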

Memory Aid: Think of the researcher as a cook. If they need to taste the soup (the population), they need a good *sample* (the spoonful). A random sample is like stirring the soup first; a convenience sample is just dipping the spoon into the nearest corner—it might only taste like salt!

5. Evaluating Research: Reliability, Validity, and Bias

To evaluate any study, you must examine its quality. These concepts are essential for Paper 1 and Paper 2 essays.

5.1 Reliability and Validity (The Quality Checks)

  • Reliability: Refers to the consistency of a measure. If the study were repeated (replicated) under the same conditions, would it produce the same results? (Primarily a concern for Quantitative studies.)
  • Validity: Refers to the accuracy of a measure. Is the study actually measuring what it claims to measure?

Types of Validity
  • Internal Validity: In an experiment, this confirms that the IV caused the change in the DV, and not some other extraneous variable. (High control = High internal validity).
  • Ecological Validity: Measures how applicable the findings are to real-life situations and settings. (Naturalistic settings = High ecological validity).

5.2 Bias in Research

Bias occurs when the researcher or the participant unconsciously influences the outcome of the study.

  1. Participant Bias (Demand Characteristics): Participants try to guess the aim of the study and change their behaviour to either help or hurt the researcher.
  2. Researcher Bias (Confirmation Bias): The researcher’s own expectations or beliefs influence the way they design the study, interpret the data, or behave toward participants.

Did you know? To combat researcher bias in drug trials, researchers use a double-blind design: neither the participants nor the person administering the treatment knows who has received the real drug and who has the placebo.
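The double-blind idea can be illustrated with a short Python sketch. Everything here is invented; the point is only that the allocation key is kept separate from what the experimenter and participants see during the trial:

```python
import random

random.seed(7)  # reproducible for this sketch
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

shuffled = participants[:]
random.shuffle(shuffled)

# Secret allocation key: half drug, half placebo. In a real trial it is
# held by a third party, so neither the participants nor the experimenter
# administering the pills knows who got what (double-blind).
key = {p: ("drug" if i < 3 else "placebo") for i, p in enumerate(shuffled)}

# During the trial, everyone only sees identical-looking coded pills
coded_pills = {p: f"pill-{i:02d}" for i, p in enumerate(participants)}

# The key is opened only after all DV measurements have been recorded
print(sorted(key.values()))
```

Because neither side can tell the conditions apart, expectations cannot systematically influence the results in either direction.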

6. Ethical Considerations in Psychological Research

All psychological research must adhere to strict ethical guidelines to protect the welfare and dignity of participants. When evaluating a study, always assess potential ethical breaches.

The Six Core Ethical Principles (DDP WCC)

  1. Informed Consent: Participants must agree to participate after receiving a full explanation of the study’s purpose, procedure, risks, and rights.

    Note: For participants under 16, parental consent is required.

  2. Right to Withdraw: Participants must be explicitly told they can leave the study at any time without penalty, and can withdraw their data even after the study is over.
  3. Protection from Harm: Researchers must ensure participants are protected from both physical and psychological harm (e.g., severe stress, embarrassment, loss of self-esteem).
  4. Deception: Researchers should avoid deceiving participants, but minor deception may be necessary if full disclosure would ruin the study (e.g., knowing the exact aim would cause demand characteristics). Deception must always be justified and kept to a minimum.
  5. Debriefing: After the study, the true aims and procedures must be revealed to the participants. The researcher must check for any lingering distress and restore the participant to their original state (especially crucial if deception was used).
  6. Confidentiality and Anonymity: All information gathered must be kept confidential. Data should be recorded anonymously (not linking names to results) to protect privacy.

Ethical Takeaway

When analyzing a controversial study, ask: Were the potential benefits of the research (gaining new knowledge) worth the potential costs (harm or stress to participants)? This is called a cost-benefit analysis.