Validity vs reliability: Validity means your method measures what it claims to measure (so it actually tests the hypothesis), while reliability means repeated measurements would give similar results. A dataset can be reliable but invalid if it is consistently measuring the wrong thing. Good fieldwork improves both by matching methods to the aim and standardizing procedures.
Bias and subjectivity: Bias is a systematic influence that pushes results in one direction, such as leading questions or choosing convenient sites only. Subjective judgements (for example, rating environmental quality) can still be useful if you reduce variation between observers. The principle is to make sources of judgement explicit and then control them with shared criteria and repetition.
Sampling logic: Samples must represent the population or area you want to describe; otherwise conclusions overgeneralize. The more variable a phenomenon is, the larger or more carefully structured the sample typically needs to be to capture it. Sampling decisions should be justified by how they help test the hypothesis, not by convenience.
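A minimal sketch of that choice, assuming a hypothetical transect of 40 numbered quadrat positions (the labels and sample size are invented for illustration):

```python
import random

# Hypothetical study area: 40 numbered quadrat positions along a transect.
positions = list(range(1, 41))

# Systematic sample: every 5th position gives even spatial coverage.
systematic = positions[::5]

# A simple random sample of the same size avoids a fixed interval
# accidentally lining up with a regular feature (e.g., evenly spaced drains).
random.seed(1)  # fixed seed so the sketch is reproducible
simple_random = sorted(random.sample(positions, k=len(systematic)))

print("Systematic:", systematic)   # [1, 6, 11, 16, 21, 26, 31, 36]
print("Random:    ", simple_random)
```

Either design can be defensible; what matters is stating the choice and tying it to the hypothesis rather than to convenience.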
Triangulation: Triangulation combines multiple data types or methods to check whether they tell a consistent story. This works because different methods have different weaknesses, so agreement across them increases confidence. When methods disagree, the disagreement itself becomes a clue about limits, scale effects, or measurement problems.
Designing aim and hypothesis: Start by identifying the key variables (what changes and what you will measure) and the expected direction of change. Then phrase the hypothesis so it can be tested with field measurements or coded responses rather than vague language, for example "pedestrian counts decrease with distance from the town centre". Finally, plan what evidence would count as support versus contradiction before collecting data.
Risk assessment and safe practice: A risk assessment identifies hazards, estimates how likely and severe they are, and sets control measures (for example, appropriate clothing, meeting points, or avoiding unsafe areas). This matters because fieldwork quality collapses if conditions are unsafe, rushed, or unmanaged. Safety planning should be treated as part of method design, not a separate add-on.
Primary and secondary data collection: Primary data is collected by the investigator for the enquiry, which gives control over methods and relevance. Secondary data is collected by others, which can expand scale or historical context but reduces control over quality and fit. Strong enquiries often combine both so field observations are interpreted within broader context.
Questionnaires and interviews: Closed questions generate comparable results quickly, while open questions capture richer explanations but are harder to code and compare. Interviews typically trade sample size for depth by allowing follow-up and clarification. The best question design avoids leading wording, matches question type to the variable, and plans how responses will be analyzed before data collection begins.
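As a sketch of planning analysis before collection (the question wording and coded answers below are hypothetical), closed responses can be tallied and converted to percentages so sites with different sample sizes stay comparable:

```python
from collections import Counter

# Hypothetical coded answers to a closed question such as
# "How often do you visit the town centre?" (codes agreed in advance).
responses = ["daily", "weekly", "weekly", "monthly", "daily",
             "weekly", "rarely", "daily", "weekly", "monthly"]

tally = Counter(responses)
total = sum(tally.values())

# Percentages keep results comparable between survey sites of different sizes.
for answer, count in tally.most_common():
    print(f"{answer:>8}: {count:2d} ({100 * count / total:.0f}%)")
```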
Environmental quality surveys (EQS): An EQS converts observations into scores using indicators and a scale, producing quantitative outputs from human judgement. Because subjectivity is unavoidable, reliability improves when observers agree on criteria, work in small groups, and compare or aggregate results across multiple scorers. EQS is most defensible when indicators are clear, consistently applied, and checked for inter-observer consistency.
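A hedged sketch of how an EQS score might be assembled (the indicators, the 1-5 scale, and the scores are illustrative, not a prescribed scheme):

```python
# Hypothetical EQS: each observer rates the same site on agreed
# indicators using a shared 1 (poor) to 5 (good) scale.
indicators = ["litter", "noise", "greenery", "building condition"]

observer_scores = {
    "observer_A": [2, 3, 4, 3],   # one score per indicator, in order
    "observer_B": [2, 2, 3, 4],
    "observer_C": [3, 3, 3, 3],
}

# Total per observer, then a site score averaged across observers,
# so no single person's judgement dominates the result.
totals = {name: sum(scores) for name, scores in observer_scores.items()}
site_score = sum(totals.values()) / len(totals)

print(totals)                         # {'observer_A': 12, 'observer_B': 11, 'observer_C': 12}
print(f"Site EQS: {site_score:.1f}")  # 11.7
```

Large gaps between observers' totals would flag exactly the inter-observer consistency problem the paragraph describes.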
| Distinction | Option A | Option B |
|---|---|---|
| Research statement | Aim (purpose) | Hypothesis (testable prediction) |
| Data origin | Primary (you collect) | Secondary (others collected) |
| Data form | Quantitative (numbers) | Qualitative (descriptions) |
| People data | Questionnaire (larger sample) | Interview (deeper responses) |
| Judgement-based scoring | EQS (scaled indicators) | Descriptive notes (unscored detail) |
State your decision first, then justify: When asked whether evidence supports a hypothesis, begin with a clear judgement (supported, not supported, or partially supported). This shows the examiner you can interpret evidence rather than merely describing it. Your justification should then reference patterns, extremes, and relationships in the data.
Use a consistent analysis sequence: A strong answer typically moves from description to interpretation: identify highest/lowest values, describe trends, then explain relationships using geographical reasoning. This structure matters because it turns raw figures or images into an argument. If you skip straight to explanation without describing the evidence, you often lose method and interpretation marks.
Link conclusions back to aim and hypothesis: A conclusion should explicitly return to the original aim and hypothesis so the argument closes logically. Evidence that contradicts the hypothesis should be acknowledged rather than ignored, because real data is often messy. Examiners reward balanced judgements that explain anomalies and limits.
Evaluation earns high-value marks: Evaluation should name specific limitations (sampling, access, timing, equipment, human error) and then propose feasible improvements. This works because it demonstrates you understand how methodology shapes confidence in results. Vague statements like “collect more data” are weaker than targeted fixes tied to identified problems.
Vague or non-measurable hypotheses: Learners often write hypotheses that cannot be tested because key variables are undefined or direction is missing. This fails because you cannot decide what data to collect or what outcome would count as support. Fix it by naming the measurable variable and the expected direction of change.
Confusing correlation with causation: A relationship in data (for example, two variables rising together) does not automatically mean one causes the other. Fieldwork settings include many confounding factors, so causal claims require reasoning about mechanisms and alternative explanations. A safer approach is to describe the association and then discuss plausible causes with evaluation of uncertainty.
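To make the distinction concrete, here is a minimal sketch with invented paired measurements; `statistics.correlation` (Pearson's r) requires Python 3.10+:

```python
import statistics

# Hypothetical paired measurements at six sites:
# distance from the town centre (m) and pedestrian counts per 10 minutes.
distance = [50, 150, 300, 500, 800, 1200]
pedestrians = [96, 80, 62, 41, 30, 18]

r = statistics.correlation(distance, pedestrians)
print(f"Pearson's r = {r:.2f}")  # a strong negative association

# A strong r only describes the association; whether distance *causes*
# lower footfall still needs reasoning about mechanisms (shops, transport)
# and confounders (time of day, land use) before any causal claim.
```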
Uncontrolled subjectivity in scoring: When using judgement-based methods (like environmental quality ratings), inconsistent criteria between observers reduce reliability. This happens because different people interpret indicators differently, producing artificial variation. Reduce this by agreeing descriptors, practicing scoring, and combining multiple scorers' results using an agreed summary measure (such as the mode).
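A minimal sketch of that summary step, using hypothetical ratings from four scorers:

```python
import statistics

# Hypothetical ratings for one indicator at one site from four scorers,
# using a shared 1-5 scale agreed before fieldwork.
scores = [3, 4, 3, 3]

# The mode keeps the summary on the original ordinal scale, unlike a
# mean, which can produce values no scorer actually gave (e.g., 3.25).
agreed = statistics.mode(scores)
print("Agreed score:", agreed)  # 3
```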
Sampling and response bias in people-based data: Asking only certain groups, surveying at one time of day, or using leading questions can systematically distort results. This is a methodological error because the sample no longer represents the population you are trying to infer about. Improve by varying locations/times, using neutral wording, and recording non-response where relevant.
Links to the scientific method: Fieldwork aligns with hypothesis testing, where evidence is gathered to support or challenge a claim. The key connection is that method quality determines how persuasive the test is, not just the size of the dataset. Evaluation corresponds to discussing uncertainty and limitations in scientific reporting.
Links to statistics and data literacy: Simple statistics summarize quantitative results and support comparisons across sites or groups. For example, a mean and range can describe central tendency and spread, helping you judge whether differences are meaningful or likely due to variability.
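A quick worked sketch (the counts are invented) showing why mean and range are read together:

```python
# Hypothetical pedestrian counts (per 10 minutes) at two survey sites.
site_a = [34, 41, 38, 36, 45]
site_b = [12, 58, 40, 15, 55]

def mean(values):
    return sum(values) / len(values)

def value_range(values):
    return max(values) - min(values)

# Similar means can hide very different spreads: the large range at
# site B suggests its figures owe more to variability (e.g., time of
# counting) than to a real contrast with site A.
print("Site A: mean", mean(site_a), "range", value_range(site_a))  # 38.8, 11
print("Site B: mean", mean(site_b), "range", value_range(site_b))  # 36.0, 46
```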
Links to GIS and spatial thinking: Mapping and spatial visualization help identify clustering, gradients, and relationships with location. This matters because many geographical patterns are spatially structured and cannot be seen clearly in a table alone. Even without advanced software, consistent site referencing and clear maps strengthen interpretation.
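Even without GIS software, consistent site referencing supports simple spatial reading of results; a sketch with hypothetical local grid coordinates (metres):

```python
import math

# Hypothetical sites: consistent local grid coordinates (metres)
# plus a measured value (e.g., an EQS total) at each site.
sites = {
    "S1": ((100, 200), 18),
    "S2": ((400, 250), 15),
    "S3": ((900, 300), 11),
}

origin = (0, 200)  # e.g., the town centre as a fixed reference point

# Straight-line distance from the reference point for each site, so the
# values can be read as a spatial gradient rather than a bare table.
for name, ((x, y), value) in sorted(sites.items()):
    d = math.dist(origin, (x, y))
    print(f"{name}: {d:6.0f} m from centre, value = {value}")
```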
Ethics and safeguarding: People-based data collection requires respect, voluntary participation, and appropriate conduct. Ethical practice improves data quality because respondents are more likely to answer honestly when they feel safe and respected. It also reduces risk and ensures the enquiry is acceptable in real-world settings.