RemNote Community

Study Guide

📖 Core Concepts

- Survey methodology – the study of how to design, sample, and collect data for human-research surveys.
- Sampling frame – the list or map that defines every unit in the target population from which a sample is drawn.
- Representativeness – the degree to which a sample mirrors the larger population; essential for valid inference.
- Selection bias – systematic over- or under-representation of certain groups, threatening generalization.
- Stratified random sampling – divide the population into homogeneous strata and sample each proportionally to reduce bias.
- Mode of data collection – the channel (telephone, mail, online, face-to-face, etc.) used to administer a questionnaire; each mode can produce mode effects (systematic differences in responses).
- Survey designs:
  - Cross-sectional: one-time snapshot.
  - Successive independent samples: repeated independent samples to track population change.
  - Longitudinal: same respondents measured repeatedly to track individual change.
- Questionnaire – the primary tool for data capture; quality hinges on clear wording, appropriate scaling, and sound construction steps.
- Reliability – consistency of a measure (e.g., test-retest).
- Validity – the extent to which a measure captures the intended construct (especially construct validity).
- Interviewer effects – social-desirability bias introduced by the presence of an interviewer.

---

📌 Must Remember

- Sampling frame ≠ sample; a poor frame means inevitable bias.
- Stratified sampling = proportional random draw within each stratum → lower variance than simple random sampling.
- Mode choice is a trade-off: cost vs. coverage vs. response accuracy.
- Cross-sectional → descriptive only; cannot infer causality.
- Successive independent samples require identical questionnaire wording across waves.
- Longitudinal → tracks individual change but suffers from attrition.
- Reliability improves with more items, greater variance, clear instructions, and fewer distractions.
- Validity ≠ reliability; a measure can be consistent but still wrong.
- Leading/loaded questions → bias; avoid them.
- Reverse-worded items help detect response-set bias.
- Priming: earlier questions can influence later answers.
- Interviewer effects appear in any human-mediated mode (face-to-face, telephone, video-enhanced web).

---

🔄 Key Processes

- Constructing a questionnaire: define information needs → choose administration mode → draft items → expert review → pilot test → edit final version → set standard procedures.
- Stratified random sampling: identify relevant strata → determine proportional allocation → randomly select units within each stratum → combine selections → verify overall representativeness.
- Choosing a data-collection mode: list target-population characteristics → assess cost, coverage, respondent willingness, and required accuracy → weigh mode advantages/limitations → select a single or mixed mode.
- Assessing reliability (test-retest): administer the same questionnaire to the same sample at two time points → compute the correlation (e.g., Pearson $r$) between scores → higher $r$ = higher reliability.

---

🔍 Key Comparisons

- Cross-sectional vs. longitudinal
  - Cross-sectional: one snapshot, cheaper, no attrition, no causal inference.
  - Longitudinal: multiple waves on the same people, tracks individual change, expensive, attrition risk.
- Successive independent samples vs. longitudinal
  - Successive: different respondents each wave, monitors population trends, easier logistics.
  - Longitudinal: same respondents, reveals within-person change, higher burden.
- Telephone vs. online surveys
  - Telephone: quick contact, may miss households without phones, interviewer present → possible interviewer bias.
  - Online: inexpensive, fast, requires internet access, lower interviewer bias.
- Open-ended vs. closed-ended items
  - Open: richer detail, requires coding, longer respondent time.
  - Closed: easier to analyze, limited nuance, quicker for respondents.

---

⚠️ Common Misunderstandings

- “A larger sample eliminates bias.” – Bias stems from how the sample is drawn, not just its size.
- “Reliability guarantees validity.” – A consistently wrong measure is still invalid.
- “Mode effects are negligible.” – Different modes can systematically shift answers (e.g., social desirability in face-to-face interviews).
- “Stratified sampling always improves precision.” – Only if strata are truly homogeneous and allocation is proportional or optimal.
- “Longitudinal studies automatically show causation.” – They show temporal sequence but still need careful design to infer causality.

---

🧠 Mental Models / Intuition

- Sampling frame as the “map” – if the map is missing roads, you’ll never drive there; a missing segment of the frame = a missing segment of the population.
- Mode as a “lens” – each mode filters respondents differently; imagine viewing the same scene through colored glasses: the colors (answers) shift.
- Stratified sampling as “fair pizza slicing” – each slice (stratum) gets a proportionate number of toppings (sample units), so the whole pizza reflects the original topping distribution.

---

🚩 Exceptions & Edge Cases

- Mixed-mode surveys – may reduce nonresponse but can introduce mode-interaction effects; need statistical adjustment.
- Internet-only surveys – acceptable when the target population is fully online; otherwise they risk coverage bias.
- Reverse-worded items – helpful for detecting acquiescence bias, but can create confusion if not clearly phrased.
- High-frequency longitudinal waves – may mitigate attrition but increase respondent fatigue.

---

📍 When to Use Which

- Select stratified sampling when distinct sub-populations (e.g., age groups) must be represented proportionally.
- Choose a cross-sectional design for quick prevalence estimates or when causal inference is not required.
- Use successive independent samples to monitor population-level trends (e.g., yearly public-opinion polls).
- Deploy a longitudinal design when studying change mechanisms (e.g., the effect of a health intervention on the same participants).
- Pick an online mode for tech-savvy, geographically dispersed samples with a limited budget.
- Opt for telephone or in-person modes when higher response accuracy is critical and the target includes non-internet users.

---

👀 Patterns to Recognize

- Low response after 10–20 questions → the questionnaire is too long → trim it or split it into sections.
- Consistent social-desirability spikes in face-to-face data → suspect interviewer effects.
- A uniform shift in answers across waves of a successive independent sample → possible mode change or wording alteration.
- Higher reliability with added items – diminishing returns after a certain number; watch for respondent fatigue.

---

🗂️ Exam Traps

- Choosing “cross-sectional = causal” – cross-sectional designs are correlational only.
- Assuming any large sample eliminates selection bias – bias depends on the frame and the selection method.
- Mixing up reliability and validity – reliability = consistency; validity = correctness.
- Selecting “online mode is always best” – this ignores coverage issues for populations lacking internet access.
- Believing stratified sampling always outperforms simple random sampling – only true when strata reduce within-group variability.
- Overlooking the need for identical wording in successive independent samples – even subtle wording changes break comparability.
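The stratified-sampling procedure above (identify strata → proportional allocation → random draw within each stratum → combine) can be sketched in a few lines of Python. The population frame and strata below are hypothetical, invented purely for illustration:

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, n):
    """Draw a proportional stratified random sample of size n.

    population: list of units; stratum_of: maps a unit to its stratum label.
    """
    # Step 1: identify strata by grouping the sampling frame.
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)

    # Step 2: proportional allocation – each stratum contributes
    # in proportion to its share of the population.
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(population))
        # Step 3: simple random draw within the stratum.
        sample.extend(random.sample(units, min(k, len(units))))
    return sample

# Hypothetical frame: 600 adults under 40, 400 adults 40 and over.
frame = [("under40", i) for i in range(600)] + [("40plus", i) for i in range(400)]
picked = stratified_sample(frame, stratum_of=lambda u: u[0], n=100)
shares = {s: sum(1 for u in picked if u[0] == s) for s in ("under40", "40plus")}
print(shares)  # {'under40': 60, '40plus': 40} – the sample mirrors the 60/40 split
```

Which specific units are drawn varies run to run, but the per-stratum allocation is deterministic, which is exactly why stratification lowers sampling variance relative to a simple random draw.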
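Test-retest reliability, as described under Key Processes, reduces to a Pearson correlation between two administrations of the same questionnaire. A minimal sketch using the standard library; the scores below are made-up illustrative data, not from the source:

```python
from statistics import correlation  # Pearson r; requires Python 3.10+

# Hypothetical total scores for the same eight respondents at two time points.
time1 = [12, 18, 15, 22, 9, 17, 20, 14]
time2 = [13, 19, 14, 21, 10, 18, 19, 15]

r = correlation(time1, time2)
print(f"test-retest reliability r = {r:.2f}")
# A high r (conventionally >= .70) indicates the measure is stable over time --
# but remember: a stable measure can still be measuring the wrong construct.
```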
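Reverse-worded items (flagged under Must Remember and Edge Cases) must be recoded before scoring so that every item points in the same direction. A sketch assuming a 1–5 Likert scale; the item names and responses are hypothetical:

```python
# Hypothetical responses on a 1-5 Likert scale; q3 is reverse-worded.
REVERSED = {"q3"}
SCALE_MAX = 5  # on a 1..5 scale, reverse-coding maps x to (5 + 1) - x

def score(responses):
    """Total score after reverse-coding the reverse-worded items."""
    return sum((SCALE_MAX + 1 - v) if item in REVERSED else v
               for item, v in responses.items())

print(score({"q1": 4, "q2": 5, "q3": 2, "q4": 4}))  # 4 + 5 + (6 - 2) + 4 = 17
```

A respondent who answers 5 to everything regardless of wording (acquiescence bias) will score inconsistently on the reverse-coded items, which is how such items help detect response-set bias.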