RemNote Community

Study Guide

📖 Core Concepts
- Quantitative research – a deductive, empiricist/positivist strategy that quantifies data to test theories and generate generalizable results.
- Deductive approach – starts with existing theory → hypotheses → data collection → test.
- Measurement – the bridge that turns observable phenomena into numeric values; accuracy and reliability are essential.
- Variables – independent (manipulated) vs. extraneous (controlled) vs. dependent (outcome).
- Statistical significance – indicates whether an observed effect is unlikely to be due to random sampling error (often p < 0.05).
- Effect size – quantifies the magnitude of a relationship, independent of sample size.
- Confidence interval (CI) – the range of values within which the true population parameter is expected to lie (e.g., a 95% CI).
- Correlation ≠ Causation – two variables can move together without one causing the other.
- Proxy variable – an indirect measure used when the target construct cannot be directly observed.

📌 Must Remember
- Quantitative research seeks generalizable findings for larger populations.
- Statistical methods, applied correctly, reduce bias and reveal patterns.
- Internal validity is strengthened by manipulating the independent variable and controlling extraneous variables.
- Large sample sizes increase statistical power and representativeness.
- Psychometrics = the discipline of measuring social/psychological attributes (reliability, validity).

🔄 Key Processes
- Model/Hypothesis Generation – derive testable statements from theory.
- Instrument Development – design the tool → validate content → test reliability → calibrate.
- Experimental Design – choose independent variable(s) → manipulate → control extraneous variables → define the dependent outcome.
- Data Collection – follow the protocol, use calibrated instruments, aim for a large, random sample.
- Data Entry & Cleaning – input into statistical software; check for missing/outlier values.
- Statistical Modeling & Analysis
  - Select an appropriate model (linear, non-linear, factor analysis).
  - Run hypothesis tests → interpret the p-value, effect size, and CI.

🔍 Key Comparisons
- Quantitative vs. Qualitative – numeric, objective, generalizable ↔ textual, interpretive, contextual.
- Correlation vs. Causation – co-variation only ↔ one variable directly influences the other (requires experimental control).
- Direct Measurement vs. Proxy Variable – measures the construct itself ↔ measures a stand-in that approximates the construct.

⚠️ Common Misunderstandings
- "A high correlation means one variable causes the other." → Remember the correlation ≠ causation rule; experimental control is needed to infer causality.
- "Statistical significance = practical importance." → Significance can arise from large samples alone; always check the effect size.
- "All quantitative studies must use complex models." → Simpler general linear models are often sufficient; choose model complexity based on the research question and data.

🧠 Mental Models / Intuition
- "Numbers as a bridge" – think of measurement as a bridge that lets you walk from messy real-world observations to clean, testable math.
- "Cause-and-control ladder" – each rung you add (manipulation + control) climbs you higher on internal validity.
- "Sample size as a flashlight" – larger samples illuminate the true population pattern, reducing the darkness of random error.

🚩 Exceptions & Edge Cases
- Proxy variables may introduce measurement error; validate that the proxy strongly correlates with the true construct.
- Non-linear relationships – a linear model may mislead if the underlying pattern curves; check residuals or fit a non-linear model.
- Small samples – can still be useful for exploratory work but yield low power; interpret significance cautiously.

📍 When to Use Which
- Use a general linear model when relationships appear straight-line and variables are continuous.
- Switch to a non-linear model if scatterplots show curvature or theory predicts diminishing returns.
- Apply factor analysis when you need to reduce many correlated items into underlying latent constructs (e.g., psychometrics).
- Choose a proxy only when direct measurement is impossible or impractical, and you have evidence the proxy tracks the target variable.

👀 Patterns to Recognize
- Consistent direction and magnitude of coefficients across related models → robust finding.
- Large sample + tiny p-value + tiny effect size → likely a statistical artifact, not a meaningful effect.
- High correlation between independent variables → multicollinearity; consider variable reduction or ridge regression.

🗂️ Exam Traps
- Distractor: "Correlation proves causation." – Wrong; look for experimental-control language.
- Distractor: "A non-significant p-value means no effect." – Incorrect; it could reflect low power; check the confidence interval.
- Distractor: "All proxies are acceptable." – Wrong; proxies must be validated.
- Distractor: "Large sample automatically guarantees validity." – False; validity also depends on measurement accuracy and experimental design.
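The "significance ≠ practical importance" warning can be made concrete. The sketch below (pure standard library; the helper names `welch_test` and `cohens_d` are illustrative, and the normal approximation to the t-distribution is an assumption that only holds for large samples) draws two very large samples whose true means differ by a negligible 0.05 SD: the p-value comes out far below 0.05, yet the effect size is trivial.

```python
import math
import random

def cohens_d(a, b):
    """Effect size: standardized mean difference using the pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp

def welch_test(a, b):
    """Welch's t statistic, a two-sided p-value via the normal
    approximation (adequate for large n), and a 95% CI for the
    mean difference."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    t = (ma - mb) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    ci = (ma - mb - 1.96 * se, ma - mb + 1.96 * se)
    return t, p, ci

random.seed(0)
# Two huge samples whose true means differ by only 0.05 SD:
a = [random.gauss(0.00, 1) for _ in range(50_000)]
b = [random.gauss(0.05, 1) for _ in range(50_000)]

t, p, ci = welch_test(a, b)
d = cohens_d(a, b)
print(f"p = {p:.2e}")   # "significant" well below 0.05...
print(f"d = {d:.3f}")   # ...but the effect is tiny (|d| near 0.05)
print(f"95% CI for mean difference: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

This is exactly the "large sample + tiny p-value + tiny effect size" pattern listed above: the sample size acts as a flashlight that detects even negligible differences, so the effect size and CI, not the p-value, tell you whether the finding matters.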