
Study Guide

📖 Core Concepts

- Software Testing – Evaluating a program to locate errors and confirm it meets specifications.
- Unix test Command – Shell utility that evaluates conditional expressions and returns a success (0) or failure (1) exit status.
- x86 TEST Instruction – Bitwise AND of two operands; the result is discarded, but SF, ZF, and PF are set from it, while CF and OF are cleared.
- Experiment – Controlled procedure designed to test a hypothesis about a phenomenon.
- Statistical Hypothesis Test – Formal method for deciding whether observed data are consistent with a null hypothesis; yields a p-value.
- Stress Testing – Pushing a system to extreme loads/conditions to see how it fails or degrades.
- System Testing – Validation that an integrated whole functions correctly, covering interactions among components.
- Validation vs. Verification – Validation: does the product solve the right problem? (Are we building the right thing?) Verification: does the product meet its design specs? (Did we build it right?)

---

📌 Must Remember

- Software testing ≠ debugging; it is an evaluation step.
- Unix test returns 0 for true and 1 (or another non-zero value) for false.
- TEST does not modify its operands; only the flags change.
- The null hypothesis (H₀) is the default claim; reject it only if the p-value < α (the significance level).
- Stress test → extreme conditions, often beyond the expected operating range.
- System test → performed after integration, before acceptance testing.
- Validation focuses on user needs; verification focuses on spec compliance.

---

🔄 Key Processes

Software Testing Cycle
- Define test objectives → design test cases → execute → record results → report defects → retest.

Using Unix test
- test EXPRESSION or [ EXPRESSION ] → sets the exit status → use in if, while, etc.
- Example: if [ "$var" -eq 5 ]; then … fi

x86 TEST Flag Setting
- TEST reg, imm → compute reg & imm → update ZF (zero flag), SF (sign flag), and PF; CF and OF are cleared.
- Follow with a conditional jump (JE, JNE, JG, etc.) based on the flags.
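A minimal bash sketch of the two branch patterns above: the Unix test exit status driving an if, and a TEST-style mask-then-branch, with bash arithmetic standing in for the CPU's AND-and-set-ZF (the variable names are illustrative only):

```shell
# Unix test: the exit status of [ ] drives the if
count=5
if [ "$count" -eq 5 ]; then
  echo "count is 5"
fi

# Mimic x86 "TEST reg, imm": AND with a mask, discard the result,
# and branch only on whether it was zero (the role of ZF)
flags=6                      # binary 110
if (( flags & 2 )); then     # like TEST flags, 2 followed by JNE
  echo "bit 1 set"
fi
if ! (( flags & 1 )); then   # like TEST flags, 1 followed by JE
  echo "bit 0 clear"
fi
```

As in the real instruction, the AND result is never stored; only the zero/non-zero outcome is consulted.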
Statistical Hypothesis Test
- State H₀ and H₁ → choose a test statistic → compute the p-value → compare to α → decide.

Stress Testing Procedure
- Define extreme load criteria → apply load incrementally → monitor performance & failure points → document breaking thresholds.

System Testing Workflow
- Assemble the full system → develop end-to-end test scenarios → execute → verify functional & non-functional requirements → sign off.

---

🔍 Key Comparisons

- Validation vs. Verification – Validation: the right product (user-oriented). Verification: the product built right (spec-oriented).
- Stress Testing vs. System Testing – Stress: extreme conditions, looks for failure modes. System: normal/expected operation, checks overall functionality.
- Unix test vs. x86 TEST – test: shell command → returns an exit status. TEST: CPU instruction → sets flags, no return value.
- Software Testing vs. Statistical Hypothesis Test – Software testing: deterministic code checks. Hypothesis test: probabilistic inference about data.

---

⚠️ Common Misunderstandings

- Thinking TEST stores the AND result – It only updates flags; the registers stay unchanged.
- Assuming a passing Unix test means "no error" – It merely indicates the condition evaluated to true; the script must still handle the case.
- Confusing validation with verification – Validation asks "are we building the right thing?"; verification asks "are we building it right?"
- Believing stress testing equals load testing – Load testing stays within expected limits; stress testing pushes beyond them.
- Treating a hypothesis test as proof – It only provides evidence to reject or fail to reject H₀, never absolute proof.

---

🧠 Mental Models / Intuition

- Flag-Based Decision – Imagine TEST as a silent "yes/no" poll: if any masked bit overlaps, ZF=0 (non-zero result); if no bits overlap, ZF=1 (zero result).
- Exit Code as Boolean – Unix test is like a light switch: 0 = green (true), non-zero = red (false).
- Validation ⇢ "Fit", Verification ⇢ "Form" – Fit = does it satisfy the need; Form = does it match the blueprint.
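The exit-code-as-boolean model is also what drives shell short-circuiting: && runs its right-hand side only when the left exits with 0, and || only on a non-zero status. A small sketch (assuming /tmp exists, as on typical Unix systems):

```shell
# Exit status as boolean: && fires on 0 (true), || on non-zero (false)
[ -d /tmp ] && echo "green: /tmp is a directory"
[ -d /no/such/dir ] || echo "red: directory missing"

# test and [ are the same utility; both only set the exit status
test 3 -lt 5 && echo "3 is less than 5"
```

Nothing here prints a boolean value; every decision is carried entirely by the exit status of the preceding command.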
---

🚩 Exceptions & Edge Cases

- Unix test does not support complex arithmetic; use (( )) / $(( )) or external tools for that.
- TEST on different operand sizes (e.g., 8-bit vs. 32-bit) sets the flags from a result of that operand size, so the same register contents can yield different ZF/SF outcomes.
- In hypothesis testing, small sample sizes can make p-values unreliable; consider exact tests (e.g., Fisher's exact test).
- Stress tests may trigger safety mechanisms (circuit breakers) that mask true failure modes; disable them only in controlled environments.

---

📍 When to Use Which

- Choose Unix test for simple file/variable condition checks in shell scripts.
- Use x86 TEST when you need to set flags for a subsequent conditional jump without altering the operands.
- Apply stress testing when assessing reliability under peak loads, disaster recovery, or hardware limits.
- Run system testing after all modules are integrated and before user acceptance testing.
- Perform validation during the requirements-gathering and user-acceptance phases; verification during the design, coding, and unit-testing phases.

---

👀 Patterns to Recognize

- [ expression ] or [[ expression ]] → classic Unix test pattern.
- TEST reg, imm followed immediately by JE/JNE → flag-driven branch pattern.
- Hypothesis-test statements that mention p-value < α → decision to reject H₀.
- Stress-test logs showing a sharp performance drop or resource exhaustion → typical failure indicator.

---

🗂️ Exam Traps

- Mistaking "validation" for "verification" – exam items may swap the definitions; remember the "right problem vs. right solution" phrasing.
- Assuming TEST returns a value – answer choices that claim it stores the AND result are wrong.
- Choosing "load testing" when the question asks for extreme-condition evaluation – that describes stress testing, not regular load testing.
- Interpreting a non-significant p-value as proof that H₀ is true – it only means insufficient evidence to reject H₀.
- Selecting a test case that runs test but ignores exit-status handling – exams may highlight the need to check $? or use if.
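The last trap, running test but never inspecting its result, can be avoided by capturing $? explicitly. A sketch (the path is deliberately nonexistent so the condition is false):

```shell
# Running [ ] alone produces no output; the verdict is in $?
[ -e /definitely/not/a/real/path ]
status=$?                 # capture before $? is overwritten
if [ "$status" -ne 0 ]; then
  echo "condition false (exit status $status)"
fi
```

Capturing $? immediately matters because every subsequent command, including echo, replaces it.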