MKT 397: PhD Quantitative Marketing — Academic Log
MKT 397 Lab Notebook

PhD Quantitative Marketing

MKT 397 · McCombs School of Business
Prof. Rex Du
Spring 2026

A weekly research synthesis log. Each entry follows a three-part structure: what I read, how I interpret it, and what questions or gaps emerge. This is less a summary and more a thinking-in-progress document — an open-source lab notebook for anyone navigating quantitative marketing research.

Week 1

Empirics-First Methodology & Causal Inference

January 2026
1 · This Week's Reading

Foundations of Causal Identification

  • Angrist & Pischke (2009), Ch. 1–2 — "Mostly Harmless Econometrics": selection bias and the fundamental problem of causal inference
  • Rubin (1974) — Potential outcomes framework; counterfactual reasoning
  • Du, R. (Lecture Notes) — Empirics-first approach: start with data patterns, then build theory
2 · My Interpretation

What the empirics-first lens changes

Prof. Du's methodology inverts the typical research flow. Instead of "theory → hypothesis → test," we begin with empirical regularities — patterns consistently observed in real data before any theoretical explanation is offered — and let them guide theoretical development.

This resonates deeply with my health communication work. In studying Korean American women's help-seeking behavior, the patterns (low utilization despite high need) preceded my theoretical framework. The data spoke first.

The Rubin Causal Model (a framework in which causal effects are defined as the difference between potential outcomes under treatment vs. control; the "fundamental problem" is that we can only observe one outcome per unit) formalizes something health communicators often struggle with: we can't observe the counterfactual.

3 · Research Gap / Emerging Question

Where this connects to health communication

Most health communication studies rely on self-report measures (data collected by asking participants to report their own behaviors or attitudes, and thus subject to recall bias and social desirability bias) with limited causal identification. What if we applied empirics-first methodology to health message evaluation — starting with behavioral data rather than attitudinal surveys?

Emerging question: Can we identify natural experiments (situations where treatment assignment occurs "as if" randomly due to external circumstances, enabling causal inference without a designed experiment) in health communication contexts that allow causal identification of message effects on behavior?

Visual Scaffold — Causal Inference Logic
Causal Effect = E[Y₁ᵢ] − E[Y₀ᵢ]

where Y₁ᵢ = outcome if unit i receives treatment
      Y₀ᵢ = outcome if unit i does not

Problem: We observe Y₁ᵢ OR Y₀ᵢ, never both.
Solution: Identify a credible comparison group.
The fundamental problem of causal inference — we can never observe the counterfactual for the same individual.
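To make the fundamental problem concrete, here is a small simulation of my own (not from the readings). Units select into treatment based on their untreated outcome, so the naive treated-vs-untreated comparison overstates a true effect of 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Every unit has two potential outcomes; the true treatment effect is 2.
y0 = rng.normal(size=n)          # outcome without treatment
y1 = y0 + 2.0                    # outcome with treatment

# Selection bias: units with a higher untreated outcome are more likely
# to take the treatment (think: healthier patients seek out the program).
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-y0))

true_ate = (y1 - y0).mean()                        # knowable only in simulation
naive = y1[treated].mean() - y0[~treated].mean()   # all we can compute from data
print(f"true ATE   = {true_ate:.2f}")   # 2.00
print(f"naive diff = {naive:.2f}")      # > 2.00: selection bias inflates it
```

The gap between the two numbers is exactly the selection bias that a credible comparison group is supposed to eliminate.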

Week 1 Takeaway: "Empirics first" isn't anti-theory — it's a commitment to letting data patterns discipline our theorizing. For health communication research, this means starting with observed behavioral gaps before constructing explanations.

Week 2 — Coming soon.

Week 3 — Coming soon.

Week 4 — Coming soon.

Week 5 — Coming soon.

Week 6 — Coming soon.

Week 7

Doing Research with Surveys

February 2026
Korean Study Guide →
1 · This Week's Reading

Survey Design & Construct Measurement

  • Vomberg & Klarmann (2021) — "Crafting Survey Research": systematic process from questionnaire design to data quality control
  • Hohenberg & Taylor (2020) — "Measuring Customer Satisfaction and Customer Loyalty": operationalization, scales, and measurement systems
2 · My Interpretation

The measurement chain: tool → construct → decision

These two papers form a logical pair. Vomberg & Klarmann teach how to build the instrument. Hohenberg & Taylor show what to build it for — measuring satisfaction and loyalty.

The deeper lesson: cascading measurement error. Poorly designed surveys distort satisfaction scores, which distort loyalty predictions, which misallocate resources. Error propagates through the entire CRM outcome chain (Marketing Activities → Customer Attitudes + Factual Reasons → Loyalty Intentions → Loyalty Behavior → Economic Outcomes).

Reliability vs. Validity. Reliability (consistency of measurement; Cronbach's alpha ≥ 0.7 and composite reliability ≥ 0.7 are standard thresholds) is necessary but not sufficient. A scale can be perfectly consistent while measuring the wrong thing.

Common Method Bias. Measuring IV and DV in the same survey inflates correlations — not because the relationship is strong, but because of shared measurement method (CMB: artificial inflation when IVs and DVs share the same survey, time point, and respondent).
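As a quick illustration (my own simulation, not from the papers), adding a shared method factor to both an observed IV and DV inflates the measured correlation well above the construct-level relationship:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Latent constructs with a modest true relationship (r = 0.30).
true_iv = rng.normal(size=n)
true_dv = 0.3 * true_iv + np.sqrt(1 - 0.3**2) * rng.normal(size=n)

# Shared method factor: same survey, same time point, same respondent.
method = rng.normal(size=n)
obs_iv = true_iv + 0.8 * method
obs_dv = true_dv + 0.8 * method

r_true = np.corrcoef(true_iv, true_dv)[0, 1]
r_obs = np.corrcoef(obs_iv, obs_dv)[0, 1]
print(f"construct-level r = {r_true:.2f}")   # ~0.30
print(f"same-method r     = {r_obs:.2f}")    # inflated by the shared method
```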

Intentions ≠ Behavior. Stated behavioral intentions are weak predictors of actual behavior (Sheppard et al. 1988). The chapter advocates dual measurement: surveys for intentions, objective databases for behavior.

Satisfaction–Loyalty Disconnect. Satisfaction explains <25% of loyalty variance. Over 60% of satisfied customers still defect. Trust, commitment, and switching costs (the financial, psychological, and procedural costs of changing providers) matter enormously.

NPS. Efficient but limited. Keiningham et al. (2007) found no support for NPS as the single best growth predictor.

Self-Selection & Heckman. Loyalty program members self-select (self-selection bias: participation correlates with the outcome of interest). The Heckman two-stage correction estimates enrollment probability first, then purges selection bias via the inverse Mills ratio.

3 · Research Gap / Emerging Question

Applying measurement principles to health communication

Most health communication interventions rely on self-reported attitude change. These readings expose that fragility: CMB inflates effects, intentions don't predict behavior, message satisfaction doesn't guarantee adherence.

What if we adopted the dual measurement framework? Surveys for message reception, objective data (EHR records, pharmacy fills, clinic visits) for actual behavior.

The Confirmation-Disconfirmation Paradigm (Oliver 1980: satisfaction depends on perceived performance versus expected performance) has untapped potential — message effectiveness may depend on the gap between patient expectations and experience.

Emerging question: Can we combine survey-based assessment with objective behavioral data and use the Heckman correction (a two-stage method to purge self-selection bias via the inverse Mills ratio) to address self-selection into health programs?

Visual Scaffold — Confirmation-Disconfirmation
Oliver (1980)

P > E → Positive Disconfirmation → Satisfaction
P = E → Confirmation → Satisfaction
P < E → Negative Disconfirmation → Dissatisfaction
Satisfaction is about the gap between expectation and experience.
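The mapping above is simple enough to state directly in code; the helper and its example inputs (hypothetical 1–10 performance ratings) are my own illustration:

```python
def disconfirmation(perceived: float, expected: float, tol: float = 1e-9) -> str:
    """Classify an experience under Oliver's confirmation-disconfirmation logic."""
    gap = perceived - expected
    if gap > tol:
        return "positive disconfirmation: satisfaction"
    if gap < -tol:
        return "negative disconfirmation: dissatisfaction"
    return "confirmation: satisfaction"

print(disconfirmation(perceived=8, expected=6))   # positive disconfirmation: satisfaction
print(disconfirmation(perceived=4, expected=6))   # negative disconfirmation: dissatisfaction
```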
Visual Scaffold — Heckman Two-Stage
Stage 1: y₁* = X₁β₁ + e₁ → Probit
Stage 2: E[y₂ | y₁*>0] = X₂β₂ + ρλ(X₁β₁)

λ(z) = ϕ(z)/Φ(z)

ρ significant → selection bias → correction needed
The inverse Mills ratio purges selection bias from the outcome equation.
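Below is a minimal sketch of the two stages on simulated loyalty-program data, using only NumPy/SciPy. The setup is my own assumption, not the chapter's example: variable names (enroll, spend), all coefficients, and an exclusion restriction where x1 shifts enrollment but not spending.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 20_000

# Simulated loyalty-program data. A shared unobservable u drives both
# enrollment and spending, which is exactly what creates selection bias.
x1 = rng.normal(size=n)       # shifts enrollment only (exclusion restriction)
x2 = rng.normal(size=n)       # shifts both enrollment and spending
u = rng.normal(size=n)
enroll = 0.7 * x1 + 0.7 * x2 + u + rng.normal(size=n) > 0
spend = 1.0 + 0.5 * x2 + 0.7 * u + rng.normal(size=n)   # observed for enrollees

# Stage 1: probit MLE for P(enroll | x1, x2).
def neg_loglik(b):
    z = b[0] + b[1] * x1 + b[2] * x2
    p = norm.cdf(np.where(enroll, z, -z))
    return -np.log(np.clip(p, 1e-12, None)).sum()

b = minimize(neg_loglik, x0=np.zeros(3)).x
z_hat = b[0] + b[1] * x1 + b[2] * x2
imr = norm.pdf(z_hat) / norm.cdf(z_hat)   # inverse Mills ratio, lambda(z)

# Stage 2: OLS of spending on x2 for enrollees, with and without the IMR.
s = enroll
X_naive = np.column_stack([np.ones(s.sum()), x2[s]])
X_heck = np.column_stack([np.ones(s.sum()), x2[s], imr[s]])
naive = np.linalg.lstsq(X_naive, spend[s], rcond=None)[0]
beta = np.linalg.lstsq(X_heck, spend[s], rcond=None)[0]
print(f"naive slope on x2:     {naive[1]:.2f}")   # biased (true value is 0.5)
print(f"corrected slope on x2: {beta[1]:.2f}")    # close to 0.5
```

The naive regression understates the x2 effect because enrollment depends on x2 through the same unobservable that drives spending; adding the lambda term absorbs that selection component.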

Week 7 Takeaway: "What you measure shapes what you manage." Without validity checks, CMB controls, and behavioral triangulation, surveys create a measurement mirage — precise numbers pointing in the wrong direction.