D5: Quality Impact
Core Question: How does this problem affect what we deliver?
Quality impact is often the most insidious dimension — defects found by customers cost 10× more than those found internally, and safety issues can multiply by 100×.
Primary Cascade: Quality → Customer (85% of cases)
Observable Signals
Don't wait for customer complaints. Look for early warning signals in your systems:
| Signal Type | Observable | Data Source | Detection Speed |
|---|---|---|---|
| Immediate | Defect rate spike | QA system | Hours-Days |
| Behavioral | Rework hours increase | Timesheets | Days |
| Customer | Complaint volume up | Support system | Days |
| Safety | Incident reports | EHS system | Immediate |
| Returns | Product returns | Logistics/RMA | Days |
| Process | Specification deviations | Quality audits | Weeks |
| Silent | Workarounds created | Tribal knowledge, interviews | Months |
| Inspection | Rejection rate | Manufacturing QC | Immediate |
Trigger Keywords
Language patterns indicate severity. Train your team to flag these:
High Urgency (Sound = 8-10)
"recall" "safety incident" "injury"
"critical defect" "system failure" "data breach"
"contamination" "fatality" "major outage"Action: Executive escalation within 1 hour. Potential regulatory notification.
Medium Urgency (Sound = 4-7)
"defect" "rework" "redo"
"didn't meet spec" "out of tolerance" "failed inspection"
"customer complaint" "return" "warranty claim"Action: Manager review within 24 hours.
Low Urgency / Early Warning (Sound = 1-3)
"workaround" "manual fix" "known issue"
"tech debt" "legacy limitation" "needs improvement"
"minor issue" "cosmetic defect" "edge case"Action: Track pattern over time. Add to backlog.
Metrics
Track both leading (predictive) and lagging (historical) indicators:
| Metric Type | Metric Name | Calculation | Target | Alert Threshold |
|---|---|---|---|---|
| Leading | First-pass yield | Good units / Total units | >95% | <90% |
| Leading | Defect detection rate | Defects found internally / Total | >90% | <80% |
| Leading | Code coverage | % of code with tests | >80% | <70% |
| Leading | Technical debt ratio | Debt remediation time / Dev time | <5% | >10% |
| Lagging | Customer-reported defects | Count per release/period | Decreasing | Increasing trend |
| Lagging | Cost of quality | Prevention + Appraisal + Failure costs | <15% of revenue | >20% |
| Lagging | Warranty costs | Claims / Revenue | <2% | >3% |
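The metrics in the table reduce to simple ratios. A minimal sketch of three of them, using hypothetical period counts (all figures below are illustrative, not from the source):

```python
# Sketch: leading and lagging quality metrics from the table above.

def first_pass_yield(good_units: int, total_units: int) -> float:
    """Leading: good units / total units, as a percentage."""
    return 100.0 * good_units / total_units if total_units else 0.0

def defect_detection_rate(internal: int, customer: int) -> float:
    """Leading: share of all defects found internally, as a percentage."""
    total = internal + customer
    return 100.0 * internal / total if total else 0.0

def cost_of_quality_pct(prevention: float, appraisal: float,
                        failure: float, revenue: float) -> float:
    """Lagging: (prevention + appraisal + failure) costs / revenue, as a %."""
    return 100.0 * (prevention + appraisal + failure) / revenue

# Hypothetical period data
fpy = first_pass_yield(1920, 2000)                     # 96.0: meets >95% target
ddr = defect_detection_rate(internal=76, customer=24)  # 76.0: below the 80% alert threshold
coq = cost_of_quality_pct(2e6, 3e6, 5e6, 80e6)         # 12.5: within <15% of revenue
```

The same ratios can be computed directly in your BI tool; the dashboard query below shows the detection-rate version in SQL.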
Example Dashboard Query

```sql
-- Defect detection rate alert
SELECT
    release_version,
    COUNT(CASE WHEN found_by = 'Internal' THEN 1 END) AS internal_defects,
    COUNT(CASE WHEN found_by = 'Customer' THEN 1 END) AS customer_defects,
    COUNT(CASE WHEN found_by = 'Internal' THEN 1 END) * 100.0 /
        NULLIF(COUNT(*), 0) AS detection_rate_pct
FROM defects
WHERE created_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY release_version
-- Most dialects disallow column aliases in HAVING, so the expression is repeated
HAVING COUNT(CASE WHEN found_by = 'Internal' THEN 1 END) * 100.0 /
    NULLIF(COUNT(*), 0) < 80  -- alert when <80% of defects found internally
ORDER BY release_version;
```

Cascade Pathways
Quality impact multiplies rapidly across other dimensions:
Cascade Probabilities
| Cascade Path | Probability | Severity if Occurs |
|---|---|---|
| Quality → Customer | 85% | High |
| Quality → Operational | 75% | Medium-High |
| Quality → Regulatory | 30% | Very High (safety-critical industries) |
Why Customer Cascade is Most Common:
- Defects directly impact user experience (functionality, reliability)
- Quality issues erode trust (perception of brand/product)
- Customers amplify problems (reviews, social media, word-of-mouth)
- Competitive alternatives exist (switching is easier than tolerating defects)
Multiplier Factors
Not all quality issues cascade equally. The multiplier depends on:
| Factor | Low (1.5×) | Medium (3×) | High (10×+) |
|---|---|---|---|
| Detection Point | Internal QA | Customer-found | Field failure |
| Safety Impact | None | Potential | Actual injury |
| Scope of Defect | Single unit | Batch/Lot | Systemic design |
| Fix Complexity | Patch/Update | Rework | Recall/Replace |
| Brand Sensitivity | B2B commodity | Consumer product | Premium brand |
Example Calculation
Scenario: Automotive safety defect (airbag), field failure, systemic design flaw, premium brand
Multiplier factors:
- Detection point: High (10×, field failure)
- Safety impact: High (10×, actual injury)
- Scope of defect: High (10×, systemic design)
- Fix complexity: High (10×, recall required)
- Brand sensitivity: High (10×, premium brand)
Average multiplier: (10 + 10 + 10 + 10 + 10) ÷ 5 = 10×
Impact:
- Direct cost of defect: $50M (recall, parts, labor)
- Multiplied impact: $50M × 10 = $500M (total business impact)
- Plus customer cascade: 85% probability of trust erosion → lost sales
- Plus regulatory cascade: 30% probability but very high severity (NHTSA investigation)
- Total risk: $500M+ from initial defect (real-world example: Takata airbag recall = $10B+)
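The worked example above reduces to a small calculation: map each factor to its multiplier band, average, and scale the direct cost. A minimal sketch (the factor bands and the $50M direct cost come from the scenario above; the function names are illustrative):

```python
# Sketch: averaging the five factor multipliers and scaling direct cost.
FACTOR_MULTIPLIERS = {"low": 1.5, "medium": 3.0, "high": 10.0}

def quality_multiplier(factor_levels: list) -> float:
    """Average the per-factor multipliers (levels: 'low'/'medium'/'high')."""
    values = [FACTOR_MULTIPLIERS[level] for level in factor_levels]
    return sum(values) / len(values)

# Airbag scenario: all five factors rated high
m = quality_multiplier(["high"] * 5)   # 10.0
direct_cost = 50_000_000               # $50M recall, parts, labor
multiplied_impact = direct_cost * m    # $500M total business impact
```

A mixed scenario (say, customer-found but single-unit and patchable) would average out closer to the 2-3× range, which is why detection point and scope matter so much.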
3D Scoring (Sound × Space × Time)
Apply the Cormorant Foraging lens to quality dimension:
| Lens | Score 1-3 | Score 4-6 | Score 7-10 |
|---|---|---|---|
| Sound (Urgency) | Cosmetic issue | Functional defect | Safety critical |
| Space (Scope) | One unit | Batch/Release | All production |
| Time (Trajectory) | Isolated incident | Recurring | Systemic/Chronic |
Formula: Dimension Score = (Sound × Space × Time) ÷ 10
Example Scoring
Scenario: Critical software bug affecting all users on production, recurring in every release despite fixes
Sound = 9 (system failure, data loss risk)
Space = 9 (all production users)
Time = 7 (recurring, systemic root cause)
Quality Impact Score = (9 × 9 × 7) ÷ 10 = 56.7
Interpretation: Critical urgency (56.7 >> 30). Expect immediate cascade to Customer (churn risk), Operational (emergency fixes consuming all resources), and Revenue (refunds, lost sales) dimensions.
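The formula is trivial to automate. A minimal sketch reproducing the example above (function name and the 30-point critical threshold check are taken from the text):

```python
# Sketch: 3D scoring, Dimension Score = (Sound x Space x Time) / 10.
def quality_impact_score(sound: int, space: int, time: int) -> float:
    """Each lens is scored 1-10; returns the dimension score (max 100)."""
    for value in (sound, space, time):
        if not 1 <= value <= 10:
            raise ValueError("each lens score must be between 1 and 10")
    return sound * space * time / 10

score = quality_impact_score(sound=9, space=9, time=7)  # 56.7
is_critical = score > 30  # True: expect immediate cascades
```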
Detection Strategy
Automated Monitoring
Set up alerts for:
- Defect rate spike (>20% increase week-over-week)
- First-pass yield drop (<90% good units)
- Customer-reported defects (>20% of total defects found by customers)
- Rework hours (>10% of total development time)
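The four alert rules above can be encoded directly. A hedged sketch, assuming you can pull these counts from your QA and timesheet systems (field names are hypothetical):

```python
# Sketch: automated quality-monitoring alerts from the rules above.
def quality_alerts(defects_this_week: int, defects_last_week: int,
                   first_pass_yield_pct: float,
                   customer_found_pct: float,
                   rework_pct_of_dev_time: float) -> list:
    alerts = []
    # Defect rate spike: >20% week-over-week increase
    if defects_last_week and defects_this_week > 1.2 * defects_last_week:
        alerts.append("Defect rate spike (>20% week-over-week)")
    if first_pass_yield_pct < 90:
        alerts.append("First-pass yield below 90%")
    if customer_found_pct > 20:
        alerts.append("Customers finding >20% of total defects")
    if rework_pct_of_dev_time > 10:
        alerts.append("Rework exceeds 10% of development time")
    return alerts

# Hypothetical week: defect spike plus too many customer-found defects
alerts = quality_alerts(30, 20, 95.0, 25.0, 8.0)  # two alerts fire
```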
Human Intelligence
Train your QA/engineering teams to:
- Flag language patterns (use trigger keyword lists)
- Report workarounds (band-aids hiding systemic issues)
- Escalate safety concerns (near-misses are signals)
- Track customer sentiment (quality perception vs metrics)
Real-World Example
The "Workaround" Signal:
| Observable | Data Point | 3D Score |
|---|---|---|
| Signal | "Just use this workaround" appears in 15 support tickets | Sound = 5 |
| Context | Affects core feature, all users on specific browser | Space = 7 |
| Trend | Workaround documented 6 months ago, still not fixed | Time = 6 |
| Score | (5 × 7 × 6) ÷ 10 = 21 | Medium urgency |
Cascade Prediction:
- 85% probability → Customer impact (frustration, perception of "broken product")
- 75% probability → Operational impact (support time, tribal knowledge required)
- Multiplier: 2-3× (customer-found, multiple users, process workaround)
Action Taken:
- Root cause analysis prioritized (within 1 week)
- Proper fix deployed (within 2 weeks)
- Workaround documentation removed (after fix validation)
- Result: Support tickets dropped 40%, NPS increased 5 points
Industry Variations
B2B SaaS
- Primary metric: Bug escape rate, mean time to resolution
- Key signal: Customer-reported defects, production incidents
- Cascade risk: Quality → Customer → Revenue
Healthcare
- Primary metric: Medication errors, patient safety incidents
- Key signal: Near-miss reports, adverse events
- Cascade risk: Quality → Regulatory → Customer (Patient) → Revenue
Manufacturing
- Primary metric: First-pass yield, defect parts per million (DPPM)
- Key signal: Scrap rate, rework hours, customer returns
- Cascade risk: Quality → Customer → Revenue → Regulatory
Next Steps
Remember: The defect you find is fixable. The defect your customer finds is expensive. The defect that causes harm is catastrophic. Find them first. 🪶