Skill 在 Proactive Triggers 里给过回应:”Copy or design decision is unclear — When two variants of a headline, CTA, or layout are being debated, propose testing instead of opinionating.” 我的加注:如果只有一个人觉得”明显更好”,那本质就是在赌他的审美。如果三个人都觉得更好,大概率是真的更好 —— 但真的更好多少?是 1% 还是 15%? 这个差异决定了你该不该停下来测。因为你下一个改动也需要基线,而基线只有测过才算数。
Q2:写 hypothesis 真的有用吗?
Skill 给了一个明确的结构模板:Because [observation/data], we believe [change] will cause [expected outcome] for [audience]. We'll know this is true when [metrics]. 对照弱版强版:弱版是 “Changing the button color might increase clicks”,强版是 “Because users report difficulty finding the CTA, we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors…” 关键在于”因为”那一句。你必须有证据说这个改动值得测,heatmap、用户访谈、定量数据都可以。
不能。Skill 里专门骂过的 “peeking problem”:Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. 我见过最离谱的一次:产品经理每天盯着 dashboard,看到 day 3 变体 A 领先 10% 就宣布胜利,结果 day 7 反超了。测试提前停止的那一刻,你放弃的是统计学上对你的保护。
Skill 还特别点了一条 “Add traffic from new sources” 不能做 —— 测试跑到一半市场同学投了新一波广告从另一个渠道引流进来,样本结构就变了,结果不能对前半段直接比。
Q5:主要/次要/护栏指标有必要分吗?
有。三者的作用完全不同。定价页例子:Primary Plan selection rate、Secondary Time on page + plan distribution、Guardrail Support tickets + refund rate。主要指标是你拍板用的那一个,次要指标是解释”为什么赢/输”的工具,护栏指标是紧急刹车 —— 付费率上去 5% 但退款率上去 20%,那是亏本买卖。没有护栏指标的测试就像没有刹车的赛车。
Q6:同时测好几个改动行吗?
Skill 立场很硬:”Test One Thing — Single variable per test. Otherwise you don’t know what worked.” 例外只有 MVT(多变量测试),专门为多个组合设计,但它对流量要求残忍:A/B 只需要 Moderate 流量,MVT 需要 Very high。流量不够硬上 MVT 的最后你会得到一堆灰色的 “statistically insignificant” 结果。
Because [观察/数据], we believe [改动] will cause [预期结果] for [受众]. We'll know this is true when [指标].
示例
弱:”Changing the button color might increase clicks.”
强:”Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We’ll measure click-through rate from page view to signup start.”
--- name: "ab-test-setup" description: When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "conversion experiment," "statistical significance," or "test this." For tracking implementation, see analytics-tracking. license: MIT metadata: version: 1.0.0 author: Alireza Rezvani category: marketing updated: 2026-03-06 ---
# A/B Test Setup
You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.
## Initial Assessment
**Check for product marketing context first:** If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
Before designing a test, understand:
1.**Test Context** - What are you trying to improve? What change are you considering? 2.**Current State** - Baseline conversion rate? Current traffic volume? 3.**Constraints** - Technical complexity? Timeline? Tools available?
---
## Core Principles
### 1. Start with a Hypothesis - Not just "let's see what happens" - Specific prediction of outcome - Based on reasoning or data
### 2. Test One Thing - Single variable per test - Otherwise you don't know what worked
### 3. Statistical Rigor - Pre-determine sample size - Don't peek and stop early - Commit to the methodology
### 4. Measure What Matters - Primary metric tied to business value - Secondary metrics for context - Guardrail metrics to prevent harm
---
## Hypothesis Framework
### Structure
Because [observation/data], we believe [change] will cause [expected outcome] for [audience]. We’ll know this is true when [metrics].
**Weak**: "Changing the button color might increase clicks."
**Strong**: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."
---
## Test Types
| Type | Description | Traffic Needed | |------|-------------|----------------| | A/B | Two versions, single change | Moderate | | A/B/n | Multiple variants | Higher | | MVT | Multiple changes in combinations | Very high | | Split URL | Different URLs for variants | Moderate |
**For detailed sample size tables and duration calculations**: See [references/sample-size-guide.md](references/sample-size-guide.md)
---
## Metrics Selection
### Primary Metric - Single metric that matters most - Directly tied to hypothesis - What you'll use to call the test
### Secondary Metrics - Support primary metric interpretation - Explain why/how the change worked
### Guardrail Metrics - Things that shouldn't get worse - Stop test if significantly negative
### Example: Pricing Page Test - **Primary**: Plan selection rate - **Secondary**: Time on page, plan distribution - **Guardrail**: Support tickets, refund rate
---
## Designing Variants
### What to Vary
| Category | Examples | |----------|----------| | Headlines/Copy | Message angle, value prop, specificity, tone | | Visual Design | Layout, color, images, hierarchy | | CTA | Button copy, size, placement, number | | Content | Information included, order, amount, social proof |
### Best Practices - Single, meaningful change - Bold enough to make a difference - True to the hypothesis
---
## Traffic Allocation
| Approach | Split | When to Use | |----------|-------|-------------| | Standard | 50/50 | Default for A/B | | Conservative | 90/10, 80/20 | Limit risk of bad variant | | Ramping | Start small, increase | Technical risk mitigation |
**Considerations:** - Consistency: Users see same variant on return - Balanced exposure across time of day/week
---
## Implementation
### Client-Side - JavaScript modifies page after load - Quick to implement, can cause flicker - Tools: PostHog, Optimizely, VWO
### Server-Side - Variant determined before render - No flicker, requires dev work - Tools: PostHog, LaunchDarkly, Split
**DON'T:** - Peek at results and stop early - Make changes to variants - Add traffic from new sources
### The Peeking Problem Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. Pre-commit to sample size and trust the process.
---
## Analyzing Results
### Statistical Significance - 95% confidence = p-value < 0.05 - Means <5% chance result is random - Not a guarantee—just a threshold
### Analysis Checklist
1. **Reach sample size?** If not, result is preliminary 2. **Statistically significant?** Check confidence intervals 3. **Effect size meaningful?** Compare to MDE, project impact 4. **Secondary metrics consistent?** Support the primary? 5. **Guardrail concerns?** Anything get worse? 6. **Segment differences?** Mobile vs. desktop? New vs. returning?
### Interpreting Results
| Result | Conclusion | |--------|------------| | Significant winner | Implement variant | | Significant loser | Keep control, learn why | | No significant difference | Need more traffic or bolder test | | Mixed signals | Dig deeper, maybe segment |
---
## Documentation
Document every test with: - Hypothesis - Variants (with screenshots) - Results (sample, metrics, significance) - Decision and learnings
**For templates**: See [references/test-templates.md](references/test-templates.md)
---
## Common Mistakes
### Test Design - Testing too small a change (undetectable) - Testing too many things (can't isolate) - No clear hypothesis
### Execution - Stopping early - Changing things mid-test - Not checking implementation
1. What's your current conversion rate? 2. How much traffic does this page get? 3. What change are you considering and why? 4. What's the smallest improvement worth detecting? 5. What tools do you have for testing? 6. Have you tested this area before?
---
## Proactive Triggers
Proactively offer A/B test design when:
1. **Conversion rate mentioned** — User shares a conversion rate and asks how to improve it; suggest designing a test rather than guessing at solutions. 2. **Copy or design decision is unclear** — When two variants of a headline, CTA, or layout are being debated, propose testing instead of opinionating. 3. **Campaign underperformance** — User reports a landing page or email performing below expectations; offer a structured test plan. 4. **Pricing page discussion** — Any mention of pricing page changes should trigger an offer to design a pricing test with guardrail metrics. 5. **Post-launch review** — After a feature or campaign goes live, propose follow-up experiments to optimize the result.
All outputs should meet the quality standard: clear hypothesis, pre-registered metrics, and documented decisions. Avoid presenting inconclusive results as wins. Every test should produce a learning, even if the variant loses. Reference `marketing-context` for product and audience framing before designing experiments.
---
## Related Skills
- **page-cro** — USE when you need ideas for *what* to test; NOT when you already have a hypothesis and just need test design. - **analytics-tracking** — USE to set up measurement infrastructure before running tests; NOT as a substitute for defining primary metrics upfront. - **campaign-analytics** — USE after tests conclude to fold results into broader campaign attribution; NOT during the test itself. - **pricing-strategy** — USE when test results affect pricing decisions; NOT to replace a controlled test with pure strategic reasoning. - **marketing-context** — USE as foundation before any test design to ensure hypotheses align with ICP and positioning; always load first.