Nine A/B testing questions I get asked far too often: an ab-test-setup Skill Q&A

Claude 中文知识站

Anyone who has worked in growth for a few years knows the feeling: A/B testing itself is not a hard topic. What's hard is that **in every meeting someone pipes up with "this change obviously doesn't need a test, let's just ship it."** Or the opposite extreme: "let's just run a quick test," followed by champagne over a 52% vs 48% result. The ab-test-setup Skill is not there to write experiment scripts for you; it is a checklist that blocks both extremes. Below are the nine questions my team has argued over most in the past six months, each answered with the corresponding section of the Skill.

Q1: This change is obviously better. Do we still need to test it?

The Skill answers this in its Proactive Triggers: "Copy or design decision is unclear — When two variants of a headline, CTA, or layout are being debated, propose testing instead of opinionating." My gloss: if only one person thinks a change is "obviously better," they are essentially betting on their own taste. If three people all think it's better, it probably is. But better by how much, 1% or 15%? That difference decides whether you should stop and test, because your next change will need a baseline too, and a baseline only counts once it has been measured.

Q2: Is writing a hypothesis actually useful?

The Skill gives an explicit template: Because [observation/data], we believe [change] will cause [expected outcome] for [audience]. We'll know this is true when [metrics]. Compare its weak and strong examples: the weak one is "Changing the button color might increase clicks"; the strong one is "Because users report difficulty finding the CTA, we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors…" The "because" clause is the crux. You need evidence that the change is worth testing at all: heatmaps, user interviews, and quantitative data all count.

Q3: Can I just eyeball the sample size?

This is the accident hot zone. The Skill's quick-reference table is worth copying straight onto the wall: at a 1% baseline, detecting a 10% lift takes 150k per variant; at 3%, 47k; at 5%, 27k; at 10%, 12k. The right way to read the table is to find the row for your current baseline, then see how large a lift you can afford to detect. With a 3% baseline and a 10% relative lift to detect, each variant needs 47k samples. If the page gets only 10k visitors a week, the test will run at least 9-10 weeks. If you can't afford that, abandon the test; don't quietly shave the sample size down.
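Those table values can be approximated with the standard two-proportion z-test formula. The sketch below is my own, not the Skill's calculator; the exact figures depend on the significance and power you assume (95% confidence and 80% power here), so they land near, not exactly on, the table's numbers:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Per-variant sample size for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(sample_size_per_variant(0.03, 0.10))   # ~53k per variant
print(sample_size_per_variant(0.10, 0.10))   # ~15k per variant
```

At a 3% baseline and 10% relative lift this returns roughly 53k per variant, the same order of magnitude as the table's 47k, which was presumably computed under slightly different power assumptions.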

Q4: Can I stop as soon as one variant pulls ahead?

No. This is the "peeking problem" the Skill calls out by name: "Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions." The most absurd case I've seen: a product manager watched the dashboard every day, declared victory on day 3 when variant A led by 10%, and watched the result flip by day 7. The moment you stop a test early, you give up the statistical protection the test was supposed to provide.

The Skill also singles out "Add traffic from new sources" as a DON'T: if marketing launches a new ad push halfway through and pulls traffic in from a different channel, the sample composition changes, and the results can no longer be compared directly with the first half.
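The cost of peeking is easy to demonstrate with a simulation. The sketch below is my own illustration, not part of the Skill: it runs A/A tests (both arms identical, so every "winner" is a false positive) and checks significance after each daily batch. Stopping at the first significant look fires far more often than the nominal 5%:

```python
import random
from math import sqrt
from statistics import NormalDist

Z95 = NormalDist().inv_cdf(0.975)  # critical value for two-sided 95% confidence

def significant(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test at the 95% level."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return se > 0 and abs(conv_a / n_a - conv_b / n_b) / se > Z95

def aa_simulation(experiments=300, looks=10, per_look=200, p=0.05, seed=0):
    """Compare 'stop at the first significant look' vs one pre-committed final look."""
    rng = random.Random(seed)
    stopped_early = at_final_look = 0
    for _ in range(experiments):
        conv_a = conv_b = n = 0
        flagged = False
        for _ in range(looks):
            conv_a += sum(rng.random() < p for _ in range(per_look))
            conv_b += sum(rng.random() < p for _ in range(per_look))
            n += per_look
            flagged = flagged or significant(conv_a, n, conv_b, n)
        stopped_early += flagged
        at_final_look += significant(conv_a, n, conv_b, n)
    return stopped_early / experiments, at_final_look / experiments

peek_rate, final_rate = aa_simulation()
print(peek_rate, final_rate)  # peeking inflates the false-positive rate well past 5%
```

With ten looks instead of one, the false-positive rate multiplies even though nothing differs between the arms; that is the protection you forfeit by stopping early.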

Q5: Is it necessary to separate primary, secondary, and guardrail metrics?

Yes. The three play completely different roles. The pricing-page example: Primary is plan selection rate; Secondary is time on page plus plan distribution; Guardrail is support tickets plus refund rate. The primary metric is the single one you call the test on, secondary metrics are your tool for explaining why you won or lost, and guardrail metrics are the emergency brake: if paid conversion rises 5% but refunds rise 20%, that can easily be a losing trade. A test without guardrail metrics is a race car without brakes.
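Whether "paid conversion +5%, refunds +20%" nets out as a loss depends on the baseline refund rate, which is exactly why the guardrail has to be computed rather than eyeballed. A toy check (all numbers hypothetical):

```python
def net_paid_rate(conversion, refund_rate):
    """Share of visitors who pay and do not refund (toy model)."""
    return conversion * (1 - refund_rate)

# Hypothetical baseline: 4% paid conversion, 30% refund rate.
control = net_paid_rate(0.040, 0.30)
# Variant: conversion up 5% relative, refunds up 20% relative.
variant = net_paid_rate(0.040 * 1.05, 0.30 * 1.20)

print(variant < control)  # True: the "winner" nets out behind the control
```

Rerun it with a 10% baseline refund rate and the variant comes out ahead; the same headline lifts flip sign depending on the base rate, so the guardrail threshold belongs in the test plan, not in a post-hoc debate.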

Q6: Can I test several changes at the same time?

The Skill takes a hard line: "Test One Thing — Single variable per test. Otherwise you don't know what worked." The only exception is MVT (multivariate testing), which is purpose-built for combinations of changes, but its traffic demands are brutal: plain A/B needs only Moderate traffic while MVT needs Very high. Force an MVT through without the traffic and you end up with a pile of gray "statistically insignificant" results.

Q7: If the test ends with no significant difference, was it a waste?

No. Under Analyzing Results, the Skill maps all four outcomes to conclusions: Significant winner → implement; Significant loser → keep the control and learn why; No significant difference → need more traffic or a bolder test; Mixed signals → segment and dig deeper.

"No significant difference" is informative in itself: either your variant didn't change enough, or your MDE was set larger than the real gap. A team I once led ran five button-color tests in a row, every one "no significant difference." Only on the sixth did we realize users weren't refusing to click the button; they couldn't find it. Moving it from the footer into the hero section produced a result immediately. If the change isn't bold enough, running the test ten thousand more times won't make it significant.

Q8: What does 95% confidence actually mean?

The Skill's own wording: 95% confidence = p-value < 0.05, meaning there is less than a 5% chance the result is random; not a guarantee, just a threshold. I'd add that 95% is not a sacred number. At an early stage, running at 90% confidence and accepting more false positives in exchange for speed is a perfectly reasonable strategy. But you have to declare the 90% up front, not finish the test, find you missed 95%, and quietly move the threshold.
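To make the threshold concrete, here is the 52% vs 48% champagne scenario from the opening, pushed through a plain two-proportion z-test. The counts are hypothetical (1,000 users per variant), not from any real test:

```python
from math import sqrt
from statistics import NormalDist

def two_sided_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(z))

p = two_sided_p_value(520, 1000, 480, 1000)
print(round(p, 3))  # 0.074: above 0.05, so no champagne at 95% confidence
```

The same result would clear a 90% bar (p < 0.10), which is exactly why the confidence level has to be declared before the test rather than chosen after seeing the p-value.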

Q9: What should be checked before launch?

The Skill's Pre-Launch Checklist: hypothesis documented / primary metric defined / sample size calculated / variants implemented correctly / tracking verified / QA completed. It reads like common sense, yet almost every real failure traces back to "variants implemented correctly" and "tracking verified." The classic incident: variant A's click event was instrumented with the wrong field name, three weeks of data came back empty, and the whole test had to restart. QA is not just checking that the page looks right; click through it by hand, then open the backend and confirm the events are actually landing.

A few notes on traffic allocation

The Skill offers three allocation strategies: Standard 50/50, Conservative 90/10 or 80/20, and Ramping up in small steps. From hands-on experience: the conservative 90/10 split is close to mandatory for pricing-page tests. Touching pricing carries far more risk than a copy change; if your variant drops paid conversion by 30%, you do not want half your traffic eating that loss. Run 10% of traffic for two weeks to confirm nothing catastrophic, then widen to 50/50. Ramping is mainly for technical risk, say a swapped payment SDK behind the checkout button: go 5%, then 20%, then 50%.
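The consistency requirement in the Skill (returning users must see the same variant) combines naturally with ramping when assignment is hash-based. A minimal sketch; the function name and experiment key are mine, not from any particular tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variant_share: float = 0.10) -> str:
    """Deterministically bucket a user into 'variant' or 'control'.

    The same (experiment, user_id) pair always hashes to the same bucket,
    so returning users see the same experience on every visit.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "variant" if bucket < variant_share else "control"

# Same user, same experiment: same answer on every call.
print(assign_variant("user-42", "pricing-v2") == assign_variant("user-42", "pricing-v2"))  # True
```

Because a user's bucket value never changes, raising `variant_share` from 5% to 20% to 50% only moves control users into the variant; nobody flips back and forth between experiences mid-test.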

The segment-analysis trap

The Skill's Analysis Checklist lists six items; the one on segment differences deserves its own discussion. A test can show no difference globally while segments diverge sharply. The most common pattern is mobile and desktop moving in opposite directions: a new landing page winning by 12% on desktop and losing by 8% on mobile. At the same time, the Skill warns against the "cherry-picking segments" anti-pattern. The dividing line is pre-registration: deciding before the test runs that you will break results down by mobile/desktop is legitimate segment analysis; losing overall and then improvising a story about how a "high-value user segment" won is self-deception.

Documentation: Hypothesis / Variants (with screenshots) / Results / Decision and learnings, all of it kept. Team rule: every test is archived within one week of ending, titled [date]-[page]-[variable]-[conclusion]. Six months later, when someone asks "have we ever tested button color?", a single search answers it. A team without this habit will rerun the same experiment three times within a year.

If you would rather start from "what should we test" before moving on to structured design, read this together with the earlier writeups on the SEO audit Skill and the paid-ads Skill; they address the upstream questions of what to test and which channel is worth optimizing. For a retention-side cautionary tale, the churn-prevention Skill writeup covers the case where adding a countdown timer made conversion drop.

Tool pairing

My day-to-day flow: Claude Code runs this Skill for hypothesis drafts and sample-size estimates; for implementation, PostHog handles client-side splits and LaunchDarkly handles server-side flag rollout for high-stakes variants. When I don't feel like doing the statistics by hand, I hand the data to Hermes Agent for a simplified Bayesian estimate. For peer review of the actual code changes, I occasionally run a diff review with a colleague in Cursor.



SKILL Original English Version

The following is the original SKILL.md in English, embedded verbatim for cross-reference.

---
name: "ab-test-setup"
description: When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "conversion experiment," "statistical significance," or "test this." For tracking implementation, see analytics-tracking.
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: marketing
  updated: 2026-03-06
---

# A/B Test Setup

You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.

## Initial Assessment

**Check for product marketing context first:**
If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.

Before designing a test, understand:

1. **Test Context** - What are you trying to improve? What change are you considering?
2. **Current State** - Baseline conversion rate? Current traffic volume?
3. **Constraints** - Technical complexity? Timeline? Tools available?

---

## Core Principles

### 1. Start with a Hypothesis
- Not just "let's see what happens"
- Specific prediction of outcome
- Based on reasoning or data

### 2. Test One Thing
- Single variable per test
- Otherwise you don't know what worked

### 3. Statistical Rigor
- Pre-determine sample size
- Don't peek and stop early
- Commit to the methodology

### 4. Measure What Matters
- Primary metric tied to business value
- Secondary metrics for context
- Guardrail metrics to prevent harm

---

## Hypothesis Framework

### Structure

```
Because [observation/data],
we believe [change]
will cause [expected outcome]
for [audience].
We'll know this is true when [metrics].
```


### Example

**Weak**: "Changing the button color might increase clicks."

**Strong**: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."

---

## Test Types

| Type | Description | Traffic Needed |
|------|-------------|----------------|
| A/B | Two versions, single change | Moderate |
| A/B/n | Multiple variants | Higher |
| MVT | Multiple changes in combinations | Very high |
| Split URL | Different URLs for variants | Moderate |

---

## Sample Size

### Quick Reference

| Baseline | 10% Lift | 20% Lift | 50% Lift |
|----------|----------|----------|----------|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |

**Calculators:**
- [Evan Miller's](https://www.evanmiller.org/ab-testing/sample-size.html)
- [Optimizely's](https://www.optimizely.com/sample-size-calculator/)

**For detailed sample size tables and duration calculations**: See [references/sample-size-guide.md](references/sample-size-guide.md)

---

## Metrics Selection

### Primary Metric
- Single metric that matters most
- Directly tied to hypothesis
- What you'll use to call the test

### Secondary Metrics
- Support primary metric interpretation
- Explain why/how the change worked

### Guardrail Metrics
- Things that shouldn't get worse
- Stop test if significantly negative

### Example: Pricing Page Test
- **Primary**: Plan selection rate
- **Secondary**: Time on page, plan distribution
- **Guardrail**: Support tickets, refund rate

---

## Designing Variants

### What to Vary

| Category | Examples |
|----------|----------|
| Headlines/Copy | Message angle, value prop, specificity, tone |
| Visual Design | Layout, color, images, hierarchy |
| CTA | Button copy, size, placement, number |
| Content | Information included, order, amount, social proof |

### Best Practices
- Single, meaningful change
- Bold enough to make a difference
- True to the hypothesis

---

## Traffic Allocation

| Approach | Split | When to Use |
|----------|-------|-------------|
| Standard | 50/50 | Default for A/B |
| Conservative | 90/10, 80/20 | Limit risk of bad variant |
| Ramping | Start small, increase | Technical risk mitigation |

**Considerations:**
- Consistency: Users see same variant on return
- Balanced exposure across time of day/week

---

## Implementation

### Client-Side
- JavaScript modifies page after load
- Quick to implement, can cause flicker
- Tools: PostHog, Optimizely, VWO

### Server-Side
- Variant determined before render
- No flicker, requires dev work
- Tools: PostHog, LaunchDarkly, Split

---

## Running the Test

### Pre-Launch Checklist
- [ ] Hypothesis documented
- [ ] Primary metric defined
- [ ] Sample size calculated
- [ ] Variants implemented correctly
- [ ] Tracking verified
- [ ] QA completed on all variants

### During the Test

**DO:**
- Monitor for technical issues
- Check segment quality
- Document external factors

**DON'T:**
- Peek at results and stop early
- Make changes to variants
- Add traffic from new sources

### The Peeking Problem
Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. Pre-commit to sample size and trust the process.

---

## Analyzing Results

### Statistical Significance
- 95% confidence = p-value < 0.05
- Means <5% chance result is random
- Not a guarantee—just a threshold

### Analysis Checklist

1. **Reach sample size?** If not, result is preliminary
2. **Statistically significant?** Check confidence intervals
3. **Effect size meaningful?** Compare to MDE, project impact
4. **Secondary metrics consistent?** Support the primary?
5. **Guardrail concerns?** Anything get worse?
6. **Segment differences?** Mobile vs. desktop? New vs. returning?

### Interpreting Results

| Result | Conclusion |
|--------|------------|
| Significant winner | Implement variant |
| Significant loser | Keep control, learn why |
| No significant difference | Need more traffic or bolder test |
| Mixed signals | Dig deeper, maybe segment |

---

## Documentation

Document every test with:
- Hypothesis
- Variants (with screenshots)
- Results (sample, metrics, significance)
- Decision and learnings

**For templates**: See [references/test-templates.md](references/test-templates.md)

---

## Common Mistakes

### Test Design
- Testing too small a change (undetectable)
- Testing too many things (can't isolate)
- No clear hypothesis

### Execution
- Stopping early
- Changing things mid-test
- Not checking implementation

### Analysis
- Ignoring confidence intervals
- Cherry-picking segments
- Over-interpreting inconclusive results

---

## Task-Specific Questions

1. What's your current conversion rate?
2. How much traffic does this page get?
3. What change are you considering and why?
4. What's the smallest improvement worth detecting?
5. What tools do you have for testing?
6. Have you tested this area before?

---

## Proactive Triggers

Proactively offer A/B test design when:

1. **Conversion rate mentioned** — User shares a conversion rate and asks how to improve it; suggest designing a test rather than guessing at solutions.
2. **Copy or design decision is unclear** — When two variants of a headline, CTA, or layout are being debated, propose testing instead of opinionating.
3. **Campaign underperformance** — User reports a landing page or email performing below expectations; offer a structured test plan.
4. **Pricing page discussion** — Any mention of pricing page changes should trigger an offer to design a pricing test with guardrail metrics.
5. **Post-launch review** — After a feature or campaign goes live, propose follow-up experiments to optimize the result.

---

## Output Artifacts

| Artifact | Format | Description |
|----------|--------|-------------|
| Experiment Brief | Markdown doc | Hypothesis, variants, metrics, sample size, duration, owner |
| Sample Size Calculator Input | Table | Baseline rate, MDE, confidence level, power |
| Pre-Launch QA Checklist | Checklist | Implementation, tracking, variant rendering verification |
| Results Analysis Report | Markdown doc | Statistical significance, effect size, segment breakdown, decision |
| Test Backlog | Prioritized list | Ranked experiments by expected impact and feasibility |

---

## Communication

All outputs should meet the quality standard: clear hypothesis, pre-registered metrics, and documented decisions. Avoid presenting inconclusive results as wins. Every test should produce a learning, even if the variant loses. Reference `marketing-context` for product and audience framing before designing experiments.

---

## Related Skills

- **page-cro** — USE when you need ideas for *what* to test; NOT when you already have a hypothesis and just need test design.
- **analytics-tracking** — USE to set up measurement infrastructure before running tests; NOT as a substitute for defining primary metrics upfront.
- **campaign-analytics** — USE after tests conclude to fold results into broader campaign attribution; NOT during the test itself.
- **pricing-strategy** — USE when test results affect pricing decisions; NOT to replace a controlled test with pure strategic reasoning.
- **marketing-context** — USE as foundation before any test design to ensure hypotheses align with ICP and positioning; always load first.

Which of these nine questions have you run into yourself? If you have a live "should we even test this" argument on your hands, bring the case to the experiment-design board of the cocoloop community. We peer-review proposals there against this Skill's pre-launch checklist; compared with endless internal back-and-forth, an outside pair of eyes usually spots the hole in a hypothesis at a glance.

  • Title: Nine A/B testing questions I get asked far too often: an ab-test-setup Skill Q&A
  • Author: Claude 中文知识站
  • Created: 2026-04-14 20:51:07
  • Updated: 2026-04-14 20:51:07
  • Link: https://claude.cocoloop.cn/posts/ab-test-setup-claude-skill/
  • License: This article is licensed under CC BY-NC-SA 4.0.