
A/B Testing

Optimize dunning campaign performance with A/B tests — test subject lines, send times, channels, and message copy to maximize recovery rates.

A/B testing lets you compare different versions of your campaign messages to find what recovers the most revenue. LostChurn supports structured experiments with automatic traffic splitting, statistical tracking, and winner declaration.

What Can Be Tested

LostChurn supports four A/B test types, each targeting a different aspect of your dunning campaigns:

Test Type     | What Varies                                    | Example
------------- | ---------------------------------------------- | -------
Email Subject | Subject line of email templates                | "Update your payment" vs "Your subscription is at risk"
Send Time     | Hour of day the message is delivered           | Morning (9 AM) vs afternoon (2 PM) vs evening (7 PM)
Channel       | Which channel delivers the message             | Email vs SMS vs WhatsApp
Campaign      | Entire campaign flow (steps, timing, channels) | 3-step gentle flow vs 5-step aggressive flow

Email Subject Tests

The most common test type. Create two or more subject line variants for the same email body to measure which drives higher open and click-through rates.

Send Time Tests

Test different delivery hours to find when your customers are most responsive. LostChurn adjusts the send time per variant while keeping the message content identical.

Channel Tests

Compare recovery rates across different channels. This is especially valuable when you are adding a new channel (e.g., WhatsApp) and want to measure its effectiveness against your existing email or SMS flow.

Campaign Tests

The most comprehensive test type. Compare entirely different campaign structures — different numbers of steps, different delays, different channel mixes. Use this to validate major campaign strategy changes before rolling them out.

Setting Up an A/B Test

Step 1: Create the Test

Navigate to Campaigns > A/B Tests > New Test. Provide:

  • Name — A descriptive name (e.g., "Q1 Subject Line Test - Urgency vs Empathy")
  • Description — Optional notes about the hypothesis you are testing
  • Test type — Select from EmailSubject, SendTime, Channel, or Campaign

Step 2: Define Variants

Add two or more variants. Each variant needs:

  • Variant name — A label like "control", "variant_a", "variant_b"
  • Weight — The percentage of traffic to send to this variant (all weights must sum to 100)
  • Configuration — The variant-specific settings (subject line text, send hour, channel, or campaign reference)

Example: Subject line test with three variants

Variant   | Weight | Subject Line
--------- | ------ | ------------
Control   | 40%    | "Your payment didn't go through"
Variant A | 30%    | "Action needed: update your card"
Variant B | 30%    | "We want to keep you as a subscriber"

Giving the control a higher weight is a common practice when you want to limit risk while testing new approaches.
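The weight rule can be sketched as a small validation check. The field names below ("name", "weight", "subject") are illustrative only, not LostChurn's actual schema:

```python
# Hypothetical variant configuration for the subject-line test above.
variants = [
    {"name": "control",   "weight": 40, "subject": "Your payment didn't go through"},
    {"name": "variant_a", "weight": 30, "subject": "Action needed: update your card"},
    {"name": "variant_b", "weight": 30, "subject": "We want to keep you as a subscriber"},
]

def validate_weights(variants) -> None:
    """Reject a test whose variant weights do not sum to exactly 100."""
    total = sum(v["weight"] for v in variants)
    if total != 100:
        raise ValueError(f"variant weights sum to {total}, expected 100")

validate_weights(variants)  # 40 + 30 + 30 == 100, so this passes
```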

Step 3: Attach to a Campaign Step

A/B tests are connected to campaign steps through the step's A/B variants configuration. When building or editing a campaign step:

  1. Enable A/B testing on the step
  2. Select the test you created
  3. Map each variant to a template

When a customer reaches that step, LostChurn assigns them to a variant based on the configured weights and tracks their outcome.

How Traffic Splitting Works

LostChurn assigns each customer to a variant when they reach the A/B-tested campaign step. The assignment is:

  • Deterministic — A customer always sees the same variant, even if the campaign is paused and resumed
  • Weighted — Assignments follow the configured weight distribution
  • Tracked — Every assignment is recorded with the customer ID, variant ID, and timestamp

The total_assignments counter on the test and the impressions counter on each variant update in real time so you can monitor the split.
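A common way to get assignment that is both deterministic and weighted is to hash the customer ID into a stable bucket. The sketch below illustrates the technique; it is not necessarily LostChurn's exact implementation:

```python
import hashlib

variants = [
    {"name": "control",   "weight": 40},
    {"name": "variant_a", "weight": 30},
    {"name": "variant_b", "weight": 30},
]

def assign_variant(customer_id: str, test_id: str, variants) -> str:
    """Deterministically map a customer to a variant.

    Hashing (test_id, customer_id) yields a stable bucket in 0..99, so
    the same customer always gets the same variant, and across many
    customers assignments follow the configured weights.
    """
    digest = hashlib.sha256(f"{test_id}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for variant in variants:
        cumulative += variant["weight"]
        if bucket < cumulative:
            return variant["name"]
    raise ValueError("variant weights must sum to 100")

# The same customer always lands on the same variant:
assert assign_variant("cus_123", "q1_subject_test", variants) == \
       assign_variant("cus_123", "q1_subject_test", variants)
```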

Reading Results

Navigate to Campaigns > A/B Tests and select your test to see the results dashboard. Key metrics for each variant:

Metric            | Description
----------------- | -----------
Impressions       | Number of customers who received this variant
Conversions       | Number of customers whose payment was recovered after receiving this variant
Conversion rate   | Conversions / Impressions
Revenue recovered | Total dollar amount recovered by this variant

Conversion Tracking

A conversion is recorded when a customer's payment is successfully recovered after being assigned to a variant. LostChurn attributes the recovery to the variant using the same attribution methodology used for overall campaign performance.

Statistical Significance

LostChurn calculates statistical significance to help you determine when you have enough data to declare a winner.

When Is a Result Significant?

A result is considered statistically significant when there is at least 95% confidence that the observed difference in conversion rates is real and not due to random chance.

Guidelines for reliable results:

  • Run the test for at least 7 days to account for day-of-week effects
  • Aim for at least 100 impressions per variant before drawing conclusions
  • For small differences in conversion rates (less than 2 percentage points), you may need 500+ impressions per variant
  • Do not stop a test early just because one variant is ahead — early results are often misleading
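A standard way to check whether two conversion rates differ at 95% confidence is a two-sided two-proportion z-test. The sketch below shows that approach; it is not necessarily the exact test LostChurn runs:

```python
import math

def significance(conversions_a: int, n_a: int,
                 conversions_b: int, n_b: int) -> float:
    """Confidence (0..1) that two conversion rates truly differ.

    Two-sided two-proportion z-test: pool the rates, compute the
    standard error of the difference, and convert the z-score to a
    confidence level via the normal CDF.
    """
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# 8% vs 12% conversion over 500 impressions each clears 95%:
print(significance(40, 500, 60, 500) > 0.95)  # prints True
```

Note how the same 4-point gap over only 100 impressions per variant would not be significant, which is why the impression guidelines above matter.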

Interpreting Results

The results dashboard shows:

  • Confidence level — Percentage likelihood that the winning variant is truly better
  • Lift — Percentage improvement of the best variant over the control
  • Recommended action — LostChurn suggests whether to declare a winner, continue testing, or stop the test if no variant shows improvement
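Lift is simple arithmetic: the relative improvement of a variant's conversion rate over the control's. A minimal sketch:

```python
def lift(control_rate: float, variant_rate: float) -> float:
    """Relative improvement of a variant over the control, in percent."""
    return (variant_rate - control_rate) / control_rate * 100

# A control converting at 8% and a variant converting at 10%
# is a 25% lift (0.02 / 0.08).
print(round(lift(0.08, 0.10)))  # prints 25
```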

Declaring a Winner

When you are confident in the results:

  1. Click Declare Winner on the winning variant
  2. LostChurn updates the campaign step to use the winning variant's template for all future enrollments
  3. The test is marked as complete and stops assigning new traffic

You can also choose End without winner if the results are inconclusive, which reverts the step to its original template.

Best Practices

  • Test one thing at a time — Changing multiple variables makes it impossible to know what caused the difference
  • Document your hypothesis — Use the description field to note what you expect and why
  • Run tests to completion — Resist the urge to stop early when one variant looks promising
  • Share results with your team — Use Slack notifications to alert your team when a test reaches significance
  • Iterate — Use the winning variant as the new control in your next test
