How to test a digital ad campaign before you scale spend: the four tests we run


Testing a digital ad campaign before scaling spend means matching each test to where the campaign is leaking. Artwork tests when impressions don't earn clicks. Landing page tests when clicks don't convert. A/B tests to isolate variables. Brand-lift studies when the goal isn't direct response. Run the test before you raise the budget, not after you've already spent it.

The cost of skipping the test

Most underperforming digital campaigns don't fail because the idea is wrong. They fail because the team scaled spend before anyone ran a test to prove the idea worked. Doubling the budget on a campaign that isn't performing doesn't produce more conversions. It produces more proof that the current version isn't resonating. At 2x the spend, the loss is 2x.

From our project work across industries, the teams that get the most out of digital spend share one habit: they test at small spend, learn from the result, and only scale what the test validated. The teams that don't are usually the ones explaining why the quarter missed.

 

The diagnostic: four tests, matched to where the campaign is leaking

Not every campaign needs every test. Before picking a method, look at where in the funnel the campaign is actually losing people.

 

If impressions aren't earning clicks → Artwork Testing

Symptom: reach is high, CTR is low. The audience is seeing the ad and scrolling past.

This is a creative problem, not a targeting problem. The fix is comparing creative variants that isolate one dimension at a time. Keep the size and aspect ratio identical. Keep the audience, the budget, and the time window identical. Change one thing: the hero image, the headline, the tone of the copy, or the call-to-action. If it's a fashion ad, variant A might feature a model in black and variant B in white. If it's a message test, variant A might lead with price and variant B with a pain point. Run them in parallel to the same audience, let each collect enough impressions to reach significance, and read the winner.
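To make "enough impressions to reach significance" concrete, here is a minimal sketch of a two-proportion z-test comparing two variants' CTRs. The impression and click counts, and the 0.05 threshold, are hypothetical placeholders, not platform defaults; your ad platform's reporting or your analyst's preferred method may differ.

```python
# Minimal sketch: compare two ad variants' CTRs with a two-proportion z-test.
# The impression and click counts below are hypothetical placeholders.
from math import sqrt
from statistics import NormalDist

def ctr_significance(clicks_a, imps_a, clicks_b, imps_b):
    """Return (ctr_a, ctr_b, p_value) for a two-sided two-proportion z-test."""
    ctr_a, ctr_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)            # pooled click rate
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))  # standard error
    z = (ctr_a - ctr_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))                  # two-sided p-value
    return ctr_a, ctr_b, p_value

# Hypothetical reads after both variants ran in parallel to the same audience:
ctr_a, ctr_b, p = ctr_significance(clicks_a=240, imps_a=20_000,
                                   clicks_b=310, imps_b=20_000)
print(f"Variant A CTR {ctr_a:.2%}, Variant B CTR {ctr_b:.2%}, p = {p:.4f}")
# Read the winner only if p is below your chosen threshold (commonly 0.05);
# otherwise keep collecting impressions.
```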

Testing creative at small spend is the single highest-ROI test most teams don't run. A 30 to 40% CTR difference between variants is common, and if you found that out at 1% of the final budget, you've just saved 30 to 40% of the next 99%.

 

If clicks aren't converting → Landing Page Testing

Symptom: CTR is reasonable, conversion rate is low. People are clicking through and leaving without acting.

This is a landing page problem. The ad did its job. The page didn't close the loop. Common variables worth isolating: the position and copy of the primary CTA, the length of the form, the headline match between ad and page, page load time, and mobile layout. For teams with meaningful traffic, experimentation tools like Optimizely or VWO (both report into GA4), or in-CMS testing inside platforms like Webflow and WordPress, let you run these tests without building multiple page versions manually. For teams with less traffic, a more modest approach works: ship one change, measure, then ship the next.

A practical note for anyone still following older references: Google Optimize was retired in September 2023. If that's what your team used, migrate to a dedicated testing platform that integrates with GA4 before running the next test.

 

If you have multiple variables to isolate → A/B Testing inside the ad platform

Symptom: the campaign has several open questions at once (time of day, audience segment, message angle, CTA button copy) and you want structured answers without running a separate campaign for each.

Meta's A/B Testing in Ads Manager, Google Ads experiments, TikTok Ads split testing, and LinkedIn Campaign Manager experiments all let you isolate one variable inside a single campaign setup. The platform handles the audience split, budget allocation, and statistical comparison. This is where a question like "which CTA converts best for our audience, Sign Up, Contact Us, or Apply Now?" stops being an opinion and starts being a numerical answer.

The rule we use: one variable per test. Testing creative and audience at the same time produces a result you can't interpret, because you can't tell which change drove the difference.

 

If the goal isn't direct response → Brand Lift or Conversion Lift Study

Symptom: the campaign is awareness-oriented, or direct-response attribution isn't reliable, and you want to know whether the exposure actually moved perception or behavior.

Brand Lift studies (available on Meta, Google, YouTube, and other major platforms) survey a sample of people who were exposed to the ad and a matched sample who weren't, asking the same set of brand questions (recall, favorability, purchase intent). The delta is the lift. Conversion Lift does the same thing for on-site behavior: did exposed users convert at a higher rate than the matched unexposed group?
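As a rough illustration of how a lift read works (the platforms compute and report this for you), the sketch below takes hypothetical survey counts for an exposed group and a matched control group and reports the absolute and relative lift. All numbers are placeholders.

```python
# Rough illustration of a lift read: exposed vs. matched unexposed group.
# Survey counts are hypothetical placeholders; platforms report this for you.

def lift(exposed_yes, exposed_n, control_yes, control_n):
    """Absolute and relative lift in positive response rate."""
    exposed_rate = exposed_yes / exposed_n
    control_rate = control_yes / control_n
    absolute = exposed_rate - control_rate
    relative = absolute / control_rate if control_rate else float("nan")
    return exposed_rate, control_rate, absolute, relative

# Hypothetical: the same recall question asked of both groups.
exp_rate, ctl_rate, abs_lift, rel_lift = lift(
    exposed_yes=540, exposed_n=3_000, control_yes=420, control_n=3_000)
print(f"Exposed {exp_rate:.1%}, control {ctl_rate:.1%}, "
      f"lift {abs_lift:+.1%} ({rel_lift:+.0%} relative)")
```

The same structure applies to Conversion Lift, with conversion counts in place of survey responses.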

These matter more now than they used to, for a reason worth its own section.

 

What testing looks like in a privacy-constrained world

Third-party cookie deprecation, iOS App Tracking Transparency, and the shift to server-side conversions have fundamentally changed what reliable measurement looks like in digital advertising. Click-through attribution (the data most teams still build campaign decisions on) is increasingly noisy, and the gap between what the platform reports and what actually happened on your site has widened.

This changes testing in two ways.

First, pre-click signals (CTR, engagement rate, creative-level metrics) are now more trustworthy relative to post-click data, because they're measured on the platform and don't depend on cross-domain tracking. Prioritize them when you can.

Second, post-click measurement increasingly needs platform-side lift studies to cross-check what your own analytics suggest. Brand Lift and Conversion Lift are the platform's answer to attribution noise: they compare exposed and unexposed cohorts instead of trying to trace each click.

The old playbook was "test everything against conversion rate." The new playbook is "test creative and engagement on-platform, validate business outcomes with lift studies and first-party data, and treat cross-domain click-through attribution with appropriate skepticism."

 

The operational structure: what to set up before pressing Go

A test produces useful answers only if the setup doesn't compromise them. Before launching any campaign test:

- One variable per test. If you're testing creative, hold audience and placement constant. If you're testing audience, hold creative constant.

- Same dimensions for variants. Identical aspect ratios, lengths, and formats. A 1:1 image vs. a 16:9 video isn't a test. It's two different campaigns.

- Same time window. Weekends and weekdays perform differently. Run variants simultaneously or in directly comparable windows.

- One clearly defined primary metric. Secondary metrics are interesting, but the decision gets made on the primary.

- A minimum per-variant spend that gives each arm enough data to reach significance. Undersampled tests produce confident-looking noise. (A rough sizing sketch follows this list.)

- An explicit learn-vs-scale split in the monthly budget. We recommend carving out 10 to 20% of spend for learning. Below that, you're running low-signal tests. Above it for sustained periods, you're running a proper experiment program, which is fine, but name what you're doing.
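The per-variant spend bullet is the one teams most often guess at. A rough sizing sketch, assuming you can estimate a baseline CTR and the smallest difference worth detecting (the baseline, the detectable difference, and the CPM below are all hypothetical inputs), converts those into impressions per arm, which a CPM then turns into a minimum budget.

```python
# Rough per-arm sample-size sketch for a two-variant CTR test.
# Baseline CTR, minimum detectable difference, and CPM are hypothetical inputs.
from statistics import NormalDist

def impressions_per_arm(baseline_ctr, min_detectable_diff,
                        alpha=0.05, power=0.80):
    """Approximate impressions each variant needs to detect the given CTR difference."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline_ctr, baseline_ctr + min_detectable_diff
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2) + 1

n = impressions_per_arm(baseline_ctr=0.010, min_detectable_diff=0.003)
cpm = 5.00  # hypothetical cost per 1,000 impressions
print(f"~{n:,} impressions per variant, "
      f"~${n / 1000 * cpm:,.0f} per arm at a ${cpm:.2f} CPM")
```

If the implied budget per arm is larger than the learning carve-out allows, test a bigger difference or fewer variants rather than accepting an undersampled read.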

 

What NOT to test

Testing isn't free, and not every campaign benefits from it. Skip the test when:

- You won't act on the result. If the creative is shipping Monday regardless of what the test says, the test is just making you feel better.

- You don't have traffic to reach significance. A test that takes four months to read is a test that wasn't worth running at that spend level. Either concentrate the spend or skip the test.

- You're testing ten things at once. Multivariate testing has its place, but only when you have enough traffic to power it. For most mid-market budgets, one variable at a time is the format that produces readable answers.

- The launch is genuinely time-sensitive. A perfect test that pushes the campaign past its window costs more than a good-enough launch plus post-hoc learning.

 

Testing cheaply is better than testing badly. When in doubt, run a smaller, cleaner test, not a bigger, messier one.


Writer: Chatarin Inmuang, Digital Marketer