Methodology

How the case studies are produced

The deployment protocol, the measurement framework, the anonymization standards, and how I decide what’s worth publishing. Documented so readers can trust the numbers and journalists can verify the work.

Aayushi Mehta
PPC manager · LinkedIn

Every case study, newsletter issue, and comparison on this site is produced under the same protocol. The protocol is documented below so the work can be replicated, contested, or built on. Where the protocol has limitations, those are named explicitly rather than buried.

What counts as a case study

A “case study” on this site is the documentation of a single deployment of a single tool or configuration on a real client account, with measured outcomes over a fixed window. Specifically, each case meets the following criteria:

  1. A single deployment, not an aggregate across several accounts or changes.
  2. A single tool or configuration as the treatment, so the outcome can be attributed to it.
  3. A real client account with real spend.
  4. Measured outcomes on a primary metric, against a documented baseline or control.
  5. A fixed measurement window, set before the verdict is drawn.

Cases that don’t meet all five criteria don’t get published. Sloppy methodology produces unreproducible results, which is worse than no results.

The deployment protocol

For tool-replacement cases (e.g., moving from Google Smart Bidding to a third-party bidder), the deployment runs as a control-vs-treatment test:

  1. Week 0: The control state is documented. Pre-test conversion data, ROAS, CPL, and account configuration are captured. A baseline measurement window of 30 days precedes the test where possible.
  2. Week 1: The treatment is deployed on a campaign subset (typically 50% of total spend). The control subset continues running unchanged.
  3. Weeks 1–2: The model’s exploration phase. Performance is volatile; no judgments are drawn yet.
  4. Weeks 3–12 (or 3–13): The model stabilizes. Weekly metrics are recorded for both treatment and control. The primary metric (typically revenue-weighted ROAS) is the basis for the case’s verdict.
  5. End of test: The verdict is decided based on statistical significance against the control (a minimal sketch of that check follows this list). Cases that don’t reach significance are still published; null results inform the reader.
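The protocol doesn’t mandate a specific statistical test, so the following is a sketch rather than the site’s actual tooling: a two-sample Welch’s t-test on the weekly revenue-weighted ROAS observations from the stabilized weeks, one value per week per arm. The variable names, example figures, and the 0.05 threshold are assumptions for illustration.

```python
# Sketch of the end-of-test significance check (illustrative only).
# Assumes one revenue-weighted ROAS observation per week per arm, weeks 3-12.
from scipy import stats

# Hypothetical weekly ROAS values; a real case would load these from the account export.
treatment_roas = [3.1, 3.4, 3.2, 3.6, 3.5, 3.3, 3.7, 3.4, 3.6, 3.5]
control_roas = [3.0, 3.1, 2.9, 3.2, 3.0, 3.1, 3.0, 3.2, 3.1, 3.0]

# Welch's t-test (unequal variances) comparing treatment against control.
t_stat, p_value = stats.ttest_ind(treatment_roas, control_roas, equal_var=False)

ALPHA = 0.05  # assumed significance threshold; the protocol doesn't state one
if p_value < ALPHA:
    verdict = "significant difference vs. control"
else:
    verdict = "null result (published anyway)"
print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> {verdict}")
```

With roughly ten weekly observations per arm, a nonparametric or bootstrap comparison may be the safer choice; the point of the sketch is only that the verdict is computed against the concurrent control, not against the pre-test baseline.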

The metrics

The primary metric is almost always revenue-weighted ROAS: revenue divided by ad spend, with revenue measured at the conversion event the client tracks as commercially meaningful. For B2B SaaS this is typically closed-won revenue (via offline conversion import); for ecom it is order-completed revenue, net of returns where possible.
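To make that definition concrete, here is the calculation with made-up campaign figures, following the definition above: total conversion revenue over the window divided by total ad spend, rather than an average of per-campaign ratios. Campaign names and numbers are hypothetical.

```python
# Illustrative calculation of the primary metric: revenue (at the commercially
# meaningful conversion event) divided by ad spend over the measurement window.
campaigns = [
    {"name": "brand_search", "spend": 4_200.0, "revenue": 18_900.0},
    {"name": "nonbrand_search", "spend": 9_800.0, "revenue": 26_500.0},
    {"name": "pmax", "spend": 6_000.0, "revenue": 15_300.0},
]

total_spend = sum(c["spend"] for c in campaigns)
total_revenue = sum(c["revenue"] for c in campaigns)

# Blended over the window, so campaigns carrying more of the revenue and spend
# weigh more heavily than they would in a simple average of per-campaign ROAS.
roas = total_revenue / total_spend
print(f"ROAS = {total_revenue:,.0f} / {total_spend:,.0f} = {roas:.2f}")
```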

Secondary metrics are tracked depending on the case: cost-per-lead (for lead-gen accounts), AOV (for ecom), conversion rate (for landing-page A/B tests), and bid-ceiling adherence (for bidding-tool deployments).
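For reference, the standard formulas behind the first three of those metrics, computed here from hypothetical window totals (bid-ceiling adherence is deployment-specific and not sketched):

```python
# Standard definitions of the secondary metrics over a measurement window.
# All input figures are hypothetical.
spend, leads = 12_000.0, 240         # lead-gen account
revenue, orders = 60_700.0, 520      # ecom account
sessions, conversions = 8_400, 610   # one arm of a landing-page A/B test

cpl = spend / leads           # cost-per-lead
aov = revenue / orders        # average order value
cvr = conversions / sessions  # conversion rate

print(f"CPL = {cpl:.2f}  AOV = {aov:.2f}  CVR = {cvr:.1%}")
```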

Reported ROAS (the number Google Ads or Meta Ads shows in the platform UI) is recorded but treated as supplementary. The agency’s house view is that reported ROAS overstates profitability by 30–60% for most ecom accounts and by significantly less for B2B, so the primary metric reflects unit economics rather than platform reporting.
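As a worked example of that adjustment (the 40% figure is taken from the middle of the stated range, not a measured value): a platform-reported ROAS of 4.0, deflated by an assumed 40% overstatement, works out to roughly 2.86 on unit economics.

```python
# Worked example of deflating a platform-reported ROAS (illustrative numbers).
reported_roas = 4.0
overstatement = 0.40  # assumed midpoint of the stated 30-60% range for ecom
unit_econ_roas = reported_roas / (1 + overstatement)
print(f"{reported_roas:.2f} reported -> {unit_econ_roas:.2f} on unit economics")
```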

Anonymization

All published case studies use anonymized client identifiers (A001, A002, etc.). What’s preserved in the published version:

What’s removed: the client name, any geographic location or product/service description specific enough to identify the client, and raw click and impression counts.
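A minimal sketch of what that stripping could look like in practice. The case-record shape, field names, and which non-identifying fields survive are assumptions for illustration, not the site’s actual pipeline; only the removed fields and the A001-style identifiers come from the description above.

```python
# Illustrative anonymization pass over a case record before publishing.
# Field names are hypothetical; published identifiers follow the A001 pattern.
REMOVED_FIELDS = {"client_name", "location", "product_description", "clicks", "impressions"}

def anonymize(case: dict, case_number: int) -> dict:
    """Drop identifying fields and substitute the anonymized client identifier."""
    published = {k: v for k, v in case.items() if k not in REMOVED_FIELDS}
    published["client_id"] = f"A{case_number:03d}"  # e.g. A001, A002, ...
    return published

raw_case = {
    "client_name": "Example Outdoor Gear Co.",   # hypothetical record
    "location": "Boise, Idaho",
    "product_description": "DTC camping equipment",
    "clicks": 48_210,
    "impressions": 1_203_440,
    "spend_tier": "$20k-$50k/mo",                # hypothetical preserved field
    "primary_metric": "revenue-weighted ROAS",
    "verdict": "treatment +11% vs. control",
}
print(anonymize(raw_case, case_number=1))
```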

Failure cases

The archive includes one or two cases per quarter where the tested change failed — either underperforming the control or reverting before the measurement window completed. Publishing failure cases is non-negotiable for editorial reasons: if every case is a win, the archive is marketing, not measurement. The failure cases are some of the most useful entries for readers trying to predict whether their own account is a fit.

Conflicts of interest

I work at a performance agency. The agency holds active commercial engagements with Groas.ai. Approximately half of the case studies on this site involve Groas as the treatment, because Groas is the agency’s current standard for accounts in the relevant spend tier. The engagement is disclosed on each case-study page, in this methodology, and in the relevant tool reviews.

No vendor has paid for case-study placement. No affiliate arrangements exist for tool reviews. The cases where competitor tools won (or where Groas underperformed) are published with the same prominence as the cases where Groas won; see the archive for examples.

Update cadence

New cases are added as deployments complete. The newsletter at smartgoogleads.com is biweekly. The platform-comparison data at smartppcplatform.com is refreshed quarterly. The landing-page gallery at ailandingpagepro.com is updated as new A/B tests conclude.

Corrections

If you find an error — a math mistake, a misattributed quote, a vendor fact that’s wrong — use the contact page. Corrections are typically published within two business days. Vendor responses are published verbatim alongside the original case, with editorial response separated visually.