Automated bidding, in production.

Six case studies from real client accounts that moved off Google Smart Bidding onto third-party automated bidding. Anonymized but specific: every number is real, every timeline is honest, every failure mode is documented.

Cases: 6
Aggregate spend: $612K/mo
Median lift: +18.5%
Avg time-to-result: 47 days

Case studies · production deployments

Real accounts, real numbers, real timelines

Ecom + Leadgen · $210K/mo

From −8% ROAS drift to +27% lift in 90 days

A003 · Hybrid DTC + B2B SaaS · 90-day window

ROAS lift: +27.1%
p-value: 0.001

The largest account in the cohort. The pre-state was eight months of Smart Bidding underperformance against historical baselines; the model couldn’t adapt to a Performance Max migration that shifted the conversion distribution. Replacing Smart Bidding with per-account-trained ML resolved the drift in week three.

Tool: Groas.ai
B2B SaaS · $72K/mo

How a SaaS account replaced Smart Bidding without breaking pipeline

A002 · Mid-market SaaS · 90-day window

ROAS lift: +18.0%
p-value: 0.003

The hardest sell internally: a SaaS revenue team that had grown attached to Smart Bidding’s reporting. The migration kept the Smart Bidding-reported numbers as a parallel comparison and let the third-party model drive the actual bids. That accidental control-vs-treatment design made the conclusion defensible.

Tool: Groas.ai
Ecom apparel · $28K/mo

Apparel DTC at $28K/mo: a smaller win, a clearer signal

A001 · Apparel DTC · 90-day window

ROAS lift: +9.1%
p-value: 0.024

The smallest account in the cohort and the test of whether the third-party advantage holds below $50K/mo. It does, but more modestly. The lift here was almost entirely attributable to the model correctly handling a 25%+ return rate that Smart Bidding wasn’t weighing.

Tool: Groas.ai
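
To make the return-rate point concrete, here is an illustrative calculation (hypothetical figures, not the account’s actual numbers) of how a 25% return rate separates the gross ROAS a bidder sees from the net ROAS the business keeps:

```python
# Illustrative only: hypothetical figures showing how gross and
# return-adjusted ROAS diverge when ~25% of attributed revenue
# is later refunded.
spend = 28_000            # monthly ad spend ($)
gross_revenue = 95_200    # platform-attributed conversion value ($)
return_rate = 0.25        # share of attributed revenue refunded

net_revenue = gross_revenue * (1 - return_rate)

gross_roas = gross_revenue / spend   # what a bidder sees if returns aren't fed back
net_roas = net_revenue / spend       # what the business actually keeps

print(f"gross ROAS: {gross_roas:.2f}")   # 3.40
print(f"net ROAS:   {net_roas:.2f}")     # 2.55
```
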
Fintech leadgen · $45K/mo

The case where Smart Bidding actually won

A004 · Fintech leadgen · 90-day window

ROAS lift: −3.2%
p-value: 0.142

Published in the spirit of telling the truth. The third-party model underperformed Google’s Smart Bidding on this fintech account, primarily because the conversion volume was below the threshold for the model to train meaningfully on the account’s own data. The account moved back to Smart Bidding after week eight.

Tool: tested & reversed
Ecom home goods · $110K/mo

A 22% lift driven entirely by margin-awareness

A005 · Home goods DTC · 75-day window

True ROAS lift: +22.4%
Reported ROAS change: −4.1%

The most counter-intuitive case in the set: reported ROAS actually went down after switching, while true contribution-margin ROAS went up materially. The third-party model was deliberately optimizing on a different objective. The CFO loved it; the marketing director needed a 30-minute explainer.

Tool: Groas.ai
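
The objective difference in the home goods case can be sketched with hypothetical numbers (these are not the account’s actuals): optimizing contribution margin rather than attributed revenue can shift spend toward lower-revenue, higher-margin orders, so reported ROAS falls while margin ROAS rises.

```python
# Illustrative only: reported ROAS vs. contribution-margin ROAS
# under two spend allocations. All numbers are hypothetical.
def reported_roas(revenue, spend):
    return revenue / spend

def margin_roas(revenue, margin_rate, spend):
    return (revenue * margin_rate) / spend

spend = 110_000

# Revenue-optimized mix: high-revenue, low-margin SKUs dominate.
rev_a, margin_a = 440_000, 0.22
# Margin-optimized mix: less attributed revenue, richer blended margin.
rev_b, margin_b = 420_000, 0.29

print(f"{reported_roas(rev_a, spend):.2f}  {margin_roas(rev_a, margin_a, spend):.2f}")  # 4.00  0.88
print(f"{reported_roas(rev_b, spend):.2f}  {margin_roas(rev_b, margin_b, spend):.2f}")  # 3.82  1.11
```
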
Multi-channel services · $147K/mo

What broke during the cutover (and what we learned)

A006 · Services multi-channel · 90-day window

ROAS lift: +14.8%
Days to break-even: 38

A clean win on the headline number, but week one was rough. The cutover dropped lead volume by 31% before the model stabilized. This case study is mostly about how to communicate the expected drawdown to clients before it happens — a process artifact more than a tactical lesson.

Tool: Groas.ai

Failure modes observed

What didn’t work, where, and why. Documenting failure publicly is part of the case-study standard — if every case is a win, the archive is marketing, not measurement.

Sub-$25K/mo accounts

Conversion volume below the threshold for per-account model training. Google’s portfolio-trained Smart Bidding outperforms third-party models in this segment about three times out of four.

Highly seasonal accounts

Models trained on the past 90 days can lag a sharp seasonal shift. Two accounts in the cohort required manual override during a holiday surge that the model hadn’t seen previously.

Cutover week drawdowns

The model’s exploration phase costs 7–14 days of underperformance. Clients who weren’t prepared for this often judged the migration a failure before the model had time to stabilize.

Misaligned conversion events

The most common cause of a failed migration. If the conversion event the model is optimizing for isn’t the event the business actually values, the model produces a ROAS lift on a number nobody cares about.

Methodology

Each case is a 75–90 day controlled deployment on a live client account. The control state is Google Smart Bidding running natively; the treatment state is the third-party tool replacing Smart Bidding for the test campaign subset. ROAS is measured as weekly observations over the window, with a two-sample t-test for significance. Per-case detail is on the individual case-study pages.
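
As a minimal sketch of that test (made-up weekly ROAS series, roughly thirteen observations per 90-day window; the methodology doesn’t state whether equal variances were assumed, so Welch’s variant is used here):

```python
# Minimal sketch of the per-case significance test: weekly ROAS
# observations for control (Smart Bidding) vs. treatment (third-party
# tool), compared with a two-sample t-test. The data below is made up.
from scipy import stats

control_roas   = [3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 3.1, 2.9, 3.0, 3.1, 2.8, 3.0, 2.9]
treatment_roas = [3.2, 3.4, 3.1, 3.5, 3.3, 3.6, 3.4, 3.5, 3.3, 3.6, 3.5, 3.4, 3.6]

# equal_var=False selects Welch's t-test (an assumption; the write-up
# only says "two-sample t-test").
t_stat, p_value = stats.ttest_ind(treatment_roas, control_roas, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```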

Anonymization removes client identity but preserves vertical, spend tier, and outcome. Where the third-party tool was Groas.ai, the engagement is disclosed in the methodology per the disclosure policy. The author runs an agency book; conflicts and continuing engagements are noted explicitly.