Performance Max is the campaign type ecommerce founders love to hate. The honest answer is that it works brilliantly for some accounts and bleeds money for others, and the difference is predictable. Here is the decision framework we use to determine whether a brand should lean in, throttle back, or switch off Performance Max entirely - and the migration path when the answer is the third one.
A D2C skincare founder reached out to us last quarter convinced Performance Max was destroying her business. She had moved from Standard Shopping to Performance Max nine months earlier on her agency's recommendation, watched her reported ROAS climb from 3.4 to 5.8, and felt vindicated for the first six weeks. Then revenue plateaued, blended ROAS started slipping, and her finance team flagged that new-customer count had quietly dropped 38 percent over the same period.
When we pulled the account, the diagnosis took 20 minutes. Forty-three percent of Performance Max spend was going to her own brand name and close variants. The campaign was not acquiring customers, it was harvesting demand her brand investment had already created and billing it back to the paid media line. The reported ROAS was correct. It was also irrelevant. The incremental ROAS, measured against an aggressive holdout test, was approximately 0.7.
Two weeks later we had her on a different structure: brand exclusions added to Performance Max, a separate brand defence Search campaign at a fraction of the previous brand cost-per-click, and a Standard Shopping campaign running on her top 40 SKUs with surgical bid control. New-customer rate recovered to within 8 percent of her previous baseline within six weeks. Reported ROAS dropped from 5.8 to 4.2, which made her CFO nervous for the first month, then stopped mattering when blended revenue started growing again.
This article is the framework we use to make that call before it becomes a recovery project. Three sections. When Performance Max works for ecommerce. When it does not. And the migration playbook when the honest answer is that you should be running something else.
The framing problem with Performance Max
Performance Max is not a campaign type in the way Search or Standard Shopping is a campaign type. It is an automation surface that decides, on your behalf, which Google network to bid on, which audience to target, which product to surface, and which assets to combine into the served ad. You provide assets, audience signals, conversion data, and budget. The algorithm provides the rest.
This is leverage, and like all leverage it amplifies the quality of the input. Feed it clean conversion data, accurate value signals, fresh first-party audience lists, and a healthy product feed, and Performance Max can outperform any structure you could build manually. Feed it stale customer match lists, ambiguous conversion goals, weak product titles, and no value rules, and it will spend your budget on the laziest path to a reportable conversion - which is almost always your own branded demand.
The debate about whether Performance Max is good or bad is, in this sense, the wrong debate. The right debate is whether your account meets the preconditions Performance Max needs to actually do its job. We covered the broader principle of optimising for the right metric stack in our piece on the 10 performance marketing metrics worth tracking, and the same principle applies here. The platform-reported numbers are not the numbers your finance team should be running the business on.
When Performance Max works for ecommerce
Three preconditions. If all three are present, Performance Max is usually the right choice and you should lean into it. If two are present, run it cautiously alongside Standard Shopping. If one or zero are present, you are setting money on fire and should start the migration playbook below.
Precondition 1: at least 30-50 conversions per asset group per month
The Performance Max algorithm needs signal to learn. The official threshold Google publishes is approximately 50 conversions per asset group to exit the formal learning phase, but in practice the campaign continues to refine for several weeks beyond that, and the asset group composition itself stops being noisy only once you have several hundred conversions feeding it.
Below 30 conversions per asset group per month, the campaign cannot distinguish signal from noise. It will spend most of its budget exploring rather than exploiting, and the exploration phase coincides with the period your finance team is watching the campaign most closely. Most accounts that switch off Performance Max in the first 14 days do so during the learning phase, when underperformance is structural and unavoidable.
If you are below the conversion threshold, the right response is not "give it more time." It is to recognise that the algorithm cannot do its job at your conversion volume and to either consolidate asset groups, run a smaller and more focused Performance Max campaign, or move to Standard Shopping where the optimisation logic is more transparent and works at lower data volumes.
Precondition 2: clean first-party data and a fresh customer match list
The single highest-impact audience signal you can give Performance Max is a customer match list refreshed at least every 30 days, segmented into at least three tiers (last 30 days, last 90 days, and lifetime customers), and uploaded with email plus phone plus first name plus zip wherever possible to maximise match rate. Most ecommerce accounts upload a single static list of all-time customers, refresh it once a quarter, and wonder why their audience signal does not move performance.
The reason this matters is that Performance Max uses the customer match list both as a targeting signal and as a similarity model anchor. The algorithm finds prospects that look like your existing customers, and the quality of that lookalike model is bounded by the quality of the seed list. A stale, undifferentiated list produces a generic lookalike model. A fresh, segmented list produces a precise model that can find high-LTV prospects at meaningfully lower cost.
This is also the area where most accounts fail without realising it. The lift from refreshing customer match data is large enough that we treat it as a precondition rather than an optimisation. If your team cannot commit to a 30-day refresh cycle, you are running Performance Max with one of its three engines disabled.
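The refresh job described above can be sketched in a few lines. The tuple shape and field names here are assumptions, not any particular CRM's schema; the hashing step reflects the Customer Match requirement that emails be uploaded as SHA-256 hashes of the trimmed, lowercased address:

```python
import hashlib
from datetime import date

def normalize_and_hash(email: str) -> str:
    # Customer Match expects SHA-256 hashes of trimmed, lowercased emails.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def tier_customers(customers: list[tuple[str, date]], today: date) -> dict:
    """Split customers into the three tiers described above: last 30 days,
    last 90 days, and lifetime. Tiers overlap by design, so a recent buyer
    appears in all three lists."""
    tiers = {"last_30d": [], "last_90d": [], "lifetime": []}
    for email, last_order in customers:
        hashed = normalize_and_hash(email)
        age_days = (today - last_order).days
        if age_days <= 30:
            tiers["last_30d"].append(hashed)
        if age_days <= 90:
            tiers["last_90d"].append(hashed)
        tiers["lifetime"].append(hashed)
    return tiers
```

Schedule this on the 30-day cycle the precondition demands; a one-off script that runs quarterly is exactly the failure mode described above.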
Precondition 3: enough margin to fund a 4-6 week learning period
Performance Max performance during the first 4 to 6 weeks is structurally worse than what the campaign will deliver after. This is not a flaw, it is the cost of the algorithm exploring the audience and asset space to find what works. Brands that have margin to absorb that learning period make money on Performance Max. Brands that do not, kill the campaign during the worst weeks of its lifecycle and never see the version that would have worked.
The practical test is: can your account tolerate a 30-40 percent reduction in blended ROAS for six weeks without forcing a budget cut? If yes, you can run Performance Max. If no, you should either restrict it to a smaller share of the budget while you build runway, or skip it entirely until your margin profile improves. The same constraint applies to most expensive learning experiments in performance marketing, which we covered in our analysis of performance marketing versus brand investment - the budget that funds learning is a different category from the budget that funds harvesting, and confusing them is how learning campaigns get killed too early.
When Performance Max does not work for ecommerce
Five patterns where Performance Max is the wrong choice regardless of how much budget you throw at it. If you recognise your account in any of these patterns, the migration playbook below is more relevant than the optimisation playbook.
Pattern 1: brand search is more than 35 percent of paid spend
Run an N-gram analysis on the Performance Max search terms report (now exposed in the Insights tab and via the API for accounts of sufficient size). Sum the spend on your brand name, common misspellings, brand-plus-product queries, and competitor-brand-plus-yours queries. If that share exceeds 35 percent of total Performance Max spend, the campaign is harvesting demand you already had and you are paying Performance Max bid prices for it.
The fix is brand exclusions in the campaign settings, which prevents the campaign from bidding on your brand and close variants. The savings then move to a separate brand defence Search campaign where the cost per click is significantly lower because the auction has fewer competitors. We have seen accounts recover 18 to 32 percent of effective ad spend through this single intervention. The dashboard ROAS will drop because the easy conversions are no longer counted under Performance Max, but the incremental ROAS - which is the only one that matters - will rise.
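The brand-share calculation itself is simple once the search terms are exported. A minimal sketch, using a hypothetical brand called "Acme" - the real pattern list should cover your brand, its common misspellings, and brand-plus-product variants:

```python
import re

# Hypothetical patterns for a brand called "Acme"; replace with your own
# brand name, misspellings, and close variants.
BRAND_PATTERNS = [re.compile(p) for p in (r"\bacme\b", r"\bacmee\b", r"\bakme\b")]

def is_brand_query(query: str) -> bool:
    q = query.lower()
    return any(p.search(q) for p in BRAND_PATTERNS)

def brand_spend_share(rows: list[tuple[str, float]]) -> float:
    """rows: (search term, cost) pairs exported from the search terms report.
    Returns the share of spend going to brand and near-brand queries."""
    brand_cost = sum(cost for term, cost in rows if is_brand_query(term))
    total_cost = sum(cost for _, cost in rows)
    return brand_cost / total_cost if total_cost else 0.0
```

Anything above the 0.35 threshold is the signal to add brand exclusions and stand up the separate brand defence Search campaign.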
Pattern 2: high-margin variance across the catalogue
Performance Max optimises for total conversion value, and unless you have configured value rules carefully, it treats all conversions as equally valuable. For a catalogue where margins range from 12 percent to 68 percent across SKUs, this is a problem. The algorithm will drift toward the products that convert most reliably, which are usually the high-volume, low-margin ones, while starving the high-margin products that move the bottom line.
The fix has two layers. First, conversion value rules that adjust reported value by product category or margin band, so the algorithm sees the products you actually want it to push. Second, a Standard Shopping campaign running in parallel on your high-margin SKUs with priority set to high, so those products get surgical bid control regardless of what Performance Max decides. We covered the broader principle of CTR and conversion-rate compounding in our piece on the SERP bidding war trick that doubled organic CTR, and the structural lesson is the same: when blended metrics hide underlying variance, you need to expose the variance to act on it.
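The arithmetic behind a margin-band value rule looks like this. The categories and margins below are illustrative, drawn from the 12-to-68 percent range above; in the account itself the adjustment is configured through conversion value rules, not code:

```python
# Illustrative margin bands; the point is that the algorithm should see
# profit-weighted value, not raw revenue.
MARGIN_BY_CATEGORY = {"serums": 0.68, "cleansers": 0.40, "accessories": 0.12}
BASELINE_MARGIN = 0.40  # the margin band at which reported value equals revenue

def margin_adjusted_value(category: str, revenue: float) -> float:
    """Scale reported conversion value by the category's margin band so a
    high-margin sale counts for more than a low-margin sale of equal revenue."""
    margin = MARGIN_BY_CATEGORY.get(category, BASELINE_MARGIN)
    return round(revenue * margin / BASELINE_MARGIN, 2)
```

A 100-unit serum order now reports as 170 in value and a 100-unit accessories order as 30, which is the variance the blended number was hiding.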
Pattern 3: small catalogue plus low-frequency purchase
Accounts with fewer than 50 SKUs and an average order frequency below one purchase per customer per year struggle on Performance Max because the conversion volume is too low for the algorithm to learn quickly and the customer match list does not refresh fast enough to keep the lookalike model current.
Furniture, mattresses, high-end appliances, and considered B2C purchases of any kind are the canonical examples. The right campaign type for these accounts is usually a tightly structured Search campaign on category terms, brand defence on brand terms, and Standard Shopping with manual bidding on the products that have the strongest reviews and inventory position. The exception is high-AOV products with order values above approximately 800 USD, where the absolute revenue per conversion is high enough that even a slow Performance Max learning curve is economically viable.
Pattern 4: the team cannot commit to leaving the campaign alone
Performance Max requires discipline. Major changes - budget cuts of more than 20 percent, asset replacement, audience signal swaps, conversion goal restructuring - reset learning and force the algorithm back into exploration. Teams that cannot commit to a 6-week observation window without intervening get less from Performance Max than they would from Standard Shopping, where the feedback loop is faster and the cost of intervention is lower.
This is a people problem dressed as a campaign problem. We see it most often in accounts where the founder watches the dashboard daily and asks the agency to "do something" during the structurally bad early weeks. The agency does something, the algorithm resets, and the cycle repeats. If your operating culture is that nervous, run Standard Shopping. The transparent feedback loop will keep your team calmer and the account performance more predictable.
Pattern 5: weak product feed, ambiguous titles, missing GTINs
Performance Max performance is bounded by the product feed. A feed with generic titles ("Blue Shirt - Style 4421"), missing GTINs, low-resolution images, and outdated availability cannot be saved by clever campaign structure. The algorithm matches queries to products primarily through the feed, and a weak feed means weak matching, which means low quality score on Shopping placements and low relevance on text and display placements alike.
The fix is upstream from the campaign. Rewrite product titles to lead with the search-relevant terms (category, brand, key attribute, then style/SKU). Ensure GTINs are populated for every product they exist for. Use high-resolution lifestyle images alongside white-background catalogue images. Sync availability and price daily through a managed feed tool rather than weekly through a manual upload. We treated the broader theme of structural cleanup before optimisation in our PPC landing page audit piece, and the same logic applies here. The campaign cannot rescue inputs that are structurally broken.
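The title ordering above can be captured in a small template. The example values are hypothetical; the 150-character cap matches the Merchant Center title limit:

```python
def build_product_title(category: str, brand: str, key_attribute: str,
                        style: str, max_len: int = 150) -> str:
    """Lead with search-relevant terms in the order given above:
    category, brand, key attribute, then style/SKU. 150 characters is
    the Merchant Center title limit."""
    parts = [category, brand, key_attribute, style]
    title = " - ".join(p for p in parts if p)
    return title[:max_len]
```

Run through a template like this, "Blue Shirt - Style 4421" becomes something like "Men's Oxford Shirt - Acme - Slim Fit Blue Cotton - Style 4421", which gives the matching engine a category, a brand, and attributes to work with instead of a style code.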
The migration playbook: from Performance Max to Standard Shopping plus Search
When the honest answer is that Performance Max is the wrong choice for the account, the temptation is to switch it off and reallocate budget overnight. Don't. The wrong way to migrate guarantees you lose three to four weeks of revenue while the new campaigns work through their own learning phase, and the panic that drop creates often gets the migration reversed before it has a chance to work.
Three-phase migration:
Phase 1 - parallel operation, week 1-2. Launch a Standard Shopping campaign at 30 percent of the current Performance Max budget on your top 40 SKUs by revenue. Add a Search campaign for brand defence at 5 to 8 percent of total budget. Leave Performance Max running at 65 percent of previous budget. The goal of phase 1 is to prove the new structure can deliver acceptable performance at smaller scale, not to replace Performance Max yet. Measure incremental ROAS, not reported ROAS, by running a 14-day holdout test in a controlled geographic region.
Phase 2 - gradual reallocation, weeks 3-6. Shift 10 percent of budget per week from Performance Max to Standard Shopping plus Search. Watch blended account-level revenue, not campaign-level ROAS. The campaign-level numbers will look noisy during this phase because the budget split is changing every week, and the temptation to over-react to a single bad week will be high. Trust the blended number. By week 6 you should be at roughly 80 percent Standard Shopping plus Search and 20 percent Performance Max.
Phase 3 - cutover and stabilisation, weeks 7-10. Once Standard Shopping has been running at full budget for at least 14 days with stable performance within 5 percent of the previous baseline, switch off Performance Max. Watch the account closely for two weeks. The most common failure mode here is that branded demand drops because Performance Max was capturing some incremental brand search the brand defence campaign was not configured for. The fix is to widen the brand defence keyword list, add more match types, and increase the brand campaign budget by 15 to 20 percent for the first month post-cutover.
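The budget mechanics of phases 1 and 2 can be sketched as a weekly schedule. The 65/30/5 opening split and the 10-point weekly shift are the playbook's defaults from above; how each shifted tranche divides between Shopping and brand Search (8/2 here) is an assumption to tune per account:

```python
def migration_schedule(total_budget: float) -> list[tuple[int, float, float, float]]:
    """Weekly (week, pmax, shopping, search) budgets for phases 1 and 2.
    Weeks 1-2 hold the phase 1 split; weeks 3-6 move 10 points per week
    out of Performance Max. The 8/2 split of each shifted tranche between
    Shopping and brand Search is an assumption, not a fixed rule."""
    pmax, shopping, search = 0.65, 0.30, 0.05
    schedule = []
    for week in range(1, 7):
        if week >= 3:
            pmax -= 0.10
            shopping += 0.08
            search += 0.02
        schedule.append((week,
                         round(total_budget * pmax, 2),
                         round(total_budget * shopping, 2),
                         round(total_budget * search, 2)))
    return schedule
```

By week 6 this lands Performance Max at 25 percent of budget, close to the roughly-20 target, with the phase 3 cutover handling the rest once the stability check passes.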
The migration takes 8 to 10 weeks. Done correctly, blended revenue should be flat to slightly up from week 6 onwards, and the new structure will have meaningfully better visibility, control, and incremental ROAS than the campaign you replaced.
What to keep using Performance Max for, even when you migrate
Even on accounts that should not run Performance Max as their primary acquisition engine, there are two specific use cases where it earns its place:
Pure new-customer acquisition campaigns with brand exclusions and new-customer value rules. With brand excluded, the algorithm cannot harvest your existing demand. With new-customer value rules adding 50 to 100 percent value to first-time buyers, the algorithm prioritises actual acquisition. Run this as a small dedicated campaign at 10 to 20 percent of total budget, separate from your main paid media plan, and measure it against a new-customer cost-per-acquisition target rather than a blended ROAS target.
Retargeting consolidation. Performance Max with a customer match list and a remarketing audience signal can replace a previously fragmented retargeting setup across Display, YouTube, and Discovery. Because retargeting is where the algorithm has the most signal density and the lowest exploration cost, this is one of the use cases where Performance Max often outperforms manual structures. Keep the budget bounded, set the conversion value rules to reflect repeat-purchase versus first-time-purchase value differently, and accept that this campaign exists to harvest the demand your other channels created.
For brands with an Amazon presence, the same logic applies to the parallel investment in Amazon Sponsored Products and Sponsored Brands. We covered the dynamic-bid mechanic for big shopping events in our piece on Amazon Prime Days bid adjustment, and the structural lesson - that platform-level automation amplifies whatever input you give it, including bad inputs - is identical across Google and Amazon.
The metrics that decide whether Performance Max is working
Stop using reported ROAS as the headline metric. It is the most distorted number in the account because it bundles branded harvest, retargeting, and prospecting into a single average that nobody can act on. Five metrics, in priority order:
1. New-customer acquisition rate. Set up the new-customer goal in conversion goals, segment Performance Max by new versus returning, and track the share of conversions that are new customers. Below 50 percent on a primary acquisition campaign is a red flag. Below 30 percent is an emergency.
2. Incremental ROAS via holdout test. Run a controlled geographic holdout for 14 days at least once per quarter. Compare the lift in revenue in the test region versus the holdout, divided by spend, to get a true incremental ROAS. The gap between reported and incremental ROAS is the budget you are wasting on demand you already had.
3. Brand spend share via N-gram analysis. Pull the search terms report monthly, classify queries as brand, near-brand, or non-brand, and track the spend share. Brand share above 35 percent on Performance Max means brand exclusions are missing or insufficient.
4. Asset group performance distribution. Check whether the campaign is over-concentrated in one asset group (winning theme found, healthy state) or spread evenly across all asset groups (still exploring, watch carefully). Over-concentration in a low-margin asset group is a signal to review value rules.
5. Conversion lag and assist analysis. Performance Max often gets credit for conversions that were assisted by other channels. Use GA4 attribution reports or a third-party attribution tool to understand the full path. We covered the foundational attribution framework in our marketing attribution explainer, and the principle that platform-reported conversions overstate platform contribution applies most strongly to Performance Max because of its multi-channel reach.
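The first two metrics reduce to a few lines of arithmetic. A sketch, using the thresholds above; the holdout calculation is deliberately simplified relative to proper geo-experiment analysis, which would add significance testing:

```python
def new_customer_status(new_conversions: int, total_conversions: int) -> str:
    """Metric 1 thresholds from above: below 50 percent new customers is a
    red flag on a primary acquisition campaign, below 30 percent an emergency."""
    rate = new_conversions / total_conversions
    if rate < 0.30:
        return "emergency"
    if rate < 0.50:
        return "red flag"
    return "healthy"

def incremental_roas(test_revenue: float, holdout_revenue: float,
                     size_ratio: float, spend: float) -> float:
    """Metric 2: revenue lift in the test region over the holdout baseline,
    per unit of spend. size_ratio scales the holdout region's revenue up to
    the test region's size (e.g. its historical revenue-share ratio)."""
    expected_baseline = holdout_revenue * size_ratio
    return (test_revenue - expected_baseline) / spend
```

The gap between this incremental number and the dashboard ROAS is, as above, the spend going to demand the account already owned.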
A note on what changed in 2024 and 2025
Performance Max is a moving target. Three changes since the campaign type launched matter more than the rest for the framework above:
Brand exclusions, rolled out account-wide in 2024 and refined in 2025, eliminated the most-cited Performance Max objection. Accounts that avoided Performance Max before brand exclusions existed had legitimate reasons to do so. Those reasons are largely gone today, which means verdicts from agencies who decided the campaign type was bad in 2022 deserve a fresh review.
Search themes, added as an audience signal in 2024, gave operators a way to nudge the algorithm toward specific intent clusters without giving up the campaign type's automation. Used carefully, search themes recover most of the surgical control Standard Shopping users miss. Used sloppily, they confuse the algorithm and slow learning.
Insights tab improvements through 2025 exposed search terms, asset performance, and audience signal performance with meaningfully more granularity than the original launch version. Most of the diagnostics in this article were either impossible or laborious to run two years ago. They are routine now.
The implication is that Performance Max is meaningfully better today than the campaign type that earned its bad reputation in 2022 and 2023. Some of that bad reputation is still deserved, especially for the precondition failures listed above. Some of it is now stale. Worth re-running the framework on accounts where the original verdict was reached early.
How to action this on your own account
Block 90 minutes. Run through the three preconditions and five disqualifying patterns with your own account in front of you. Pull the search terms report. Run the brand-share calculation. Check your customer match list refresh date. Check your conversion volume per asset group. Look at the new-customer rate.
If your account passes the preconditions and clears the disqualifying patterns, lean into Performance Max with confidence. Tighten the inputs - feed quality, customer match freshness, value rules, audience signals - and accept the 4-to-6 week observation window without intervening.
If your account fails one or more preconditions or matches one of the disqualifying patterns, start the migration playbook. Phase 1 takes a week to set up. Total migration takes 8 to 10 weeks. The structure you end up with will give you better visibility, better control, better incremental ROAS, and a much easier conversation with your finance team.
If you would rather not run this assessment alone, we audit Performance Max accounts across D2C, fashion, beauty, home, and electronics ecommerce, and the audit is structured exactly the way this article is. Three preconditions, five disqualifiers, migration playbook if needed, optimisation playbook if not. Faster diagnosis than the months most accounts spend wondering. Reach the team via our ecommerce PPC services page or the broader Google Ads agency practice. For the post-click side of the funnel, our PPC landing page audit guide walks through the conversion killers that show up after the click, and our PPC services practice handles the full media stack.
The Performance Max question is not whether the campaign type is good or bad. It is whether your account is the kind of account it is good for. Most accounts can answer that in an afternoon. Far fewer do.

Aditya Kathotia
Founder & CEO
CEO of Nico Digital and founder of Digital Polo, Aditya Kathotia is a trailblazer in digital marketing. He's powered 500+ brands through transformative strategies, enabling clients worldwide to grow revenue exponentially. Aditya's work has been featured on Entrepreneur, Economic Times, Hubspot, Business.com, Clutch, and more. Join Aditya Kathotia's orbit on LinkedIn to gain exclusive access to his treasure trove of niche-specific marketing secrets and insights.