From Nike to Shopify — MMM is now accessible
Marketing Mix Modelling was invented in the 1960s by econometricians working for large consumer packaged goods companies. For most of its history, it required a specialist analytics consultancy, months of data preparation, and a budget that started at £50,000 and could easily reach £200,000 for a comprehensive annual engagement. The output was a thick PowerPoint deck delivered six months after the data was collected.
The companies that could afford it — Unilever, P&G, Nike, L'Oreal — built MMM into their annual planning cycles and made better budget decisions as a result. The rest of the market relied on platform-reported ROAS, last-click attribution, and educated intuition.
Three things have changed that equation in the last five years. First, Meta, PyMC Labs, and Google all open-sourced sophisticated MMM frameworks. Second, cloud computing has made it practical to run computationally intensive statistical models without investing in hardware. Third, the collapse of pixel-based attribution — driven by iOS 14.5, cookie deprecation, and growing consumer privacy expectations — has made the case for MMM-based measurement undeniable even for brands that previously got by on Meta ROAS alone.
Today, a Shopify brand doing £1M in annual revenue can run a production-grade Marketing Mix Model for free, using open-source tools, on data they already have. The technical barrier is genuinely low. The data barrier is modest. The main thing holding most brands back is not capability — it is knowing where to start.
This guide gives you exactly that: a practical, jargon-light walkthrough of what data you need, how the model works, how to read the results, and the mistakes that trip up most first-time MMM practitioners. We will walk through a complete fictional example at the end so you can see exactly what the numbers look like in practice.
If you have never encountered Marketing Mix Modelling before, read our complete beginner's guide to MMM first. This post focuses on the practical "how to run one" side rather than the conceptual foundations.
What data you actually need
The most common reason a small brand's first MMM produces useless results is not a technical error — it is insufficient data. Before you spend any time on modelling, you need to honestly assess whether your data meets the minimum requirements. Running a model on data that is too thin does not produce a rough answer; it produces a confident-looking answer that is wrong in ways you cannot easily detect.
The three hard minimums
At least 12 months of data, ideally 24. This is the single most important requirement. An MMM works by finding the statistical relationship between your spend and your revenue across time. For that relationship to be identified reliably, the model needs to see your business through a full annual cycle — including peak periods, quiet periods, and the natural variation in between. With less than 52 weeks of data, the model cannot reliably separate the effect of your December advertising from the effect of Christmas. With 104 weeks, it has seen two Christmases and can separate the two effects with much more confidence.
There is no workaround for this requirement. If your Shopify store is less than a year old, you cannot run a meaningful MMM yet. Start building your data history now, maintain consistent tracking, and revisit in six to twelve months.
At least two paid channels. An MMM identifies channel contributions by comparing weeks when you spent more on one channel against weeks when you spent less, while controlling for everything else. If you only have one paid channel, the model has nothing to compare it against and cannot identify its effect cleanly from seasonal trends. Two channels is a workable minimum. Three or four channels gives the model more degrees of freedom to work with and typically produces more reliable results.
Weekly granularity. Marketing Mix Modelling is a weekly discipline. You need your revenue and your spend broken down by week — not by day, and not aggregated by month. Monthly data smooths out the within-month variation that the model uses to identify channel effects. Daily data introduces noise that makes it harder to isolate meaningful signals, particularly for smaller brands where daily revenue is lumpy. Weekly is the standard for a reason.
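If your exports come out daily, collapsing them to Monday-labelled weeks is a short pandas operation. This is an illustrative sketch using hypothetical column names (`date`, `net_revenue`) for a daily Shopify export:

```python
import pandas as pd

# Hypothetical daily export: one row per day, net revenue after refunds.
daily = pd.DataFrame({
    "date": pd.date_range("2024-03-04", periods=28, freq="D"),
    "net_revenue": [1000.0] * 28,
})

# Resample to weeks that START on Monday, labelled by that Monday.
daily = daily.set_index("date")
weekly = daily["net_revenue"].resample("W-MON", label="left", closed="left").sum()
weekly = weekly.rename_axis("week_start").reset_index(name="revenue")

print(weekly)
```

The `label="left", closed="left"` arguments are what make each row represent the week beginning on the labelled Monday, matching the `week_start` convention used in the dataset below.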
The columns you need in your CSV
Assembling your MMM dataset is mostly a matter of exporting from the right places and joining the tables together. Here is exactly what you need:
| Column | Source | Notes |
|---|---|---|
| week_start | Calculated | Monday of each week, YYYY-MM-DD format |
| revenue | Shopify Analytics | Net revenue after refunds, exclude shipping |
| orders | Shopify Analytics | Optional but useful as a sanity check |
| spend_meta | Meta Ads Manager | Total spend across all campaigns that week |
| spend_google | Google Ads | One total column, or split Search / Shopping / Display into separate columns if possible |
| spend_tiktok | TikTok Ads Manager | If applicable |
| spend_email | Klaviyo / internal | Pro-rate platform cost or use send volume as proxy |
| is_promotion | Internal records | 1 if a sale or discount code was active that week, else 0 |
| discount_depth | Internal records | Average % discount during promotion weeks (optional) |
| is_holiday | Calendar | 1 for weeks containing major public holidays |
Do not worry if some of this is imperfect. Email spend is notoriously difficult to quantify — most brands treat it as near-zero cost and exclude it, or use a flat monthly platform fee divided by the number of campaigns sent. The model will absorb some measurement error. What it cannot absorb is systematic gaps — missing weeks, channels left out entirely, or revenue figures that include returns without adjustment.
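A short script can catch those systematic gaps before you run anything. This is an illustrative sketch, not part of any tool mentioned here; the `check_gaps` helper and its messages are our own naming:

```python
import pandas as pd

def check_gaps(df: pd.DataFrame) -> list[str]:
    """Flag the systematic problems an MMM cannot absorb: missing weeks,
    too little history, and spend channels with no recorded spend at all."""
    issues = []
    weeks = pd.to_datetime(df["week_start"]).sort_values()
    expected = pd.date_range(weeks.iloc[0], weeks.iloc[-1], freq="7D")
    missing = expected.difference(weeks)
    if len(missing) > 0:
        issues.append(f"{len(missing)} missing week(s), e.g. {missing[0].date()}")
    if len(weeks) < 52:
        issues.append(f"only {len(weeks)} weeks of history (minimum 52)")
    spend_cols = [c for c in df.columns if c.startswith("spend_")]
    for col in spend_cols:
        if df[col].fillna(0).eq(0).all():
            issues.append(f"{col} is zero in every week -- drop or fill it")
    return issues

# Toy example: a short, contiguous history with one empty spend column.
df = pd.DataFrame({
    "week_start": pd.date_range("2024-01-01", periods=10, freq="7D").astype(str),
    "revenue": [5000] * 10,
    "spend_meta": [800] * 10,
    "spend_tiktok": [0] * 10,
})
print(check_gaps(df))
```

Run this once after assembling the CSV; an empty list back means the hard failure modes are absent, though it says nothing about data quality beyond that.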
How much spend variation do you need?
This is the question most guides gloss over. The model identifies channel effects by observing variance. If you spent the same amount on Meta every single week for two years, the model cannot distinguish Meta's contribution from the baseline trend. You need real variation — periods of high spend, low spend, and ideally at least a few weeks of zero spend on each channel.
As a rough benchmark, you want a coefficient of variation (standard deviation divided by mean) of at least 0.3 for each channel's weekly spend. In plain English: your spend should vary by at least 30% around the average. Most brands have this naturally — campaigns are switched on and off, Black Friday budgets are scaled up, slow periods see cuts. But if you have been running always-on campaigns at flat budgets with no variation, your model will struggle.
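The coefficient of variation takes one line to compute from your weekly spend columns. A sketch with made-up spend figures, showing one channel that passes the 0.3 rule of thumb and one flat always-on channel that fails it:

```python
import pandas as pd

# Hypothetical weekly spend history for two channels.
spend = pd.DataFrame({
    "spend_meta":   [1000, 1200, 800, 3000, 0, 1500, 900, 2500],
    "spend_google": [1000, 1000, 1000, 1050, 950, 1000, 1000, 1000],
})

# Coefficient of variation: standard deviation divided by mean, per channel.
cv = spend.std() / spend.mean()

# Rule of thumb from the text: below ~0.3, the model will struggle.
for channel, value in cv.items():
    verdict = "ok" if value >= 0.3 else "too flat, add deliberate variation"
    print(f"{channel}: CV = {value:.2f} ({verdict})")
```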
How Nuso's MMM works
Nuso is designed to take MMM from a multi-week engineering project to something you can complete in an afternoon, even without a data science background. Here is the full workflow:
Step 1: Connect your data sources
Nuso connects directly to Shopify via the Shopify API to pull your weekly revenue and orders history. It connects to Meta Ads Manager, Google Ads, and TikTok Ads Manager via their respective APIs to pull weekly spend by platform. In most cases, you do not need to export or manipulate any CSV files — Nuso assembles the dataset automatically from your connected accounts.
If you have additional spend channels (programmatic display, podcasts, out-of-home, influencer fees), you can upload a supplementary CSV with those columns. Nuso merges it with the automatically pulled data.
Step 2: Review and clean your data
Before running the model, Nuso surfaces a data quality report. It flags missing weeks, implausible revenue spikes (potential data errors), channels with insufficient variance, and weeks where multiple promotions were active simultaneously. You review each flag, confirm or correct it, and mark promotional periods in a simple calendar interface.
This step matters more than it looks. A single unflagged Black Friday week can cause the model to attribute the promotional lift to whichever channel happened to be running that week — which will produce an overestimated ROAS for that channel and an underestimated baseline.
Step 3: Choose your framework
Nuso offers three options: Robyn (fast, frequentist, good for a first model), PyMC-Marketing (Bayesian, full uncertainty intervals, takes longer), or Both (runs both frameworks in parallel and compares results). For brands running MMM for the first time, we recommend starting with Robyn to get a fast directional answer, then running PyMC-Marketing once you have reviewed the Robyn results and feel confident in the data.
You do not need to configure adstock parameters, saturation functions, or prior distributions unless you want to. Nuso applies sensible defaults calibrated on DTC e-commerce data — geometric adstock with a search for decay rates between 0.2 and 0.8, and Hill saturation with brand-appropriate prior ranges. Advanced users can override any of these settings.
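Geometric adstock and Hill saturation are standard MMM transformations, so it is worth seeing what they actually do to a spend series. The sketch below is illustrative, with arbitrary parameter values; it does not reproduce Nuso's internal defaults:

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry a fraction `decay` of each week's effect into the next week."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x: np.ndarray, half_sat: float, shape: float) -> np.ndarray:
    """Hill curve: response rises quickly at low spend, then flattens.
    `half_sat` is the spend level that yields 50% of maximum response."""
    return x**shape / (x**shape + half_sat**shape)

spend = np.array([0, 1000, 1000, 0, 0, 2000], dtype=float)
adstocked = geometric_adstock(spend, decay=0.5)   # decay within the 0.2-0.8 range mentioned above
response = hill_saturation(adstocked, half_sat=1500.0, shape=1.5)
```

Note how the adstocked series stays above zero for weeks after spend stops (the carryover effect), and how the Hill curve compresses large spend values (diminishing returns).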
Step 4: Run the model
A Robyn run completes in 5–15 minutes. PyMC-Marketing typically takes 30–90 minutes depending on your dataset size and complexity. Nuso runs the computation in the cloud — you do not need to keep a browser tab open. You receive a notification when results are ready.
Step 5: Review results and run the budget optimiser
The results dashboard is covered in detail in the next section. Once you have reviewed the contribution decomposition and are comfortable with the model, you can run the budget optimiser: input your total monthly ad budget and your target MER (Marketing Efficiency Ratio — total revenue divided by total ad spend), and Nuso returns a recommended weekly spend allocation per channel.
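MER itself is simple arithmetic: total revenue divided by total ad spend, across all channels. A tiny sketch with invented weekly figures:

```python
# MER (Marketing Efficiency Ratio): total revenue / total ad spend.
weekly_revenue = [22000, 25000, 19000, 30000]
weekly_spend   = [3000, 3200, 2800, 4000]

mer = sum(weekly_revenue) / sum(weekly_spend)
print(f"MER = {mer:.2f}")  # blended efficiency across ALL revenue, paid and organic
```

Unlike per-channel ROAS, MER is a blended number: it includes baseline revenue, so it will always look healthier than the incremental ROAS of any individual channel.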
Reading your results
The primary output of an MMM is a revenue decomposition: how much of your revenue in a given period came from each source. For most Shopify brands, this looks something like the table below.
| Component | % of Revenue | £ (£1M annual store) | What this means |
|---|---|---|---|
| Baseline | 38% | £380,000 | Revenue you'd have made with zero ad spend |
| Meta Ads | 27% | £270,000 | Incremental revenue driven by Meta campaigns |
| Google Ads | 18% | £180,000 | Incremental revenue from Search + Shopping |
| Email / SMS | 12% | £120,000 | Incremental revenue from email campaigns |
| Seasonality | 3% | £30,000 | Seasonal uplift above baseline (Christmas, etc.) |
| Promotions | 2% | £20,000 | Additional lift from discount campaigns |
Channel ROAS: MMM vs platform-reported
One of the most useful outputs Nuso displays is the side-by-side comparison of MMM ROAS against platform-reported ROAS for each channel. This comparison is almost always surprising, and frequently alarming.
Platform-reported ROAS is calculated by the ad platform itself, using whatever attribution model you have configured. Meta's 7-day click, 1-day view attribution model will claim credit for any purchase that occurred within seven days of a click or one day of a view impression. Google's data-driven attribution distributes credit across touchpoints in the path to conversion. Both platforms are measuring something real — but both have strong incentives to claim credit for as many conversions as possible, and both use attribution windows and models that systematically inflate their apparent contribution.
MMM ROAS is calculated differently. It is the incremental revenue attributed to the channel by the model — the revenue you would have lost if you had not run those campaigns — divided by the spend. It is a measure of incrementality, not attribution.
A typical comparison might look like this:
| Channel | Platform ROAS | MMM ROAS | Gap |
|---|---|---|---|
| Meta Ads | 4.2× | 2.8× | −33% |
| Google Search | 6.1× | 3.9× | −36% |
| Google Shopping | 5.4× | 3.2× | −41% |
| TikTok Ads | 1.8× | 2.1× | +17% |
Notice what often happens: search channels — Google Search and Shopping — tend to show the largest gap between platform ROAS and MMM ROAS. This is because branded search captures a lot of demand that was already going to convert. Someone who saw your Meta ad last week, then searched for your brand name and clicked a Google Shopping result, gets attributed to Google by Google — even though Meta drove the original purchase intent. MMM sees through this because it observes aggregate patterns: branded search spend and volume are correlated with Meta spend across time, and the model can detect that relationship.
Also notice that TikTok sometimes shows a higher MMM ROAS than platform ROAS. This is common for upper-funnel channels where conversions rarely carry a tracked click: TikTok viewers often buy later via search or direct traffic, so the platform's reported ROAS understates the awareness the channel actually creates.
The baseline: your most important number
Most brands focus on the channel contributions and ignore the baseline. This is a mistake. The baseline — the revenue you would make if you turned off all paid advertising tomorrow — is arguably the most strategically important number in your model.
A high baseline (above 50%) means your brand has strong organic demand: direct traffic, organic search, word of mouth, and repeat customers who come back without being reminded. This is a healthy, defensible position. You can afford to be selective about paid channels and can survive a period of reduced ad spend without catastrophic revenue decline.
A low baseline (below 25%) means your revenue is heavily dependent on paid acquisition. Every pound you spend is doing real work — but if your ad costs increase, or a platform changes its algorithm, or you need to cut your budget, you face a cliff edge. Building baseline is a long-term brand investment, and the MMM shows you clearly what that investment is worth.
Watch your baseline trend over time. If it is growing quarter on quarter — if the model estimates you'd generate more organic revenue now than you would have a year ago at the same spend level — that is compounding brand equity. If it is flat or declining, your paid spend may be generating revenue without building the brand underneath it.
The budget optimiser: where ROAS becomes a decision
The budget optimiser is where MMM produces direct commercial value. It uses the saturation curves fitted for each channel to identify the spend allocation that maximises total revenue for a given total budget.
The logic is straightforward: every saturation curve has a point of diminishing returns, and the optimal allocation is the one where the marginal return on the next pound is equal across all channels. If Meta's marginal ROAS is 2.1 and Google's is 3.4, you should move money from Meta to Google until the marginals equalise. The optimiser finds that allocation automatically.
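The equal-marginal-return logic can be expressed as a constrained optimisation. The sketch below is illustrative only: the Hill curve parameters are invented for the example, not fitted values, and this is not Nuso's implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted Hill response curves: revenue as a function of weekly spend.
# (max_revenue, half_sat, shape) per channel -- made-up numbers for illustration.
curves = {
    "meta":   (60000.0, 8000.0, 1.2),
    "google": (30000.0, 3000.0, 1.5),
}

def channel_revenue(spend, max_rev, half_sat, shape):
    return max_rev * spend**shape / (spend**shape + half_sat**shape)

def total_revenue(alloc):
    return sum(channel_revenue(s, *p) for s, p in zip(alloc, curves.values()))

budget = 10000.0
start = np.full(len(curves), budget / len(curves))   # begin from an even split
result = minimize(
    lambda a: -total_revenue(a),                     # maximise revenue = minimise its negative
    start,
    bounds=[(0, budget)] * len(curves),
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - budget}],
)
alloc = dict(zip(curves, result.x.round(0)))
print(alloc)
```

At the solution, the marginal return on the next pound is (approximately) equal across channels; any allocation where the marginals differ can be improved by moving money toward the channel with the higher marginal.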
The output is a recommended weekly spend per channel, alongside a projected revenue total and a projected MER. It also shows you the sensitivity: how much would revenue change if you spent 10% more or 10% less in total? This is useful for planning conversations with finance teams.
The budget optimiser recommendation is a model output, not a guarantee. It assumes that the relationships identified in the historical data will hold in the near future — which is approximately true over short horizons (next 4–8 weeks) but less reliable over longer periods as channel efficiency changes. Refit your model quarterly and re-run the optimiser each time.
Common mistakes and how to avoid them
Mistake 1: Running the model with too little data
This is far and away the most common error. A brand with eight months of Shopify history runs an MMM, gets confident-looking channel contributions, and makes budget decisions based on them. The problem is that with under a year of data, the model has almost certainly conflated seasonality with channel effects. Your September Meta spend looks more effective than your February spend not because Meta was working better in September, but because September is when your category picks up naturally. The model does not know this unless it has seen at least one full annual cycle.
The fix is simple: wait until you have 12 months. Use the interim period to maintain consistent weekly tracking, vary your spend levels deliberately to create the variance the model needs, and flag your promotional periods carefully so they are ready to include in your dataset when you run the model.
Mistake 2: Forgetting to flag promotional periods
If Black Friday is not flagged in your dataset, the model will try to explain the revenue spike using the other variables it has available — primarily your channel spend. Since you probably ran more ads during Black Friday, the model will attribute the promotional lift to your ad channels, inflating their apparent ROAS for the year. Conversely, if you ran a quiet ad week during a sale period, the sale lift gets attributed to baseline, understating organic demand.
You need to flag every significant promotional period: site-wide sales, category promotions, bundle deals, voucher code campaigns sent to your email list. The is_promotion column in your CSV should have a 1 in every week where a meaningful discount was available to customers, even if you did not increase ad spend that week.
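Flagging weeks programmatically avoids missed windows. A sketch that marks any week overlapping a promotional date range, even partially, using hypothetical promotion dates:

```python
import pandas as pd

# Hypothetical promotional windows (inclusive date ranges).
promotions = [
    ("2025-11-24", "2025-12-01"),  # Black Friday / Cyber Monday
    ("2026-02-09", "2026-02-14"),  # Valentine's sale
]

weeks = pd.DataFrame({"week_start": pd.date_range("2025-11-03", periods=16, freq="7D")})
weeks["week_end"] = weeks["week_start"] + pd.Timedelta(days=6)

# A week is flagged if it overlaps ANY promotion window, even partially.
def overlaps_promo(row) -> int:
    for start, end in promotions:
        if row["week_start"] <= pd.Timestamp(end) and row["week_end"] >= pd.Timestamp(start):
            return 1
    return 0

weeks["is_promotion"] = weeks.apply(overlaps_promo, axis=1)
```

Note that a window ending on a Monday flags two weeks: the week containing most of the sale and the week it spills into. That is intentional — a partially promotional week is still promotional.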
Mistake 3: Trusting ROAS over MMM when they conflict
When your Meta ROAS dashboard says 4.2 and your MMM says 2.8, your first instinct is to trust the number that is higher. This is the wrong instinct. The platform-reported ROAS is always going to be higher because it is designed to claim as much credit as possible within the attribution window. The MMM is trying to estimate the counterfactual — what would have happened without that spend — which is a harder but more honest question.
The test is: if Meta's true incremental ROAS is 4.2, you should be able to pause Meta spending for four weeks, watch revenue fall by roughly 4.2 times the spend you removed, and then resume. In practice, brands that run this experiment almost always find the revenue decline is smaller than the platform ROAS would predict — confirming that the MMM estimate was closer to the truth.
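The arithmetic of that pause test is worth writing down explicitly. A sketch using the figures from this section and a hypothetical observed result:

```python
# Pause-test arithmetic: if platform ROAS were truly incremental,
# removing spend should remove (spend removed x ROAS) of revenue.
weekly_meta_spend = 3000.0
platform_roas = 4.2
mmm_roas = 2.8

pause_weeks = 4
spend_removed = weekly_meta_spend * pause_weeks

expected_loss_platform = spend_removed * platform_roas   # what the dashboard implies
expected_loss_mmm = spend_removed * mmm_roas             # what the model implies

# Hypothetical observed revenue shortfall during the pause:
observed_loss = 35000.0

implied_incremental_roas = observed_loss / spend_removed
print(f"Implied incremental ROAS from the pause: {implied_incremental_roas:.1f}x")
```

With these invented numbers, the observed loss sits far closer to the MMM's prediction (£33,600) than to the platform's (£50,400), which is the typical pattern.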
Mistake 4: Ignoring channel spend variation
If you have been running always-on Meta campaigns at a flat £3,000 per week for two years without ever switching them off or significantly scaling up or down, the MMM will struggle to identify Meta's contribution cleanly. There is not enough variance in the Meta spend column for the model to distinguish its effect from the background noise.
The fix for future models is to deliberately vary your spend. Even a 3-week pause on one channel per quarter — which you can frame internally as a creative refresh or a budget reallocation test — creates the variance the model needs. The business cost of the pause is typically outweighed by the improvement in model reliability over the subsequent year.
Mistake 5: Running the model once and filing the results
MMM is not a one-time exercise. Channel efficiency changes as you scale, as competitive dynamics shift, as ad platform algorithms evolve, and as your own brand awareness builds. An MMM fitted on last year's data will give you increasingly wrong recommendations as you move further from that period.
The right cadence is a full model refit every quarter, with an interim review of the key metrics (MMM ROAS by channel, baseline trend, budget optimiser output) monthly. This is not as burdensome as it sounds — once the data pipeline is set up, a model refit is a matter of clicking run and reviewing the new results rather than rebuilding from scratch.
Mistake 6: Acting on model outputs without a sanity check
Before acting on any MMM result, ask yourself three questions. First: does this make intuitive sense? If the model says email has a 12× ROAS, that is either a genuine finding about the power of your list, or a sign that something is wrong with your email cost estimation. Second: are the channel contributions in plausible ranges? Meta typically accounts for 15–35% of revenue for DTC Shopify brands. Numbers far outside that range warrant investigation. Third: do the saturation curves look realistic? A channel that shows no sign of saturation even at your highest spend levels is either genuinely undersaturated (possible) or the model has not identified the saturation curve correctly (also possible).
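These checks can be encoded so they run automatically on every refit. A sketch using the rule-of-thumb thresholds from this section; the function name, input shapes, and exact cut-offs are our own invention:

```python
def sanity_check(decomposition: dict[str, float], channel_roas: dict[str, float]) -> list[str]:
    """Quick plausibility checks on MMM outputs. Thresholds are rules of
    thumb for DTC Shopify brands, not hard statistical limits."""
    warnings = []
    meta_share = decomposition.get("meta", 0.0)
    if not 0.15 <= meta_share <= 0.35:
        warnings.append(f"Meta share {meta_share:.0%} is outside the typical 15-35% range")
    for channel, roas in channel_roas.items():
        if roas > 10:
            warnings.append(f"{channel} ROAS of {roas:.1f}x is suspiciously high -- check cost inputs")
    baseline = decomposition.get("baseline", 0.0)
    if baseline < 0.10:
        warnings.append(f"baseline of {baseline:.0%} is implausibly low -- check promotion flags")
    return warnings

print(sanity_check({"baseline": 0.44, "meta": 0.29}, {"meta": 3.9, "email": 12.0}))
```

A warning is not proof the model is wrong; it is a prompt to investigate before you move budget.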
Real example walk-through: Ember Candles
Ember Candles is a fictional UK Shopify brand selling premium scented candles, home fragrance, and reed diffusers. They launched in March 2024, which means by early 2026 they have 23 months of weekly data — just enough for a reliable MMM. They spend approximately £12,000 per month across Meta Ads, Google Ads, and email (via Klaviyo), and they do £1.1M in annual revenue.
Their starting situation
Before running MMM, Ember Candles' founder, Sofia, was making budget decisions based on Meta's in-platform ROAS (reporting at 3.8×) and Google's in-platform ROAS (reporting at 5.2×). She had increased Google budget from £2,000 to £4,500 per month over the previous six months based on the strong Google ROAS, and reduced Meta from £7,000 to £5,500. She was happy with the reported numbers but concerned that overall revenue growth had slowed despite what the dashboards were showing.
The data assembly
Ember Candles' data manager exported weekly revenue from Shopify Analytics (net of refunds), weekly spend from Meta Ads Manager and Google Ads, and weekly send activity from Klaviyo. She flagged seven promotional periods in 23 months: two Black Friday / Cyber Monday windows, a Valentine's Day sale, a Mother's Day promotion, a summer clearance, a Christmas gifting campaign in late November, and an end-of-line clearance in February 2025.
The resulting CSV had 99 rows (99 weeks from March 2024 to February 2026) and 12 columns. She uploaded it to Nuso, confirmed the column mapping, and ran the Robyn model first for speed.
What the model found
The Robyn model selected a solution from the Pareto frontier with an NRMSE of 0.11 (a good fit) and produced the following revenue decomposition for the trailing 12 months:
| Component | % of Revenue | £ (trailing 12M) | MMM ROAS | Platform ROAS |
|---|---|---|---|---|
| Baseline | 44% | £484,000 | — | — |
| Meta Ads | 29% | £319,000 | 3.9× | 3.8× |
| Google Ads | 14% | £154,000 | 2.3× | 5.2× |
| Email (Klaviyo) | 9% | £99,000 | — | — |
| Seasonality | 2% | £22,000 | — | — |
| Promotions | 2% | £22,000 | — | — |
The finding that stopped Sofia in her tracks: Meta's MMM ROAS (3.9×) was close to its platform-reported ROAS (3.8×). But Google's MMM ROAS (2.3×) was dramatically below its platform-reported ROAS (5.2×). The gap was larger than she had expected and larger than any measurement noise could explain.
Why Google's ROAS was overstated
The model surfaced the explanation in the saturation analysis. Google Ads for Ember Candles was predominantly branded search — people searching "Ember Candles" and "Ember reed diffuser" by name. These are customers who already know the brand and have high purchase intent regardless of whether a Google Ad appears. The MMM, by looking at what happens to revenue in weeks when Google spend is low, found that branded search revenue does not drop proportionally — confirming that much of the Google-attributed revenue would have arrived via organic search or direct anyway.
The double-count was also visible in the data: Meta spend and Google-attributed conversions were positively correlated across weeks. When Meta spend was high, Google claimed more conversions — because Meta was creating the awareness that subsequently showed up as branded search intent.
The budget optimiser recommendation
Given the fitted saturation curves and the current total monthly budget of £12,000, the budget optimiser recommended:
- Meta Ads: increase from £5,500 to £7,500 per month (Meta is below saturation, high marginal ROAS)
- Google Ads: reduce from £4,500 to £2,500 per month (Google is past the efficient spend threshold)
- Email/Klaviyo: maintain at £2,000 per month (near-optimal)
The projected revenue impact: £1.1M → £1.18M annually, an £80,000 increase with no additional total spend. The total budget did not change; it was simply allocated more efficiently.
What happened next
Sofia made the budget changes in March 2026. She refitted the model six weeks later with the updated data. Google's platform-reported ROAS had dropped (less budget, similar revenue) and Meta's had stayed roughly flat — which is exactly what you expect when you move money from an oversaturated channel to an undersaturated one. Total revenue in the six-week post-change period was up 8% year-on-year, in a period where overall market growth was flat.
The £80,000 annual revenue uplift projection was on track. More importantly, Sofia now had a measurement framework she trusted — one that told her what was actually working, rather than what each platform wanted her to believe.
Run your first MMM this week
Connect your Shopify store and ad accounts. Nuso handles the data prep, runs the model, and tells you exactly where to move your budget — no data scientist required.
Get started free