
Lenskold Article Series

by Jim Lenskold

Capturing Campaign Sales Lift – Can Pre-Post Measurements Be Trusted?

When it comes to measuring the sales lift of a marketing campaign, close to half of all marketers will use some form of basic pre-post marketing analysis (45% per the Lenskold Group & MarketingProfs 2007 Marketing ROI & Measurements Study). A pre-post analysis compares the average sales level for a time period prior to the marketing campaign to the sales level during, and possibly following, the campaign. The methodology is easy to calculate from readily accessible data, but is it accurate and reliable?

I’ve met marketing professionals from Fortune 500 firms who admit that every time the pre-post analysis shows a positive lift, they attribute the lift to marketing, and when it shows a decline in sales, they attribute the decline to non-marketing factors. The reality is that sales fluctuations are driven by more than a single marketing initiative, and executives are savvy enough to know that you can’t take credit for the upside while accepting no responsibility for the downside.

So the typical pre-post measurement is not accurate enough to support major marketing decisions, and if it is not managed correctly and improved, marketing can take a significant hit on credibility. But don’t give up on it altogether. This article explains what you need to know to improve its accuracy and how it should fit into your mix of measurement methodologies.

Pre-Post Analysis Limitations & Potential

Virtually all measurements designed to determine the lift in sales from a marketing initiative must identify the “baseline” sales – i.e., the sales that would have happened in the absence of marketing – which is then compared to actual sales. A pre-post analysis assumes that the average sales level prior to marketing would continue during the marketing period and therefore serves as the baseline. As an example, a 4-week average sales level of 500 units may be compared to a 1-week sales level of 550 units during a campaign period, and the lift is assumed to be 50 units.
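
To make the arithmetic explicit, here is a minimal sketch in Python; the weekly figures are hypothetical and chosen only to reproduce the 500-unit average and 550-unit campaign week in the example above.

    # Basic pre-post lift arithmetic from the example above (hypothetical weekly
    # figures chosen to average 500 units; the campaign week is 550 units).
    pre_period_sales = [510, 495, 505, 490]    # weekly units sold before the campaign
    campaign_week_sales = 550                  # units sold during the campaign week

    baseline = sum(pre_period_sales) / len(pre_period_sales)   # assumed "no-marketing" level
    lift_units = campaign_week_sales - baseline
    lift_pct = lift_units / baseline

    print(f"Baseline: {baseline:.0f} units")                          # 500 units
    print(f"Assumed lift: {lift_units:.0f} units ({lift_pct:.1%})")   # 50 units (10.0%)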

 

The problem is that this methodology does not isolate the impact of the specific marketing initiative being measured. Sales levels are influenced by many factors, including competitive marketing, economic conditions, other marketing and sales contacts within your firm, product lifecycles, and even the weather. So the uplift in sales may not be your uplift. And it is quite possible that your marketing actually had a positive impact even when the pre-post analysis shows a decline in sales during the post-marketing period.

 

Other measurement methodologies can isolate the impact of specific marketing initiatives and have advantages over a pre-post analysis. Market testing establishes control groups to determine the baseline sales. Modeling uses many detailed data points in extensive analyses to strip out the influence of possible external factors and attribute the lift above the baseline to specific marketing initiatives.

 

Pre-post analysis does have a role in marketing measurements. First of all, modeling requires significant budget, data, and staff resources that are not always available. Market testing requires the right conditions and provides only a limited number of measurement opportunities within a given time period. Add to that the low cost and low data requirements of pre-post analyses, and it makes sense to use this as an ongoing measurement, providing directional feedback on marketing performance.

 

There are three critical success factors for using pre-post analysis measures:

 

  1. Use this methodology for the appropriate type of objectives
  2. Improve your pre-post analysis techniques for better reliability and lower margin of error
  3. Present the results as directional to set expectations and maintain credibility

Aligning Measurement Methodologies to Objectives

Given the limitations of pre-post analyses, it is best not to choose this methodology for measurements that will guide multi-million dollar campaigns or set your strategic direction. It is a sufficient measure for monitoring the effectiveness of smaller tactical initiatives, where decisions primarily involve comparing the relative value of different initiatives or assessing whether certain types of marketing are effective enough to continue.

 

Keep in mind that even for these less critical decisions, improving the reliability of pre-post analyses is necessary. When assessing effectiveness using pre-post measures, look for recurring patterns rather than a single incident of positive or negative impact. It is all too easy to eliminate an effective campaign based on nothing more than a downward sales fluctuation. Marketers are also eager to act on highly positive results and may find that the next run of the campaign is not nearly as successful as the original.

 

When to use pre-post analyses in your measurement plan:

 

  • For general monitoring of performance
  • For low cost, low risk marketing initiatives
  • When directional information is all the budget allows
  • As a source to spot emerging trends

Pre-post measurements can detect when expected outcomes do not occur, which may indicate that external factors are influencing marketing effectiveness.  For example, if prior marketing initiatives generally lead to a 3% to 5% lift in sales but that does not occur for a specific initiative, the conclusion may be that further analysis, market testing, or modeling is necessary to assess how external factors are influencing performance.

Improving Pre-Post Measurement Reliability

Marketers like the pre-post analysis because it is very easy to calculate. In fact, many make the inaccuracies worse in an attempt to keep the math simple. Over and over again, I see a measure for a 4-week campaign compared to a 4-week pre-marketing period, while a 13-week campaign is compared to a 13-week pre-marketing period (and so on). If your objective is to predict the sales level during the campaign impact period, why is the best predictor 4 weeks sometimes but 13 weeks other times? Clearly, this is a shortcut, and no analysis was done to determine how to use sales trend data to predict the baseline.

The pre-post methodology becomes much more effective with analyses designed for the specific purpose of improving the predictive accuracy of sales trend data and minimizing the variance that hurts predictability. Your goal is to establish one standard pre-post analysis structure that will provide the best possible estimate of baseline sales. Even this analytic approach cannot eliminate the influence of external factors, but it can minimize the margin of error within the sales data.

Here are some of the key steps to improving pre-post reliability:
  • Minimize the impact of seasonality with a comparison to prior-year sales (matching products and sales distribution to the degree possible). Keep in mind that if promotions ran during the same period in the prior year, this seasonality adjustment will hurt rather than improve measurement accuracy.
  • Determine the length of time over which the average is most predictive of baseline sales and use that time period consistently. If the period is too short, it will not smooth out short-term fluctuations; if it is too long, it may not reflect current market conditions. There is a mathematical answer to this question, and it can be validated by projecting baseline sales during non-promotional time periods.
  • Remove high-variance data. This may include select markets, products, customer segments, or sales channels that are unsteady and can distort the pre-post sales comparison.
  • Track a broader product set beyond those promoted to 1) understand the halo effect and possible cannibalization from the lift in promoted products/services and 2) watch for sales fluctuations that may indicate the influence of non-marketing factors.
  • Account for trends over time so that sales increases or decreases due to other factors (product lifecycle, economy, competition, etc.) do not influence the pre-post comparison.
  • Run analysis of variance (ANOVA) tests on your pre-post campaign analysis to determine whether the measured sales lift is above or below your margin of error (see the sketch below). If the lift falls outside the normal variance, you have at least ruled out ordinary sales fluctuations (though not the impact of external factors).
  • Identify repetition in the findings to help draw conclusions. If certain types of campaigns consistently show a sales lift over the pre-marketing period, it is less likely that the results come from an unrelated driver.
  • Use outputs from more sophisticated marketing mix modeling or market trend analyses (if run within your company) to adjust the pre-post analysis for the influence of external factors such as changes in competitive activity or weather conditions.

Running these analyses and establishing a standard methodology allows your organization to use pre-post measurement for directional insight with increased confidence. Plus, the discipline of analyzing and understanding data at a more detailed level puts you one step closer to modeling.
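
As an illustration of the fixed-window and ANOVA steps above, here is a minimal sketch in Python using scipy; the weekly figures, the 8-week window, and the 0.05 threshold are hypothetical assumptions for illustration, not values from the article.

    # Minimal sketch of a standardized pre-post check on weekly unit sales.
    # All figures, the pre-period window length, and the significance threshold
    # are hypothetical assumptions.
    from scipy import stats

    PRE_WEEKS = 8    # fixed, pre-determined baseline window; use the same length every time

    pre_weeks  = [492, 505, 498, 511, 487, 503, 496, 508]   # len == PRE_WEEKS
    post_weeks = [548, 561, 539, 552]                        # campaign impact period

    baseline = sum(pre_weeks) / len(pre_weeks)
    avg_post = sum(post_weeks) / len(post_weeks)
    lift_pct = (avg_post - baseline) / baseline

    # One-way ANOVA across the pre and post groups: is the difference in means
    # large relative to normal week-to-week variance?
    f_stat, p_value = stats.f_oneway(pre_weeks, post_weeks)

    print(f"Baseline: {baseline:.0f} units/week, campaign period: {avg_post:.0f} units/week")
    print(f"Measured lift: {lift_pct:.1%}")
    if p_value < 0.05:
        print("Lift exceeds normal sales fluctuation (external factors still possible).")
    else:
        print("Lift is within the normal margin of error; treat the result as directional only.")

With only two groups, the ANOVA is equivalent to a two-sample t-test; the point is simply to ask whether the lift stands out from normal fluctuation before any credit is attributed to the campaign.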

Disclose Limitations to Maintain Credibility

You now know the limitations of pre-post analyses but also recognize that it belongs in your overall measurement plan. You’ll take steps to improve the accuracy and reliability so you can use the directional insight to support low-risk decisions. The final step is to properly present the results to executives and other stakeholders.


Too often, marketers present the pre-post analysis as a conclusive measurement to help build confidence in their marketing. While it shows good discipline to have a measured result, it can backfire when the next pre-post measure shows a negative result that requires excuses as to why that measurement approach is no longer valid.


Present the pre-post analysis as a directional measurement that indicates marketing may be working. Indicate that there could be influence from other factors that are not detectable with this methodology. With full disclosure, you maintain credibility and do not have to make excuses when sales decline in future measures or when repeated campaigns don’t deliver the same lift.


You can also communicate that other more advanced measurement methodologies are available if stakeholders need a more conclusive measure. In fact, make the case for better measurements that go beyond just tracking lift, and also support strategic testing and diagnostics that guide performance improvements. It is impossible to measure everything and measurements are not perfect. But precise and directional measurements are all valuable when used appropriately and integrated with diverse methodologies into a cohesive measurement plan.