by Jim Lenskold

Experimenting Your Way to More Effective Marketing

Improving marketing effectiveness requires measurements that provide actionable insight into what is working and what can be changed. What you choose to measure and how you manage your overall measurement plan will determine just how far you can increase marketing performance. In addition to adopting basic measurements, it is equally important to pursue more significant gains from strategic experimentation.

If you look across different levels of measurement, strategic experimentation clearly falls on the more advanced side of the spectrum. On the simple side you have basic campaign tracking, which is good for capturing responses and outcomes from a campaign but does not show incremental contribution. Campaign measurements isolate specific drivers of campaign success, such as offers, messaging, targeting, or tactical design, to provide valuable feedback for improving those specific tactics. As you move to strategic measurements, the objective shifts to identifying critical drivers of marketing’s influence on customer behaviors and sales outcomes that extend across campaigns and tactical touchpoints. Strategic experimentation is an extension of strategic measurements in which you stretch the objectives even further to challenge unproven beliefs, take calculated risks, and create breakthrough marketing for significant gains in marketing performance.

Given the challenges and low adoption of measuring the incremental contribution of marketing in general, why consider moving to the more advanced level of strategic experimentation? First of all, the methodologies used for strategic measurements and experimentation are not necessarily different or harder than those used for tactical measures. Achieving an “advanced” level comes more from redefining your measurement objectives than from increasing measurement complexity. Determining where to start is as simple as looking closely at your current strategies and considering viable alternatives. The second reason is that “big wins” in improved marketing effectiveness are going to come from strategic insights, not small tactical adjustments. A few select strategic measurements can make a greater impact than upgrading your everyday tracking or tactical measurements. Your measurement investments get a much better return at the advanced level.

The first step is to prioritize and choose your strategic experiments before moving on to the next steps of determining the measurements methodology and effectively managing experimentation within your marketing plan.

Prioritize Your Strategic Experiments

Whether you have a very formal or informal marketing planning process, at some point a strategy is developed and tactics are implemented. Your plan may be based on a modified version of last year’s plan. It could be based on measured results. Or it may be based on the best thinking of a very qualified team. Regardless of the approach, there are choices made that eliminate alternatives and finalize the plan. Your business success is riding on those final choices and, without some form of measured experimentation, you will never know if one of those alternatives could have generated even greater success.

Without significant cost or effort, strategic experimentation integrates learning opportunities into the full-scale execution of your marketing plan. Even companies with disciplined measurements for rolling out “proven” campaigns can benefit from exploring additional, more extreme, variations in their strategies. This is where you can productively question the current plan, work on weak areas, and identify improvements in integrating multiple touchpoints that are otherwise hard to measure.

To develop your list of potential areas for experimentation, think about the key strategic questions you would most like answered, such as:

  • Do we have the right overall strategy for influencing customer behaviors and purchase decisions?
  • Do we need more or fewer touchpoints overall, or do we need to change the intensity (frequency and/or “flighting,” with heavier ads on and off at different periods)?
  • Are we spending too much or too little in a specific media channel, sales channel, target segment, or market?
  • Is our current price/offer effectively generating new buyers and lasting customers or just giving discounts to existing customers?
  • Are there alternative messaging and positioning options that better appeal to our market or segments within our market?
  • Can we motivate greater customer response and activity at specific points in the purchase funnel?
  • Is there anything we eliminated from the final plan that is worth testing on a small scale?
  • If we find out that our current plan is under-performing, do we have backup plans and can we build learning in advance?
  • Is there any “wildcat” approach far from our normal course of business that may be worth a small-scale risk?

These questions are often asked in preparation for market planning but then get lost when the plan is finalized. Experimentation goes beyond preliminary market research to either test actual response in a small-scale, controlled environment, or test questions that can only be answered through actual in-market execution.

In order to prioritize what could be a very long list of possible experiments, consider the following:

  • Determine what learning will be most valuable across many marketing initiatives
  • Review current challenges in engaging and motivating customer behavior that need improvements
  • Assess which strategies can be tested with reasonable cost and effort (see step 2 on methodologies)
  • Estimate the untapped profit potential that can result from improvements in select strategies

How to Conduct Strategic Experiments

Strategic experimentation can leverage measurement techniques of market testing, modeling and even pre-post analysis. Each has its own advantages as outlined below.

Market Testing

The ideal methodology is market testing where you can vary specific aspects of your marketing for comparison against your “business as usual” marketing plan. Strategic experiments can be market tested in the controlled environment of direct marketing, where you can use random A/B list splits to compare the impact of offers or product positioning to a limited number of contacts before launching a full-scale campaign in mass media. Market testing also works very effectively for assessing mass media or integrated multi-channel marketing using specific sets of geographic markets that have similar characteristics and performance trends.

Wondering if your marketing budget is too low? Set up your test with incremental spending in just a small set of markets while decreasing your spend in other markets. Wondering if your outdoor billboard advertising is worthwhile? Try three different levels of outdoor ad spend in three sets of comparable markets while all other marketing activity is held constant. In each case, you are executing your marketing using different plans that are all viable, and learning at the same time.

A basic market test example might look like this:

Control – Business As Usual
  Standard Marketing Plan
  Markets – Phoenix, Chicago, Jacksonville*

Treatment – Incremental Brand Media
  Standard Marketing Plan + 20% budget in brand support for 6 weeks
  Markets – Dallas, St. Louis, Atlanta


* Note: “Business as Usual” marketing goes to all of the US, but only the markets shown are used as the control group, based on having sales patterns very similar to the test markets.

An incremental ROI analysis is conducted using the results from the test to compare the profits from incremental sales (treatment over control, adjusted for market size differences) to the incremental budget invested. If the results are positive and exceed the ROI threshold, the test shows the need for incremental budget.
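The incremental ROI calculation described above can be sketched in a few lines. All figures here (sales, margin, and budget) are hypothetical, chosen only to illustrate the treatment-over-control arithmetic:

```python
# Hypothetical illustration of an incremental ROI calculation for the
# brand-media market test above. All figures are invented for the sketch.

def incremental_roi(treatment_sales, control_sales, size_adjustment,
                    profit_margin, incremental_budget):
    """Return incremental profit and ROI of a treatment vs. control test.

    size_adjustment scales control-market sales to the treatment markets'
    size so the two groups are comparable.
    """
    baseline = control_sales * size_adjustment       # expected sales without treatment
    incremental_sales = treatment_sales - baseline   # lift attributable to treatment
    incremental_profit = incremental_sales * profit_margin
    roi = (incremental_profit - incremental_budget) / incremental_budget
    return incremental_profit, roi

# Example: treatment markets sold $2.4M vs. a size-adjusted control baseline
# of $2.0M, at a 30% margin, against $90K of incremental brand spend.
profit, roi = incremental_roi(
    treatment_sales=2_400_000,
    control_sales=2_000_000,
    size_adjustment=1.0,
    profit_margin=0.30,
    incremental_budget=90_000,
)
print(f"Incremental profit: ${profit:,.0f}, ROI: {roi:.0%}")
```

With these assumed numbers, the incremental profit comfortably exceeds the incremental budget, so the test would support the case for increased spend.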


Modeling

More sophisticated testing based on modeling and design of experiments can provide broader learning, as many variables of your marketing strategy are tested at once. This approach works best for determining the mix of media channels and contact intensity, since media can easily be varied across many different markets at once. It is not really viable for testing many different product positionings or offers concurrently in different markets, since the spillover can be confusing and disruptive to customer perceptions.

There are often many questions as to the right marketing mix in terms of the media channels, the level of spend, touchpoint timing, and the budget spread across branding, demand generation, direct marketing, channel partners, and direct sales. A well-structured test design with numerous variations in the marketing mix and intensity by market provides rich data for modeling the variables to outcomes such as sales volume, customer acquisition, customer retention, or customer value. The entire marketing spend is a combination of a full-scale campaign and extensive test. The modeling can guide shifts in budget allocation and marketing integration. The richness of the measured results can move the organization much closer to optimizing its marketing ROI.

Pre-Post Analysis

Some marketers may not have the opportunity for market testing or modeling because of small market coverage, data limitations, or complex marketing environments, so they may be limited to a pre-post measurement. Pre-post measurements use the prior period without marketing to calculate a baseline of average sales volume; the period during or following the marketing campaign is then compared to that baseline to determine the lift. While this method provides directional results, it does not have the reliability of market testing or modeling.

Strategic experimentation using pre-post measures requires that the variables and alternatives are introduced at different time periods to compare the sales lift of each different strategic initiative. If this method were used for examples above, the marketing budget would be increased or the marketing mix varied for a short period of time and tracked. You are less likely to test the more extreme changes to your marketing since you are experimenting with all or most of your marketing contacts, seeking a significant lift in performance. Because the methodology does not eliminate the influence from external factors, such as competitive marketing, seasonality, or economic fluctuations, even positive results should be re-tested and adopted with caution.
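The pre-post lift calculation itself is simple; the difficulty lies in the external factors it cannot control for. A minimal sketch, with invented weekly sales figures purely for illustration:

```python
# Hypothetical pre-post lift calculation. Weekly sales figures are invented.

pre_period_sales = [100, 98, 105, 97]         # weekly sales before the campaign
campaign_period_sales = [112, 118, 115, 119]  # weekly sales during/after the campaign

baseline = sum(pre_period_sales) / len(pre_period_sales)            # avg weekly baseline
campaign_avg = sum(campaign_period_sales) / len(campaign_period_sales)

lift = (campaign_avg - baseline) / baseline
print(f"Baseline: {baseline:.1f}/week, campaign: {campaign_avg:.1f}/week, lift: {lift:.1%}")
```

Note that nothing in this calculation separates the campaign's effect from seasonality or competitive activity over the same weeks, which is exactly why the article advises re-testing even positive pre-post results.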

Managing the Process of Strategic Experimentation

The key to successful experimentation is managing the balance of testing versus implementing your “best” plan. You want to ensure that the process of experimentation does not have a noticeable negative impact on results and financial contribution. You are testing high-risk, high-return strategies, but the net impact should be neutral. First, the experimentation is typically delivered to only a small portion of the target audience. Second, there is a good probability that the experiments generate just as much positive lift as negative shortfall, netting out even. Finally, the experimentation should not have much incremental cost, with the exception of losing some efficiency, such as purchasing market-specific media without national media discounts.

Each company will need to find its own balance of testing, but one split to consider is allotting about 20% of your budget for testing slight, low-risk variations and about 5% for experimentation with high-risk, high-potential strategic alternatives. That leaves 75% of your budget dedicated to the marketing plan that consists of the best-known strategies.

Budget Spread
  Core Marketing Plan                          70% – 75%
  Low-Risk Variation Testing                   15% – 20%
  High-Risk, High-Potential Experimentation     5% – 10%


If the 5% experimentation portion of your budget under-performs by 20% and all other marketing delivers as expected, the worst case scenario is that you deliver 1% below your total objectives (20% × 5%). If that same budget outperforms the target by just 20%, there is an immediate payback of 1% on delivered results, and the next version of the core marketing plan can then improve by close to 20%. In that scenario, experimentation that generates one winner for every 20 experiments that fail will roughly break even.
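The downside/upside arithmetic above can be checked with a short calculation. The 20% performance swing and the budget shares follow the figures in the text; everything else is straightforward multiplication:

```python
# Worked version of the downside/upside arithmetic in the text.

experiment_share = 0.05   # share of budget in high-risk experimentation
swing = 0.20              # experiments miss or beat target by 20%

downside = experiment_share * swing   # worst case shortfall vs. total objectives
upside = experiment_share * swing     # immediate payback if they outperform
print(f"Worst case vs. plan: -{downside:.0%}; immediate upside: +{upside:.0%}")

# Longer term, a winning experiment lifts the core plan (~75% of budget)
# by close to 20%, which offsets many 1%-of-results failures.
core_share = 0.75
core_gain = core_share * swing        # gain on total results from one winner
failures_offset = core_gain / downside
print(f"Core-plan gain from one winner: +{core_gain:.0%}, "
      f"offsetting ~{failures_offset:.0f} failed experiments")
```

This shows why the asymmetry favors experimentation: a failed experiment costs about 1% of results, while one winner rolled into the core plan recovers on the order of fifteen to twenty of those failures.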

So the upside potential makes this well worthwhile. Over the long term, the learning leads to new strategies implemented for the majority of marketing to deliver better outcomes, which can more than compensate for any lost sales from experimentation. The learning process also leads future experimentation in the right direction, building on the knowledge of which strategies are most effective.