Paid Search Ad Testing

June 3, 2009
Filed under Internet Marketing

Jun 1, 2009 at 1:17pm ET by Andrew Goodman of Search Engine Land

“Horror” is an apt term for how many experienced paid search practitioners respond to the newbie’s mistake of optimizing paid search ads solely to CTR.

Before getting to the nuances of ad testing, though, it’s important to emphasize yet again that rapid setup of ad rotation, for testing purposes, was the gift given to us by the Google Gods in 2002 when they rolled out what was then called AdWords Select. Seven years later, many marketers have not fully accepted the gift.

Although few drivers are as key to performance as the ad copy, many campaign managers have a tendency to neglect ad testing to this day. There is still much more industry “noise” in the day-to-day tactical chatter about two other levers: (1) keywords and (2) bids. In those areas, the newbie’s attention is easily grabbed by vendors who have nothing to sell but more keyword discovery, and more frequent bid changes.

By contrast, ad refinement requires know-how, experience, and a testing methodology. It’s the guts of this little response engine you’re building. Don’t neglect it.

Optimize for maximum clickthrough rate (CTR)?

I’ll call it CTR for short, but it may include “other relevancy factors.” Today’s paid search systems no longer use raw CTR alone as a proxy for relevance and quality for ranking purposes; they use “quality” measures in which CTR is still the dominant factor. One of the reasons CTR is so important to the engines is revenue.

As such, many of the built-in tools in the campaign management platforms have a high-CTR bias.

When you enable a new AdWords account, the ad rotation setting defaults to “Optimize.” If you’re running multiple ads in rotation, the system will automatically favor the “winning” ads with the highest CTR. You need to change this to “Rotate” if you want to take back control.

Dynamic keyword insertion is a trendy tool that can match the ad title or body copy to what the user typed into the search engine. Typically, this raises CTRs. It’s very popular, but, like the auto-optimized rotation setting, it should be reserved for special purposes.

The agency world—at least the more cynical side of it—also has a pro-CTR bias. If the client fails to ask performance-based questions and treats paid search more like a media buy than a “tweakable lead generation machine,” some agencies will be tempted to turn on the above tools and optimize for higher CTR, especially if they’re paid as a percentage of spend. And some in-house managers might likewise want the line item for search to go up, as opposed to improving ROI. Higher CTR means more clicks, and therefore higher spend.

(For the same reasons, they might overbid the campaign into an ad position that is higher than economically desirable.)

By extension, although they’re certainly going to say otherwise if you ask intelligent questions about filtering, analytics, negative keywords, and the like, Google’s default advice will carry the same bias. Why wouldn’t it? Higher CTRs mean more clicks, which mean more revenue for Google. And meanwhile, they can make more users happy.

That’s fine if you’re not concerned about paying higher costs per acquisition for the extra volume. But optimizing strictly to CTR is generally seen as a rookie mistake by more performance-oriented search marketers.

Optimize for maximum ROI?

So now what? Just test the ads for a few weeks until you have an apparently statistically significant number of sales conversions or leads, then kill the ones with the worst cost-per-sale or cost-per-lead numbers, right? After all, that cost-per-conversion number is right there in your AdWords interface as long as you have AdWords Conversion Tracker installed. And pausing or deleting the non-performing ads is just a click away.
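That naive filter is easy to sketch. The numbers below are invented for illustration, not from the article; the arithmetic is just clicks × CPC divided by conversions, the same cost-per-conversion figure the AdWords interface reports.

```python
# Toy readout from a hypothetical two-ad test (all numbers invented).
ads = {
    "Ad A": {"clicks": 1200, "cpc": 0.80, "conversions": 48},
    "Ad B": {"clicks": 1100, "cpc": 0.80, "conversions": 22},
}

# Cost per conversion = total spend / conversions.
cost_per_conversion = {
    name: a["clicks"] * a["cpc"] / a["conversions"] for name, a in ads.items()
}

# The "obvious" move the article goes on to question: pause the worst ad.
worst = max(cost_per_conversion, key=cost_per_conversion.get)
# Here Ad A costs $20 per conversion, Ad B costs $40, so "worst" is Ad B.
```

Whether that difference is genuinely significant after a few weeks of data is a separate question, which is exactly why the article hedges with “apparently.”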

If your approach is cautious and a “pristine” ROI (as opposed to total profit) is paramount among your objectives, that’s fine. But it’s often the wrong move.

What if two ads in your test (of four or five ads, say) tied for the lead in ROI, but one had a significantly higher CTR? It wouldn’t hurt to favor the higher-CTR ad, would it, even if your criteria were purely internal to your company? Assuming the campaign is profitable to begin with, total profit would be higher if you chose the high-ROI ad that also drew more clicks than the ad it was tied with.
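The arithmetic behind that claim is worth making explicit. In this sketch (all figures hypothetical), two ads share identical CPC, conversion rate, and revenue per sale, so their ROI is identical; the only difference is CTR, and therefore click volume.

```python
# Two hypothetical ads tied on ROI but differing in CTR (numbers invented).
impressions = 100_000
cpc = 1.00             # cost per click, $
conv_rate = 0.05       # conversions per click
value_per_sale = 40.0  # revenue per conversion, $

def profit_and_roi(ctr):
    clicks = impressions * ctr
    cost = clicks * cpc
    revenue = clicks * conv_rate * value_per_sale
    profit = revenue - cost
    return profit, profit / cost  # (total profit, ROI)

low_ctr_profit, low_roi = profit_and_roi(0.02)    # Ad A: 2% CTR
high_ctr_profit, high_roi = profit_and_roi(0.04)  # Ad B: 4% CTR
# Identical ROI, but the 4%-CTR ad earns twice the total profit.
```

Because every per-click quantity is the same, ROI can’t distinguish the ads; only total profit, driven by click volume, can.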

Unfortunately, that only rarely happens, but it illustrates the potential you could be missing out on.

Go for the “double win”

If you’re consistently choosing ads that reward yourself (but not The Google) with lower CTRs, the economics of that can hurt you, because the system is tuned in the house’s favor.

In addition to potentially hurting your ad position and therefore click volume, an ROI-only approach to testing ads will actually hurt the ROI itself. If you want to regain the lost volume, you’ll have to increase bids above where many competitors are bidding. That’ll cut into the “better ROI” you temporarily achieved. The house wins. 🙁

Why? Google, in particular, places such heavy weight on keyword CTR in its ad ranking algorithm that you could be hurting keyword Quality Scores if you’re always overfiltering and settling for high-ROI ads that also have poor CTR.

In other words, if optimizing for pure CTR is just plain careless, optimizing for high ROI alone can be the “easy way out” that hurts total profit and also eats into ROI itself. When you’ve refined to the point where you have several contending high-ROI ads, hold out for further discovery so that you can find, among those contenders, the one with the highest possible CTR. I call this a “double win.”

That’s not easy. But your chances of finding one of these are increased if you’ve been through a staged ad testing process and are now doing some kind of proprietary multivariate ad testing. A partial factorial approach to testing from a potential pool of 64 ads, say, gives you a greater chance of stumbling on that “genetic freak” of an ad that just happens to do a little better on both counts than all the rest.
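To make the “pool of 64 ads” concrete: with four headlines, four body lines, and four offers (element names invented here), the full factorial is 4 × 4 × 4 = 64 ads. A partial factorial tests only a balanced subset, such as the 16-ad Latin-square fraction below, where every element still appears equally often.

```python
import itertools

# Hypothetical creative elements: 4 headlines x 4 bodies x 4 offers.
headlines = ["H1", "H2", "H3", "H4"]
bodies = ["B1", "B2", "B3", "B4"]
offers = ["O1", "O2", "O3", "O4"]

# Full factorial: every combination, 64 ads -- too many to test directly.
full_factorial = list(itertools.product(headlines, bodies, offers))

# Partial factorial: a 16-ad Latin-square fraction in which each
# headline, body, and offer appears exactly 4 times.
fraction = [
    (headlines[i], bodies[j], offers[(i + j) % 4])
    for i in range(4)
    for j in range(4)
]
```

This is only one simple fractionation scheme; the point is that a structured quarter of the pool still exercises every element, raising the odds of surfacing that “genetic freak” without running all 64 ads.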

Note: most of the literature you’ll read on multivariate testing applies to landing pages; most advertisers have not yet thought to apply it to the ads themselves. And in ad testing, we have certain advantages if we’re conducting the tests with our own methods or tools. Tools like Google Website Optimizer introduced a “pruning” feature that lets advertisers arbitrarily give up on losing elements or combinations before test completion, to reduce testing time. In ad testing, advertisers can likewise “prune” a losing ad based on judgment, at any time.

Avoid subjectivity where you can

The above brief notes on methodology suggest that we can do a lot for the economics of our campaigns by sticking to a plan and trying various methods and creative theories to achieve superior consumer response. Yet often we don’t get there because we don’t experiment enough: either we’re content to stop too soon, or someone in authority (the famous HiPPOs, the “highest paid person’s opinion”) vetoes an effective ad element because they deem it wrong somehow.

What about other considerations?

To be sure, there may be budget issues, brand or feel considerations, to say nothing of regulatory, seasonal, or factual concerns that stop you from testing everything to the ideal degree.

On the whole, many advertisers have a long way to go before their ad testing strategy is up to par. The first step for many is to ward off the industry-wide CTR-only bias, and the second step is to look again at the importance of boosting CTR if ROI has become the only benchmark for ad performance. Further refinement is possible with focus, patience, and a strong methodology.