So, we’re well on our way into 2015 and it’s time to look back on what we’ve achieved so far this year. New digital strategy, check… New programmatic targeting methods, check… Account expansion, check… Reviewed ad copy and variant tested…

More often than not we find ourselves jumping in to test new features released by the leading platforms, while neglecting the foundations of our accounts.

What better time to re-visit our ad copy?

We know that the ad copy we write depends on many things, for example, the ad group's keywords and the intent of the user we're targeting. The compelling ad copy we wrote initially was great, and we've seen the fantastic results it's delivering, but how do we know that we couldn't write better ads? That we couldn't push CTR up just that little bit more?

Re-writing our ad copy is one of the simplest changes we can make in an account and a sure-fire way to improve performance, yet we typically concentrate on every other aspect of the account and overlook it.

There are numerous ways to improve ad copy. It can take time to craft excellent ads from scratch, but it’s well worth noting that creating variants of our top performing ads can deliver the effects we desire.

Here’s an ad that’s performing well:

Alaska Holiday 2015
Experience True Wilderness
Plan Your Alaska Adventure Today!


We know that the ad performs well, so we’re not looking to write entirely new copy. Instead, let’s break the ad down into its headline, description-one, description-two and display-URL.

  • We could exchange the headline ‘Alaska Holiday 2015’ for ‘Alaskan Holidays 2015’ or ‘Holidays In Alaska – 2015’ while preserving meaning.
  • We could switch desc1, ‘Experience True Wilderness’, for a close alternative (such as ‘Experience Alaskan Wilderness’, ‘Experience Authentic Wilderness’), or for a different phrase (for example, ‘Stunning Scenery, Guided By Experts’ or ‘Personal service – expert advice’).
  • The call-to-action in desc2, ‘Plan Your Alaska Adventure Today!’, could be switched for a different one (such as ‘Plan Your Alaska Holiday Today!’, ‘Call & Book Your Alaska Trip Now!’ or ‘Call Us for Your Tailormade Tour’).
  • The display URL could be swapped for an alternative.
  • Also, in this instance, the desc1 and desc2 lines could be exchanged with one another, or two dissimilar descriptions could be used instead of a description and a CTA.

We can then take these elements and group them by function:

Headlines:

Alaska Holiday 2015
Alaskan Holidays 2015
Holidays In Alaska – 2015

Descriptions:

Experience True Wilderness
Experience Alaskan Wilderness
Experience Authentic Wilderness
Stunning Scenery, Guided By Experts
Personal service – expert advice

Call to actions:

Plan Your Alaska Adventure Today!
Plan Your Alaska Holiday Today!
Call & Book Your Alaska Trip Now!
Call Us for Your Tailormade Tour

Display URLs:

These can be arranged in three ways:

Headline – Description – Call to Action – Display URL
Headline – Call to Action – Description – Display URL
Headline – Description – Description – Display URL


Just from these suggestions, hundreds of combinations of the original ad are possible. Testing all of them manually would be extremely impractical.
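To get a sense of the scale, here's a quick Python sketch that enumerates the variants above across the three arrangements. The two display URLs are placeholders of my own (the originals aren't listed), so the exact total depends on how many real display URLs you include:

```python
from itertools import product, permutations

headlines = ["Alaska Holiday 2015", "Alaskan Holidays 2015",
             "Holidays In Alaska - 2015"]
descriptions = ["Experience True Wilderness", "Experience Alaskan Wilderness",
                "Experience Authentic Wilderness",
                "Stunning Scenery, Guided By Experts",
                "Personal service - expert advice"]
ctas = ["Plan Your Alaska Adventure Today!", "Plan Your Alaska Holiday Today!",
        "Call & Book Your Alaska Trip Now!", "Call Us for Your Tailormade Tour"]
# Placeholder display URLs: the article's originals aren't shown.
display_urls = ["example.com/Alaska", "example.com/Alaska-2015"]

# Arrangement 1: Headline - Description - CTA - Display URL
arr1 = list(product(headlines, descriptions, ctas, display_urls))
# Arrangement 2: Headline - CTA - Description - Display URL
arr2 = list(product(headlines, ctas, descriptions, display_urls))
# Arrangement 3: Headline - Description - Description - Display URL
# (two different descriptions; order matters, so use permutations)
arr3 = [(h, d1, d2, u) for h in headlines
        for d1, d2 in permutations(descriptions, 2)
        for u in display_urls]

total = len(arr1) + len(arr2) + len(arr3)
print(total)
```

With these pools the count already runs into the hundreds, and every extra element multiplies it further.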

Starting with a smaller selection of variables and changing them manually takes time, and it's difficult to keep track of ad testing at scale: different campaigns get different amounts of traffic, so tests would run for different lengths, and different campaigns would be at different stages, testing different elements.

Scripting could be used to automate the process, making it simpler and more efficient.

The testing process would be:

  1. Organise the elements into groups so that they can be combined
  2. Start with a selection of ads
  3. Run the ads for long enough to make statistical conclusions
  4. Rank the ads according to results (ties are allowed where performance is not significantly different).
  5. When an ad outranks another, look at which elements differed. Each differing element in the losing ad(s) gets a negative score; each in the winning ad(s) gets a positive score (every element has a record of its scores in a Google Doc the script can access).
  6. Start a new test with the winners and new ads (automatically generated from the list of elements, giving preference to elements which are untested or have previously won).
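To make step 5 concrete, here's a minimal Python sketch of the element-scoring logic. A real AdWords script would be written in JavaScript and would read and write its scores from a spreadsheet; the slot names and data shapes here are my own illustration:

```python
from itertools import combinations

SLOTS = ["headline", "desc1", "desc2", "display_url"]

def update_scores(ranked_ads, scores):
    """Compare every pair of ads from a finished test and score elements.

    ranked_ads: list of (rank, ad) pairs, best rank first; ads whose
    performance is not significantly different share the same rank.
    Elements that differ between a winner and a loser get +1 in the
    winning ad and -1 in the losing ad. Ties score nothing.
    """
    for (rank_a, ad_a), (rank_b, ad_b) in combinations(ranked_ads, 2):
        if rank_a == rank_b:
            continue  # tie: performance not significantly different
        winner, loser = (ad_a, ad_b) if rank_a < rank_b else (ad_b, ad_a)
        for slot in SLOTS:
            if winner[slot] != loser[slot]:
                scores[winner[slot]] = scores.get(winner[slot], 0) + 1
                scores[loser[slot]] = scores.get(loser[slot], 0) - 1
    return scores

# Example: two ads that differ only in their headline.
scores = {}
ranked = [
    (1, {"headline": "Alaskan Holidays 2015",
         "desc1": "Experience True Wilderness",
         "desc2": "Plan Your Alaska Holiday Today!",
         "display_url": "example.com"}),
    (2, {"headline": "Alaska Holiday 2015",
         "desc1": "Experience True Wilderness",
         "desc2": "Plan Your Alaska Holiday Today!",
         "display_url": "example.com"}),
]
update_scores(ranked, scores)
```

Because only differing elements are scored, the shared description, CTA and display URL stay neutral while the two headlines pick up +1 and -1 respectively.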

There are multiple options for the way to generate new ads:

  • Choose a ‘slot’ to test (headline, description line 1, description line 2, display URL, or the arrangement). Take the winning ads and vary the elements in that slot.
  • Take the winning ads and create variants by switching an element in each ‘slot’. (This has the problem that it would be harder to compare the performance of each element on its own).
  • Take the winning ads and create variants by switching an element in each ‘slot’. From this determine which slot makes the most difference, and in the next round test ads that vary only in that slot.
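The first option, varying a single chosen slot while holding everything else fixed, might look like this in Python (the function and pool names are illustrative, not from any real script):

```python
import random

def variants_for_slot(winning_ad, slot, pool, limit=3):
    """Keep the winning ad fixed and vary only one chosen slot.

    winning_ad: dict mapping slot name -> element text.
    pool: candidate elements for that slot.
    Returns up to `limit` new ads differing from the winner only in `slot`.
    """
    alternatives = [e for e in pool if e != winning_ad[slot]]
    random.shuffle(alternatives)  # prefer variety when the pool is large
    variants = []
    for element in alternatives[:limit]:
        ad = dict(winning_ad)  # copy so the winner is left untouched
        ad[slot] = element
        variants.append(ad)
    return variants

winner = {"headline": "Alaska Holiday 2015",
          "desc1": "Experience True Wilderness",
          "desc2": "Plan Your Alaska Adventure Today!"}
headline_pool = ["Alaska Holiday 2015", "Alaskan Holidays 2015",
                 "Holidays In Alaska - 2015"]
new_ads = variants_for_slot(winner, "headline", headline_pool)
```

Because everything except the tested slot is held constant, any performance difference between the variants can be attributed to that slot alone.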

When enough data has built up, it may be possible to look at the performance of pairs of elements as well as individual elements.
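One simple way to sketch that pair analysis, tallying wins and losses for every pair of elements that appeared together in an ad (the tally structure is my own illustration):

```python
from collections import Counter
from itertools import combinations

def record_pairs(ad_elements, won, pair_tally):
    """Tally a win (+1) or loss (-1) for each unordered pair of elements."""
    for pair in combinations(sorted(ad_elements), 2):
        pair_tally[pair] += 1 if won else -1
    return pair_tally

tally = Counter()
record_pairs(["Alaskan Holidays 2015", "Experience True Wilderness"],
             True, tally)
record_pairs(["Alaska Holiday 2015", "Experience True Wilderness"],
             False, tally)
```

Over many tests, pairs with a consistently positive tally suggest elements that work well together, even if neither stands out on its own.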

As an alternative, a genetic algorithm could be used:

  1. Start with a set of completely different ads
  2. Test the ads
  3. Drop the losers
  4. Generate a new generation of ads from the winners – combine the winners’ elements (crossover), and randomly change a few elements (mutation)
  5. Repeat from stage 2 with the new generation of ads
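The crossover and mutation stages can be sketched in a few lines of Python. This is a toy illustration with made-up element pools, not a production optimiser:

```python
import random

SLOTS = ["headline", "desc1", "desc2"]

# Hypothetical element pools used for mutation.
POOLS = {
    "headline": ["Alaska Holiday 2015", "Alaskan Holidays 2015",
                 "Holidays In Alaska - 2015"],
    "desc1": ["Experience True Wilderness", "Experience Alaskan Wilderness",
              "Stunning Scenery, Guided By Experts"],
    "desc2": ["Plan Your Alaska Adventure Today!",
              "Plan Your Alaska Holiday Today!",
              "Call & Book Your Alaska Trip Now!"],
}

def crossover(parent_a, parent_b):
    """Build a child ad by taking each slot from one parent at random."""
    return {slot: random.choice([parent_a[slot], parent_b[slot]])
            for slot in SLOTS}

def mutate(ad, rate=0.1):
    """With probability `rate` per slot, swap in a random pool element."""
    ad = dict(ad)
    for slot in SLOTS:
        if random.random() < rate:
            ad[slot] = random.choice(POOLS[slot])
    return ad

def next_generation(winners, size=8):
    """Stage 4: breed a new generation of ads from the surviving winners."""
    return [mutate(crossover(*random.sample(winners, 2)))
            for _ in range(size)]

winners = [
    {slot: POOLS[slot][0] for slot in SLOTS},
    {slot: POOLS[slot][1] for slot in SLOTS},
]
generation = next_generation(winners)
```

A genetic approach trades the clean per-element attribution of the scoring method for faster exploration of the combination space, which may suit accounts with plenty of traffic.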

Identifying your ‘champion’ ad copy and generating variations for testing is one of the lowest-hanging fruits. In any account there is opportunity to try out new ad variants; no two ad groups perform the same, and ad copy performance will vary between them. By following the simple methods above we can ensure that we are constantly mining for the best of the best ad copy.

Have you tried this already? What lifts (or perhaps drops) in performance have you seen? Or maybe you’re testing your copy in a different way? I’d love to hear from you. Leave your comments below.