Looking Beyond Response Rate

You’ve heard us say it before: direct mail fundraising can be expensive. So, at Allegiance Fundraising Group, we are always looking for ways to improve performance in the mail.
It is easy to review a single acquisition direct mail campaign’s performance report by list and spot the winners and losers at a glance. A simple rank ordering quickly shows which lists responded above the campaign’s overall rate and which fell below that average. At the time of a campaign, it is also easy to know its ins and outs, the ‘mechanics’ of the campaign, the things that may have boosted or depressed response overall. But with the passage of time, you need campaign performance to stand on its own, without all of the exceptions and asterisks.
Because we want to build a deeper history for each list and increase our confidence in list planning, we need to know the relative performance of each list for each of its “at-bats”. A list’s performance in any given campaign is relative to that campaign, so comparative metrics are critical. A very weak campaign will drag down the response rate of every list you chose for that particular mailing, creating a false impression of a list’s ability to generate response. Similarly, when you hit the ball out of the park with a new package or a tweaked offer, any list that was lucky enough to be part of that campaign carries an unfairly elevated performance into its average response rate.
Let’s look at an example:
Marine Life International (a fictional organization) mailed the donor list of National Whale-Saving Alliance (NWSA, another fictional organization) in its spring campaign. The list responded at 0.75%.
Dedicated Dolphin Rescue (also fictional) mailed the NWSA donor list for its spring campaign as well, and the list likewise responded at 0.75%.
NWSA is Marine Life International’s very best list in the campaign, yet it is Dedicated Dolphin Rescue’s very worst. How is that possible? Because Marine Life International’s campaign responded at 0.52% overall, while Dedicated Dolphin Rescue’s campaign averaged 1.25%. A list’s index is simply its response rate divided by the campaign average, multiplied by 100. For Marine Life International, the list is a winner, indexing at 144. For Dedicated Dolphin Rescue, it’s a loser, indexing at only 60, producing just 60 percent of what the campaign averaged.
A list that appears to respond identically would certainly be treated very differently when these two organizations plan their acquisition mail list buys.
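If it helps to see the arithmetic, here is a minimal Python sketch of the index calculation. The function name and variables are our own, purely for illustration:

```python
def response_index(list_rate: float, campaign_rate: float) -> int:
    """Index a list's response rate against its campaign's average.

    100 means the list matched the campaign average; above 100
    beats it, below 100 trails it.
    """
    return round(list_rate / campaign_rate * 100)

# NWSA's identical 0.75% response looks very different in each campaign:
print(response_index(0.75, 0.52))  # 144 -> a winner for Marine Life International
print(response_index(0.75, 1.25))  # 60  -> a loser for Dedicated Dolphin Rescue
```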
To establish an even richer understanding of a list’s propensity to respond well, we can then begin to stack up each of these “at-bats” and look for patterns. If a list beats the campaign average nine times out of ten, a case can be made for taking more quantity than you normally would on a single list. By the same token, a list that performs below the campaign average just as often as it beats it isn’t one you can depend upon.
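As a hypothetical illustration of stacking those at-bats, the sketch below (the history and all its numbers are made up) tallies how often a list beats its campaign average:

```python
# Hypothetical at-bat history for one rented list:
# (list response rate %, campaign average response rate %), one pair per campaign.
at_bats = [(0.75, 0.52), (0.90, 0.80), (0.60, 0.95), (1.10, 0.85)]

# Index each at-bat against its own campaign, then count the wins.
indices = [round(lst / avg * 100) for lst, avg in at_bats]
wins = sum(1 for idx in indices if idx > 100)

print(f"Indices by campaign: {indices}")   # [144, 112, 63, 129]
print(f"Beat the campaign average in {wins} of {len(at_bats)} at-bats")  # 3 of 4
```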
In summary, don’t look at response rate and average gift in a vacuum. Make your metrics work for you. Taking more time to evaluate performance gives you greater confidence in list selection and minimizes your risk in new donor acquisition. Your fundraising program will thank you.