March 30, 2015, 7:53 p.m. in Case Studies by Adam Lofting
In our line of work, we love the stories that bring to life the opportunity of increasing conversion rates and raising more money for our important causes.
Here's a personal favorite of mine.
How a 1 hour split test raised over $100,000 for WWF
While I was working at WWF, we were raising funds for an emergency tiger campaign. Tiger adoptions were typically promoted at an asking price of $5 per month, but online we also offered a one-time payment option starting at $60 to adopt a tiger for a year.
Regular donations were prompted at:
$4 or $7 or $15 or $Other (where other is a free input box)
One-off payments were prompted at:
$48 or $84 or $180 or $Other (i.e. the regular monthly amounts multiplied by 12)
The majority of our gifts were regular donations, but when we looked at the data for the one-off adoption payments, the vast majority of donors were selecting the $48 radio button. Suspecting that the gaps at this scale made the increments look too dramatic to a typical donor ($48 to $84 feels like a bigger jump than $4 to $7), we tested donation prompts of $60, $75 and $90 for the one-off amounts. A user could still donate the minimum amount of $48 using the "Other" input box.
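A quick sanity check on that suspicion: the one-off ladder preserves exactly the same ratios as the monthly one, but the absolute dollar gaps are twelve times larger, which is plausibly what made the jumps feel dramatic. A minimal sketch:

```python
# Compare the monthly and one-off prompt ladders described above.
monthly = [4, 7, 15]
one_off = [a * 12 for a in monthly]  # [48, 84, 180]

for label, ladder in (("monthly", monthly), ("one-off", one_off)):
    for lo, hi in zip(ladder, ladder[1:]):
        print(f"{label:7} ${lo:>3} -> ${hi:>3}: gap ${hi - lo}, ratio {hi / lo:.2f}")

# The ratios are identical (1.75 and 2.14 for both ladders), but the
# dollar gaps are twelve times larger: $3 and $8 become $36 and $96.
```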
By the standards of online optimization tests, this one was easy to run. It took less than an hour of work to set up the test, analyze the results and change our site when it was finished.
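The analysis step can be as simple as comparing the average gift in each arm of the test. Here is a minimal sketch using simulated gift amounts (the real WWF data isn't published, so these numbers are illustrative only) and Welch's t-statistic for the difference in means:

```python
import random
import statistics

# Illustrative only: simulated one-off gifts for a control arm (old
# prompts, skewed towards the $48 minimum as the article describes)
# and a variant arm (new prompts). Not real WWF data.
random.seed(0)
control = random.choices([48, 84, 180], weights=[8, 1, 1], k=500)
variant = random.choices([60, 75, 90], k=500)

def welch_t(a, b):
    """Welch's t-statistic for a difference in sample means."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

print(f"control mean ${statistics.mean(control):.2f}, "
      f"variant mean ${statistics.mean(variant):.2f}, "
      f"t = {welch_t(variant, control):.2f}")
```

In practice an A/B testing tool handles this for you; the point is only that the statistic behind "did the average gift go up?" is straightforward.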
The new price-prompts for the one-off donation amounts resulted in a $5.27 increase to the average gift value for one-off adoptions. This was a $100,000 increase to annual income. A sum like that could restore 1,360 hectares of grassland habitat where tigers live and hunt their prey.
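The article doesn't state how many one-off adoptions were made each year, but the two published figures imply a rough volume. A back-of-the-envelope check:

```python
# Back-of-the-envelope check on the two figures quoted above.
avg_gift_increase = 5.27      # $ increase in average one-off gift
annual_income_gain = 100_000  # $ quoted annual income increase

# Implied number of one-off adoption gifts per year. This volume is
# not stated in the article; it is just what the figures imply.
implied_gifts = annual_income_gain / avg_gift_increase
print(f"~{implied_gifts:,.0f} one-off gifts per year")  # ≈ 18,975
```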
For an hour of techy/webby work, that's a significant contribution to make to the organizational mission, and understandably it's a story people like to hear. Everyone from your supporters through to your board of trustees will enjoy this sort of story, so it makes sense that we love and share them. I've shared this one many times.
The risks of sharing simple split-test wins
But, as with all the stories we share there is a risk. Most of our news today is filled with riveting tales that lack the information we actually need to make informed decisions. When we share our conversion rate optimization stories, let's be careful not to leave out the information that people really need to replicate the success.
I thought it might be useful to tell you about the story behind the story above, and behind every optimization test I have run.
The story behind the story
At WWF, the price prompts on our donation asks were sacrosanct. At least that's how they felt to me when I started working there. They were the product of years of fundraising learning and you'll notice they fit with the style of most fundraising asks from most charities. They are like that because they work. On my first day working there, if I'd suggested changing these price prompts, the answer would have been no. Not that I had any inkling at that point that this was a good test to run.
This optimization test happened because it had the support of the fundraising team, the web team and the relevant management. It had this support because we had been running lots of smaller tests over the preceding year. And we'd been running them well. We had been documenting and sharing our process and our rationale, and building organizational support with more and more significant tests.
Why we were able to run this test
It was only because we had good data on our existing donation amounts that we could identify this opportunity for improving one-off gift amounts. Weeks, months or maybe even years of work by many people had to happen before the one hour of work that gave us the great web-testing story above.
I can put my name to bits of this organizational development, and some of the new processes we set up, but it's not my work overall. The real credit goes to the person who managed to include the words "multivariate testing and optimization" in the job description when they were recruiting for my role. This was my permission to test things.
Sometimes I had to fight for the time to test, because although the value was understood by the people who recruited for the role, testing was only part of my job. Big and urgent projects often trump slow and important ones.
The secret to building a testing culture
By continuing to document results and to share them with the wider team, the process slowly shifted. I went from fighting for the time to run tests, to having so many requests for things to test that we had a significant backlog of ideas.
All in all, it took at least a year of work including technical, process and internal negotiating to reach the point where a one-hour tweak to the website could increase our annual income by £68,000 (the $100,000 figure above). I hope that's actually a useful story for you to hear about optimizing your website.
I don't want to belittle individual case studies; as this story about spending £200 per year to keep your towels warm shows, a good story can be a catalyst for change. But the best use of individual testing stories is to normalize the ongoing process with the rest of your organization.
The real first step to split testing
To optimize your fundraising conversion rate, don't start by worrying about the colour of your call-to-action button. Instead, do this:
- Make testing a part of someone's job. In writing. In the job description.
- Set specific goals for the number of tests to run each year.
- Document everything, or counter-intuitive findings will be undone by well-meaning others.
- Make your results freely and easily available to anyone in the organization. This is science and you need to be peer-reviewed.
- Review your tests regularly (weekly or fortnightly) and continually plan what you will test next.
If you do all that, you'll be able to answer your own questions about what to test and what tools to use.
Adam Lofting is the Digital Analyst at WWF