Last year GivingTuesday and the Association of Fundraising Professionals (AFP) joined forces to support the Fundraising Effectiveness Project (FEP). The FEP has shared quarterly industry benchmarking reports since 2017, with data spanning back to 2005.
Our partnership aims to make the FEP quarterly benchmark report as accurate and representative of giving to nonprofits as possible, and to share deeper insights at a faster cadence.
We began in Q2 by revamping the look and feel of the report to make it easier to understand. These enhancements also included a new online dashboard: https://data.givingtuesday.org/fep-report/.
In 2021, we anticipate updating the report further – sharing new tracking metrics to give visibility into key fundraising tactics (e.g. repeat/recurring gifts, online/offline trends), and further splits on the data (e.g. by organization size and cause area) to make it possible to contextualize results to your cause area or nonprofit.
In the process of digging into the data, we found additional areas for improvement that we want to share as we anticipate additional method changes in 2021. We welcome your thoughts and feedback on what we’re seeing and how we can address them going forward.
We found two core challenges in our methods. Their effects mostly canceled out in 2020, but we can improve the accuracy of our benchmarks by addressing these in 2021.
1. Our Panel Has Different Cause Representations Than Leading Benchmarks
Our benchmarks use a “panel” of organizations that attempts to mirror the distribution of nonprofit sizes in the US (while excluding the largest and smallest organizations, so we can focus on the most stable organizations over the past five years).
We do not, however, try to mirror nonprofit cause categories. In fact, it is hard to find a set of cause benchmarks to try to match against. Looking at one example benchmark, Giving USA’s cause-category breakdowns for overall giving, we see that the FEP data likely:
- Over-represents human services
- Under-represents religion, education, and arts.
In 2020, this bias tends to over-state growth during COVID (and for the entire year), since we over-represented the sector with the largest growth (human services) and under-represented the sector with the largest contraction (arts).
In isolation, this could have biased our metrics, making them less representative of the sector as a whole. Fortunately, that was not the case in 2020: the next issue was as large a factor (if not larger) and counteracted this one. Making a change in 2021 will ensure our metrics are more representative of the sector.
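To make the mechanics concrete, here is a minimal sketch of re-weighting panel growth to an external cause mix. All of the shares and growth rates below are made-up illustrative numbers, not FEP or Giving USA figures, and the cause buckets are simplified:

```python
# Hypothetical sketch of re-weighting panel growth by cause area.
# Shares and growth rates are illustrative assumptions only.

# Share of giving by cause in an external benchmark (target mix)
target_mix = {"human_services": 0.14, "religion": 0.28,
              "education": 0.13, "arts": 0.05, "other": 0.40}

# Share of giving by cause in the panel (over-weights human services)
panel_mix = {"human_services": 0.30, "religion": 0.15,
             "education": 0.08, "arts": 0.02, "other": 0.45}

# Year-over-year growth observed in the panel, per cause
panel_growth = {"human_services": 0.12, "religion": 0.02,
                "education": -0.01, "arts": -0.15, "other": 0.03}

def weighted_growth(mix, growth):
    """Aggregate growth as a mix-weighted average of per-cause growth."""
    return sum(mix[c] * growth[c] for c in mix)

naive = weighted_growth(panel_mix, panel_growth)     # panel-weighted
adjusted = weighted_growth(target_mix, panel_growth) # benchmark-weighted
print(f"panel-weighted growth:     {naive:+.3f}")
print(f"benchmark-weighted growth: {adjusted:+.3f}")
```

With these assumed numbers, the panel-weighted figure comes out higher than the benchmark-weighted one, which is the direction of bias described above: over-weighting the fastest-growing cause inflates the aggregate.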
2. Nonprofit Data Reporting Is Slow, Causing Our Metrics to Increase Post-Publication
We know that we continue to receive data after we publish reports. We attempted to measure the amount of change in two ways:
- Measure how much data is added in each data dump we receive for months we have already reported on, and how that affects results.
- Measure how much our metrics have changed since we published them (note: this includes method changes as well as “data drift”).
What we found is clear:
- In 2020 Q3 and Q4, data for earlier 2020 months (Q1 and Q2) moved more than data from previous years (2019 and 2018). This will cause our metrics to increase over time.
- In comparing current methods and data to what we published previously, we also see increases for 2018 and 2019.
It is hard to quantify data movement precisely: we have only been tracking data reporting trends since 2020, and changes relative to previous publications also include method changes. Our current estimate is that top-line donation growth (total dollars) could shift by 5 to 10 percentage points in Q1 and Q2 reports (as organizations are slower to report results early in the year), and by 2 to 5 percentage points in Q4 (when organizations complete reporting more quickly because many fiscal years close in December).
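The first measurement approach above can be sketched in a few lines: recompute a previously published period's total after each new data dump and track the cumulative revision. The dump dates and dollar figures below are made-up illustrative values, not actual FEP data:

```python
# Hypothetical sketch of tracking post-publication "data drift":
# recompute a reported quarter's total after each new data dump and
# record the revision versus the published figure. Figures are made up.

# Total dollars attributed to 2020 Q1 as seen in successive data dumps
q1_totals_by_dump = {
    "2020-04 dump": 10_000_000,  # value at publication
    "2020-07 dump": 10_600_000,
    "2020-10 dump": 10_900_000,
}

published = q1_totals_by_dump["2020-04 dump"]
for dump, total in q1_totals_by_dump.items():
    drift_pct = (total - published) / published * 100
    print(f"{dump}: ${total:,} ({drift_pct:+.1f}% vs publication)")
```

In this toy series, the Q1 total drifts upward by 9 percent over two subsequent dumps, the same direction of movement (published figures revising upward) that we observe in practice.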
By accounting for this drift, our reports will better reflect trends in Q1 and Q2, and our readers will be able to use our metrics with confidence. Our goal is to account for this in our 2021 Q2 report, with an update in Q1 to better quantify the issue and how we can address it.
What This Means for 2020 Reporting
Fortunately for 2020, the two effects counteract each other:
- We over-report growth because our panel is biased towards human services, which saw the largest spike during COVID.
- We under-report growth due to not having received all Q1 and Q2 data.
Given that the effects counteract each other, and that more work is needed to quantify each one accurately, it is not yet clear whether we are over-reporting or under-reporting for 2020.
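The cancellation argument is simple arithmetic, which a toy calculation makes explicit. The bias magnitudes below are illustrative assumptions, not measured FEP values:

```python
# Toy arithmetic showing how two opposite-signed biases can roughly
# cancel. All magnitudes are illustrative assumptions.

true_growth = 0.03       # hypothetical "true" sector-wide growth
cause_mix_bias = +0.02   # panel over-weights high-growth causes
late_data_bias = -0.025  # missing late-reported gifts understate growth

reported_growth = true_growth + cause_mix_bias + late_data_bias
net_bias = reported_growth - true_growth
print(f"reported growth: {reported_growth:+.3f} (net bias {net_bias:+.3f})")
```

With these assumed magnitudes the net bias is small, but its sign depends entirely on which effect is larger, which is why we cannot yet say whether 2020 was over- or under-reported.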
What This Means for 2021 Reporting
While we’d prefer to make all updates in Q1, we want to take time to make the most accurate changes possible. For these reasons, we anticipate making the following changes in 2021:
- End of May, 2021: Q1 2021 report – We anticipate having a panel update and new tracking metrics in place for Q1 2021, including reporting on nonprofit causes. We won’t yet address the data challenges above, but you’ll get deeper insights, so if data by cause area matters to you, you’ll be able to pull the most relevant numbers.
- End of July, 2021: Q2 2021 report – We anticipate updating our panel methods to account for cause area and updating our methods for handling post-publication data drift. We want to make these changes together because, currently, they counterbalance each other.
We know that you place a lot of trust in our benchmarks, and hope this process of deeper understanding and revision will lead you to trust us even more. We hope this summary helps contextualize the current benchmarks for you, and leaves you looking forward to our upcoming changes!
For further questions, concerns, and comments, please reach out to us at email@example.com