by Todd Benson
During the past month, COACHE launched its annual national Faculty Job Satisfaction Survey. It is by far our largest initiative every year, and this year is no exception: we have over seventy institutions in the project, with some campuses as small as seventy-five faculty and others with several thousand. As soon as we launch, I immediately receive emails asking how to track response rates and whether one institution is performing above average compared to others. Before examining those questions, it is worth considering why response rates matter, and perhaps why they should not matter quite as much. Which strategies tend to generate higher response rates? And, perhaps most importantly, how can we use the exercise of driving up response rates to effect real change on your campus?
The True Measure of Meaningful Response Rates
Response rates are important to our partners; however, I would argue that their perceived importance and their actual importance are not quite the same. Faculty and administrators alike believe that a high response rate is the best indicator of how well a survey is performing. It is easy to compare to others, and it tends to be a quick target for critics of survey research in general and an escape clause when the results are difficult to hear.
At COACHE, we strive for high response rates, but a high response rate in and of itself does not answer a question that matters more. What we really want to achieve is representativeness in our results. In other words, when you think about the many voices of your faculty, are they present in your results? That is a trickier question, but one that we consider very important. It is common for a small liberal arts college to see response rates in the 65%-75% range, but it is also common for 65%-75% of faculty at a liberal arts college to be white men. If we want to understand the academic workplace, we need to hear from underrepresented faculty too. Attaining truly representative responses can be an even greater challenge than merely achieving high response rates, but it is of equal, if not greater, importance.
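The gap between response rate and representativeness is easy to see with a quick calculation. A minimal sketch, using entirely hypothetical counts (the group labels and numbers below are illustrative, not COACHE data), shows how a respectable overall rate can coexist with badly skewed respondent shares:

```python
# Hypothetical counts of faculty population and survey respondents by group.
# A healthy overall response rate can mask underrepresentation of some groups.
population = {"white men": 300, "white women": 90, "faculty of color": 60}
respondents = {"white men": 225, "white women": 45, "faculty of color": 18}

total_pop = sum(population.values())
total_resp = sum(respondents.values())
print(f"Overall response rate: {total_resp / total_pop:.0%}")

for group in population:
    rate = respondents[group] / population[group]
    pop_share = population[group] / total_pop      # share of the faculty body
    resp_share = respondents[group] / total_resp   # share of the responses
    print(f"{group}: response rate {rate:.0%}, "
          f"population share {pop_share:.0%}, respondent share {resp_share:.0%}")
```

In this made-up example the overall rate is a respectable 64%, yet faculty of color respond at 30% and make up a much smaller share of the respondents than of the faculty body. Comparing population share to respondent share, group by group, is a simple first check on representativeness.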
COACHE’s Keys to Improving Response Rates and Representativeness
One exercise that COACHE conducted last year was to vary the nature of our reminder messages around three basic themes. One reminder focused on the importance of giving the respondent a voice and sense of agency. Another reminder asked respondents to participate in order to improve the workplace for themselves and their colleagues. A final reminder asked faculty to participate in order to help make their institution a better place to work. These distinctions were made in both the subject line and the text of the reminder. The messages were randomized so that faculty could receive the three reminders in any order. The results showed that the message focused on improving the workplace had a 4%-5% higher response rate when sent as one of the first two reminders and a 2%-3% higher rate as the third reminder.
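For readers running their own surveys, a randomization like this is straightforward to set up. Here is a minimal sketch, assuming each faculty member independently receives the three themes in a shuffled order; the theme labels and function name are ours, not COACHE's actual wording or tooling:

```python
import random

# Shorthand labels for the three reminder themes described above (illustrative).
themes = ["voice and agency", "improve the workplace", "a better institution"]

def assign_reminder_order(faculty_ids, seed=None):
    """Assign each faculty member an independently shuffled order of the themes."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    return {fid: rng.sample(themes, k=len(themes)) for fid in faculty_ids}

orders = assign_reminder_order(["f001", "f002", "f003"], seed=42)
for fid, order in orders.items():
    print(fid, "->", ", ".join(order))
```

Because every respondent sees all three themes, just in different orders, you can compare response rates by which theme arrived first or second, much as the exercise above did.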
One might argue that we should simply use the message with the highest response rate repeatedly, but we take a different approach. We believe that different faculty may respond to different messaging, and that using the same message repeatedly attracts the same type of respondent. For example, the message that drew the highest response rates asks faculty to participate in order to support the institution, but what about faculty who are disengaged or dissatisfied with their institution? Would this message be enticing to them? For our campus partners, the takeaway is that no single message can make the case to all of your faculty.
Applying This Locally
Whether or not you are a COACHE partner, you are most likely surveying faculty, staff, and students on a regular basis. What are the lessons that you can apply in any of your survey work?
For institutions, the primary tool for improving both response rates and representativeness is messaging that demonstrates the data will be used meaningfully. From time to time, we add a few questions to the end of our survey to assess faculty perceptions of surveys themselves, and one of the most telling data points is that more than half of the faculty who complete our entire survey say that results are not an important part of determining campus policy. That attitude about the use of campus data is troubling for a project like COACHE, which aims not only to collect data but to work with institutions on turning the results into policy and practice. So for any data collection project you are engaged in, make sure that the communication strategy includes some, if not all, of the following touchpoints:
- Do not assume that a message from the top is all you need – An email from the Chief Academic Officer can be part of the strategy, but it should not be the only form of communication. This is why COACHE encourages its members to form a team in advance of launch. Hearing about COACHE from deans and chairs can make a real difference, especially at larger, more complex institutions. Just as important, make sure that the team reflects the survey population: when faculty hear from colleagues who will be part of the analysis and dissemination processes, it establishes trust and credibility.
- Convey a long-term plan – Even without results in hand, institutions can give their constituents some sense of how the findings will be put into action. When can respondents expect to see the results? Who will be responsible for analyzing the data? If you have collected this type of data in the past, make sure faculty understand how the prior results affected their working lives.
- Make the survey part of a broader conversation – Communication about survey participation does not always have to come in the form of a request. Consider writing short pieces that describe the challenges that data collection will help solve, and make the survey part of a broader discussion of the issues on your campus.
- Use the entire survey period – Communication from campuses often slows in the last few weeks of survey administration unless response rates are too low. However, panicking about low response rates is not a way to motivate faculty to participate; in fact, it may have the opposite effect. Developing a plan that runs through the last week of survey administration means that you and your team do not have to scramble if response rates dip.
In the end, whether you are focused on a high response rate, a representative sample, or some combination of the two, it all boils down to being thoughtful about the process: build a communication plan that starts with the basics but then works to find its own holes. What are the patterns in non-response? What is the compelling case that will encourage participation? Those are tough questions to ask, but doing so should have an impact not only on response rates but also on your level of faculty engagement.