by Kiernan Mathews, Todd Benson, Sara Polsky, and Lauren Scungio
Since 2005, the Collaborative on Academic Careers in Higher Education’s (COACHE) Faculty Job Satisfaction Survey has been systematically listening to faculty and, campus by campus, revealing inequities in the faculty experience. The survey results illuminate disparities in perceptions about the academic workplace between faculty of different racial and ethnic backgrounds, and also demonstrate, amid a nationwide conversation about inclusion, that white faculty’s perception of diversity and inclusion efforts on campus still outpaces genuine progress.
This post offers some communications guidance for COACHE partners with three weeks remaining in the Faculty Job Satisfaction Survey period. We understand that many other priorities are in play right now: these points are offered strictly to reduce your cognitive load, not to add to it. As mentioned in Kiernan’s update last week, you do not have to do any more work.
With the sudden escalation in both public concern and genuine risk associated with COVID-19, COACHE’s team has been discussing the implications for faculty. As we discussed the issue, an important question arose: “Who might we be forgetting?” For us, the answer was part-time faculty. As administrators grapple with how to handle their institutions’ response to this global pandemic, here are some thoughts about why part-time faculty are an important consideration in these discussions and some questions that institutions might consider in their planning.
Today, higher education institutions are using more and more data to drive decision-making. That is a good thing; however, when leaders need to include faculty in decision-making and execution, offering the right amount of data is of paramount importance. Faculty are, by their training, critical consumers of data, and not sharing everything could be interpreted as the administration spinning the results. On the other hand, sharing every single data point can lead to an unfocused discussion. It’s no surprise, then, that one of the most frequent and challenging questions that comes up as we work with our partners is, “How much of the report should we share with faculty?” Our standard answer is “Everything!” but full transparency presents its own set of challenges and concerns.
Recently I had the chance to speak with Berit Gundersen at the University of the Pacific. As the Associate Provost for Faculty Affairs, Gundersen led the initiative to bring the COACHE Faculty Job Satisfaction Survey to the University of the Pacific in 2014. In our discussion, she mentioned that even years later, faculty and administrators are still discussing the results, including during a dinner at the President’s home this past spring. It made me wonder what aspects of their approach allowed the work to sustain itself for so long, so Berit and I dug in to try to understand what worked. Some themes emerged that might be valuable for other institutions that wish to engage their faculty in data-driven discussions.
As we near the end of our survey administration cycle, our team has begun working on institutional reports. The reports that COACHE provides are distinctive in that they allow our partners to select five peer institutions for more direct comparison. That kind of nuanced comparison can be both a blessing and a curse for our partners. The benefit of having near-peer comparators comes from the ability to see beyond the national benchmarking: they provide context that helps explain why your institution may over- or under-perform in the national landscape. The challenge with this sort of data is that it can create an opportunity to dismiss the findings when the peers aren’t “perfect.” In truth, there are no perfect peers; there are choices with consequences that need to be considered and communicated throughout the rollout process. To that end, I wanted to share some thoughts about peer selection based on my experience with our partners.
During the past month, COACHE launched its annual national Faculty Job Satisfaction Survey. It is by far our largest initiative every year, and this year is no exception. We have over seventy institutions in our project, with some campuses as small as seventy-five faculty and others with several thousand. As soon as we launch, I immediately receive emails wanting to know how to track response rates and whether one institution is performing above average compared to others. Before examining these questions, it is important to consider why response rates matter and, perhaps, why they should not matter quite as much. Which strategies tend to generate higher response rates? And, perhaps most importantly, how can we use the exercise of driving up response rates to effect real change on your campus?