Tuesday 15 August 2017

Make the National Student Survey Great Again!

The NSS data came out last week. This year there’s a new set of questions – some are the same as in previous surveys, some are amended versions of previous questions, and some are entirely new. This means that year-on-year comparisons need to be treated with a little caution.

But one aspect of the reporting continues to bother me. The survey asks final-year undergraduates how far they agree with a number of statements – for instance, “Overall, I am satisfied with the quality of the course” – on a Likert scale: that is, a 1-5 scale, where 1 = definitely disagree; 2 = mostly disagree; 3 = neither agree nor disagree; 4 = mostly agree; and 5 = definitely agree. The data is presented by simply summing the percentages who respond 4 or 5 to give a ‘% agree’ score for every question at every institution. Which in turn means universities can say “93% satisfaction” or whatever it might be.

This is simple and straightforward, but it loses important information which could be captured by using a GPA (Grade Point Average) approach – just as the HE sector commonly does elsewhere, for instance in reporting REF outcomes. Using a GPA, the overall score for a question reflects the proportions giving each of the five different responses.

To calculate a GPA, there’s a simple sum:

GPA = [ (% saying ‘1’ x 1) + (% saying ‘2’ x 2) + (% saying ‘3’ x 3) + (% saying ‘4’ x 4) + (% saying ‘5’ x 5) ] / 100

This gives a number between 1 (if all respondents definitely disagreed) and 5 (if all respondents definitely agreed).
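If you want to reproduce the sums yourself, here’s a minimal sketch in Python – the function name and layout are my own choice, but it just implements the formula above, with percentages on a 0-100 scale:

def gpa_from_percentages(pct):
    # pct maps each Likert response (1 to 5) to the percentage of
    # respondents giving that answer; the percentages should sum to 100.
    return sum(value * pct.get(value, 0) for value in range(1, 6)) / 100

# All respondents 'definitely agree' gives the maximum GPA of 5.0
print(gpa_from_percentages({5: 100}))
# All respondents 'definitely disagree' gives the minimum GPA of 1.0
print(gpa_from_percentages({1: 100}))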

If GPA were used for the reporting, users would still see a single number, but it would carry more nuance. GPA reflects how strongly people agree or disagree, not just the proportion who are positive. And this matters.

I looked at the raw data for all 457 teaching institutions in the 2017 NSS. (This is not just universities but also FE colleges – which work with universities to provide foundation years, foundation degrees and top-up degrees – and alternative providers.) I calculated the agreement score and the GPA for every teaching institution for question 27: Overall, I am satisfied with the quality of the course. And then I rank-ordered the institutions using each method.

What this gives you are two ordered lists, each with 457 institutions in it. Obviously, in some cases institutions get the same score; where this happens, they all share the same rank. An institution’s rank reflects the number of institutions above it in the rank order.

So, for example, on the ‘agreement score’ method, 27 institutions scored 100%, the top score available on this method, so they are all in joint first place. One institution scored 99%, so it is placed 28th. Similarly, on the GPA ranking, one institution scored 5.00, the top score using the GPA method. The next highest score was 4.92, which two institutions got, so those two are both joint second.
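If you’re doing this in code rather than Excel, the tie-handling described above is what’s sometimes called competition ranking: an institution’s rank is one more than the number of institutions with a strictly better score. A rough Python sketch, using made-up scores rather than the real NSS figures:

def competition_ranks(scores):
    # Rank scores (higher is better) so that tied scores share a rank
    # and each rank is one plus the number of strictly higher scores.
    return [1 + sum(other > score for other in scores) for score in scores]

# Three institutions on 100% and one on 99%: the three share first place
# and the fourth is ranked 4th – the same logic as the 27-at-100% case above.
print(competition_ranks([100, 100, 100, 99]))   # [1, 1, 1, 4]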

What I did next was compare the rank orders, to see what difference it made. And it makes a big difference! Take, for example, the Anglo-European College of Chiropractic. Its 100% score on the ‘agreement score’ method puts it in joint first place. But its GPA of 4.39 places it in joint 79th place. In this instance, its responses were 61% ‘mostly agree’ and 39% ‘definitely agree’. Very creditable. But clearly not as overwhelmingly positive as Newbury College, which with 100% ‘definitely agree’ was joint 1st on the agreement score method and also in first place (and all on its own) on the GPA measure.
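Feeding the two splits quoted above into the GPA sketch from earlier reproduces exactly those figures:

# Anglo-European College of Chiropractic: 61% 'mostly agree', 39% 'definitely agree'
print(gpa_from_percentages({4: 61, 5: 39}))   # 4.39
# Newbury College: 100% 'definitely agree'
print(gpa_from_percentages({5: 100}))         # 5.0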

The different measures can lead to very significant rank-order differences. The examples I’m going to give relate to institutions lower down the pecking order. I’m not into naming and shaming, so I won’t say which ones (top tip – the data is public, so if you’re really curious you can find out for yourself with just a bit of Excel work), but take a look at these cases:

Institution A: With a score of 87% on the agreement score method, it is ranked 138/457 overall: just outside the top 30%. With a GPA of 3.95, it is ranked 349/457: in the bottom quarter.

Same institution, same data. 

Or try Institution B: with an agreement score of 73% it is ranked 382/457, putting it in the bottom one-sixth of institutions. But its GPA of 4.28 places it at 129/457, well within the top 30%.

Again, same institution, same data.

In the case of Institution A, 9% of respondents ‘definitely disagreed’ with the overall satisfaction statement, which dragged the GPA down. Nearly one in ten students were definitely not satisfied overall.

In the case of Institution B, no students at all disagreed that they were satisfied overall (although a decent number, more than a quarter, were neutral on the subject). This means its GPA was relatively high, while its agreement score was dragged down by the non-committal quarter.

I’m not saying that Institution A is better than B or vice versa. It would be easy to argue that the 9% who definitely disagreed simply had a bad experience in one class, unlikely to be repeated. Or that the 27% who were non-committal indicated a lack of enthusiasm. Or that the 9% who definitely disagreed were a worrying rump who were being ignored. But what I am saying is that we’re doing a disservice by not making it easier for applicants to access a more meaningful picture.
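The underlying point is that the agreement score throws away the shape of the distribution. Two purely hypothetical response profiles of my own invention – not Institution A or B, whose full splits I’m not quoting – make this concrete: both score 85% on the agreement measure, yet their GPAs are far apart.

# Hypothetical profile 1: 15% definitely disagree, 85% mostly agree
print(gpa_from_percentages({1: 15, 4: 85}))   # 3.55
# Hypothetical profile 2: 15% neutral, 85% definitely agree
print(gpa_from_percentages({3: 15, 5: 85}))   # 4.7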

The whole point of the National Student Survey is to help prospective students make judgements about where they want to study. By using a simple ‘agreement’ measure, the HE sector is letting them down. Without any more complexity we can give a more nuanced picture, and help prospective students. It’ll also give a stronger incentive to universities to work on ensuring that nobody is unhappy. Can this be a bad thing?

GPA is just as simple as the ‘agreement score’. It communicates more information. It encourages universities to address real dissatisfaction.

So this is my call: let’s make 2017 the last year that we report student satisfaction in the crude ‘agreement score’ way. GPA now.
