Monday, 8 October 2018

#BanEssayMills


I’ve posted before about a petition to government, asking for legislation to ban essay mills. The petition is more than half-way towards its first target – enough signatures to require a government response.

Let’s rehearse the issues.

There are organisations – known as essay mills – which for a fee will write an essay or similar piece of work to whatever specification a student asks. Although they market themselves as revision aids, there is no doubt that they are aiming to encourage students to buy an essay which they can submit as part of their university assessment, instead of writing it themselves.

This is a bad thing. The student doesn’t learn, and is cheating. So for those of you keeping count, this is in fact two bad things. And it’s hard to stop.

It’s hard to stop because the bought essay may not show up as such in the plagiarism detection software used by universities. In fact, essay mills typically guarantee that their essays will pass such software checks – why would they make this guarantee if they were only revision aids?

So why the petition? A law won’t make plagiarism software any better. But it can make it possible to deal with the essay mills, not just the students who use them. At the moment, if a university suspects that a student has submitted an essay that they have bought, the only laws which could apply are those governing fraud. But to use this would involve university staff giving evidence against individual students, which is time consuming and unlikely to happen. (It’s important that students trust their academic tutors. Anything which reduces this trust is a bad thing. That’s one of the problems with Prevent, by the way.)

It’s also a heavy-handed tactic. In my career I’ve had to deal with many hundreds of cases where students have cheated, and nearly always it was a student who didn’t understand that what they were doing was wrong (plagiarism is a difficult concept, and culturally dependent). I can count on the fingers of one hand the number of times where a cheating student was clearly being malevolent. So by all means punish students who cheat – and help them understand that what they have done is wrong – but we must remember that often they cheat through ignorance or desperation.

If there was a law banning the advertising of essay writing services, and the sale of essays through such services, it would be possible to remove the issue at source. Accounts by those who have used such services show that they are clearly seeking to entice students. There are also stories of students being blackmailed once they have used the essay writing service. And under the current legal framework, universities are powerless to deal with the essay mills themselves.

This is why we need a law. It isn’t enough to deal with individual students who cheat: they need to learn; and the problem of catching them is real. There isn’t an existing legal framework which will enable universities and sector bodies to deal with the essay mills themselves. It’s time for the government to lend a helping hand.

Here’s the link to the petition. Please sign it, please share the petition. Help to #BanEssayMills

Tuesday, 18 September 2018

Is university admission an academic decision?


One topic which exercises many universities is admissions: not only for the obvious reason of recruiting enough students to meet targets, but also for the question about who should be in charge.

Across UK higher education, the underlying culture is that it is an academic decision: suitability to study for a programme should be determined by the academics who teach the programme.  This doesn’t mean that actual academics always take decisions, however: many universities have agreed that specific decisions can be taken by professional service staff, as long as they fall within parameters agreed with admissions tutors. So, if a student gets more than so-many tariff points, or better than such-and-such A level grades, they can be offered a place without reference to a tutor.

David Willetts (in his very interesting book, A University Education) reminds us that the UK is odd in this regard. In the US, admissions decisions are not typically made by faculty tutors, nor even in consultation with faculty tutors. Decisions can be based, for example, upon familial donations; upon siblings having attended; or on residency within a particular state. (Before you get too shocked, I recommend that you have a read of Willetts’ book: there’s more to it than nepotism and a disregard for academic standards.)

The difference can be understood, I think, in relation to a very good underlying principle, which is that academic decisions can only be made by academics in the discipline concerned. This is at the heart of academic freedom. Ask yourself a question: what is the academic decision which is at the heart of university admission?  Is it about who socially gets to do higher education? That doesn’t feel academic to me. Is it about whether a person has the necessary prerequisite knowledge? (For instance, do you need A-level maths to take the first-year modules on the programme?) That sounds much more academic, and is at the heart of the differences in the UK. In the UK specialism takes place at the start of university education; in the US students enrol, study a wide variety of modules for a couple of years, and then choose their specialism. And they take an extra year (at least) to study, so there’s time for this breadth.

I don’t think it’s controversial to say that there are US universities operating this approach which are at least as good as UK universities. The UK system generates good graduates a year sooner than the US system, but that isn’t because we’re cleverer: it’s because the system is structured to produce graduates after three years. As part of this, it is necessary to have early specialisation, and this means that admissions decisions have to consider specific subject knowledge and readiness for study.

Now I am going to say something slightly controversial. These tests are more about the resources devoted to pre-university education and upbringing than about any intrinsic academic merit. We know that a private school education boosts a person’s chances of getting good A-level grades and hence a place at a ‘better’ university. We also know that, in aggregate, for students with the same A-level grades, those educated at state schools will do better overall than those educated at private schools (see, for instance, this HEFCE research). This means, I think, that private school – with better resources, smaller classes, and concomitant greater parental support for learning – has a better short-term impact. But when learning resources and chances are evened out at university, the impact dissipates.

The point is that university entry based on A levels is about readiness to study. Background knowledge, confidence and social capital are what matter, because these enable a person to graduate in three years.

On this telling, university admissions should really be understood as a business decision. Remove some of the selective elements, and you won’t get the three-year throughput upon which the UK higher education system is built. (The development of foundation years to enable wider entry to selective universities supports this point: only by an extra year can pre-university educational differences be resolved.) University admission is only an academic decision because we set the system up to make it so. More time at university would enable foundation level study to become a norm. And at that point entry decisions would not be about pre-requisite knowledge, and entry barriers would come down.

And this is my challenge to the Office for Students, and to the UK government’s review of higher education. If you’re serious about removing social barriers to higher education participation, what are you going to do to enable longer degree programmes, to take the apparently academic decision out of the admissions loop?

Wednesday, 22 August 2018

Down with cheats!

There’s a petition open on the government petitions website which seeks to address a real problem for UK higher education.  In my view, anyone with an interest in quality and standards, the health of our sector, and student wellbeing, should consider signing it.

The petition – here it is – asks government to legislate to make it illegal to provide or advertise contract cheating services.  Contract cheating services offer to provide essays for students – written to the precise specification provided by the student, and often guaranteed ‘plagiarism free’.  The services claim to be an aid for students’ revision, but this strains credibility.  If all students needed for revision was a model answer, why would a plagiarism free guarantee be a particular selling point?

The truth is that these services are writing essays to order, which students can submit as their own coursework.  This is cheating, plain and simple, and is bad for the reputation of UK higher education, for the student experience and for academic standards.

The petition has been started by Iain Mansfield – you’ll find him on Twitter as @IGMansfield – a former civil servant who knows about higher education from a governmental angle.  I’ve dealt with a fair few examination irregularities during my career, and it is clear that when students are desperate, they can do silly things.  Remove the supply of contract written essays, and there’s one fewer way for students to make a serious mistake.

If the petition gets 100,000 signatures, it will be considered for debate in Parliament, which is a starting point.  Similar laws have been passed in New Zealand, Ireland and many US states.  If they can do it, so can we.  And we should, in my view, do something about a real problem.

According to HESA there are almost half a million people working in UK HE.  This gives us plenty of directly affected people who can help to make a difference.  Please spread the word, and sign the petition.

Thursday, 16 August 2018

Mastering support for students

One of the things I enjoy about my job is that I get to meet and work with people from across the UK higher education sector, and one of these great folk – Gale Macleod at the University of Edinburgh – recently pointed me to an interesting research paper she had co-authored. The paper – “Teaching at Master’s level: between a rock and a hard place” – looks at programme directors’ perceptions of the challenges faced by PGT students.

(The full reference is: Gale Macleod, Tina Barnes & Sharon R. A. Huttly (2018) Teaching at Master's level: between a rock and a hard place, Teaching in Higher Education, DOI: 10.1080/13562517.2018.1491025; it’s at https://doi.org/10.1080/13562517.2018.1491025 if you have access to the journal …)

The paper argues that there’s a mismatch between the formal expectations of postgraduate students (for instance, via the QAA’s level descriptors) and the reality as experienced by programme directors. The study is based upon a survey with a reasonable sample size; my more limited direct experience chimes well with the paper: “… there is a gap between the reality of PGT students’ readiness for study at Master’s level and institutional assumptions and the QAA vision”.

The issues are about readiness for Master’s study – the extent to which students are able to be independent and critical learners – and also about the impact of students’ lives on their ability to study: taught postgraduate students are more likely to have work or family responsibilities, meaning that they face real time pressures on their studies. And as well as this, universities often (and my experience definitely chimes with this) assume that taught postgraduate students are much more capable of managing their own learning. This creates a serious problem – learners are less capable (in the sense of being able to facilitate their own learning) than assumed; and institutions do not focus support on these same students.

So what conclusions do I draw from the paper?

This is a tricky problem. Taught postgraduate programmes are serious endeavours for UK universities. In 2016-17, income from taught postgraduate fees amounted to almost £1.25 billion across the whole of the sector. That’s about 3.5% of all income, so it’s not by any means overwhelming; but it’s also a tidy sum in absolute terms and about half of the net institutional surplus across the sector. About one in every six students in UK universities is studying for a taught postgraduate qualification: again, not the largest group, but also not trivial.

So it’s a problem worth solving, but with these kinds of numbers it isn’t business-critical for most universities. The issues will also vary more sharply by programme: where a programme attracts many students and has higher fees, it is more likely that the university will put in place (at a programme or faculty level) resources to support students. (Some of the most impressive teaching and learning support takes place on MBA programmes …) Conversely, where a programme doesn’t attract many students the likelihood that the university will put in place any necessary support is low.

This is the nub of the problem. In my experience, universities often have a number of taught postgraduate programmes which are very marginal – low student numbers, low fees. They may play an important role in the academic life of a department or school, and provide a small but important pathway for research students. But in financial terms they are a cost. The challenge for universities is whether they should take any action, as the net cost of delivery is often small.

What would action look like? On the positive side, it is possible that a university which addressed student support for taught postgraduate would see an increase in student numbers, and therefore a reduction or elimination of the financial problem. But the reality is that many universities are operating in relatively fixed markets, and this won’t happen in the short term. More likely, there is a need to look at how the cost of a programme can be reduced: sharing modules, reducing options – this can create the space to provide better support for students. Whilst this can look like central managerialism whittling away at the freedom of a school or department to offer interesting programmes, if done well it can help to create a more vibrant departmental offer. Local academic leadership matters tremendously for this to work.

University professional services can play an important role in helping to address the problems identified in the paper. The accessibility of support services to students who have time pressures is critical. For example, are librarians available online, or out of normal hours? How easy is it to learn how to use online resources such as VLEs or library catalogues? Do inductions or student welcomes take account of the distinct needs of taught postgraduate students? These matters can often be improved without any – or much – resource, and can make a big difference.

Perhaps something which should be higher up to-do lists than it is at the moment?

Thursday, 7 September 2017

Whose money is it anyway?

It’s hard not to notice the current focus by some in government, parliament and the media on universities, and in particular issues of value (levels of tuition fees) and accountability (how can VCs’ high salaries be justified?).

There’s lots to be said on this, but in this blog I want to focus on an underlying issue: whose money is it anyway? Put bluntly, if universities are spending private money, then it’s no business of the state what they spend it on, as long as it’s legal.

Universities get money from lots of sources, and they publish information annually – through their annual accounts and through statutory returns to the Higher Education Statistics Agency (HESA) – about what exactly they get and from whom. The information is in a standard format, with many categories. Bear with me while I list these; it’s worth seeing to give context to the argument I’ll be making later. There are:

  • Funding body grants


  • Tuition fees, comprising Full-time undergraduate, Full-time postgraduate, Part-time undergraduate, Part-time postgraduate, PGCE, Non-EU domicile, Non-credit-bearing course fees, FE course fees, and Research training support grants.


  • Research grants and contracts, comprising grants from: BEIS Research Councils, The Royal Society, British Academy and The Royal Society of Edinburgh; UK-based charities; UK central government bodies/local authorities, health and hospital authorities; UK central government tax credits for research and development expenditure; UK industry, commerce and public corporations; other UK sources; EU government bodies; EU-based charities; EU industry, commerce and public corporations; EU (excluding UK) other; Non-EU-based charities; Non-EU industry, commerce and public corporations; Non-EU other


  • Other services rendered, comprising income from BEIS Research Councils, UK central government/local authorities, health and hospital authorities, EU government bodies and other sources


  • Other income, comprising: Residences and catering operations (including conferences); Grants from local authorities; Income from health and hospital authorities (excluding teaching contracts for student provision); Other grant income; Capital grants recognised in the year; Income from intellectual property rights; and Other operating income


  • Donations and endowments, comprising New endowments; Donations with restrictions and Unrestricted donations

If you’ve made it through the list (well done!) you’ll see that some of these come from public sources (eg BEIS research grants), and some are private (eg UK industry grants). Add together all of the public income for a university, divide by the total income, and you can work out what percentage of the university’s income is from public sources. Which is surely relevant for understanding how accountable universities need to be with their spending choices.
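The arithmetic is simple enough to sketch. The figures and category names below are illustrative only – they are not real HESA data for any institution – but the shape of the calculation is as described above:

```python
# A sketch of the public-income calculation, with made-up figures for one
# hypothetical university. Category names and the public/private labels
# are illustrative assumptions, not the HESA classification itself.
income = {
    ("Funding body grants", "public"): 55.0,        # £m
    ("Home/EU UG tuition fees", "public"): 120.0,   # publicly lent up front
    ("Non-EU tuition fees", "private"): 60.0,
    ("UK public research grants", "public"): 40.0,
    ("Industry research grants", "private"): 15.0,
    ("Residences and catering", "private"): 25.0,
}

total = sum(income.values())
public = sum(v for (name, source), v in income.items() if source == "public")
share = 100 * public / total

print(f"Public income: £{public:.0f}m of £{total:.0f}m ({share:.0f}%)")
```

The interesting (and contestable) part is not the sum but the labelling: which categories count as “public” is exactly the judgement discussed next.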

For some categories, though, it isn’t obvious if it’s public money. The big one here is tuition fee income.

For income from non-EU students, it is clearly private income. Even if they’re supported by their own government, the UK government doesn’t have a duty or obligation in relation to the money.

For postgraduate tuition fees paid by home and EU students, it will be a mixed bag: some will be paid by the students themselves or their employers; some will be funded via postgraduate grants; some will be paid via public PG loans schemes.

For home and EU undergraduate fees, we need to think about it. Where students have to pay tuition fees (remember that Scottish students in Scotland pay no fees) they are able to take out a loan, on less than commercial terms, from the Student Loans Company. And students do this. After graduation, students make repayments towards the loan from their salary; the amount they repay depends on how much they earn. And after 30 years the remaining debt is cancelled. The initial funds are provided to the Student Loans Company by the state; and an allowance for the ultimately unrepaid element – called the RAB charge – is also part of government spending. So is it public or private money? With hindsight, a proportion of it is private, and a proportion public. Up front, the cash is public.

On this basis it is possible to make the calculation about the proportion of universities’ income which comes from public funds. I’ve included home and EU undergraduate tuition fees; I’ve excluded postgraduate tuition fees; and I’ve included research and other services rendered income from UK government and public bodies, and from EU government and public bodies (the income for this ultimately derives from UK government funds, as we’re a net contributor to the EU budget).

What this shows is that universities receive significant public funding. Across the UK as a whole, 58% of income in 2015-16 (the most recent year for which HESA data is available) comes from public sources. In actual money, that is £20.3 billion out of a total income of £34.7 billion. Yes, I did say billion. It is a lot of money!

Nation              % Publicly-funded
England             58%
Wales               65%
Scotland            59%
Northern Ireland    75%
Total UK            58%

Of course this varies between individual universities. Some have very little income (comparatively!) from non-public sources; a few have very little (again, comparatively!) from the public. 

The graph shows the data: each university is one of the bars; they’re rank ordered from the most dependent on the left (Plymouth College of Art, since you ask, with 96% dependency on public funding) through to Heythrop College on the right (with no public funding whatsoever). Even the famously private Buckingham University has a little public income – £95k in funding body grants and research income from UK public bodies. Which means that it is second from the right, with about 0.25% of its income from public sources.

Source: HESA data
What of other universities? The Russell Group members range from the mid 20s (LSE with 24%) to the high 60s (Queen’s Belfast with 69%). The big post-1992 civic universities range from the mid 50s (Sunderland with 56%) to the mid 80s (Liverpool John Moores with 86%). The smaller or specialist research intensives (the 1994 Group, as was) range from the high 30s (SOAS with 38%) to the mid 60s (Birkbeck College, with 66%).

So does the state have an interest in how universities spend their money? The data say yes: at least to the extent that the money derives from public sources.

This doesn’t mean that all of the criticisms made of universities are valid. And it doesn’t mean that university autonomy isn’t a good idea. History, and international comparisons, tell us that the best universities are those that have the most freedom to make their own academic choices.

But it does lend validity to arguments that universities need to be accountable for their spending choices. In my experience, universities don’t disagree with this need for accountability. 

What of current criticisms? The danger is that the huge good that universities do for individuals and for society as a whole is forgotten amongst the current hubbub, and damage is then done. To avoid this, those making the noise need to be careful that their criticisms are well-founded. There’s an anti-elitism in current public discourse which easily mutates into unthinking policy.

And universities themselves need to be aware that some (at least) of the criticisms come from a real place. Are students always the first thought? Sometimes research seems like it is king. And is there real transparency? A few universities have a student on their remuneration committees, and their world has not fallen down. Why not more?

Tuesday, 15 August 2017

Make the National Student Survey Great Again!

The NSS data was out last week. This year it’s a new set of questions – some are the same as in previous surveys, some are amended versions of previous questions, and some are entirely new. This means that year-on-year comparisons need to be treated with a little caution.

But one aspect of reporting continues to bother me. The survey measures final-year undergraduate students’ responses to a number of statements – for instance, “Overall, I am satisfied with the quality of the course” – on a Likert scale: that is, a 1-5 scale, where 1 = definitely disagree; 2 = mostly disagree; 3 = neither agree nor disagree; 4 = mostly agree; and 5 = definitely agree. The data is presented by simply summing the percentages who respond 4 or 5, to give a ‘% agree’ score for every question at every institution. Which in turn means universities can say “93% satisfaction” or whatever it might be.

This is simple and straightforward, but loses important data which could be summarised by using a GPA (Grade Point Average) approach – just like the HE sector commonly uses in other responses, for instance in REF outcomes. Using a GPA, an overall score for a question reflects the proportion giving the five different responses.

To calculate a GPA, there’s a simple sum (with the response percentages expressed as proportions, so that they sum to 1):

GPA = (proportion saying ‘1’ x 1) + (proportion saying ‘2’ x 2) + (proportion saying ‘3’ x 3) + (proportion saying ‘4’ x 4) + (proportion saying ‘5’ x 5)

This gives a number which will be 5 at most (if all respondents definitely agreed) and 1 at least (if all respondents definitely disagreed).
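Both measures can be computed in a few lines. This is a minimal sketch, using as input the response split reported later in this post for the Anglo-European College of Chiropractic (61% ‘mostly agree’, 39% ‘definitely agree’); the function names are mine, not anything official:

```python
# GPA and '% agree' for a single NSS question, from the percentage of
# respondents giving each response (1 = definitely disagree ... 5 =
# definitely agree). Percentages are converted to proportions first.

def gpa(percentages):
    """percentages: dict mapping response (1-5) to % of respondents."""
    assert abs(sum(percentages.values()) - 100) < 1e-6
    return sum(score * pct / 100 for score, pct in percentages.items())

def agreement_score(percentages):
    """The conventional '% agree': responses 4 and 5 combined."""
    return percentages.get(4, 0) + percentages.get(5, 0)

responses = {1: 0, 2: 0, 3: 0, 4: 61, 5: 39}
print(agreement_score(responses))   # 100
print(round(gpa(responses), 2))     # 4.39
```

The same input gives a perfect 100% on the agreement score but only 4.39 out of 5 on the GPA – which is precisely the extra nuance being argued for.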

If GPA was used for the reporting, there’d still be one number which users would see, but it would contain more nuance. GPA measures how extreme people’s agreement or disagreement is, not just the proportion who are positive. And this matters.

I looked at the raw data for all 457 teaching institutions in the 2017 NSS. (This is not just universities but also FE Colleges, which work with universities to provide foundation years, foundation degrees and top-up degrees, and alternative providers.)  I calculated the agreement score and the GPA for all teaching institutions for question 27: Overall, I am satisfied with the quality of the course. And then I rank-ordered the institutions using each method.

What this gives you are two ordered lists, each with 457 institutions in it. Obviously, in some cases institutions get the same score; where this happens, they all share the same rank. And an institution’s rank reflects the number of institutions above it in the rank order.

So, for example, on the ‘agreement score’ method, 27 institutions score 100%, the top score available in this method. So they are all joint first place. One institution scored 99%: so this is placed 28th.  Similarly, on the GPA ranking, one institution scored 5.00, the top score using the GPA method. The next highest score was 4.92, which two institutions got. So those two are both joint second.
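This tie-handling is what’s sometimes called standard competition ranking: tied institutions share a rank equal to one plus the number of institutions strictly above them. A minimal sketch, with illustrative scores rather than the real NSS data:

```python
# Standard competition ranking: each institution's rank is one plus the
# number of institutions with a strictly higher score, so ties share a
# rank and the next rank is skipped. Scores below are illustrative.

def competition_ranks(scores):
    """scores: dict of institution -> score (higher is better)."""
    return {
        inst: 1 + sum(1 for s in scores.values() if s > score)
        for inst, score in scores.items()
    }

scores = {"A": 5.00, "B": 4.92, "C": 4.92, "D": 4.39}
print(competition_ranks(scores))   # {'A': 1, 'B': 2, 'C': 2, 'D': 4}
```

That matches the worked example above: one institution on 5.00 takes first place alone, and the two on 4.92 are joint second.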

What I did next was compare the rank orders, to see what difference it made. And it makes a big difference! Take, for example, the Anglo-European College of Chiropractic. Its 100% score on the ‘agreement score’ method puts it in joint first place. But its GPA of 4.39 places it in joint 79th place. In this instance, its responses were 61% ‘mostly agree’ and 39% ‘definitely agree’. Very creditable. But clearly not as overwhelmingly positive as Newbury College, which with 100% ‘definitely agree’ was joint 1st on the agreement score method and also in first place (all on its own) on the GPA measure.

The different measures can lead to very significant rank-order differences. The examples I’m going to give relate to institutions lower down the pecking order.  I’m not into name and shame so I won’t be saying which ones (top tip – the data is public so if you’re really curious you can find out for yourself with just a bit of Excel work), but take a look at these cases:

Institution A: With a score of 87% on the agreement score method, it is ranked 138/457 overall: just outside the top 30%. With a GPA of 3.95, it is ranked 349/457: in the bottom quarter.

Same institution, same data. 

Or try Institution B: with an agreement score of 73% it is ranked 382/457, putting it in the bottom one-sixth of institutions. But its GPA of 4.28 places it at 129/457, well within the top 30%.

Again, same institution, same data.

In the case of Institution A, 9% of respondents ‘definitely disagreed’ with the overall satisfaction statement. This means that the GPA was brought down. Nearly one in ten students were definitely not satisfied overall.

In the case of Institution B, no students at all disagreed that they were satisfied overall (although a decent number, more than a quarter, were neutral on the subject). This means that its GPA was higher, but the overall satisfaction score reflected the non-committal quarter.

I’m not saying that institution A is better than B or vice versa. It would be easy to argue that the 9% definitely disagree was simply a bad experience for one class, and unlikely to be repeated. Or that the 27% non-committal indicated a lack of enthusiasm. Or that the 9% definitely disagree was a worrying rump who were ignored. But what I am saying is that we’re doing a disservice by not making it easier for applicants to access a more meaningful picture.

The whole point of the National Student Survey is to help prospective students make judgements about where they want to study. By using a simple ‘agreement’ measure, the HE sector is letting them down. Without any more complexity we can give a more nuanced picture, and help prospective students. It’ll also give a stronger incentive to universities to work on ensuring that nobody is unhappy. Can this be a bad thing?

GPA is just as simple as the ‘agreement score’. It communicates more information. It encourages universities to address real dissatisfaction.

So this is my call: let’s make 2017 the last year that we report student satisfaction in the crude ‘agreement score’ way. GPA now.

Tuesday, 1 August 2017

Value for money

Universities seem to be having a torrid time, at least as far as their standing in the political firmament goes. As well as pension headaches for USS member institutions (mostly the pre-1992s), there are high-profile stories on VC salaries, Lord Adonis’ campaign about a fee-setting cartel, and (low) teaching contact hours. So far, so not very good at all.

There's a feeling that this might be more than a silly-season set of grumbles: David Morris at Wonkhe writes interestingly on this. For what it's worth, I suspect that this is indeed politically driven rather than accidental. Maybe Lord Adonis is marking out ground for his re-emergence within a new model Labour Party; maybe Jo Johnson is preparing for tough discussions around future fees. But whatever the end point, it's worth looking at whether the concerns are real.

An underlying point is value for money. The charge is that (English) students don't get a lot for their money. One quick way to look at this is university spend on staff, the single biggest item in universities' accounts. HESA publish handy data on student numbers and staff numbers. It's straightforward to calculate the ratio of students to academic staff over the years.

source: HESA, my calculations
The data show that from 2004-05 to 2011-12, for every member of academic staff there were about 14 students. In 2012-13 - the first year of the new fees regime in England - this ratio started to fall, and by 2015-16 there were just over 11 students for every member of academic staff.
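The calculation really is straightforward. A minimal sketch, with round-number headcounts chosen to illustrate the trend described above rather than the actual HESA figures:

```python
# Students per academic staff member, for two years. The headcounts are
# illustrative assumptions, not real HESA data; the point is the ratio.
years = {
    "2004-05": {"students": 2_250_000, "staff": 160_000},
    "2015-16": {"students": 2_280_000, "staff": 200_000},
}

for year, n in years.items():
    ratio = n["students"] / n["staff"]
    print(f"{year}: {ratio:.1f} students per academic staff member")
```

With these illustrative numbers the ratio falls from about 14 to just over 11 – the same direction of travel as the HESA data.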

Does this mean that the stories of low contact hours and questionable value for money are wrong? Not necessarily – the data doesn't speak to the reality at individual universities or programmes, nor does it describe any individual student's experience. But it does show that universities have invested in the most important element of their provision: academic staff.