
Surveys

For the complete report, see here.

For ongoing updates to survey results, see here.

Introduction

The National Student Survey (NSS) is an annual survey of final-year undergraduate students designed to assess the quality of undergraduate degree programmes based on responses to 22 items across six factors (teaching, assessment and feedback, academic support, organisation and management, learning resources, and personal development). The survey is conducted by Ipsos MORI on behalf of the Higher Education Funding Council for England. It is proposed that the scores achieved will form part of a series of metrics used in the Teaching Excellence Framework (TEF) from 2017, which will give each university course a score that is linked to fees in future.

Module surveys are ubiquitous amongst universities as a mechanism for gathering feedback about teaching from students. They can help to identify good and bad individual teaching within a department and can be embedded into a continuous improvement process, with the data owned by the department.

The validity of satisfaction as a measure of good learning will not be addressed in this project. It is evident that the universities which score highly in the NSS are not the same universities which score highly on other measures. Whilst some consider this evidence that research-led universities do not prioritise good teaching, others say that it is because the expectations of students (and on students) differ by institution. The link between expectation and satisfaction in consumer products is analysed by Kano modelling, which categorises product features with labels such as ‘essential’, ‘nice to have’, ‘neutral’ and ‘reverse’. Kano analysis would say that the expectations of Range Rover customers differ from those of Ford Focus customers, and that Range Rover customers are consequently harder to please.

Warwick does not provide templates, guidance or an embedded process for module surveys. Therefore, a further motivation of this project was to establish practice which may benefit other departments.

The Survey

A pilot paper survey was introduced in Term 1 of 2015-2016. The forms were distributed and collected by bursary students and sent to IT Services for scanning. Lecturers were asked to provide a response to students within two weeks of the survey (or by the end of term), indicating their thanks and proposed actions. Response rates covered almost all students who attended lectures, and students have expressed thanks for the opportunity and especially for the responses they have received.

The survey was designed to mimic the format of the NSS, with a statement and a set of possible answers on a Likert scale. Questions were divided into three categories: Content, Management and Delivery. The aim of the three sections was to divide responsibility between the curriculum (often owned outside of the lecturers by the Discipline Degree Leader, or devolved to the Module Leader), the administration (timetabling, rooms, etc.) and the lecturer themselves (lecturing style). An overall satisfaction question allows for calibration between modules. The current survey can be found here.
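As an illustration of how the survey structure and response coding might be represented for analysis, a minimal Python sketch follows. The question identifiers, Likert labels and per-category question counts are assumptions for illustration; the actual wording is in the linked survey form.

```python
# Minimal sketch of the survey structure and response coding.
# Question identifiers, Likert labels and question counts are illustrative
# assumptions; the actual wording is in the linked survey form.

CATEGORIES = {
    "Content":    ["C1", "C2", "C3"],  # curriculum: Discipline Degree / Module Leader
    "Management": ["M1", "M2"],        # administration: timetabling, rooms
    "Delivery":   ["D1", "D2", "D3"],  # the lecturer: lecturing style
}
OVERALL = "OV"  # overall satisfaction question, used to calibrate between modules

LIKERT = [
    "Definitely agree", "Strongly agree", "Neither agree nor disagree",
    "Disagree", "Strongly disagree", "Not applicable",
]

def is_satisfied(answer: str) -> bool:
    """Positive responses count as satisfied; as with the NSS,
    'Neither agree nor disagree' counts as not satisfied."""
    return answer in ("Definitely agree", "Strongly agree")
```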

Three improvement strategies were considered:

1. Move students from ‘neither agree nor disagree’ to ‘strongly agree’
2. Move students from ‘strongly agree’ to ‘definitely agree’
3. Move students from ‘disagree’ to ‘agree’

There are two aspects to strategy 1: reducing the number of students who answer ‘neither agree nor disagree’ by mistake (either because they assume it is a neutral rather than a negative response, or because they conflate it with ‘not applicable’), and persuading intentionally neutral students to be more positive. Strategy 1 is important to investigate since students cannot be coached during the NSS on how the survey is analysed. They can, however, take similar surveys where it is made clear that these answers count as negative and that ‘not applicable’ is, in some instances, the correct answer. Although answering ‘not applicable’ does not affect the % satisfied, it does contribute to the second NSS metric, the arithmetic score for satisfaction. Strategy 2 was discounted because it does not affect the main NSS metric and relies on individual student culture.

Strategy 3 should be subdivided as A: ‘pick the worst to improve’, B: ‘pick the easiest to improve’, or C: ‘pick the measures to improve which will have the greatest impact on satisfaction’. Simple analysis results in A, which is the level that, so far, the department NSS action plan has used along with B. The proposal of this investigation is to work towards C: judge the impact of each measure and prioritise by its effect on overall satisfaction. The priority must therefore be any statements which fall into the ‘must-have’ category with high dissatisfaction; these are expectations not being met by the department. Following this, questions in the ‘Linear’ classification can be used to push student satisfaction up the scale, as sketched below.
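As a sketch of how strategy C might be operationalised, the following Python fragment classifies questions into rough Kano-style categories from the driver metrics described in the Analysis section and orders them for action. The classification rule, the 0.5 threshold and the field names are assumptions for illustration, not the department's actual cut-offs.

```python
# Illustrative sketch of strategy C: prioritise questions by their expected
# impact on overall satisfaction. The classification rule and thresholds are
# assumptions, not the department's actual cut-offs.

def kano_class(sat_driver: float, dissat_driver: float, threshold: float = 0.5) -> str:
    """Rough Kano-style label from the two driver metrics (0..1 scale)."""
    if dissat_driver >= threshold and sat_driver < threshold:
        return "Must-have"    # absence hurts, presence does not delight
    if dissat_driver >= threshold and sat_driver >= threshold:
        return "Linear"       # more is better in both directions
    if sat_driver >= threshold:
        return "Attractive"   # presence delights, absence is tolerated
    return "Indifferent"

def prioritise(questions):
    """questions: list of dicts with keys 'id', 'sat_driver',
    'dissat_driver' and 'pct_dissatisfied'. Returns the action order
    described above: must-haves with high dissatisfaction first,
    then linear drivers, then attractive and indifferent items."""
    order = {"Must-have": 0, "Linear": 1, "Attractive": 2, "Indifferent": 3}
    def key(q):
        cls = kano_class(q["sat_driver"], q["dissat_driver"])
        return (order[cls], -q["pct_dissatisfied"])
    return sorted(questions, key=key)
```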

Analysis

The analysis is designed to reduce the effect of individual students' use of the scale by looking at the factors that contribute to the overall satisfaction score. The research question is therefore “what in particular caused you to be dissatisfied with this module?”. The Shapley value says that within a group of players (here, aspects of a course), some contribute more to the overall coalition than others, and some pose a particular threat; further, a small improvement by one player can have a bigger effect than the same improvement by another. Particularly strong improvers or threats are here called ‘Drivers’ (Satisfaction Drivers or Dissatisfaction Drivers).

Satisfaction was judged to be a positive response (Strongly or Definitely Agree) and, as with the NSS, ‘neither agree nor disagree’ was treated as dissatisfaction rather than neutrality. The aim of the analysis is therefore to calculate a metric for each satisfaction/dissatisfaction driver and to present it in an intuitive way as a report. The simplest metric is the percentage of students who were satisfied with a particular question and with the module overall.

Table 5: Calculation of Drivers

The actual % satisfaction and dissatisfaction response to each question was combined with the metric above to form the ‘risk’ level of each factor. For example, a high dissatisfaction driver with a high dissatisfaction score is a bigger risk than an equally high dissatisfaction score on a question that is only a weak driver. With multiple dimensions of data (question number, driver metric, satisfaction), a bubble chart was used to present results: the size of the circle indicates the driver metric (with red being particularly strong and green being weak) and the y-axis gives the % satisfied/dissatisfied for that question. The graph should be used to identify large red circles with high dissatisfaction, or large red circles with low satisfaction.

In order to make best use of this analysis, student expectations must first be established. Therefore, all results are collated centrally so that individual modules can be judged against the global driver metric rather than the metric for the individual module alone. The difference between drivers of individual modules can then be analysed further. The ‘raw’ answers to individual questions are also included in the report for lecturers.
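The exact driver calculation is given in Table 5, which is not reproduced on this page, so the sketch below uses one plausible reading of the text above: the joint proportion of students who are satisfied (or dissatisfied) with both a question and the module overall is taken as the driver metric. The boolean coding of responses, the column names and the plotting choices are assumptions for illustration.

```python
# Sketch of the driver calculation and bubble chart described above.
# The driver metric here (joint satisfaction/dissatisfaction with a question
# and with the module overall) is an assumed reading of the text; Table 5
# gives the actual calculation.

import pandas as pd
import matplotlib.pyplot as plt

def drivers(responses: pd.DataFrame, overall: str = "overall") -> pd.DataFrame:
    """responses: one row per student, one boolean column per question,
    True = satisfied, False = dissatisfied (including 'neither agree nor
    disagree'). Returns per-question satisfaction and driver metrics."""
    questions = [c for c in responses.columns if c != overall]
    rows = []
    for q in questions:
        rows.append({
            "question": q,
            "pct_satisfied": 100 * responses[q].mean(),
            "pct_dissatisfied": 100 * (~responses[q]).mean(),
            # joint proportions used as simple driver metrics
            "sat_driver": (responses[q] & responses[overall]).mean(),
            "dissat_driver": (~responses[q] & ~responses[overall]).mean(),
        })
    return pd.DataFrame(rows)

def bubble_chart(summary: pd.DataFrame) -> None:
    """Bubble chart: y = % dissatisfied per question; bubble size and colour
    give the strength of the dissatisfaction driver (red = strong,
    green = weak). Large red circles high on the chart are the threats."""
    x = range(len(summary))
    plt.scatter(x, summary["pct_dissatisfied"],
                s=2000 * summary["dissat_driver"],
                c=summary["dissat_driver"], cmap="RdYlGn_r", alpha=0.7)
    plt.xticks(list(x), summary["question"], rotation=45, ha="right")
    plt.ylabel("% dissatisfied")
    plt.title("Dissatisfaction drivers by question")
    plt.tight_layout()
    plt.show()
```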

[Figure: bubble chart of % satisfied/dissatisfied by question, with circle size and colour indicating the driver metric]

Results

Over the course of the year, 3,260 individual survey responses were collected, with 71% of modules conducting a survey. This represents a much larger set of data than was previously available. The overall satisfaction with modules was 82%, which is lower than the target of 90% for modules (based on the Russell Group NSS score). The category related to module content received the highest satisfaction and delivery the lowest. Looking at the raw results, the weakest areas were laboratories (though for many students this was not applicable) and lectures being ‘engaging’. The biggest drivers of dissatisfaction were Content, Management (in particular the preparation of the lecturer) and Delivery (in particular the ability of the lecturer to answer questions). These received some of the lowest dissatisfaction scores, however, so only Delivery is considered a threat. The biggest drivers of satisfaction were interesting content, Management (in particular preparation) and Delivery (in particular engaging lectures and the resources provided). Of these, the Delivery items scored the lowest satisfaction.

[Figure: Kano-style classification of survey results]

Comparing the results shown in Table 6 with the findings of Arefi et al. (2012), there is similarity in that content is a ‘must-have’ item, as are relevance (motivation) and applications (enabling of students). The ‘more is better’ quality of collaboration is partially indicated by the lecturer’s response to questions. Laboratories were a ‘delight’ factor in both surveys, and students were indifferent to self-study in both. The main difference was that delivery was a ‘must-have’ in Arefi et al. but ‘attractive’ in Lucas. This may be because delivery was variable within the School of Engineering, giving more granularity to the results.

The strategic outcome of the survey is in understanding where to prioritise action (as carried out by Douglas, Douglas and Barnes). Though their satisfaction scores differed, the relationship with importance was the same. Threats, where we must reduce dissatisfaction: the quality of delivery is both a linear driver of dissatisfaction and a current weakness. Opportunities, where we must concentrate efforts on increasing satisfaction: laboratory activities which enhance the topic, engaging lectures and interesting topics are both drivers of satisfaction and current weaknesses.

1. Arefi, Mahboube, et al. "Application of Kano Model in higher education quality improvement: Study master’s degree program of educational psychology in State Universities of Tehran." World Applied Sciences Journal 17.3 (2012): 347-353.

2. Douglas, Jacqueline, Alex Douglas, and Barry Barnes. "Measuring student satisfaction at a UK university." Quality Assurance in Education 14.3 (2006): 251-267.