
Embedding and demonstrating the value of technology-enhanced cultural impact measurement for arts and culture organisations

Project Summary

The museum and gallery sector faces the challenge of developing effective methods for refining objectives and for designing and implementing robust evaluation measures. Technological innovations have opened up new possibilities for evaluating cultural value that promise to increase efficiency, reach and validity beyond what is possible with conventional methods. However, methods of evaluating cultural impact still need substantial development to support robust, long-term empirical research on cultural value. This proposed project brings to bear findings from AHRC-funded research on several technology-enhanced methods for evaluating cultural value, and will enable the project partners to share expertise and develop an evaluation framework that works across different types of museum collection.

The project will encourage ‘wider use of evaluation as a tool within the cultural sector itself, rather than as something carried out just for accountability purposes’, a need identified in the Cultural Value Project Report. Specifically, this proposal will enable museums and galleries to explore, discuss and test the innovative automated evaluation systems created as part of the Qualia (R & D Fund for the Arts) and SMILE (Digital Amplification) research projects. It also builds on ‘The Role of Technology in Evaluating Cultural Value’ project, funded by the AHRC’s Cultural Value Fund.

The partnership represents key players in the national cultural landscape, securing participation from a broad range of disciplines, art forms and venues. The partners share relevant expertise, knowledge and experience, and a common interest in the core aims of the proposed research. Crucially, they are also in a position to put new knowledge developed through recently funded AHRC projects directly into practice, benefiting audiences by establishing a more responsive mode of cultural engagement underpinned by improved audience data.

The academic partner will work closely with education practitioners at the partner organisations, supporting them to embed evaluation of cultural value development into their regular programmes through the innovative use of automated systems. The project will enable the partners to work with users and stakeholders on the co-creation of structures for applying new digital evaluation tools. At the end of the project, the academic partner and the partner museum and gallery practitioners will share case studies of their work at a showcase event at the National Gallery and on a project website.

The first part of the project will be delivered through a series of workshops at each partner venue. These workshops will enable the practice partners to work with the project team to identify where the outputs of previously funded AHRC research projects (SMILE and Qualia) can be implemented within their learning programmes for families, teachers, and children and young people. At an end-of-project showcase event, partner institutions will present case studies demonstrating the range of possibilities for measuring cultural value afforded by technology-enhanced evaluation. This final event will take place over two days.

Day 1 (Showcase event): Used to demonstrate to other cultural organisations the possibilities and options opened up by technology-embedded impact measurement for audience research and evaluation. This will include ‘break-out’ sessions facilitated by project staff to assist cultural organisations with creative problem-solving in using technology to embed robust evaluation in programmes for children and young people.

Day 2 (Getting started event): Devoted to practical sessions, each helping cultural organisation staff with a key element of the process of embedding this research knowledge in their practice. In addition, a technology provider will be on hand throughout the day to set up systems for institutions.

The siren call of the ‘easy option’ in public engagement and informal learning evaluation: post-it (sticky) notes, comment cards, drop-a-token and tap-a-button options

I regularly encounter people eager for an easy option with audience feedback or evaluation who have been seduced by one of the following methods:

  • Post-it note feedback
  • Anonymised comment cards
  • Drop a ball or token into a container divided into sections to indicate your response
  • Tap a button on the way out of an event to indicate your views

The main limitation of these approaches is the same as for any other self-selection method (e.g. a stack of comment cards sitting at the exit or information desk): there is no control over the parameters of the sample, so it is impossible to know how representative it is. This makes it particularly invalid to draw conclusions about the impact of an exhibit (or similar) on a visitor population from a self-selected sample. The post-it note approach adds a further layer of potential bias, though, by giving respondents access to what other respondents have already said. The basic advantage of the questionnaire method (standardisation) is thus undermined, as the stimulus for people’s responses will be constantly shifting over the data collection period. As a source of purely qualitative data about the range of possible responses to an exhibition, it may be okay. However, the lack of any contextual information (age, gender, ethnicity, etc.) is a problem for interpreting such qualitative data.
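
To make the self-selection problem concrete, here is a minimal simulation sketch in Python (all figures are illustrative assumptions, not data from any real study): if visitors who enjoyed an exhibit are more likely to leave feedback, the self-selected sample can substantially overstate average satisfaction, and nothing in the feedback itself reveals the size of the distortion.

```python
import random

random.seed(42)

# Illustrative assumptions only: 10,000 visitors whose "true" satisfaction
# is uniform on a 1-5 scale, so the population mean is about 3.0.
visitors = [random.randint(1, 5) for _ in range(10_000)]

# Hypothetical response model: satisfied visitors are far more likely
# to leave a post-it note than dissatisfied ones.
response_prob = {1: 0.02, 2: 0.03, 3: 0.05, 4: 0.15, 5: 0.30}
responses = [s for s in visitors if random.random() < response_prob[s]]

true_mean = sum(visitors) / len(visitors)
sample_mean = sum(responses) / len(responses)

print(f"True mean satisfaction:    {true_mean:.2f}")
print(f"Self-selected sample mean: {sample_mean:.2f} (n={len(responses)})")
# The self-selected mean sits well above the true mean, and the comments
# themselves contain no information about the size of that bias.
```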

The lack of control over sampling makes the other options in this category equally unhelpful, except that with a button, ball or freestanding kiosk you have even less information about whether the content of the data is valid, that is, whether it truly represents people’s views. For example, it is common for children to ‘participate’ in these types of data collection exercise by tapping buttons or dropping in tokens without regard for the labels on the categories.
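
The dilution effect of such ‘noise’ responses can be sketched in the same way (again, the numbers are made up purely for illustration): random taps pull the observed counts towards an even split, masking whatever signal the genuine responses carry.

```python
import random

random.seed(1)

# Illustrative assumptions: 300 genuine responders who prefer option A
# 70% of the time, plus 200 random taps split evenly between A and B.
genuine = ["A" if random.random() < 0.70 else "B" for _ in range(300)]
noise = [random.choice("AB") for _ in range(200)]
observed = genuine + noise

share_a = observed.count("A") / len(observed)
print("Genuine preference for A: 70%")
print(f"Observed share for A:     {share_a:.0%}")
# Random taps drag the observed share towards 50%; the button counts
# alone offer no way to separate genuine votes from noise.
```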

There is only one good use of post-it note evaluation I have encountered to date: a question or prompt along the lines of ‘What are we missing?’, which could conceivably generate valuable responses for organisers and practitioners to learn from.