Formal evaluation has an important role to play in improving the lives of older people, by providing rigorous scientific evidence upon which to base future interventions and policies. However, many vulnerable service users find formal evaluation invasive, intimidating and insensitive, while providers often struggle to reconcile their typically holistic, person-centred approach with the more structured, academic approach that formal evaluation requires.
Data questionnaires, such as the Common Measurement Framework (CMF) questionnaire used throughout the Ageing Better programme to measure loneliness and social isolation, can be particularly troublesome: to obtain a baseline measurement, an initial (extremely detailed) questionnaire must be administered upon first contact with any new participant.
Many Brightlife project delivery teams were, perhaps understandably, reluctant to jeopardise trust – a crucial aspect of engagement with vulnerable people (see ‘Making connections’) – by pushing new participants out of their comfort zone at this delicate stage. In an attempt to de-formalise the process for vulnerable new participants, project staff often ended up either not using the questionnaire at all, or completing it on a subsequent visit.
“The paperwork required to be completed by Social Prescribers during their initial assessment with the client was felt to be time consuming and cumbersome. Moreover, the language was viewed as too formal and not conducive to building trust and allowing the client to open-up. As a consequence, rather than follow the assessment forms [they] tended to ask more general questions and complete the forms retrospectively.”
Extract from Social Prescribing Evaluation Report (2016, University of Chester)
However, CMF questionnaires completed after the initial visit could not be included in the evaluation, as they would compromise the data by not reflecting a true baseline measurement. This means that the impact of any intervention on the most vulnerable participants, however significant, is unlikely to be reflected in the quantitative data.
“Conversations and willingness to take part in the evaluation becomes easier once people feel they belong and trust has been established with the most meaningful conversations happening over a number of weeks. This has been at odds with how the Brightlife evaluation needed to be collected and as a result we do not feel has given a true picture of people’s journeys.”
(Community Compass ‘Social Activities’ End of Project Report)
Brightlife was extremely fortunate to work with the Centre for Ageing and Mental Health at the University of Chester as its local evaluation partner, providing access to the valuable expertise of specialists in ageing and related subjects. However, it could sometimes be challenging to tread the line between collaboration and independence.
While it was important for the evaluators to maintain a degree of distance from project delivery at Brightlife, it was also essential to maintain regular communication so that, as Brightlife evolved, the evaluation could respond both flexibly and iteratively. The ongoing dialogue between Brightlife and the university facilitated this, and enabled a strong, professional relationship to develop.
Evaluation is an integral part of the cycle of service commissioning and delivery: without robust impact measurement to provide evidence about what really works, it is ultimately harder to create effective interventions and to improve lives. Commissioners should therefore by no means dismiss the value of formal evaluation; but nor should they accept ill-fitting solutions that alienate both providers and service users and fail to capture impact effectively.
The ideal solution lies in co-production: evaluators need to work with commissioners and potential providers from an early stage, to identify barriers to engagement and to choose the techniques that are most likely to yield accurate, useful data. Similarly, just as older people themselves should be involved in the development of projects and services (see ‘Meeting needs’), they should also have meaningful input into the design of any evaluation.
In some cases, co-production of evaluation is not possible and providers are required to work within existing evaluation frameworks (for example in the national Ageing Better evaluation). In these cases, commissioners can encourage engagement with the evaluation process by offering support to delivery partners.
At Brightlife, the data coordinator supported providers with administration of the CMF questionnaire, offering training and advice on how to present the significance of the evaluation to participants in a positive way, and on how to anticipate the common questions asked by participants reluctant to take part. Different levels of support were available according to the needs of each provider – for some, this involved the data coordinator promoting the benefits of taking part in the evaluation directly to participants.
Some providers, for example Community Compass (Social Activity Tasters; Share Club), found that the CMF questionnaires and other formal evaluation could be made less intimidating for participants by providing one-to-one support. This was most effective when done in a familiar environment, such as the participant’s own home, separate from any project activity.
Even in cases where quantitative evaluation techniques have been successfully employed, the resulting data can fail to capture essential nuance and context.
In order to accurately assess the broader impact of projects and services on the wellbeing of participants, it is therefore important that the results of any quantitative evaluation are considered alongside qualitative evidence.
Of course, any modification to existing evaluation tools and methods to include collection of such qualitative data is likely to require a degree of flexibility by those responsible for designing and conducting the evaluation itself.
People want to feel ‘listened to’ during any evaluation process, so a dialogue approach rather than a data-gathering approach can be much more effective in engaging participants.
This approach was used very successfully by Community Compass, in both the Share Club and Social Activity Taster projects.
“At Community Compass we retain a flexible approach to all that we deliver in line with the ‘test and learn’ approach of the wider Brightlife project. We believe that the best way to evaluate what you are doing is by speaking and listening to people, people will often ‘vote with their feet’ and we have repeated models that have worked well, but also replaced activities or venues that are less successful in response to participant and volunteer advice.”
This approach can be challenging to reconcile with the more rigorous, controlled measurement that is typically required during academic evaluation.
For example, during the evaluation of the Social Prescribing scheme by the University of Chester, the Brightlife team reported multiple examples of where the scheme had helped participants to develop new friendships, extend their social networks, and improve their health and/or levels of self-worth, self-confidence and motivation.
However, the evaluators concluded that because these perceptions of health improvement and increased social networks were based on anecdotal reports, more formal evidence would be required to support any claims that the scheme had indeed been successful.
This apparent dismissal of what was felt to be a large volume of evidence was frustrating for the delivery team, especially as many participants in the scheme had been extremely unwilling to engage with the CMF questionnaire, so much of the impact had not been captured formally.
It is important to consider the potential limitations of using self-reporting models for both quantitative and qualitative evaluation, particularly when working with older people.
Those of an older generation often share a cultural aversion to expressing negative feelings, tending to place a higher value on stoicism or ‘just getting on with it’ than their younger counterparts might. As a result, they may be less likely to admit to experiencing loneliness, and more likely to understate the extent of their own problems.
Similarly, there is a risk that data from self-reported ‘before and after’ measures of loneliness and social isolation may be skewed by participants learning more about these issues through their involvement in projects. For example, some participants in the Connect Up project (The Neuromuscular Centre) reported that they had not realised how isolated they were at the start of the project, but with hindsight recognised that they had in fact been quite lonely.
These limitations can be addressed by using independent assessments (made by health professionals or by providers) alongside self-reporting techniques.
Informal qualitative evaluation tools can also be used to capture the impact of interventions. Interviews and other personal narratives are memorable, emotive and flexible – they can be employed to assist with everything from brand-building and recruitment to PR and fundraising. As such, they can be extremely effective in engaging potential funders, supporters and service users alike.
The effectiveness of impact narratives in engaging potential supporters and service users was clearly demonstrated at Brightlife through local newspaper coverage that was secured as a result of PR activity. In most cases, this was achieved by the Brightlife team rather than the individual service providers, pointing again to the valuable role that commissioners can play in raising the profile of the interventions they fund.
It is not always easy to identify and capture those personal stories that will best demonstrate the impact of an intervention. Often, the people responsible for gathering evidence of the impact of a project or service do not work directly with service users, so are less likely to be familiar with individual stories.
Encouraging ‘front-line’ staff, particularly volunteers, to participate in story-gathering can help with this. For example, during delivery of the Buddying and Befriending project, CCDT organised regular social events for volunteers: not only did this allow buddies/befrienders to socialise with and learn from each other, but it also provided an opportunity for staff to enlist the support of volunteers in capturing positive stories about the impact of the project.
When gathering and using personal stories, great care must be taken to respect and protect the individuals to whom they belong. People should be encouraged to share their stories at a pace and in an environment with which they are comfortable. For some, this may involve speaking on camera to an interviewer during an activity session; others may prefer talking quietly to a trusted volunteer in their own home.
Brightlife found video to be a useful tool for recording and sharing stories in a relatively non-invasive way. By working with an external video production supplier, the project team was able to capture some extremely nuanced and powerful stories.
Consent to use personal stories must be fully informed at the point of recording. Even where case workers and volunteers are already aware of stories that would be suitable to use as case studies, they should not be recorded without the knowledge of the relevant individual. When seeking consent, it is important to make the person fully aware of where their story may potentially appear (for example in a newspaper, in promotional material or in a fundraising campaign), as it is unlikely to be practical to obtain their separate permission every time it is used.