The challenges of measuring the “return on engagement” of cultural relations
What the British Council calls “Cultural Relations”—the building of trust and engagement through the exchange of knowledge and ideas between people worldwide—is a long-term investment and its full benefits are not achieved immediately. The objective of cultural relations is not to support short-term foreign policy messages; instead, by engaging through the common languages of education, arts, science and sport—promoting understanding and removing misunderstanding—long-term people-to-people and society-to-society relationships and engagement are fostered. By building up a “cultural relations credit” over time, other international engagement can be more effective whether government-to-government relations are friendly or tense. For example, at a time of diplomatic tension between Russia and the U.K., the British Council supported continuing cultural engagement through the arts by helping broker relationships that resulted in a major exhibition of U.K. art going to Moscow and of Russian art going to London.
The difficulty with the “long-term” is that public and private funders tend to be more interested in the short term, so international cultural engagement must also demonstrate value for money and impact in the short and medium term to retain the confidence and support of its funders, stakeholders and partners. In short, there must be a demonstrable “return on engagement.” The British Council has, over a number of years, developed an increasingly rigorous approach to evaluation which provides short-, medium- and long-term indicators of success. Our approach is both quantitative and qualitative. The numbers that contribute to our corporate “scorecard” are one aspect and help provide short-term indicators of impact across our global network. We also use project-specific quantitative data and qualitative “stories” to show the medium- and long-term impact of our work. Before examining how we evaluate today, it is worth looking at how we started systematic and formal evaluation some 20 years ago to understand the drivers, the learning processes involved and the challenges of introducing evaluation into the “culture” of an organization such as the British Council. Many of the lessons learned then are still relevant today. Our evaluation system has of course improved immeasurably over the years and is increasingly accepted as essential and useful within the “culture” of the organization. It has taken time, but the mainstreaming of evaluation really is an achievement.
Evaluation at the British Council: A Little Bit of History
In 1991, I worked at the British Council in Paris on what we called an “evaluation pilot.” The driver was a review by the NAO (National Audit Office) which said, in essence, that the British Council had reasonable planning systems and was relatively efficient, but that we needed to improve our impact measurement systems. We commissioned a well-known external consultancy firm to develop a methodology, which they set out in a telephone-directory-sized manual. It wasn’t rocket science. We were told to take a more systematic approach to defining the “priority groups” we wished to engage with, interview 50 of the most senior people (half of whom we should not know) and give questionnaires to everyone else involved in our programs. In Paris, our “priority groups” then comprised the elite and future elite of France and professionals in the sectors in which the British Council worked. In a country where a diploma from a small group of Grandes Ecoles meant a virtually guaranteed path to influence, identifying the elite was a relatively easy exercise. The “professional” groups were pretty much self-defining given that the British Council worked in British Studies, science, English language and the arts. As obvious as it may sound today, this was the first time we had taken a systematic and strategic approach to analyzing and defining our target audiences, and it helped us immeasurably in sharpening our focus.
The French found our interest in evaluation rather curious and this helped us gain access to an extraordinarily senior group of people. The interview questions were, I recall, rather banal. Asking a senior government or cultural figure in Paris whether they had ever been to the U.K. was actually a little embarrassing. However, once the questionnaire was completed, the conversations we had were extraordinary and this was what truly enriched our programs and expanded our networks over the ensuing years. This demonstrated better than anything that listening to what people want rather than designing something you think they want is more effective. Again, this sounds obvious but even today there are countless examples of public diplomacy agents focused more on messaging than listening.
The second part of the process was a series of questionnaires for those engaged in our programs, which assessed customer satisfaction and tried to ascertain whether perceptions of the U.K. had improved as a result of our work. We were surprised that people were quite happy to complete them and even more surprised at how useful the exercise was. Asking program participants for measurable feedback was invaluable in demonstrating success and in determining which programs to drop and which to continue or change.
Part of our brief was to ask our counterparts in France how they evaluated their international cultural work. The response was splendid: “La culture est trop importante pour être évaluée” (culture is too important to be evaluated). Many of us had secret sympathy for this view at the beginning but as we embarked on a much more systematic approach to defining target audiences and took a more market-oriented approach to our work, asking a range of senior contacts what their priorities were and seeking the views of our “customers” as to what they thought of the British Council and the U.K., we all began to be somewhat less skeptical about monitoring and evaluation. Not only did we now have quantitative evidence of success for our funders and partners, we were also better placed to make informed resource decisions about how we could achieve the greatest impact amongst the people we wanted to reach.
Almost 20 years on, this all sounds rather basic but I recount the story of our first real foray into evaluation in 1991 because the lessons learned then in the first few post-Cold War years were invaluable, and are still relevant today. It also serves as a useful reminder of why we need to assess the impact of cultural relations. First, most cultural relations practitioners are using other people’s money for at least part of their activity. In the case of the British Council, just under one third of our budget is from the U.K. government; using taxpayers’ money brings with it an obligation to demonstrate value for money and “benefit.” The drivers behind the 1991 evaluation pilot—the NAO wanting us to demonstrate efficient and effective use of public funds—are even more important today given the enormous pressure on public and indeed private funds. Second, and in some ways more importantly, cultural relations practitioners, like most professionals, want to know that they are “making a difference” and constantly seek to improve effectiveness by learning lessons from the past. Without a robust planning, monitoring and evaluation process, this is simply not possible.
The 1991 NAO report was therefore an important catalyst for the organization to start taking evaluation seriously. It helped us see the value of defining what we wanted to achieve, who we wanted to reach, what difference we wanted to make and how we would know when we had achieved it. It went beyond the obligation of accountability and showed us how to achieve greater impact and demonstrate success to our partners and stakeholders. This has made us immeasurably stronger.
Planning, Monitoring and Evaluation in the British Council Today
In the 20 years since we embarked on formal evaluation, we have developed, and continue to develop, much more robust and sophisticated quantitative and qualitative approaches to PME (planning, monitoring and evaluation). We introduced a “balanced scorecard” in the 1990s, backed up by a qualitative “storyboard.” Inevitably, our first version was over-complex and tried to measure anything that moved. Whilst we have simplified our systems and now “measure what matters,” one of the biggest challenges over the years was how to make monitoring and evaluation part of the “culture” of the organization so that it was not seen simply as another management task. With the tight strategic framework and the outcomes- and results-focused planning we have introduced over the last few years, monitoring and evaluation has had to take a more central place in the British Council. When the most senior staff in the organization are held “accountable” for results, monitoring and evaluation has to be mainstreamed and everyone begins to appreciate its relevance.
Defining target audiences, objectives, outcomes and success measures is central to our evaluation system; planning, monitoring and evaluation are therefore inextricably linked. To achieve our organizational purpose, the British Council has a corporate strategic framework which guides our programming across our global network, from the smallest local projects to our large-scale international products. All our activity contributes to three strategic programmatic strands: creative and knowledge economy, intercultural dialogue and climate change. Strategy informs impact targets and planning from the top of the organization down, and impact deliverables contribute to the organization’s overall performance from junior colleagues “on the ground” up to the CEO. We have been quite successful in ensuring our teams understand the strategic framework and the importance of the role of monitoring and evaluation in delivering results. Establishing clear, concise and ambitious organizational objectives not only provides a compass for every action we take, but also provides a set of targets which enable effective evaluation.
Rigorous internal mechanisms for project design and development ensure quality control and establish a set of criteria for impact assessment. Before project ideas receive any resources, teams must articulate the project outcomes (the change the project hopes to accomplish), outputs (the goods or services produced to achieve these outcomes) and audiences (numbers and quality of engagement). These are the basic elements of the British Council’s internal project commissioning, informed by the Project Logic model of corporate planning and performance. A four-stage process—beginning with design and development, moving on to proof of concept, then build and test, and ending with the release of the project—occurs over two to three years to ensure the success of the business model. All project proposals include a robust monitoring and evaluation process (normally, around five percent of a project’s resources are set aside for evaluation) and have an “exit strategy.” If targets are not met, the project will be changed or stopped.
Through each project, we need to be able to articulate a story of impact and legacy. We do this primarily through two interconnected dimensions: audiences and change. Who are we seeking to work with, and in what numbers? Three categories of audience help us narrow our focus: Leaders—decision makers at a national or regional level; Influencers—emerging leaders and gatekeepers to larger audiences; and Aspirants—primarily young people seeking information and opportunities. The scale at which these audiences are involved in our programming depends on the outcomes of the project—i.e., the change we wish to accomplish. Change usually falls into two categories: it is either a personal learning change—a shift in perception or a gain in capacity—or an action change—a shift in behavior, the setting of an agenda, or an institutional change. The types of change desired in the project outcomes—both long-term and short-term—dictate which audience we engage, how deeply, and the level of investment per individual.
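The audience tiers and change categories described above can be sketched as a simple data model. The class names mirror the article’s terminology, but the investment-per-person rule and every figure below are hypothetical, added only to make the trade-off between depth of change and scale of audience concrete.

```python
# A minimal sketch of the audiences/change framework. Names follow the
# article; the numbers and the spend rule are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Audience(Enum):
    LEADER = "decision makers at a national or regional level"
    INFLUENCER = "emerging leaders and gatekeepers to larger audiences"
    ASPIRANT = "young people seeking information and opportunities"


class Change(Enum):
    PERSONAL_LEARNING = "perception change or capacity building"
    ACTION = "behavior shift, agenda setting or institutional change"


@dataclass
class Outcome:
    description: str
    change_type: Change
    audience: Audience
    target_numbers: int  # how many people the project aims to involve


def investment_per_person(outcome: Outcome, budget: float) -> float:
    """Deeper change with fewer people implies higher spend per individual."""
    return budget / outcome.target_numbers


# Two hypothetical outcomes at opposite ends of the depth/scale trade-off:
deep = Outcome("institutional policy change", Change.ACTION,
               Audience.LEADER, 10)
broad = Outcome("awareness of inclusive design", Change.PERSONAL_LEARNING,
                Audience.ASPIRANT, 1000)
```

On the same hypothetical budget, the “deep” outcome spends a hundred times more per individual than the “broad” one—which is exactly the trade-off the framework is designed to make explicit at the planning stage.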
The British Council recognizes that cultural relations work varies from project to project. In measuring impact, short-term quantitative measures of audiences engaged and reached are important, as well as qualitative indicators of the social implications, changes achieved and lasting legacy within specific cultural contexts. Therefore we use customized project-specific research, monitoring and evaluation methods in addition to the universal “Balanced Scorecard” which ensures that all British Council programming is subject to some standardized quantitative measures.
The Corporate Scorecard includes the level of audiences engaged directly; audiences reached through cascaded means or through radio, television and the web; the number of products and services delivered by the project; and survey scores that assess customers’ attitudes toward and expectations of the British Council’s programming, its quality, reputation and likelihood of recommendation. Projects, countries and regions have targets within the scorecard for which teams are accountable. This data informs planning and tracks success against organizational strategic targets, and is becoming more important as the British Council moves towards impact-led planning.
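One way to picture how scorecard metrics roll up from project to country to region is as a small aggregation over per-team figures, with each team held to its own targets. The field names follow the metrics listed above; all figures are invented for illustration and are not British Council data.

```python
# Illustrative sketch of a scorecard roll-up; numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Scorecard:
    engaged: int    # audiences engaged directly
    reached: int    # cascaded, broadcast and web reach
    delivered: int  # products and services delivered

    def __add__(self, other: "Scorecard") -> "Scorecard":
        # Roll two teams' figures up into one aggregate scorecard.
        return Scorecard(self.engaged + other.engaged,
                         self.reached + other.reached,
                         self.delivered + other.delivered)


def meets_target(actual: Scorecard, target: Scorecard) -> bool:
    """A team is accountable for hitting every metric in its target."""
    return (actual.engaged >= target.engaged
            and actual.reached >= target.reached
            and actual.delivered >= target.delivered)


# Hypothetical per-city figures rolled up to a regional total:
hong_kong = Scorecard(engaged=330, reached=2_000_000, delivered=1)
guangzhou = Scorecard(engaged=120, reached=500_000, delivered=1)
region = hong_kong + guangzhou
```

The same `meets_target` check can then be applied at project, country and regional level against the targets each tier is accountable for.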
The changes delivered by a project among the audiences involved— whether they are personal or action-oriented—do not tend to lend themselves to standardized quantitative metrics. However, being clear about the exact nature of the outcome ahead of time, and using varying evaluation methods including surveys, network analysis, in-depth interviewing and storyboarding does enable project managers to measure the difference a project has made—both in the short and long term.
Our monitoring and evaluation is not perfect but the quantitative and qualitative methods together do provide us with useful data and compelling stories which help persuade our stakeholders and partners of the benefit of supporting and working with the British Council. To articulate this in practical terms, the case study below shows the strategic contributions of one project to local, regional and corporate British Council outcomes.
Artistic Innovation Leads to More Inclusive Societies: Cultural Relations and Human Rights
In Asia, two million people are moving into urban areas each month, creating cities filled with the ferment of economic possibilities and societal tensions between traditional ways of living and the impact that the opportunities and risks of globalization present.
To stimulate the entrepreneurial possibilities of young people in these places while encouraging them to create open and diverse societies, the British Council’s regional Creative Cities project (spanning 11 countries in East Asia and China) included a Hong Kong-based “48 Hour Inclusive Design Challenge.”
A creative workshop in the guise of a design competition, the Challenge asked designers from across the region to divide into groups and team up with volunteers with physical disabilities, competing to produce a design concept for a product usable by both disabled and non-disabled people within 48 hours. Each team was led by a design mentor from the U.K. and a local disabled design partner.
There were three outcomes for the 48 Hour Inclusive Design Challenge: 1) “Put disabled people at the heart of the innovation process and demonstrate how they can be a vital part in the design process as a template for social inclusion.” 2) “Share U.K. expertise in Inclusive Design and increase the capacity of designers or design educators in China and the East Asia region to engage with a disadvantaged community in the process of innovation.” 3) “Increase the activity of city and cultural leaders to promote the benefits of developing Inclusive Design.”
Inclusive design can be a tool for bringing business advantage, diversity and innovation to design communities while at the same time raising awareness about people with special needs. None of the East Asian designers in this particular competition, no matter their seniority in the design field, had ever worked with a disabled person. The British Council invited the Helen Hamlyn Centre at the Royal College of Art in the U.K. to bring in inclusive design experts as mentors, helping designers think out of the box and highlighting good examples of common technology, like Bluetooth, which has roots in assistive technology.
The winning team’s design, the MPwerStyx, was inspired by two brothers with an inherited metabolic disorder that damages body tissues and limits the development of joint movements. The brothers enjoyed surfing the internet, but found a traditional computer mouse cumbersome. Inspired by their dexterity with Chinese cutlery, the winning team reinvented the mouse in the style of a pair of chopsticks.
Surveys and interviews with the design team participants and the disabled volunteers made clear that a strong shift in perception had occurred (more on that below).
The following are summaries of the quantitative and qualitative evaluations conducted at three levels of the British Council, based on information gathered from participants in the 48 Hour Inclusive Design Challenge. Each section represents a distinct set of needs that each project must aim to fulfil: local/country, regional and global.
The most immediate measurement of impact is at the local level: how the project met the British Council Hong Kong office’s local strategic goals. Did the project take full advantage of—or, ideally, strengthen—the British Council’s pre-existing Hong Kong relationships? Did the project connect the British Council with new audiences? And what sort of change happened as a result?
The British Council’s Hong Kong office was responsible for determining the impact of the project at a local level. As it took the lead on the 48 Hour Inclusive Design Challenge, it was also responsible for compiling all monitoring and evaluation figures for the project within each participating country and, therefore, for the impact of the project overall.
Since the competition was headquartered and held in Hong Kong, it was an opportunity for the office to expand and strengthen its local partnerships with public, private and not-for-profit institutions. The project included extensive pre-event activities beginning several weeks before the competition to drum up interest among key influencers and the general public. Key Hong Kong design organizations joined the British Council as partners in the project, including the Hong Kong Government’s DesignSmart Initiative which supplied grant funding and Hong Kong Youth Advocates who volunteered to provide administrative support during the competition. The partnership element is crucial; it not only ensures an informed, culturally aware approach, but it makes the most of the British Council resources assigned to the project by leveraging further investment.
The British Council Hong Kong team engaged one “Leader” with the project—Victor Lo, Chairman of the Hong Kong Design Centre—in an effort to ensure high-level support. As a senior official with national decision-making capabilities, Lo is considered a “Leader” within the British Council’s audience metrics, and would therefore be accounted for on the Scorecard as such. While Lo was not the prime audience for this project, engagement with leaders is often important in ensuring a project’s goals enjoy longevity beyond the competition.
The 48 Hour Inclusive Design Challenge engaged 79 “Influencers” in China and Hong Kong’s disabled and design communities (the second tier of the British Council’s Scorecard audience profile). These Influencers were either advisors and partners or local designers participating in the competition. In the case of the six disabled design partners who led the challenge teams alongside a U.K. design mentor, longer, deeper relationships were established with the British Council as a result of collaborating on the project’s design and delivery. In opening up the challenge and making it a publicly accessible competition, the British Council was able to call upon the larger set of Influencers involved, as gatekeepers to their communities, to draw in a further 250 young people and general members of the public (Aspirants) who either volunteered during the challenge or were in attendance. Competition audiences engaged directly with the subject matter, as they voted on the best designs.
Pitching the event to journalists, together with a partnership with a local news organization, resulted in an estimated two million impressions in Hong Kong. Successful media coverage is a key output from a project like the Design Challenge, but the British Council does not count readership or viewership within the Scorecard’s audience profile. Media helps raise awareness, but our aim is to measure the degree to which audiences have been directly affected as a result of our work.
Project partner Hong Kong Design Centre provided strongly positive feedback that the project met the organization’s needs: “It’s really a good chance to cooperate with the British Council to co-organize this meaningful public event to promote the idea of inclusive design successfully. The event was successful as it attracted many people from design and different industries, media and the general public to participate and arouse their awareness of this topic.”
Regional: China and East Asia
Regionally, the British Council evaluated the 48 Hour Inclusive Design Challenge in all 11 participating countries. Did the project achieve its outcomes—did it deliver a personal-learning or action-oriented change? What could be done next time to improve the project? Did the participating teams from regions outside Hong Kong incorporate what they learned into future projects? Project managers analyzed quantitative and qualitative data to answer these questions.
A survey distributed to participants and audience members of the 48 Hour Inclusive Design Challenge asked respondents to indicate whether the event met their expectations, whether they considered the event “high quality,” and whether they considered the British Council a leader in its field. Respondents were also encouraged to provide qualitative feedback in blank spaces provided. Those surveyed were given five choices, ranging from “strongly agree” to “strongly disagree.” Responses were collated and weighted to give an average score for each question on a hundred-point scale. The “Expectations” question, for example, yielded a score of 82—between “agree” and “strongly agree”; respondents also felt that the event was high quality, earning a score of 85 in this category. The question about the British Council’s reputation in the cultural relations field—always a difficult metric to work with if people have little international experience—earned an average score of 76.
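The weighting step can be sketched in a few lines. The article does not disclose the exact mapping from response categories to points, so a linear 0-to-100 scale is assumed here, and the response counts are invented for illustration; a different mapping would shift the resulting scores.

```python
# Illustrative sketch: converting five-point Likert responses to a
# hundred-point score. The linear weighting and the response counts are
# assumptions; the article does not publish the actual scheme or data.

LIKERT_WEIGHTS = {
    "strongly agree": 100,
    "agree": 75,
    "neutral": 50,
    "disagree": 25,
    "strongly disagree": 0,
}


def likert_score(counts: dict) -> float:
    """Weighted average of response counts on a 0-100 scale."""
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no responses")
    weighted = sum(LIKERT_WEIGHTS[answer] * n for answer, n in counts.items())
    return weighted / total


# Hypothetical response distribution for one survey question:
example = {
    "strongly agree": 30,
    "agree": 50,
    "neutral": 15,
    "disagree": 4,
    "strongly disagree": 1,
}
print(likert_score(example))  # prints 76.0
```

Collating responses this way is what lets a single question be compared across projects, countries and years, even though the underlying answers are categorical rather than numeric.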
Perhaps telling a more detailed story, however, are the aims originally set out by the project team alongside the accomplishments. Project leaders aimed to “put disabled people at the heart of the innovation process and demonstrate how they can be a vital part in the design process as a template for social inclusion.” In the end, six disabled design partners collaborated with six U.K. design mentors and with 44 designers and design educators from 11 different countries across Hong Kong, China and East Asia. These 56 experts competed and then collaborated to produce the design of a product that ultimately solved a problem faced by disabled people.
Perception change amongst competition participants and audience members was an important qualitative consideration at the regional level. One designer from Guangzhou commented, “In China we seldom think of people with disabilities – it’s a taboo subject. The situation is getting better now but there is still much to be done. The competition challenged us morally, emotionally and technically – what an inspiration!” One of the volunteers from mainland China observed, “I feel so thankful to the British Council and all the designers for creating designs that break the barrier between the non-disabled and disabled worlds. This is the first time in my life that I’ve been treated as a normal member of society.”
Media coverage allowed seven million people in China and East Asia, North America and New Zealand to hear about the project; prominent full-page stories appeared in Hong Kong newspapers and stories filled Chinese websites.
Global: The British Council
The impact of the 48 Hour Inclusive Design Challenge was felt not only in the audiences engaged—Leaders, Influencers and Aspirants—or in the personal learning of attendees regarding the perception of disabilities; there was also an action-oriented change in the marketplace as a direct result of the project: the winning design concept was showcased at the 2008 London Design Festival to an audience of over 2,000 and subsequently bought by a production house, ready to be manufactured and made available to millions of people. None of these achievements could have been made in isolation. Partnership was the key to success in this case: 50 percent of the project budget came from strategic partnerships with local government and NGOs.
New economic and social developments often hinge upon the creativity and innovation that arts professionals bring to the table. This Inclusive Design Challenge demonstrated yet again that the arts and creative industries provide unique ways for people to debate contemporary issues, challenge opinions and increase mutual understanding while simultaneously laying the groundwork for innovations that will drive economic growth.
Qualitative feedback from the East Asian participants made clear that the Challenge highlighted the practical benefits of a collaborative, inclusive approach to product design: a chance to identify new market opportunities, and insight into how aesthetics combined with usability leads to real-world entrepreneurship. Mentorship by Royal College of Art staff contributed to fostering new ways of working while simultaneously highlighting the U.K. as an effective partner for skills development and contributing U.K. expertise to international cooperation—both key goals for the British Council globally in our Creative and Knowledge Economy program area and one of the project’s key outcomes. As a result, the Hong Kong Design Centre has already organized another Inclusive Design workshop, and one of the project partners, Cyberport, is exploring a future partnership with the Royal College of Art to incorporate inclusive design in improving digital lifestyles in Hong Kong.
Besides the benefits to the regional creative economy, this project also contributed to another priority for the British Council: through Intercultural Dialogue, a more open and inclusive society. Through shared work in a creative endeavour, both disabled and non-disabled participants contributed to positive social change in East Asia while strengthening the bonds between different perspectives within their region and with the U.K.—a key outcome of our global work in building trust and engagement between people worldwide.
By Sharon Memis
Sharon Memis is the Director of British Council USA. Memis joined the British Council in 1988 and has managed international programs in education, the arts, science and governance in Paris, Rome and Brussels. Before taking up her current post in 2006, she was director of corporate planning and performance in London and frequently spoke at conferences about the British Council’s evaluation system. The British Council celebrates its 75th anniversary in November 2009. More information can be found online at www.britishcouncil.org.