This chapter discusses key considerations and processes for planning and conducting evaluations of programs within a pro bono practice. It outlines:
- Why evaluation is important in the pro bono context;
- How to plan an evaluation;
- How to collect and analyse data;
- Ways to report findings and share evaluation results; and
- Commissioning evaluations.
A glossary of key terms is included at the end of the chapter, along with some simple templates to assist in the evaluation planning process. An example pro bono program (the ‘Employment Law Service’) is referred to throughout the chapter as a case study to demonstrate how evaluation activities are applied.
- 1.13.1 What is Evaluation?
- 1.13.2 Planning an Evaluation
- 1.13.3 Implementing an Evaluation
- 1.13.4 Commissioning an Evaluation
- 1.13.5 Glossary
1.13.1 WHAT IS EVALUATION?
Evaluation refers to the systematic process of assessing the merit, value, or impact of a program, project, policy, or strategy. It involves collecting and analysing data against a set of specific questions or criteria to generate findings about the design, implementation, or outcomes of a program or project. Evaluation is a useful tool in service delivery contexts and can help organisations demonstrate the benefits of their work, assist in decision making processes, or document key lessons for future programs.
In the pro bono context, firms can use evaluation to demonstrate the value of pro bono programs in a range of areas. This includes demonstrating the impact of the program for clients, supporting the firm’s strategy and demonstrating its values, and supporting the recruitment and retention of staff with an interest in social justice.
It is important to recognise that there is significant diversity and variation in evaluation approaches, and that evaluation and impact measurement can fulfil many functions. The remainder of this chapter outlines a straightforward process for planning and implementing an evaluation; however, it should be kept in mind that there is no single approach to evaluating a program.
1.13.2 PLANNING AN EVALUATION
This section outlines the common steps required to plan an evaluation. Typically, it is best to plan an evaluation at the beginning of a program or project to ensure the objectives are clear and appropriate and data collection processes are established and integrated. However, it would still be beneficial to retrospectively apply these steps for an evaluation of an existing pro bono program.
The following key steps may assist in planning an evaluation:
- Develop a theory of change or a program logic model.
- Outline the objectives and the scope of the evaluation.
- Create an evaluation framework with key evaluation questions, indicators, data sources, and data collection methods.
Brief prompts are provided at the end of each section to summarise the key considerations for each step. These considerations would generally be reflected in some way in an evaluation plan. Typically, an evaluation plan includes:
- Background to the program
- The theory of change for the program
- The objectives and scope of the evaluation
- An evaluation framework
- Methodology or approach to data collection and analysis
- Roles and responsibilities
- Timing for reporting
Theory of Change
A theory of change – also referred to as a program logic model – is a useful starting point when scoping and planning an evaluation. In short, a theory of change identifies the logical flow from program activities and processes (e.g., workshops) to the intended outcomes of the program (e.g., improvements in clients’ awareness of their legal rights). This model lays the foundation for an evaluation by clarifying what ‘success’ will look like in the short, intermediate and longer term.
There are many ways to construct a model like this, but most will follow a structure similar to that in the figure below:
It may not always seem feasible to develop a documented theory of change for each program being evaluated, particularly if the program is small or under-resourced. However, explicitly considering and documenting the intended activities, outputs, and outcomes of the program is an important step in scoping the evaluation and deciding on the most appropriate approach.
Case Study: Employment Law Service
A firm is partnering with Community Legal Centres (CLCs) to offer pro bono outreach services to workers at risk of exploitation. The program involves upskilling CLC staff in employment law services and ensuring workers in precarious employment have access to legal services and advice. The figure below outlines a very basic theory of change for the employment law service program. While the program may not directly cause worker exploitation across industries to decrease, it aims to contribute to achieving this broader impact. Therefore, even though achieving the longer-term outcomes may not be directly attributable to the program, it is still important to link the direct or immediate outcomes of the program to the wider goals the program is aiming to contribute to in the longer term.
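As a minimal sketch only, the example below shows one way the case study’s theory of change could be documented as structured data for planning purposes. The specific entries are illustrative assumptions drawn from the program description above, not the firm’s actual model.

```python
# Illustrative sketch: documenting a theory of change as structured data.
# All entries are hypothetical examples based on the Employment Law Service case study.
theory_of_change = {
    "inputs": [
        "Firm pro bono lawyer time",
        "CLC partnership and outreach locations",
    ],
    "activities": [
        "Employment law training sessions for CLC staff",
        "Outreach legal advice clinics for workers in precarious employment",
    ],
    "outputs": [
        "Number of CLC staff trained",
        "Number of workers receiving advice",
    ],
    "short_term_outcomes": [
        "CLC staff report increased skills and confidence in employment law",
        "Workers have improved awareness of their legal rights",
    ],
    "intermediate_outcomes": [
        "More employment law services offered to workers at risk of exploitation",
    ],
    "long_term_impact": [
        "Contribution to reduced worker exploitation across industries",
    ],
}

# A quick check that each level of the model has been considered.
for level, items in theory_of_change.items():
    print(f"{level}: {len(items)} item(s) documented")
```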
Appendix 1 provides a template for a basic theory of change.
In summary: Is there a clear articulation of what success looks like for the program? Is there a shared understanding of the program’s intended activities, outputs, and outcomes? Has this been documented in a theory of change?
Objectives and scope
Before commencing an evaluation, it is important to understand what the firm is aiming to achieve by investing in the evaluation. Taking the time in the early stages of evaluation planning to ensure there is a shared understanding of what is being evaluated and why helps keep the evaluation focused. Objectives can vary widely, but might include:
- Demonstrating accountability: evaluation can demonstrate how a firm’s resources were effectively or efficiently used to achieve the pro bono program’s objectives or outcomes. Evaluation can also help demonstrate that pro bono legal work is being performed to the same high standard as all other work in the firm.
- Demonstrating value: evaluation can demonstrate the value of a firm’s pro bono work which can help to build support for investing in future pro bono programs, promote the firm’s values, and enable the firm to participate in the pro bono community.
- Generating learning: knowing whether pro bono efforts have achieved their goals, or understanding the reasons why they have not, helps inform future efforts to make the best use of limited resources. The lessons learned through an evaluation process can help inform decision-makers on what to invest in going forward, or how to improve existing program offerings. It can also provide insights into how the program can contribute to meeting the broader strategic business goals of the firm.
Once the evaluation objectives have been clarified, the scope of the evaluation needs to be defined so that it can be planned and resourced appropriately. Questions to help set the evaluation scope include:
- What will be evaluated: will the evaluation cover an entire program, a single project, or a particular component of a project?
- What time period will be included: will the evaluation cover the entire duration of a program, or a limited time period?
- What resources are available: are there extra resources available for the evaluation or will existing project resources need to incorporate any capacity for evaluation?
- What is out of scope: are there any areas of inquiry that will not be covered by the evaluation? Should the evaluation focus on a particular set of outcomes (e.g., outcomes experienced by a particular stakeholder group only) or a particular evaluation area (e.g., the design and implementation process of the program only)?
A program’s evaluation objectives and scope may be broad, or they may be quite narrowly defined. For example, they might relate only to understanding whether internal staff have increased their knowledge and capabilities in pro bono practice. Even if the theory of change specifies that community members will be impacted by the program, it may not be appropriate or feasible to evaluate the direct results for program beneficiaries.
Case Study: Employment Law Service
Background information:
There are limited resources available for the program evaluation; therefore, the following evaluation objectives and scope would be appropriate:
Objectives:
Scope:
In summary: Why is this evaluation being undertaken? Is there a clear and defined purpose for the evaluation, with established parameters and scope?
Evaluation Framework
An evaluation framework outlines the key evaluation questions, indicators, data sources and data collection methods. In summary:
- Key Evaluation Questions (KEQs): high-level questions that are used to guide the evaluation. KEQs are different from the questions asked in a survey or an interview; rather, they define what we need to find out through data collection and analysis. It can be useful to think of KEQs in terms of overarching categories or domains. For example:
- Appropriateness (e.g., was the pro bono program designed appropriately for the target population?)
- Effectiveness (e.g., to what extent did the pro bono program achieve the intended outcomes?)
- Efficiency (e.g., to what extent did the pro bono program represent value for money?)
- Learning (e.g., what lessons are there for future programs?)
The aim is to have complementary but not overlapping questions. KEQs should address areas of inquiry related to the theory of change (e.g., how many clients did the program reach?) and to the evaluation objectives and scope. For example, it may not be necessary to assess the cost-effectiveness (efficiency) of a particular program if that is not an evaluation objective.
- Indicators: indicators outline the information needed to be able to answer the KEQs. For example, if we want to know whether pro bono clients are accessing more services, an indicator could include an increase in the number of referrals made to a particular service provider. Indicators could also include:
- Total pro bono hours performed by the firm
- Average number of pro bono hours performed on a per-lawyer per-year basis
- Pro bono hours as a percentage of total billable hours
- The number of lawyers and non-lawyers in the firm participating in pro bono work
- The number of pro bono clients assisted
- The number of pro bono matters
- Lawyer experiences undertaking pro bono legal work
- Staff experiencing increased work satisfaction
- Staff developing increased skills
In short, an indicator describes the evidence that underpins the answer to a KEQ.
- Data sources: data sources refer to where the information and evidence will come from. People, including staff, clients, and stakeholders, can be data sources. Similarly, documentation and records, such as referral notes, case notes, and budgets, are also data sources. Specifying the data sources helps indicate where the data will be when we need to collect it, and from whom.
- Data collection methods: Finally, data collection methods specify how the information will be collected. Methods outline the measures to be used to gather information from the data sources. This might be through a systematic review of project records, or it might include surveys, questionnaires, interviews, or focus groups.
Other elements that can be useful to incorporate into an evaluation framework include responsibility – which can be important when there are multiple people or organisations involved – and timing so that it is clear when data will be collected.
The figure below summarises the interaction between these components of the evaluation framework:
An evaluation framework can be relatively concise, or it may be very comprehensive. It should be directly related to the evaluation objectives and scope, and can be relied on throughout the evaluation to guide the data collection, analysis, and reporting.
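To illustrate how these components fit together, the sketch below represents one row of a hypothetical evaluation framework as structured data, with a simple completeness check. The field names and example values are assumptions for illustration only, not a prescribed format.

```python
# Illustrative sketch: recording evaluation framework rows so that each key
# evaluation question (KEQ) is linked to indicators, data sources, data
# collection methods, responsibility, and timing. All values are hypothetical.
evaluation_framework = [
    {
        "keq": "To what extent did the pro bono program achieve the intended outcomes?",
        "domain": "Effectiveness",
        "indicators": [
            "Number of pro bono clients assisted",
            "CLC staff report increased skills in employment law",
        ],
        "data_sources": ["Matter management system", "CLC staff"],
        "methods": ["Monitoring data review", "Staff feedback form"],
        "responsibility": "Pro bono coordinator",
        "timing": "Quarterly",
    },
]

# Simple check before data collection begins: every KEQ should have at least
# one indicator, data source, and method specified.
for row in evaluation_framework:
    for field in ("indicators", "data_sources", "methods"):
        if not row[field]:
            print(f"Missing {field} for KEQ: {row['keq']}")
```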
Case Study: Employment Law Service
Considering the intended outcomes of the employment law service program and the evaluation objectives and scope, the following excerpt from the evaluation framework would be appropriate. An evaluation framework would usually include approximately five key evaluation questions, which may also be accompanied by a series of sub-questions.
In summary: Is there a clear sense of the evaluation questions that need to be answered? What do you need to know to answer the evaluation questions? Where will you get the data from, and how?
1.13.3 IMPLEMENTING AN EVALUATION
This section provides high-level guidance on implementing an evaluation once the plan has been finalised. Implementing an evaluation broadly involves:
- Developing data collection tools
- Implementing data collection
- Data analysis
- Reporting
Data collection
The evaluation framework outlines the data that needs to be collected, from where, and how. The plan can be used as a guide to ensure that processes are established to collect and access data when it’s needed, as the evaluation is most likely to work well if it can be integrated into existing work processes.
Data can be quantitative or qualitative, with most evaluations using both in a mixed methods approach. While there are many data collection tools that could be used, the methods most commonly relied on include:
- Monitoring data, databases, and project documentation
- Surveys, questionnaires, and feedback forms
- Interviews and focus groups
Generally speaking, data can be divided into monitoring data and primary data. Monitoring data refers to information collected as a regular and ongoing part of program delivery, while primary data refers to a more tailored or intensive approach to gathering information at specified points in the evaluation.
Monitoring data collection
Monitoring data refers to the regular, periodic, and continuous collection of information throughout the duration of a program. Monitoring data is primarily related to the inputs, activities and outputs components of the theory of change and is generally focused on the process elements of a program.
Monitoring data can be used to track progress towards program objectives and to report against indicators such as the number of pro bono cases, the hours spent by staff on pro bono matters, or the breakdown of referral sources.
Most firms are likely to already be collecting some of the monitoring information that would be useful for evaluating their pro bono program through their matter management and billing systems, or as the result of their:
- External obligations, such as reporting required by government
- General record-keeping policies, such as matter opening procedures
- Pro bono intake procedures
- Approach to crediting and recognising pro bono work
Some firms with established pro bono practices may also have customised databases for pro bono matters to capture a range of additional information which may include:
- Client profiles (e.g., gender, cultural and linguistic background, income, geographic location)
- Area of law or practice (e.g., governance, employment, tax)
- Referral source
- Billing or time-tracking records (e.g., time split between casework and advice, law reform, pro bono administration, continued legal education (CLE) training)
- Seniority of pro bono staff member
Most evaluations will draw on monitoring data, so it is beneficial to ensure there are systems in place to efficiently capture this information throughout delivery of the program. Partner organisations may also need to collect this data, so it is useful to have the specific indicators documented and understood by all involved.
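As a brief, hedged sketch, the example below shows how some of the monitoring indicators listed earlier could be calculated from matter records exported from a matter management or billing system. The record structure and field names are hypothetical and would need to be adapted to the firm’s own systems.

```python
# Illustrative sketch: calculating common monitoring indicators from exported
# matter records. Field names and figures are hypothetical examples.
matters = [
    {"matter_id": "PB-001", "lawyer": "A", "pro_bono_hours": 12.5, "client_id": "C1"},
    {"matter_id": "PB-002", "lawyer": "B", "pro_bono_hours": 8.0, "client_id": "C2"},
    {"matter_id": "PB-003", "lawyer": "A", "pro_bono_hours": 4.0, "client_id": "C3"},
]
total_billable_hours = 5000.0  # hypothetical firm-wide figure for the period

total_pro_bono_hours = sum(m["pro_bono_hours"] for m in matters)
participating_lawyers = {m["lawyer"] for m in matters}
clients_assisted = {m["client_id"] for m in matters}

print(f"Total pro bono hours: {total_pro_bono_hours}")
print(f"Average hours per participating lawyer: "
      f"{total_pro_bono_hours / len(participating_lawyers):.1f}")
print(f"Pro bono hours as % of total billable hours: "
      f"{100 * total_pro_bono_hours / total_billable_hours:.1f}%")
print(f"Number of pro bono matters: {len(matters)}")
print(f"Number of clients assisted: {len(clients_assisted)}")
```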
Primary data collection
In instances where monitoring data alone is not able to answer the key evaluation questions, additional data might need to be obtained through discrete evaluation tools and methods, including surveys, questionnaires, feedback forms, interviews or focus groups.
These methods can be quantitative or qualitative in nature, or a combination of both. It may be more efficient to integrate additional data collection tools into existing monitoring processes, for example, incorporating some extra questions into an existing matter closure report.
Some of the most common data collection instruments that can be designed and implemented include:
- Surveys, questionnaires, or feedback forms: Surveys and questionnaires can be an efficient and effective way of collecting a range of data from a large number of participants using closed or open-ended questions. Surveys can be developed online or in hard-copy and easily distributed to potential respondents. In some instances, this process might be implemented at the start and end of an engagement with a client, or solely at the end. It is important to refer to the theory of change to understand when these processes might occur. For example, short-term outcomes can be captured immediately at the end of an activity, but a period of time may need to pass before intermediate outcomes can be demonstrated.
- Interviews or focus groups: Qualitative data collected from interviews or focus groups can be useful for demonstrating the impact of a program and understanding the experience of participants. These are typically more time consuming and resource intensive, so it is advisable to be clear about the specific questions to ask and to document these in an interview or focus group guide.
There are many resources online about survey and interview question design. However, some basic principles include:
- Write questions that are fit for purpose for the population – consider literacy levels, whether English is the client’s first language, and other demographic features.
- Only ask questions that are useful – all information collected should have a purpose in the evaluation; if it does not, it should not be collected.
- Make sure respondents are clear on how their information and data will be used, and that they have the right to not participate if they prefer.
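To make these principles more concrete, the sketch below represents a very short feedback form as structured data, distinguishing closed (rating-scale) from open-ended questions and linking each question back to an indicator. The questions and indicator labels are hypothetical examples, not prescribed wording.

```python
# Illustrative sketch: a short feedback form as structured data. All questions
# and indicator labels are hypothetical examples.
feedback_form = [
    {
        "id": "Q1",
        "type": "closed",
        "question": "How would you rate your confidence in advising on employment law matters?",
        "scale": ["1 - Not at all confident", "2", "3", "4", "5 - Very confident"],
    },
    {
        "id": "Q2",
        "type": "open",
        "question": "What, if anything, would you change about the training sessions?",
    },
]

# Every question should serve a purpose in the evaluation; mapping each
# question to the indicator it supports is one way to check this.
question_to_indicator = {
    "Q1": "CLC staff report increased confidence in employment law",
    "Q2": "Lessons for future programs",
}
for q in feedback_form:
    print(q["id"], "->", question_to_indicator.get(q["id"], "NO INDICATOR SPECIFIED"))
```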
Case Study: Employment Law Service
Some of the primary data collection methods identified for the employment law service program include staff feedback forms and semi-structured qualitative interviews. The example questions below could therefore be used in these instruments.
Data analysis
Analysis refers to the process of synthesising and arranging the data to generate results, conclusions, and findings. The approach to analysis depends on the methods used and the nature of the data collected. In the pro bono context, the following approaches to analysis will likely be the most useful:
- Quantitative data: information in numeric form. Quantitative data can generally be counted or compared on a numerical scale. Analysing quantitative data involves using statistical methods to describe, summarise, or compare data. Quantitative data are generally presented as frequencies (the number of times something has occurred), percentages, ratios, or means, medians, and modes. While quantitative data can be analysed through more complex statistical testing, these forms of descriptive statistics are likely the most realistic approach in the pro bono context. Quantitative results are generally presented as graphs, figures, charts, or tables.
- Qualitative data: information in narrative form. Qualitative data is generally recorded in text format, including interview transcripts and notes, or responses to open-ended survey questions. Qualitative data is most commonly analysed thematically, through a process of coding. In summary, coding involves reading and re-reading transcripts, notes, or text to identify key themes and ideas, and then organising the data against these themes. Qualitative results are usually presented as explanatory text, often including example quotations.
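As a brief sketch of the kinds of descriptive analysis described above, the example below uses Python’s standard statistics module to summarise hypothetical rating data and tally theme codes from a thematic coding exercise. The ratings and theme labels are invented for illustration only.

```python
# Illustrative sketch: simple descriptive analysis of hypothetical evaluation data.
from collections import Counter
from statistics import mean, median, mode

# Quantitative: hypothetical 1-5 ratings from a staff feedback form.
ratings = [4, 5, 3, 4, 5, 5, 2, 4, 4]
print(f"n = {len(ratings)}, mean = {mean(ratings):.1f}, "
      f"median = {median(ratings)}, mode = {mode(ratings)}")
print("Frequencies:", dict(Counter(ratings)))
rated_4_or_5 = sum(1 for r in ratings if r >= 4)
print(f"Rated 4 or 5: {100 * rated_4_or_5 / len(ratings):.0f}%")

# Qualitative: hypothetical theme codes assigned to interview excerpts
# during thematic coding.
coded_themes = [
    "increased confidence", "increased confidence",
    "better referral pathways", "increased confidence",
    "more outreach services",
]
print("Theme counts:", dict(Counter(coded_themes)))
```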
Case Study: Employment Law Service
Results from different questions and sources should be consolidated where possible to strengthen the evaluation findings. For example, a finding that CLC staff have increased their skills in employment law could be demonstrated through an analysis such as the one below:
“The program helped me understand the benefits of incorporating more outreach services into our practice. I was initially hesitant to put my hand up to take part in the employment law program – particularly as I wasn’t really familiar with employment law – but now feel like we can make a meaningful difference to people’s lives.”
By triangulating this data, we can find that the employment law service program contributed to staff learning more about employment law and feeling more confident to practise employment law, and that this likely translated into more staff being interested in and offering pro bono employment law services to the target population.
Reporting
The frequency of reporting is generally guided by the evaluation objectives and scope, as well as the reason the evaluation is required. For instance, the sponsor of the pro bono program might request quarterly or biannual updates on program uptake, or an external funder might ask to see an evaluation at the end of the funding period that demonstrates what outcomes have been achieved. Understanding the audience is important for reporting, and it also feeds into the early stages of setting the objectives and scope.
During the planning process it is useful to spend some time clarifying reporting format preferences and frequency. Some organisations align regular reporting with their financial year and annual reports; ultimately, the timing comes down to when reporting will be most useful for informing decision making.
There is no one structure for evaluation reports, but typically they will involve:
- An executive summary
- Introduction
- Background and context to the program
- Evaluation purpose, scope, and a summary of the methodology (including, for example, how many people responded to surveys or participated in focus groups)
- Results in detail, generally structured against the KEQs
- Key findings and recommendations for actions to help progress towards long-term outcomes
- Appendices
1.13.4 COMMISSIONING AN EVALUATION
Planning and conducting a rigorous evaluation can be a challenging, time-consuming, and resource-intensive task. For larger or more complex programs, firms might consider engaging an evaluation consultant instead of undertaking the evaluation in-house. There are numerous research and evaluation specialists available, many with experience evaluating pro bono programs. An evaluation consultant can develop a theory of change and an evaluation plan, conduct the evaluation activities, and provide a report with detailed results and key findings.
Engaging an independent external consultant may also help demonstrate accountability, as there is less risk of bias or subjectivity throughout the evaluation. Depending on the budget available, firms might draft an evaluation plan themselves, both as a starting point for the consultant and to reduce the cost of engaging them.
Evaluation consultants can be engaged through developing and disseminating a Request for Quotation (RFQ). RFQs typically include the following information:
- Background to the project
- Evaluation objectives and scope
- Preferred methodology or approach – though firms can request consultants provide a suggested approach instead
- Timelines
- Budget – or an indication of the scale associated with undertaking the evaluation
- Selection criteria, e.g.,
- Relevant experience – firms might also like to ask consultants to nominate referees
- Qualifications and experience of the project team
- Capacity to perform the work on time and on budget
- Other relevant information
Evaluation consultants can be found in multiple ways, including by seeking referrals from partners who have worked with consultants before and can provide a recommendation, or through evaluation societies such as the Australian Evaluation Society. It is beneficial to obtain multiple quotes to compare submissions and to ensure there are robust selection criteria guiding the selection process. It can also be useful to assemble a small group of people from across the organisation to review submissions.
Once the consultant has been engaged, project management procedures should be established for managing the evaluation. This is often achieved through scheduling regular meetings throughout the engagement or asking for ongoing reporting updates.
1.13.5 GLOSSARY
One of the challenges in evaluation is ensuring everyone has a shared understanding of what is being discussed. Table 1 provides some basic definitions of key terms found in evaluation. Some (but not all) are referenced in this document. They are provided to help give some clarity to key concepts in evaluation.
Table 1: Glossary of terms
| Term | Definition |
| --- | --- |
| Activity | Tasks undertaken to deliver an output, which contributes to a project or program. |
| Baseline | The starting point for the indicator or basis on which success / change will be measured, e.g., where things stand at the start. |
| Effectiveness | The extent to which a program achieves its objectives or outcomes. |
| Efficiency | The extent to which a program is delivered with the lowest possible use of resources, to the areas of greatest need, and continues to improve over time by finding better or lower cost ways to deliver outcomes. |
| Evaluation | A rigorous, systematic, and objective process to assess the effectiveness, efficiency, appropriateness and sustainability of programs. |
| Indicator | A measure used to assess or check a program or project’s effectiveness. It can also be thought of as an (often incomplete) ‘window of insight’ on a particular outcome or concept. |
| Inputs | Resources of any kind that are fed into and made use of during the program, for example, human resources, telecommunications, cash, and in-kind funding. |
| Monitoring | A process to periodically report against planned targets. Monitoring is typically focused on outputs rather than outcomes and is used to inform managers about the progress of a program and to detect problems that may be able to be addressed through corrective actions. |
| Objectives | Concise statements about what a program or project is aiming to achieve. |
| Outcome | Changes in physical, social, or organisational attributes (e.g., changes in behaviour, resource use, energy production, attitudes, awareness, policies). |
| Output | The products, goods, and services which are produced by the program. |
| Program logic / Theory of change | A tool that presents the logic of a program in a diagram or chart (with related descriptions). The model illustrates the logical linkage between the identified need or issue that a program is seeking to address; its intended activities and processes; their outputs; and the intended program outcomes. |
| Reach | The size/scale of the influence or impact of the program: how many people know about it, how many people are involved, and how many people’s lives have been touched by it. |
| Scope | The boundaries applied to your evaluation to identify limits, for example, the timeframe to be evaluated, or the elements of the program that are outside the evaluation’s focus or ability to answer. |
| Stakeholders | Individuals and organisations who are involved in or may be affected by project activities. |
This chapter was reviewed in 2022 by the Australian Pro Bono Centre with the assistance of the team at First Person Consulting, particularly Matt Healey and Mallory Notting. The Centre acknowledges and is grateful for the generous contributions of all those who assisted with the 2022 refresh of the Australian Pro Bono Manual.