Archive for the ‘Evaluation’ Category

Reader Question: Starting a new program and convincing foundations to fund it?

February 26, 2010

Photo by tacomabibelot

I recently led a session on Programs: Developing, Managing, and Evaluating for the Emerging Nonprofit Leadership Network, and a participant asked me a question about starting a new program. The participant was wondering: how do you convince a foundation to fund a new program when you have never run it before, and therefore don’t have evaluations showing it is effective?

This question surprised me because I thought most people would know the answer, but I found that many people at nonprofits were wondering the same thing. The answer is research. You should rarely, if ever, start a new program without research supporting your intervention. So, what if no one has ever done what you want to do – or someone has done it, but there isn’t research supporting it yet? Well, then you find research supporting the components of the program.

I’ll take an easy example: say you want to start a program where 10th graders become tutors and mentors for at-risk 6th graders to help them improve academic achievement. Sure, there might not be research on that specific program, but you should definitely be able to find research on whether mentoring is effective, at what ages mentoring has been effective, what interventions research has found to be successful for academic achievement, the causes of low achievement among at-risk youth, etc. Using this research, you should be able to build a case to support the program you want to do.

So, what if you can’t find research or research does not support what you want to do? If you can’t find research to support any component of your program in any way, then it probably isn’t the best choice. If you find research but it doesn’t support your approach – then figure out why and think of ways you can address that.

Where do you find this research? Online: articles, journals, etc. Personally, I use Google Scholar when searching for articles, but you can also use local libraries to access journals and books. Also, if you find a good article that fits what you are looking for, look at its citations and who the author cited. More likely than not, you will find plenty more support or useful information to help build your case!

Wondering how big of a sample size you need?

February 26, 2010

Photo by eleaf

So, you have decided to do an evaluation – or you are doing some preliminary research for a proposed program. You sit down to figure out the details, including: how many people should you send your survey to? If you are just asking a bunch of people to participate in a survey about why they donate, or whether they enjoyed your program, then the sample size probably isn’t as important. It is important if you want to be able to generalize your findings to the general population – or to your targeted population. So, how do you determine it? Well, I could walk you through the formula and math behind determining a sample size, but it is easier just to point you to a simple sample size calculator you can download. This link will take you to a survey course website; on the bottom left of the page you will see “Sample Size Calculator” – click it and download.

Once it pops up, it might be a little confusing so here are a few tips to make it easier:

  • The first tab, “Type of Analysis,” can usually be left at the defaults – unless you are doing complex sampling, which you probably aren’t.
  • The second tab, “Values and Settings,” is the most important. Make sure to enter your population size, etc.
  • The third tab, “Corrections,” is pretty much self-explanatory, and you will probably not use it – but if you do, it explains what each option means next to the selection box.
  • Once you have entered everything, the box on the right should display a number – that is how many people your sample should include.

*When you download it, there is a “quickhelp” folder that explains what each box means, in case you are confused about what to put where. Good luck!
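For those curious about the formula I mentioned above, here is a minimal sketch of the standard calculation for a survey estimating a proportion, with a finite population correction. The defaults (95% confidence, 5% margin of error, p = 0.5) are my own conservative assumptions, not settings taken from the calculator:

    import math

    # Cochran's formula for estimating a proportion, plus a finite
    # population correction. z = 1.96 corresponds to 95% confidence;
    # p = 0.5 is the most conservative (largest) assumption.
    def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
        n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
        n = n0 / (1 + (n0 - 1) / population)  # finite population correction
        return math.ceil(n)

    print(sample_size(population=1000))  # -> 278

Notice that the required sample grows much more slowly than the population: a population of 1,000 needs about 278 respondents, while a population of 100,000 needs only about 383.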

Exploring Effective Strategies for Facilitating Evaluation Capacity Building

February 26, 2010

This AEA session was of particular interest to me. I would love to see more nonprofits investing in building their capacity with evaluation, and this session discussed ten strategies to do so:

  1. Coaching/Mentoring: building a relationship with an evaluation expert who provides individualized technical and professional support
  2. Technical Assistance: receiving help from an internal or external evaluator
  3. Technology: using online resources such as websites and/or e-learning programs to learn from and about evaluation
  4. Written Materials: reading and using written documents about evaluation processes and findings
  5. Training: attending courses, workshops, and seminars on evaluation
  6. Involvement in an Evaluation Process: participating in the design and/or implementation of an evaluation
  7. Internship: participating in a formal program that provides practical evaluation experience for novices
  8. Meetings: allocating time and space to discuss evaluation activities specifically for the purpose of learning from and about evaluation
  9. Appreciative Inquiry: using an assets-based, collaborative, narrative approach to learning about evaluation that focuses on strengths within the organization
  10. Communities of Practice: sharing evaluation experiences, practices, information, and readings among members who have common interests and needs (sometimes called learning circles)

See posts about other sessions I attended at this year’s AEA: “American Evaluation Conference Summary Post”

Unique Methods in Advocacy Evaluation

February 26, 2010

This AEA session discussed common advocacy evaluation methods:

  • Stakeholder surveys or interviews – Print, telephone, or online questioning that gathers advocacy stakeholder perspectives or feedback.
  • Case studies – Detailed descriptions and analyses (often qualitative) of individual advocacy strategies and results.
  • Focus groups – Facilitated discussions with advocacy stakeholders (usually about 8-10 per group) to obtain their reactions, opinions, or ideas.
  • Media tracking – Counts of an issue’s coverage in the print, broadcast, or electronic media.
  • Media content or framing analysis – Qualitative analysis of how the media write about and frame issues of interest.
  • Participant observation – Evaluator participation in advocacy meetings or events to gain firsthand experience and data.
  • Policy tracking – Monitoring of an issue or bill’s progress in the policy process.
  • Public polling – Interviews (usually by telephone) with a random sample of advocacy stakeholders to gather data on their knowledge, attitudes, or behaviors.

And highlighted four new methods that have been developed specifically to address advocacy evaluation’s unique challenges:

  • Bellwether methodology – Interviews conducted with “bellwethers,” influential people in the public/private sectors whose positions require that they track a broad range of policy issues. Part of the sample is not connected to the issue of interest, and the sample does not have advance knowledge of the interview topic. Used to assess political will as an outcome, forecast the likelihood of future policy proposals/changes, assess the extent to which advocacy messages have “broken through,” and gauge whether an issue is on the federal/state/local policy agenda and how it is positioned.
  • Policymaker ratings – Advocates (or other informed stakeholders) rate policymakers of interest on scales that assess the policymakers’ support for, and influence on, the issue. Used to assess the extent to which a policymaker supports an issue and whether that support is changing over time (see the scoring sketch after this list).
  • Intense period debriefs – Advocates are engaged in evaluative inquiry shortly after a policy window or intense period of action occurs. Used when advocacy efforts are experiencing high intensity levels of activity and advocates have little time to pause for data collection.
  • System mapping – A system is visually mapped, identifying the parts and relationships in that system that are expected to change and how they will change, and then identifying ways of measuring or capturing whether those changes have occurred. Used when advocacy efforts aim to achieve systems change.
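To make the policymaker ratings method a bit more concrete, here is a minimal sketch of how such ratings might be aggregated. The 1-5 scales, the composite score, and all names and numbers are my own illustrative assumptions, not part of the method as Coffman and Reed presented it:

    # Hypothetical ratings: each rater scores each policymaker on
    # support (1 = strongly opposed, 5 = strongly supportive) and
    # influence (1 = little influence, 5 = highly influential).
    ratings = {
        "Policymaker A": [(4, 5), (5, 4), (4, 4)],  # (support, influence) per rater
        "Policymaker B": [(2, 3), (1, 4), (2, 2)],
    }

    for name, scores in ratings.items():
        support = sum(s for s, _ in scores) / len(scores)
        influence = sum(i for _, i in scores) / len(scores)
        # Composite: the policymaker's stance weighted by how much that stance matters.
        print(f"{name}: support={support:.1f}, influence={influence:.1f}, "
              f"composite={support * influence:.1f}")

Repeating the ratings at intervals would show whether support is shifting over time.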

Please note that the above notes are credited to “Unique Methods in Advocacy Evaluation” by Julia Coffman and Ehren Reed.

See posts about other sessions I attended at this year’s AEA: “American Evaluation Conference Summary Post”

How do we define and measure social impact?

February 26, 2010

Photo by spettacolopuro

This month’s Nonprofit Millennial Blogging Alliance (NMBA) topic relates to social impact and how we define and measure it.

So, what is social impact? Well, I did what anyone with access to the internet would do: I googled it. It seems there isn’t really a clear, precise definition for it. I couldn’t even find a definition on Wikipedia – the closest I got was Social Impact Assessment or Social Impact Theory. So, I am going to go with a mish-mash of the definitions and partial definitions I found:

Social impact = the influence or effect an organization or group can have on people’s lives. This influence or effect increases with immediacy and strength, and can have both positive and negative social consequences.

So, to use an easy example: More and more people continue to join Twitter because they know more people who are on Twitter, their close friends are now on Twitter, and everyone seems to be joining Twitter. Hence, one would say the social impact of Twitter is quite large and continues to grow as its strength and immediacy grows.
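Incidentally, the Social Impact Theory entry I did find (Bibb Latané’s theory from social psychology) formalizes exactly this intuition. Roughly sketched, Latané proposed that the impact felt by a target is a multiplicative function of the strength (S), immediacy (I), and number (N) of the sources, with diminishing returns as the number of sources grows:

    \text{Impact} = f(S \times I \times N), \qquad \text{Impact} = s N^{t}, \quad t < 1

That fits the Twitter example: every additional friend who joins adds to the pull, though each new one adds a little less than the last.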

For nonprofits, this would be used more in the sense of how a nonprofit can take advantage of social change to make a difference in people’s lives.

So, how would one measure social impact?

Well, since social impact is more than just the effectiveness of an intervention, it would make sense that a simple evaluation wouldn’t be enough.

An interesting concept I came across was that one could put together an impact map, which helps organizations clearly show the relationships between inputs (resources) and outputs (activities, outcomes). Basically, it helps an organization understand how it creates change; a rough sketch of what such a map might look like follows.
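An impact map can be as simple as a structured list linking each input to the outputs it produces, the outcomes those outputs are expected to create, and how you would measure them. The program and every entry below are hypothetical:

    # Hypothetical impact map for a tutoring program.
    impact_map = [
        {"input": "volunteer tutor hours",
         "output": "120 weekly tutoring sessions delivered",
         "outcome": "improved reading scores",
         "indicator": "change in standardized reading assessment"},
        {"input": "grant funding for materials",
         "output": "books distributed to participants",
         "outcome": "increased at-home reading",
         "indicator": "parent-reported reading minutes per week"},
    ]

    # Print each causal chain: input -> output -> outcome -> indicator.
    for row in impact_map:
        print(" -> ".join(row[k] for k in ("input", "output", "outcome", "indicator")))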

The impact map could be combined with a social impact assessment, which “includes the processes of analysing, monitoring and managing the intended and unintended social consequences, both positive and negative, of planned interventions (policies, programs, plans, projects) and any social change processes invoked by those interventions. Its primary purpose is to bring about a more sustainable and equitable biophysical and human environment.” This would allow a nonprofit to map the relationships and measure the change that resulted from those relationships.

A more government-type perspective on social impact assessment can be found here. Some may even go as far as measuring the financial return on social impact using a social return on investment (SROI) analysis.
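At its simplest, SROI boils down to a ratio: the monetized value of the social outcomes divided by the investment that produced them. Here is a minimal one-period sketch with entirely made-up numbers (real SROI analyses also discount multi-year value and adjust for attribution):

    # Hypothetical figures for a job-training program.
    investment = 50_000          # total cost of running the program
    placements = 20              # outcome: participants placed in jobs
    value_per_placement = 7_500  # monetized social value per placement
    deadweight = 0.25            # share of placements that would have happened anyway

    social_value = placements * value_per_placement * (1 - deadweight)
    print(f"SROI ratio: {social_value / investment:.2f} : 1")  # -> 2.25 : 1

In other words, under these assumptions every dollar invested yields $2.25 in social value.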

Check out some other perspectives on social impact and how to measure it from NMBA bloggers:

What is Social Impact? by Nonprofit Periscope

Measuring Social Impact (wait…what is social impact?) by Onward and Upward

Interactive Techniques to Facilitate Evaluation Learning

February 26, 2010

This was an interesting session that I attended at the American Evaluation Association’s Annual Conference. It had some great tidbits. Here are a few things I wanted to share from the session:

The presenter discussed how much of what people learn they retain, depending on how they learn it. This is what she shared:
– People remember… 10% of what they read (book, handout)
– 20% of what they hear (lecture, podcast)
– 30% of what they see (look at displays, diagrams, exhibits)
– 50% of what they hear AND see (live demonstration, video, site visit)
– 70% of what they say OR write (worksheet, discussion)
– 90% of what they do (practice, teach)

Manipulatives help learning!
– Manipulatives are objects that engage the learner in touching, feeling, and manipulating
– They stimulate the brain as part of the learning experience or provide opportunities for movement
– Examples: a basket of strange-feeling objects, pipe cleaners, clay, cards, paper table covers that people can doodle on

Current research establishes a link between movement and learning!
– You can use brain breaks and energizers to get people moving
– Example of an energizer: when asking questions, use movement – “Raise your hand/clap if you use Twitter”

See posts about other sessions I attended at this year’s AEA: “American Evaluation Conference Summary Post”

American Evaluation Conference Summary Post

February 26, 2010

I am currently in Florida attending the American Evaluation Association (AEA) conference. To follow conference-related tweets, search #eval09 on Twitter.

The days are jam-packed with fantastic sessions, and I likely won’t get to post all of the interesting and useful tidbits until this weekend and early next week – but I will get them all up as soon as I can. This post will include links to the posts as I publish them:

Guest Post: Make Your Reports Accessible – Three Easy Tips

February 26, 2010

Photo by Leo Reynolds

by Luise Barnikel at IssueLab

The shifting landscape and expectations of information seekers leave your nonprofit with the difficult task of catching up and rethinking dissemination.

Your research provides valuable insight into critical social issues. To generate the biggest impact from the knowledge you share, your research report should be engaging to the various audiences it will touch and should adapt to today’s expectations for knowledge sharing.

So here are three easy tips to keep in mind when you are planning and designing your next research report.

1. Make your research usable, and re-usable. We understand the time and effort that goes into creating a thorough research report. Still, choosing a restrictive copyright can discourage readers from sharing or using your information – even for a good cause. There are copyright options that allow your audience to use the information in a wide variety of ways and even build upon it to create original research. An easy way to apply a non-restrictive but legitimate copyright to a document is to use Creative Commons. IssueLab encourages its contributing organizations to use Creative Commons because it “increases sharing and improves collaboration.”

2. Leave Them Asking for More. The research abstract can be a great way to generate further interest in the entire body of work, but really it should tell a journalist on deadline everything they need to know. Abstracts that leave out vital information – or are too long to read quickly – can actually deter readers from downloading the report to learn more. There’s a fine line between cliffhanger and information overload, but those who are truly interested in reading your report will ultimately do it when they have the time. So, distill valuable information, make the abstract comprehensive and quotable, but don’t just copy and paste the executive summary.

3. Get the facts out there. Once your report is released, go through it and extract short phrases, quotes, and statistics that can easily be shared online. Micro-blogging (sending brief text updates) has become an increasingly important skill and tool for organizations that wish to keep constituents informed. You can also create graphic summaries or pull charts that can be posted on Facebook or displayed alongside the abstract. Lastly, always make sure you include a direct link to your report’s listing page or PDF – there is nothing worse than not being able to find the source of good information!

What are your thoughts on other easy ways to make research more usable?

Evaluation: an insider’s or outsider’s job?

February 26, 2010

Photo by Pink Sherbert Photography

Who should evaluate your program? That question has probably been asked in your organization at one point or another. Most nonprofit organizations hire an evaluator who comes in for a few months or a year, evaluates the program, delivers a report, and then leaves. Then, a year or years later, the process repeats itself – each time with the organization dishing out anywhere from a few thousand to hundreds of thousands of dollars.

Being an evaluation consultant, I am more than happy to help nonprofit organizations with their evaluations, but it makes me very sad when I see nonprofits that aren’t doing evaluation simply because they can’t afford it. This is one of the reasons why I think that building the capacity of nonprofit organizations to do their own evaluations is so important.

Evaluations don’t need to be fancy random-assignment experiments to be good or useful. A simple survey at the end of a program that informs program improvement can be enough.

I do think bringing in an outsider’s perspective can be valuable for evaluations, particularly when having an objective person is important. But when that isn’t the case, there really is no reason why an evaluation can’t be done internally. It can save money, promote use, and increase the involvement of internal staff (which, in turn, increases the likelihood of use).

I’d like to ask you (nonprofit workers/organizations) to share in the comments section: do you do evaluations? Are they done internally or externally, and why?

MESI Wrap-up: Utilization-Focused Evaluation

February 26, 2010

The session “Utilization-Focused Evaluation: New Directions” was led by Michael Patton, author of Utilization-Focused Evaluation.

The goal of utilization-focused evaluation is to enhance the utility and actual use of evaluations. So, you should identify the primary intended users and make sure the evaluation will be useful to them. Patton believes that no evaluation should go forward unless and until there are primary intended users who will use the information that can be produced. The primary intended users also need to be involved in the process, and the evaluator’s job is to help intended users clarify their purpose and objectives.

When using utilization-focused evaluation, you need to make sure you match the evaluation design to the evaluation’s purpose, resources, and timeline to optimize use.

 See other posts from the 2009 MN Evaluation Studies Institute.