- Are the goals and objectives of the program clear and measurable? To design and conduct an evaluation, the stated goals and objectives of the program must be precise and measurable. You may want answers to specific questions or the ability to test a particular hypothesis.
- Who is the potential audience for the evaluation? Be clear about who you want to read and use the evaluation results.
- Who will conduct the evaluation? Will it be your staff (who may convey their bias), volunteers, or people from outside your organization? College or university students can be a cost-effective option. Try to enlist graduate students from the sociology, planning, or marketing department of a local college or university to design and conduct the evaluation and to interpret and report on the results.
- How will evaluation data be collected? The methods you choose to collect information depend on the kind of information you want, the rigor you wish to insist upon, and the resources and time you can make available. Most routine program evaluations rely on one or two of these methods:
- Counting and testing
- Review of records
- Video or photographic documentation
- Focus groups
- Pre- and post-surveying (to see if there is a measurable change in opinions after the program has been experienced)
- How will evaluation data be analyzed? Making sense of the information you collect requires careful review. Interpret the information against the goals and objectives you originally defined. There may also be unanticipated results that warrant analysis.
- How will evaluative information be reported or presented? How the evaluation is finally structured and presented depends, once again, on how the information will be used. It is also useful to step back periodically and assess even tried-and-true programs to determine whether a new program design could achieve program goals more effectively or whether certain refinements are warranted.
Adapted from the Grantsmanship Center, *Program Planning and Evaluation*