Extensively revised to reflect modern views of program implementation, this volume introduces the variety of functions served by implementation studies and the roles played by qualitative and quantitative data. The reader is shown the importance of assessing how a program design works in actual practice--and is given the planning tools and procedures to make assessment happen. The evaluator's role in documenting, describing, observing, and assessing how a program is implemented is covered in detail. Step-by-step guidelines are provided for identifying key program processes and arrangements for assessment, for selecting optimal strategies for conducting the assessment, and for developing and analyzing questionnaires, interviews, observations, and program records.


This edition of How to Measure Attitudes draws on examples from a broader range of disciplines and professions than the first edition. It helps novice evaluators with the difficult task of assessing whether the affective and attitude objectives of a program have been met. The most commonly used attitude measures are described, and sources of existing measurement instruments are listed. If no existing instrument is appropriate, step-by-step instructions are given so that readers can construct their own. Methods for analyzing and reporting attitude data are also included.
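As a purely illustrative aside (not material drawn from the volume), the short Python sketch below shows one common way a constructed attitude instrument is scored: a Likert-type scale in which a reverse-worded item is flipped before averaging, so that higher scores consistently mean a more favorable attitude. The items, responses, and the score_respondent helper are hypothetical.

```python
# Illustrative only: scoring a short Likert-type attitude scale that
# includes one reverse-worded item. Items and responses are hypothetical.
import numpy as np

SCALE_MAX = 5         # 5-point agreement scale, coded 1 (disagree) to 5 (agree)
REVERSED_ITEMS = {2}  # zero-based index of the negatively worded item

def score_respondent(responses: list[int]) -> float:
    """Return a respondent's mean attitude score (1 = negative, 5 = positive)."""
    scored = [
        (SCALE_MAX + 1 - r) if i in REVERSED_ITEMS else r
        for i, r in enumerate(responses)
    ]
    return float(np.mean(scored))

# Hypothetical responses to a 4-item scale; the third item is reverse-worded
print(score_respondent([4, 5, 2, 4]))  # -> 4.25
```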


Discussing the evaluator's role in performance measurement, this volume focuses on ways to select, develop, and analyze tests. It reviews a variety of potential performance measures--including different types of tests, observations, extant data, and records--then guides the reader in determining which are most appropriate for the evaluation. If no existing test is suitable, step-by-step instructions are given for constructing one with sound reliability and validity. The analysis and reporting of data gained from performance testing are also described. Current issues in performance testing are discussed, including those related to legal challenges and test validity. Examples are drawn from a wide range of fields, including education, business, and social services.
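By way of illustration only, the sketch below computes Cronbach's alpha, a widely used internal-consistency estimate of test reliability. It is generic Python rather than code from the volume, and the cronbach_alpha function and sample scores are hypothetical.

```python
# Illustrative only: Cronbach's alpha as a simple internal-consistency
# (reliability) estimate for a multi-item test. Data are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = test items."""
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical scores: 5 respondents answering 4 items
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # ~0.93 here
```

Values close to 1 indicate that the items hang together well; validity, by contrast, has to be argued from the content and use of the test rather than computed from a single formula.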


Here is a basic introduction to a variety of elementary statistical techniques, including those for summarizing data, for examining differences between groups, and for examining relationships between two measures. Analysis of effect size--a relatively recent and simple approach for examining differences between groups and for conducting meta-analyses--is also presented. Only the most basic and useful statistical techniques, those appropriate for answering essential evaluation questions, have been included. Worksheets and practical examples are given throughout the volume to support the use of each technique. Guidance is provided on using statistical techniques to construct tests and questionnaires, on choosing appropriate statistics, on applying meta-analytic techniques, and on using statistical packages--particularly SPSS (a statistical package for microcomputers)--giving readers all the information needed to analyze their data properly.
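As a hedged illustration of the effect-size analysis mentioned above, the sketch below computes Cohen's d, a standardized difference between two group means that is commonly used as an input to meta-analysis. It is generic Python rather than the SPSS procedures the volume covers, and the cohens_d function and sample data are hypothetical.

```python
# Illustrative only: Cohen's d, a standardized effect size for the
# difference between two group means. Data are hypothetical.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = group_a.var(ddof=1), group_b.var(ddof=1)
    pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# Hypothetical outcome scores for a program group and a comparison group
program = np.array([78, 85, 82, 90, 88, 84])
comparison = np.array([72, 75, 80, 70, 74, 77])
print(f"Cohen's d = {cohens_d(program, comparison):.2f}")
```

Because the difference is expressed in standard-deviation units, effect sizes from different studies and different outcome measures can be compared or pooled directly.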


Replete with examples from a wide range of disciplines, this concise volume shows the reader how to communicate results to users and stakeholders throughout the evaluation process. The authors stress the importance of maintaining a variety of formal and informal reporting channels, as well as the need to tailor the medium and the message to the intended audiences and users. Easy-to-use worksheets are provided to help readers prepare reports. Practical tips on how to communicate effectively, on using graphs and tables, and on presenting the final report are all contained in this important publication.