Toward more informative uses of statistics: Alternatives for program evaluators
Journal/Book: Eval Program Plann. 1998; 21: 243-254. Pergamon-Elsevier Science Ltd., Oxford, England.
Abstract: Statistical analyses of evaluations should answer the questions evaluators were asked to address. Null hypothesis significance testing is often inappropriate for program evaluation. Furthermore, it is possible that evaluation data gathered while a program is under development should be analyzed differently from data gathered after the program is well established. Evaluations will be most informative when (a) analyses reflect the evaluation questions at hand, (b) limitations in stakeholder understanding of statistical analyses are addressed, (c) the magnitude of Type II statistical errors is respected, and (d) effect sizes are described in units stakeholders can understand and are placed in the context of similar and alternative programs. This presentation illustrates these points.
Note: Article. Posavac EJ, Loyola Univ Chicago, Chicago, IL 60626, USA
Keyword(s): null hypothesis; effect size; outcomes; statistical error; evaluation; report; clinical significance; meaningful change; soft psychology; meta-analysis; psychotherapy; researchers; knowledge; care; size