Evaluation is a term that is sometimes difficult to define. As an activity, evaluation points toward "judging the worth or merit" of an object or process. The history of evaluation reveals how that "judging" has changed over time. This section considers the historical development of evaluation, evaluation measurement tools, and the evaluation of adult learning.
HISTORICAL DEVELOPMENT OF EVALUATION – 6.1
During the period 1940–57, the educational psychologist B. Bloom (1956) developed a taxonomy of three learning domains: affective, cognitive, and psychomotor. Each domain contains levels of learning, related learning activities, and measurable outcomes. By framing learning within three domains and their related activities, learning became a more holistic endeavor upon which to base evaluation. Prior to this time, the basis for evaluating learning was specific content and pre-determined outcome measures for that content. Bloom introduced evaluation as a means of discerning differences in performance among students.
The Cold War era (1958–72) reintroduced content-specific learning with an emphasis on the sciences, mathematics, and foreign languages. Evaluation of learning returned to the "normative" standard and reliance on the bell curve. Results from this evaluation process indicate where a student falls within a normed group of students who have been tested on similar information.
A second learning evaluation criterion also emerged during this period. Known as "criterion-based evaluation," it evaluates a student on a pass/fail basis against specific content only.
Because of the growth of evaluation methods, the period 1973–99 saw the emergence of the American Evaluation Association. It is now a worldwide organization, and evaluation methods, criteria, and specialists have increased. One example of the new evaluation methodologies and strategies is "objectives-oriented" evaluation, which seeks to measure the achievement levels for goals and objectives. A second example, "participant-oriented" evaluation, gathers data from learners' first-hand program experiences and emphasizes the importance of the participants in the learning process.
Evaluation strategies developed within the past 15 years see empowerment as an important measuring component for evaluation. However, the measuring strategies for this approach lack objective measures; they depend, instead, on observation and individual perspective.
Evaluation has also taken on another role during this time: evaluating "return on investment" (ROI). The methodology for this evaluation stems from the work of D. Kirkpatrick (1967). He evaluated training outcomes in terms of a) trainees' responses to the curriculum and learning process, b) knowledge or skill acquisition at the end of the training, c) behavioral changes on the job, and d) improvement in individual or organizational outcomes.
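To make the ROI idea concrete, the sketch below applies the widely used general formula ROI = (net benefits ÷ program costs) × 100. This is a minimal illustration, not Kirkpatrick's own procedure; the function name and the dollar figures are hypothetical.

```python
def training_roi(program_costs: float, program_benefits: float) -> float:
    """Return on investment for a training program, as a percentage.

    Uses the common formula ROI = (benefits - costs) / costs * 100.
    """
    if program_costs <= 0:
        raise ValueError("program_costs must be positive")
    net_benefits = program_benefits - program_costs
    return net_benefits / program_costs * 100

# Hypothetical figures for illustration only: a program costing $50,000
# that yields $80,000 in measured organizational benefits.
roi = training_roi(program_costs=50_000, program_benefits=80_000)
print(f"ROI: {roi:.0f}%")  # prints "ROI: 60%"
```

In practice, the difficult part is not the arithmetic but translating Kirkpatrick's behavioral and organizational outcomes into credible dollar estimates of benefit.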
L. Benjamin and D. Campbell (2014) present a recent, innovative look at evaluation. They write that program evaluation, especially for nonprofit organizations, limits evaluation to the program itself, thereby missing positive outcomes of participants and the program workforce. To capture these sources of outcomes, the authors suggest four principles for measuring program outcomes: a) honor relationships, b) allow variation, c) respect agency, and d) support collaboration. Each of these principles is illustrated in their work.
EVALUATION MEASUREMENT TOOLS – 6.2
Broadly speaking, there are two types of measurement tools: quantitative and qualitative. When considering quantitative evaluation tools, it is important to have a firm grounding in the measuring units, the level of measurement, and the level of data each strategy provides. Qualitative measurement tools are less precise because they gather data through observation, focus groups, interviews, and open-ended questions.
An example of a learning evaluation strategy that utilizes both types of measurement is the "Cone of Experience" (Dale, 1946). His work describes measurable outcomes for retention of knowledge across ten different methods of learning, ranging from reading to active learner participation.
EVALUATING ADULT LEARNING – 6.3
Knowles (1980) found that age and life experiences influence evaluation. His research identified an educational approach for adults, which he called "andragogy." He found that adults bring previous life and learning experiences with them to the classroom. Therefore, learning for this age group requires more interaction between teacher and learner in order for the learner to develop more of their potential. Blondy (2007) found that measuring this learning assumption requires both qualitative and quantitative evaluation strategies.