
Report

Training Evaluation

Submitted by

Miss Nathaporn Janped 55760213

Miss Thunchanok Neamsawan 55760541

Miss Sirada Janthon 55760718

Presented to

Mr. Lorenzo E. Garin Jr.

Training and Development

Naresuan University International College

Content

Introduction
Reasons for Evaluating Training
- Formative Evaluation
- Summative Evaluation
Overview of the Evaluation Process
Outcomes Used in the Evaluation of Training Programs
- Reaction Outcomes
- Learning or Cognitive Outcomes
- Behavior and Skill-Based Outcomes
- Affective Outcomes
- Results
- Return on Investment
Determining Whether Outcomes Are Appropriate
- Relevance
- Reliability
- Discrimination
- Practicality
Evaluation Practices
- Which Training Outcomes Should Be Collected
- Evaluation Designs
- Threats to Validity
- Types of Evaluation Designs
Considerations in Choosing an Evaluation Design
Determining Return on Investment
- Determining Costs
- Determining Benefits
Other Methods for Cost-Benefit Analysis
Practical Considerations in Determining ROI
Success Cases and Return on Expectations
Measuring Human Capital and Training Activity
Sources

Training Evaluation

Training evaluation is a continual and systematic process of assessing the value or potential value of a training program, course, activity or event. Results of the evaluation are used to guide decision-making around various components of the training and its overall continuation, modification, or elimination.

Introduction

The training function is interested in assessing the effectiveness of training programs. Several related terms are used throughout this report:

- Training effectiveness refers to the benefits that the company and the trainees receive from training.

- Training outcomes or criteria refer to the measures that the trainer and the company use to evaluate training programs.

- Training evaluation refers to the process of collecting the outcomes needed to determine whether training is effective.

- Evaluation design refers to the collection of information (including what, when, how, and from whom) that will be used to determine the effectiveness of the training program.

Reasons for evaluating training

Companies are investing millions of dollars in training programs to help gain a competitive advantage.

The influence of training is largest for organizational performance outcomes and human resource outcomes and weakest for financial outcomes.

Training evaluation provides a way to understand the returns that training investments produce and provides the information needed to improve training.

Training evaluation involves both formative and summative evaluation.

Formative Evaluation

- Formative evaluation refers to the evaluation of training that takes place during program design and development. That is, formative evaluation helps to ensure that:

1. The training program is well organized and runs smoothly.

2. Trainees learn and are satisfied with the program.

- Pilot testing refers to the process of previewing the training program with potential trainees and managers or with other customers (persons who are paying for the development of the program).

Summative Evaluation

- Summative evaluation refers to an evaluation conducted to determine the extent to which trainees have changed as a result of participating in the training program.

From the discussion of summative and formative evaluation, it is probably apparent to you why a training program should be evaluated:

1. To identify the program’s strengths and weaknesses.

2. To assess whether the content, organization, and administration of the program – including the schedule, accommodations, trainers, and materials—contribute to learning and the use of training content on the job.

3. To identify which trainees benefit most or least from the program.

4. To assist in marketing programs through the collection of information from participants about whether they would recommend the program to others, why they attended the program, and their level of satisfaction with the program.

5. To determine the financial benefits and costs of the program.

6. To compare the costs and benefits of training versus nontraining investments.

7. To compare the costs and benefits of different training programs to choose the best program.

Overview of the Evaluation Process

Before this report explains each aspect of training evaluation in detail, you need to understand the evaluation process, which is summarized in Figure 6.1.

Figure 6.1 The Evaluation Process

Outcomes Used in the Evaluation of Training Programs

Reaction Outcomes

Reaction outcomes refer to trainees’ perceptions of the program, including the facilities, trainers, and content. They are often called class or instructor evaluations.

Learning or Cognitive Outcomes

- Cognitive outcomes are used to determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, and processes emphasized in the training program.

Behavior and Skill-Based Outcomes

- Skill-based outcomes are used to assess the level of technical or motor skills and behavior. Skill-based outcomes include acquisition or learning of skills (skill learning) and use of skills on the job (skill transfer).

Affective Outcomes

- Affective outcomes include attitudes and motivation. Affective outcomes that might be collected in an evaluation include tolerance for diversity, motivation to learn, safety attitudes, and customer service orientation. Affective outcomes can be measured using surveys.

Results

- Results are used to determine the training program’s payoff for the company.

- Examples of results outcomes include increased production and reduced costs related to employee turnover rates of top talent (managers or other employees),accidents, and equipment downtime, as well as improvements in product quality or customer service.

Return on Investment

- Return on investment (ROI) refers to comparing the training’s monetary benefits with the cost of the training.

- Direct costs include salaries and benefits for all employees involved in training, including trainees, instructors, consultants, and employees who design the program; program material and supplies; equipment or classroom rentals or purchases; and travel costs.

- Indirect costs are not related directly to the design, development, or delivery of the training program.

- Benefits are the value that the company gains from the training program.

DETERMINING WHETHER OUTCOMES ARE APPROPRIATE

An important issue in choosing outcomes is to determine whether they are appropriate. That is, are these outcomes the best ones to measure to determine whether the training program is effective? Appropriate training outcomes need to be relevant, reliable, discriminative, and practical.

Relevance

- Criteria relevance refers to the extent to which training outcomes are related to the learned capabilities emphasized in the training program. The learned capabilities required to succeed in the training program should be the same as those required to be successful on the job.

Figure 6.2 shows two ways that training outcomes may lack relevance.

Figure 6.2


- Criterion contamination refers to the extent that training outcomes measure inappropriate capabilities or are affected by extraneous conditions. For example, if managers’ evaluations of job performance are used as a training outcome, trainees may receive higher ratings of job performance simply because the managers know they attended the training program, believe the program is valuable, and therefore give high ratings to ensure that the training looks like it positively affects performance.

- Criterion deficiency refers to the failure to measure training outcomes that were emphasized in the training objectives. For example, the objectives of a spreadsheet skills training program emphasize that trainees both understand the commands available on the spreadsheet (e.g., compute) and use the spreadsheet to calculate statistics using a data set. An evaluation design that uses only learning outcomes such as a test of knowledge of the purpose of keystrokes is deficient because the evaluation does not measure outcomes that were included in the training objectives (e.g., use a spreadsheet to compute the mean and standard deviation of a set of data).

Reliability

- Reliability refers to the degree to which outcomes can be measured consistently over time. For example, a trainer gives restaurant employees a written test measuring knowledge of safety standards to evaluate a safety training program that they attended. If employees’ scores are similar when the test is administered at two different times, the measure is reliable.
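Test-retest reliability of this kind can be checked by correlating scores from two administrations of the same test. The following is a minimal Python sketch; the safety-test scores are hypothetical, not data from the source:

```python
def pearson(x, y):
    """Pearson correlation between two administrations of the same test.

    Values near 1.0 indicate that the outcome is measured consistently
    over time (high test-retest reliability)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical safety-knowledge test scores, administered two weeks apart:
time1 = [70, 80, 90, 60, 75]
time2 = [72, 78, 91, 58, 77]
r = pearson(time1, time2)  # close to 1.0, so the test is reliable
```

A low correlation would suggest that differences in scores reflect measurement error rather than stable differences in trainees' knowledge.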

Discrimination

-Discrimination refers to the degree to which trainees’ performance on the outcome actually reflects true differences in performance. For example, a paper-and-pencil test that measures electricians’ knowledge of electrical principles must detect true differences in trainees’ knowledge of electrical principles.

Practicality

- Practicality refers to the ease with which the outcome measures can be collected. One reason companies give for not including learning, performance, and behavior outcomes in their evaluation of training programs is that collecting them is too burdensome. (It takes too much time and energy, which detracts from the business.) For example, in evaluating a sales training program, it may be impractical to ask customers to rate the salesperson’s behavior because this would place too much of a time commitment on the customer (and probably damage future sales relationships).

EVALUATION PRACTICES


Figure 6.3 shows outcomes used in training evaluation practices. Surveys of companies’ evaluation practices indicate that reactions (an affective outcome) and cognitive outcomes are the most frequently used outcomes in training evaluation. Despite the less frequent use of behavior and results outcomes, research suggests that training can have a positive effect on those outcomes.

Which Training Outcomes Should be Collected?

From our discussion of evaluation outcomes and evaluation practices, you may have the mistaken impression that it is necessary to collect all five levels of outcomes to evaluate a training program.

It is important to recognize the limitations of choosing to measure only reaction and cognitive outcomes. Consider the previous discussions of learning and transfer of training in Chapters 4 and 5. Remember that for training to be successful, learning and transfer of training must occur. Figure 6.4 shows the multiple objectives of training programs and their implications for choosing evaluation outcomes. Training programs usually have objectives related to both learning and transfer. That is, they want trainees to acquire knowledge and cognitive skills and also to demonstrate the use of the knowledge or strategies they learned in their on-the-job behavior. As a result, to ensure an adequate training evaluation, companies must collect outcome measures related to both learning and transfer.

The training function automatically tracks these outcomes. Managers use training and development to encourage observable behavior changes in employees that will lead to desirable business results such as client satisfaction and lower turnover.

Note that outcome measures are not perfectly related to each other. That is, it is tempting to assume that satisfied trainees learn more and will apply their knowledge and skills to the job, resulting in behavior change and positive results for the company. However, research indicates that the relationships among reaction, cognitive, behavior, and result outcomes are small.

Which training outcome measure is best?

The answer depends on the training objectives.

Positive transfer of training is demonstrated when learning occurs and positive changes in skill-based, affective, or results outcomes are also observed. No transfer of training is demonstrated if learning occurs but no changes are observed in skill-based, affective, or results outcomes. Negative transfer is evident when learning occurs but skills, affective outcomes, or results are less than at pre-training levels. Results of evaluation studies that find no transfer or negative transfer suggest that the trainer and the manager need to investigate whether a good learning environment (e.g., opportunities for feedback and practice) was provided in the training program, trainees were motivated and able to learn, and the needs assessment correctly identified training needs.

EVALUATION DESIGNS

The design of the training evaluation determines the confidence that can be placed in the results; that is, how sure a company can be that training is either responsible for changes in evaluation outcomes or has failed to influence the outcomes.

This discussion of evaluation design begins by identifying the “alternative explanations” that the evaluation should attempt to control for. Next, various evaluation designs are compared. Finally, this section discusses practical circumstances that the trainer needs to consider in selecting an evaluation design.

Threats to validity: Alternative Explanations for Evaluation Results

- Threats to validity refer to factors that will lead an evaluator to question either (1) the believability of the study results or (2) the extent to which the evaluation results are generalizable to other groups of trainees and situations. The believability of the study results refers to internal validity. Threats to internal validity relate to characteristics of the company (history), the outcome measures (instrumentation, testing), and the persons in the evaluation study (maturation, regression toward the mean, mortality, initial group differences).

Trainers are also interested in the generalizability of the study results to other groups and situations; this is the question of external validity. Threats to external validity relate to how study participants react to being included in the study and to the effects of multiple types of training. Because evaluation usually does not involve all employees who have completed a program (or who may take training in the future), trainers want to be able to say that the training program will be effective in the future with similar groups.

Methods to Control for Threats to Validity

Because trainers often want to use evaluation study results as a basis for changing training programs or demonstrating that training does work (as a means to gain additional funding for training from those who control the training budget), it is important to minimize threats to validity. There are three ways to minimize threats to validity: the use of pretests and post-tests in evaluation designs, comparison groups, and random assignment.

- Pretests and Post-tests One way to improve the internal validity of study results is to first establish a baseline, or pre-training measure, of the outcome. Another measure of the outcome, the post-training measure, can be taken after training. A comparison of the post-training and pre-training measures can indicate the degree to which trainees have changed as a result of training.

Use of Comparison Groups Internal validity can be improved by using a control or comparison group.

A comparison group refers to a group of employees who participate in the evaluation study but do not attend the training program. The comparison employees should have personal characteristics (e.g., gender, education, age, tenure, and skill level) as similar to those of the trainees as possible.

The Hawthorne effect refers to employees in an evaluation study performing at a high level simply because of the attention they are receiving. Use of a comparison group helps to show that any effects observed are due specifically to the training rather than the attention the trainees are receiving. Use of a comparison group helps to control for the effects of history, testing, instrumentation, and maturation because both the comparison group and the training group are treated similarly, receive the same measures, and have the same amount of time to develop.

- Random Assignment Random assignment refers to assigning employees to the training or comparison group on the basis of chance alone. That is, employees are assigned to the training program without consideration of individual differences (ability or motivation) or prior experiences. Random assignment helps to ensure that trainees are similar in individual characteristics such as age, gender, ability, and motivation.
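As a sketch of the idea, random assignment can be implemented by shuffling the employee list and splitting it in half. The employee names below are hypothetical:

```python
import random

def randomly_assign(employees, seed=None):
    """Randomly split employees into a training group and a comparison group.

    Because assignment is based on chance alone, individual differences such
    as ability and motivation are balanced across the two groups on average.
    """
    rng = random.Random(seed)          # seed makes the split reproducible
    pool = list(employees)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]    # (training group, comparison group)

training, comparison = randomly_assign(
    ["Ann", "Bob", "Chai", "Dee", "Eli", "Fon"], seed=1
)
```

Each employee ends up in exactly one group, and neither group is chosen on the basis of any personal characteristic.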

Types of Evaluation Designs

A number of different designs can be used to evaluate training programs. These designs vary in terms of when measures are collected (pre-training, post-training), the costs, the time it takes to conduct the evaluation, and the strength of the design for ruling out alternative explanations for the results.

Post-test Only

The post-test-only design refers to an evaluation design in which only post-training outcomes are collected. This design can be strengthened by adding a comparison group (which helps to rule out alternative explanations for changes). The post-test-only design is appropriate when trainees (and the comparison group, if one is used) can be expected to have similar levels of knowledge, behavior, or results outcomes (e.g., same number of sales or equal awareness of how to close a sale) prior to training.

Pretest/Post-test

The pretest/post-test design refers to an evaluation design in which both pre-training and post-training outcome measures are collected, but there is no comparison group. The lack of a comparison group makes it difficult to rule out the effects of business conditions or other factors as explanations for changes.

Pretest/Post-test with Comparison Group

The pretest/post-test with comparison group design refers to an evaluation design that includes both trainees and a comparison group. Pre-training and post-training outcome measures are collected from both groups. If improvement is greater for the training group than the comparison group, this finding provides evidence that training is responsible for the change. This type of design controls for most threats to validity.
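The logic of this design can be sketched as a difference-in-differences calculation: compare each group's pre-to-post change. The test scores below are hypothetical, not data from the source:

```python
def training_effect(train_pre, train_post, comp_pre, comp_post):
    """Estimate the training effect as the difference between the training
    group's pre-to-post change and the comparison group's change
    (a difference-in-differences). Each argument is a list of scores."""
    mean = lambda xs: sum(xs) / len(xs)
    train_change = mean(train_post) - mean(train_pre)
    comp_change = mean(comp_post) - mean(comp_pre)
    return train_change - comp_change

# Hypothetical knowledge-test scores for both groups:
effect = training_effect(
    train_pre=[60, 65, 70], train_post=[80, 85, 90],
    comp_pre=[62, 64, 69], comp_post=[66, 68, 73],
)
# A positive value suggests the trainees' improvement exceeds what the
# comparison group gained from history, testing, and maturation alone.
```

Here the training group improved by 20 points on average while the comparison group improved by only 4, so roughly 16 points of improvement can be attributed to the training rather than to outside factors.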

Time Series

Time Series refers to an evaluation design in which training outcomes are collected at periodic intervals both before and after training. (In the other evaluation designs discussed here, training outcomes are collected only once after and maybe once before training.) The strength of this design can be improved by using reversal, which refers to a time period in which participants no longer receive the training intervention. A comparison group can also be used with a time series design. One advantage of the time series design is that it allows an analysis of the stability of training outcomes over time.

Solomon Four-Group

The Solomon four-group design combines the pretest/post-test comparison group design and the post-test-only comparison group design. In the Solomon four-group design, one training group and one comparison group are measured on the outcomes both before and after training, while a second training group and a second comparison group are measured only after training.

Considerations in Choosing an Evaluation Design

There is no one appropriate evaluation design, and in practice rigorous designs are often not used. First, managers and trainers may be unwilling to devote the time and effort necessary to collect training outcomes. Second, managers or trainers may lack the expertise to conduct an evaluation study. Third, a company may view training as an investment from which it expects to receive little or no return. A more rigorous evaluation design should be considered if any of the following conditions is true:

1. The evaluation result can be used to change the program.

2. The training program is ongoing and has the potential to have an important influence on employees or customers.

3. The training program involves multiple classes and a large number of trainees.

4. Cost justification for training is based on numerical indicators.

5. Trainers or others in the company have the expertise to design the evaluation study and to evaluate the data collected from it.

6. The cost of the training creates a need to show that it works.

7. There is sufficient time for conducting an evaluation.

8. There is interest in measuring change from pre-training levels or in comparing two or more different programs.

Determining Return on Investment

Return on investment (ROI) is an important training outcome. Cost-benefit analysis in this situation is the process of determining the economic benefits of a training program using accounting methods that look at training costs and benefits. Training cost information is important for several reasons:

1. To understand total expenditures for training, including direct and indirect costs

2. To compare the costs of alternative training programs

3. To evaluate the proportion of money spent on training development, administration, and evaluation, as well as to compare monies spent on training for different groups of employees

4. To control costs

Determining Costs

One method for comparing the costs of alternative training programs is the resource requirements model, which compares equipment, facilities, personnel, and materials costs across different stages of the training process.

Determining Benefits

To identify the potential benefits of training, the company must review the original reasons that the training was conducted.

1. Technical, academic, and practitioner literature summarizes the benefits that have been shown to relate to a specific training program.

2. Pilot training programs assess the benefits from a small group of trainees before a company commits more resources.

3. Observation of successful job performers helps a company determine what successful job performers do differently than unsuccessful job performers.

4. Trainees and their managers provide estimates of training benefits.

Example of a Cost-Benefit Analysis

A cost-benefit analysis is best explained by an example. The benefits of the training were identified by considering the objective of the training program and the type of outcomes the program was to influence.
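As a minimal worked sketch, ROI can be computed as net benefits divided by total costs, expressed as a percentage. All cost and benefit figures below are hypothetical illustrations, not numbers from the source:

```python
def roi_percent(total_benefits, total_costs):
    """Return on investment: net benefits as a percentage of training costs."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical figures for one training program:
direct_costs = 30_000    # e.g., instructor salaries, materials, room rental, travel
indirect_costs = 10_000  # e.g., administrative support not tied directly to delivery
benefits = 100_000       # e.g., value of reduced turnover and fewer accidents

total_costs = direct_costs + indirect_costs
print(roi_percent(benefits, total_costs))  # prints 150.0
```

An ROI of 150 percent means that, under these assumed figures, every dollar invested in training returns $1.50 in net benefits.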

Other Methods for Cost-Benefit Analysis

Other, more sophisticated methods are available for determining the dollar value of training.

Utility analysis is a cost-benefit analysis method that involves assessing the dollar value of training based on estimates of the difference in job performance between trained and untrained employees, the number of employees trained, the length of time the program is expected to influence performance, and the variability in job performance among untrained employees.
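The classic utility-analysis estimate can be sketched as follows; all input values here are hypothetical illustrations, not figures from the source:

```python
def utility_gain(years, n_trained, effect_size, sd_dollars, cost_per_trainee):
    """Classic utility-analysis estimate of a training program's dollar value:

        delta_U = T * N * d_t * SD_y  -  N * C

    T  = years the training benefit is expected to last
    N  = number of employees trained
    d_t = true difference in job performance between trained and untrained
          employees, in standard-deviation units
    SD_y = dollar value of one standard deviation of job performance
    C  = cost of training one employee
    """
    return years * n_trained * effect_size * sd_dollars - n_trained * cost_per_trainee

print(utility_gain(years=2, n_trained=50, effect_size=0.5,
                   sd_dollars=10_000, cost_per_trainee=2_000))  # prints 400000.0
```

Under these assumed inputs, training 50 employees would yield an estimated net utility gain of $400,000 over two years.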

Practical Considerations in Determining ROI

As mentioned earlier, ROI analysis may not be appropriate for all training programs.

Success Cases and Return on Expectations

- Return on expectations (ROE) refers to the process through which evaluation demonstrates to key business stakeholders, such as top-level managers, that their expectations about training have been satisfied.

Measuring Human Capital and Training Activity

It is important to remember that evaluation can also involve determining the extent to which learning and training activities and the training function contribute to the company strategy and help to achieve business goals.

- Workforce analytics refers to the practice of using quantitative and scientific methods to analyze data from human resource databases, corporate financial statements, employee surveys, and other data sources to make evidence-based decisions and show that human resource practices influence important company metrics.

Sources

Based on L. Freifeld, “Verizon’s New # Is 1,” Training (January/February 2012): 28-30; M. Weinstein, “Verizon Connects to Success,” Training (January/February 2011): 40-42.

Figure 6.1 The Evaluation Process (steps): Conduct a Needs Analysis → Develop Measurable Learning Objectives and Analyze Transfer of Training → Develop Outcome Measures → Choose an Evaluation Strategy → Plan and Execute the Evaluation
