Automated scoring in assessment centers: evaluating the feasibility of quantifying constructed responses

Date

2014

Authors

Sanchez, Diana R., author
Gibbons, Alyssa, advisor
Kraiger, Kurt, advisor
Kiefer, Kate, committee member
Troup, Lucy, committee member

Abstract

Automated scoring promises benefits for personnel assessment, such as faster and cheaper simulations, but as yet there is little research evidence regarding these claims. This study explored the feasibility of automated scoring for complex assessments (e.g., assessment centers). Phase 1 examined the practicality of converting complex behavioral exercises into an automated scoring format. Using qualitative content analysis, participant behaviors were coded into sets of distinct categories. Results indicated that variations in behavior could be described by a reasonable number of categories, implying that automated scoring is feasible without drastically limiting the options available to participants. Phase 2 compared original scores (generated by human assessors) with automated scores (generated by an algorithm based on the Phase 1 data). Automated scores converged significantly with, and significantly predicted, the original scores, although effect sizes were modest at best and varied significantly across competencies. Further analyses revealed that strict inclusion criteria are important for filtering contamination out of automated scores. Despite these findings, we cannot confidently recommend implementing automated scoring methods without further research specifically examining the competencies for which automated scoring is most effective.

Subject

assessment centers
technology
qualitative content analysis
automated scoring
