Automated scoring in assessment centers: evaluating the feasibility of quantifying constructed responses

dc.contributor.author: Sanchez, Diana R., author
dc.contributor.author: Gibbons, Alyssa, advisor
dc.contributor.author: Kraiger, Kurt, advisor
dc.contributor.author: Kiefer, Kate, committee member
dc.contributor.author: Troup, Lucy, committee member
dc.date.accessioned: 2007-01-03T06:23:25Z
dc.date.available: 2007-01-03T06:23:25Z
dc.date.issued: 2014
dc.description.abstract: Automated scoring has promised benefits for personnel assessment, such as faster and cheaper simulations, but there is as yet little research evidence regarding these claims. This study explored the feasibility of automated scoring for complex assessments (e.g., assessment centers). Phase 1 examined the practicality of converting complex behavioral exercises into an automated scoring format. Using qualitative content analysis, participant behaviors were coded into sets of distinct categories. Results indicated that variations in behavior could be described by a reasonable number of categories, implying that automated scoring is feasible without drastically limiting the options available to participants. Phase 2 compared original scores (generated by human assessors) with automated scores (generated by an algorithm based on the Phase 1 data). Automated scores converged significantly with, and could significantly predict, original scores, although the effect size was modest at best and varied significantly across competencies. Further analyses revealed that strict inclusion criteria are important for filtering out contamination in automated scores. Despite these findings, we cannot confidently recommend implementing automated scoring methods without further research specifically examining the competencies for which automated scoring is most effective.
dc.format.medium: born digital
dc.format.medium: masters theses
dc.identifier: Sanchez_colostate_0053N_12706.pdf
dc.identifier.uri: http://hdl.handle.net/10217/88593
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: 2000-2019
dc.rights: Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
dc.subject: assessment centers
dc.subject: technology
dc.subject: qualitative content analysis
dc.subject: automated scoring
dc.title: Automated scoring in assessment centers: evaluating the feasibility of quantifying constructed responses
dc.type: Text
dcterms.rights.dpla: This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
thesis.degree.discipline: Psychology
thesis.degree.grantor: Colorado State University
thesis.degree.level: Masters
thesis.degree.name: Master of Science (M.S.)

Files

Original bundle
Name: Sanchez_colostate_0053N_12706.pdf
Size: 5.05 MB
Format: Adobe Portable Document Format