Software-basierte Evaluation freier Antwortformate
Software-based evaluation of open-ended test items
Journal article › Research › Peer reviewed
Publication data
By | Hendrik Härtig |
Original language | German |
Published in | Zeitschrift für Didaktik der Naturwissenschaften, 20(1) |
Pages | 115-128 |
Publisher | Springer |
ISSN | 0949-1147, 2197-988X |
DOI/Link | https://doi.org/10.1007/s40573-014-0012-6, http://link.springer.com/article/10.1007%2Fs40573-014-0012-6 |
Publication status | Published – 2014 |
Given the considerable effort and cost involved, the majority of large-scale assessments use closed-response item formats. However, open-ended questions could increase the validity of such assessments. Within the last few years, software has been developed that can process and categorize written answers based on latent semantic analysis. So far, several studies have evaluated the possibility of using such software packages for a software-based evaluation of open-ended test items. First results for large groups of English-speaking students in an assessment of conceptual understanding are encouraging. In the study presented here, the applicability of software-based evaluation of open-ended test items was examined for a small-scale assessment of physics PCK (pedagogical content knowledge). On the one hand, good to perfect agreement between the software and human experts was found for half of the eight items. On the other hand, we identified two possible ways of increasing the agreement: the sample size may have been too small for some of the items, and the structure of the items may significantly influence the results.
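To illustrate the kind of processing the abstract describes, the following is a minimal sketch of latent-semantic-analysis-based answer categorization. It is not the software evaluated in the article: the toy answers, vocabulary, and category labels are invented for this example, and a real system would use a large reference corpus, weighting schemes, and trained scoring rules.

```python
# Illustrative LSA sketch: categorize a written answer by its nearest
# reference answer in a truncated-SVD latent space. All data here is invented.
import numpy as np

def build_matrix(docs, vocab):
    """Term-document matrix: rows = terms, columns = documents (raw counts)."""
    return np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

# Toy reference answers, each labelled with a scoring category.
reference_answers = [
    ("force changes the motion of a body", "correct"),
    ("a body keeps moving without force", "correct"),
    ("force is the same as energy", "incorrect"),
]
docs = [a for a, _ in reference_answers]
vocab = sorted({w for d in docs for w in d.split()})

A = build_matrix(docs, vocab)

# Latent semantic analysis: truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # number of latent dimensions kept
doc_vecs = A.T @ U[:, :k]               # reference answers in latent space

def categorize(answer):
    """Fold a new answer into the latent space, return the nearest category."""
    q = np.array([answer.split().count(t) for t in vocab], dtype=float)
    q_vec = q @ U[:, :k]                # same projection as the reference docs
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-12)
    return reference_answers[int(np.argmax(sims))][1]

print(categorize("force changes the motion of a body"))
```

Because a reference answer projects onto its own latent vector, feeding one back in returns its own category; the interesting (and, per the article, error-prone) cases are paraphrases whose wording differs from every reference answer.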