The main research question we address is: how can we use or adapt psychometric techniques to determine the quality of the assessment of sequences of user interactions in serious games? To answer this question, we will first develop a format for representing the kind of sequences of interactions that are assessed in serious games. We will then adapt techniques from psychometrics to develop methods and software for analysing such sequences of interactions. We will test our software on playthroughs of the DialogueTrainer software for practicing communication skills, use the results of these tests to improve scenarios, and test again to verify that the improved scenarios indeed score better on the psychometric quality measures.
The consortium consists of two partners: Utrecht University and DialogueTrainer. Utrecht University contributes expertise on determining the quality of an assessment, and on representing and analysing sequences of interactions in serious games. DialogueTrainer contributes expertise on software development for scenarios in serious games, and provides data.
The project will deliver the following results:
- a format description for the kind of interactions in serious games we want to assess
- an open source component that performs several analyses of assessments of sequences of interactions in serious games
- a dashboard that displays the results of the assessments for (an instance of) a serious game.
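To make the format deliverable concrete, the sketch below shows one way a scored playthrough might be represented, and how per-parameter totals could be aggregated over a sequence. All field names (`scenario`, `node`, `choice`, `scores`) are hypothetical illustrations; the actual format is a deliverable of the project and has not yet been designed.

```python
# Purely illustrative: a hypothetical record for one playthrough.
# Field names are assumptions, not the project's actual format.
playthrough = {
    "scenario": "job-interview-1",      # which scenario was played
    "player": "anonymised-42",          # anonymised player identifier
    "interactions": [
        # each step: the choice taken and its score on some parameters
        {"node": "greeting", "choice": "formal",  "scores": {"rapport": 2}},
        {"node": "salary",   "choice": "evasive", "scores": {"clarity": 0}},
    ],
}

# Scores per parameter can then be aggregated over the whole sequence
totals = {}
for step in playthrough["interactions"]:
    for param, value in step["scores"].items():
        totals[param] = totals.get(param, 0) + value
print(totals)  # → {'rapport': 2, 'clarity': 0}
```

A machine-readable representation along these lines would let the same analysis component be reused across different serious games.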
With respect to dissemination, we plan to:
- publish a paper about the results of our experiments on determining the quality of scenarios in DialogueTrainer
- release the assessment analyses component as a game component on the RAGE ecosystem portal
- present our results and our component at a meeting of the DGA.
The goal of a serious game is to support the learning of a player. For example, in the Job Quest game, a player trains for job interviews and tries to create a good CV. A player learns by producing artefacts and by taking actions in the game. The game assesses the skills of a player by analysing the artefacts created and the sequence of actions the player takes in the game. Analysing created artefacts is highly domain specific, and requires techniques such as essay analysis. The sequence of actions of a player usually has more structure: often a player can choose between multiple actions, and each choice has a particular score on some parameters.

A serious game can only be effective if the assessment in the game is of good quality; for example, the assessment should be valid and reliable. How do we determine the quality of the assessment of a serious game? In this research proposal we address the question of how we can determine the quality of the assessment of the sequence of actions of a player in a game. We propose to analyse sequences of game interactions by means of various testing theories, such as Item Response Theory. We analyse individual items and complete tests, but also develop new reliability measures based on the paths players take through a game. We develop a format for describing sequences of interactions, a reusable component for analysing such sequences, and a dashboard to visualise the analysis.
Principal applicant: Prof.dr. Johan Jeuring
Affiliation: Utrecht University