77 research outputs found

    A machine learning-based procedure for leveraging clickstream data to investigate early predictability of failure on interactive tasks

    Early detection of risk of failure on interactive tasks comes with great potential for better understanding how examinees differ in their initial behavior as well as for adaptively tailoring interactive tasks to examinees’ competence levels. Drawing on procedures originating in shopper intent prediction on e-commerce platforms, we introduce and showcase a machine learning-based procedure that leverages early-window clickstream data for systematically investigating early predictability of behavioral outcomes on interactive tasks. We derive features related to the occurrence, frequency, sequentiality, and timing of performed actions from early-window clickstreams and use extreme gradient boosting for classification. Multiple measures are suggested to evaluate the quality and utility of early predictions. The procedure is outlined by investigating early predictability of failure on two PIAAC 2012 Problem Solving in Technology Rich Environments (PSTRE) tasks. We investigated early windows of varying size in terms of time and in terms of actions. We achieved good prediction performance at stages where examinees had, on average, at least two thirds of their solution process ahead of them, and the vast majority of examinees who failed could potentially be detected to be at risk before completing the task. In-depth analyses revealed different features to be indicative of success and failure at different stages of the solution process, thereby highlighting the potential of the applied procedure for gaining a finer-grained understanding of the trajectories of behavioral patterns on interactive tasks.
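    As a rough illustration of the feature-derivation step described above, the sketch below reduces an early-window clickstream to occurrence, frequency, sequentiality, and timing features. The event format, function name, and window size are invented for illustration; in the study these features feed an extreme gradient boosting classifier, which is omitted here.

```python
# Sketch: derive occurrence, frequency, sequentiality, and timing features
# from an early-window clickstream (hypothetical event format: (action, timestamp)).
# Simplified illustration, not the authors' implementation.

def early_window_features(clickstream, window_seconds):
    """Reduce time-stamped actions within the early window to a flat feature dict."""
    t0 = clickstream[0][1]
    window = [(a, t) for a, t in clickstream if t - t0 <= window_seconds]
    actions = [a for a, _ in window]
    times = [t for _, t in window]
    feats = {
        "n_actions": len(window),                       # frequency
        "unique_actions": len(set(actions)),            # occurrence
        "mean_gap": ((times[-1] - times[0]) / (len(times) - 1)
                     if len(times) > 1 else 0.0),       # timing
    }
    # Sequentiality: indicator features for each observed action bigram.
    for pair in zip(actions, actions[1:]):
        feats["bigram_" + "->".join(pair)] = 1
    return feats

stream = [("start", 0.0), ("click_tab", 2.0), ("search", 5.0), ("click_tab", 9.0)]
features = early_window_features(stream, window_seconds=6.0)
print(features)  # only the first three events fall inside the 6-second window
```

    In practice one would compute such a dictionary per examinee, vectorize it, and fit the classifier on the resulting matrix.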

    Using Response Times for Modeling Missing Responses in Large-Scale Assessments

    Examinees differ in how they interact with assessments. In low-stakes large-scale assessments (LSAs), missing responses pose an obvious example of such differences. Understanding the underlying mechanisms is paramount for making appropriate decisions on how to deal with missing responses in data analysis and drawing valid inferences on examinee competencies. Against this background, the present work aims at providing approaches for a nuanced modeling and understanding of test-taking behavior associated with the occurrence of missing responses in LSAs. These approaches are aimed at a) improving the treatment of missing responses in LSAs, b) supporting a better understanding of missingness mechanisms in particular and examinee test-taking behavior in general, and c) considering differences in test-taking behavior underlying missing responses when drawing inferences about examinee competencies. To that end, the present work leverages the additional information contained in response times and integrates research on modeling missing responses with research on modeling response times associated with observed responses. By documenting lengths of interactions, response times contain valuable information on how examinees interact with assessments and may as such critically contribute to understanding the processes underlying both observed and missing responses. This work presents four modeling approaches that focus on different aspects and mechanisms of missing responses. The first two approaches focus on modeling not-reached items; the latter two aim at modeling omitted items. The first approach employs the framework for the joint modeling of speed and ability by van der Linden (2007) for modeling the mechanism underlying not-reached items due to lack of working speed.
On the basis of both theoretical considerations as well as a comprehensive simulation study, it is argued that by accounting for differences in speed this framework is well suited for modeling the mechanism underlying not-reached items due to lack thereof. In assessing empirical test-level response times, it is, however, also illustrated that some examinees quit the assessment before reaching the end of the test or being forced to stop working due to a time limit. Building on these results, the second approach of this work aims at disentangling and jointly modeling multiple mechanisms underlying not-reached items. Employing information on response times, not-reached items due to lack of speed are distinguished from not-reached items due to quitting. The former is modeled by considering examinee speed. Quitting behavior - defined as stopping to work before the time limit is reached while there are still unanswered items - is modeled as a survival process, with the item position at which examinees are most likely to quit being governed by their test endurance, conceptualized as a third latent variable besides speed and ability. The third approach presented in this work focuses on jointly modeling omission behavior and response behavior, thus providing a better understanding of how these two types of behavior differ. For doing so, the approach extends the framework for jointly modeling speed and ability by a model component for the omission process and introduces the concept of different speed levels examinees operate on when generating responses and omitting items. 
This approach supports a more nuanced understanding of both the missingness mechanism underlying omissions and examinee pacing behavior through assessment of whether examinees employ different pacing strategies when generating responses or omitting items. The fourth approach builds on previous theoretical work relating omitted responses to examinee disengagement and provides a model-based approach that allows for identifying and modeling examinee disengagement in terms of both omission and guessing behavior. Disengagement is identified at the item-by-examinee level by employing a mixture modeling approach that allows for different data-generating processes underlying item responses and omissions as well as different distributions of response times associated with engaged and disengaged behavior. Item-by-examinee mixing proportions themselves are modeled as a function of additional person and item parameters. This allows relating disengagement to ability and speed as well as identifying items that are likely to evoke disengaged test-taking behavior. The approaches presented in this work are tested and illustrated by a) evaluating their statistical performance under conditions typically encountered in LSAs by means of comprehensive simulation studies, b) illustrating their advances over previously developed approaches, and c) applying them to real data from major LSAs, thereby illustrating their potential for understanding examinee test-taking behavior in general and missingness mechanisms in particular. The potential of the approaches developed in this work for deepening the understanding of results from LSAs is discussed and implications for the improvement of assessment procedures - ranging from construction and administration to analysis, interpretation and reporting - are derived. Limitations of the proposed approaches are discussed and suggestions for future research are provided.
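    Van der Linden's (2007) hierarchical framework, which the first approach builds on, pairs an IRT model for responses with a lognormal model for response times. As background for readers, in the standard notation of the response-time literature (not taken from this abstract), the response-time component is

```latex
% Lognormal response-time model (van der Linden, 2007):
% \tau_i = speed of examinee i, \beta_j = time intensity of item j,
% \alpha_j = discrimination of item j in the time domain.
f(t_{ij} \mid \tau_i; \alpha_j, \beta_j)
  = \frac{\alpha_j}{t_{ij}\sqrt{2\pi}}
    \exp\!\left( -\tfrac{1}{2}\,\bigl[\alpha_j\bigl(\ln t_{ij} - (\beta_j - \tau_i)\bigr)\bigr]^{2} \right)
```

    At the second level, the person parameters (ability and speed) follow a joint, e.g. bivariate normal, distribution, which is what lets the framework borrow information across responses and response times.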

    Combining Clickstream Analyses and Graph-Modeled Data Clustering for Identifying Common Response Processes

    Complex interactive test items are becoming more widely used in assessments. Being computer-administered, assessments using interactive items allow logging time-stamped action sequences. These sequences pose a rich source of information that may facilitate investigating how examinees approach an item and arrive at their given response. There is a rich body of research leveraging action sequence data for investigating examinees' behavior. However, the associated timing data have been considered mainly on the item-level, if at all. Considering timing data on the action-level in addition to action sequences, however, has vast potential to support a more fine-grained assessment of examinees' behavior. We provide an approach that jointly considers action sequences and action-level times for identifying common response processes. In doing so, we integrate tools from clickstream analyses and graph-modeled data clustering with psychometrics. In our approach, we (a) provide similarity measures that are based on both actions and the associated action-level timing data and (b) subsequently employ cluster edge deletion for identifying homogeneous, interpretable, well-separated groups of action patterns, each describing a common response process. Guidelines on how to apply the approach are provided. The approach and its utility are illustrated on a complex problem-solving item from PIAAC 2012.
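    To make the two-step idea concrete, here is a minimal sketch, with invented data and function names, of (a) a similarity measure combining action overlap with action-level timing and (b) a graph-based grouping step. Note that the paper employs cluster edge deletion; the connected-components grouping below is a deliberately simplified stand-in, not the authors' algorithm.

```python
# Sketch: similarity over action patterns using both actions and
# action-level times, then grouping via a thresholded similarity graph.

def similarity(p, q, time_scale=10.0):
    """p, q: lists of (action, duration) pairs. Returns a value in [0, 1]."""
    acts_p, acts_q = {a for a, _ in p}, {a for a, _ in q}
    jaccard = len(acts_p & acts_q) / len(acts_p | acts_q)   # action overlap
    total_p = sum(d for _, d in p)
    total_q = sum(d for _, d in q)
    time_sim = 1.0 / (1.0 + abs(total_p - total_q) / time_scale)  # timing closeness
    return 0.5 * jaccard + 0.5 * time_sim

def components(patterns, threshold=0.7):
    """Group patterns whose pairwise similarity exceeds the threshold
    (union-find over the thresholded similarity graph)."""
    n = len(patterns)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if similarity(patterns[i], patterns[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())
```

    Cluster edge deletion instead seeks the minimum number of edge deletions turning the graph into disjoint cliques, which yields the well-separated groups described above.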

    Using Sequence Mining Techniques for Understanding Incorrect Behavioral Patterns on Interactive Tasks

    Interactive tasks designed to elicit real-life problem-solving behavior are rapidly becoming more widely used in educational assessment. Incorrect responses to such tasks can occur for a variety of different reasons such as low proficiency levels, low metacognitive strategies, or motivational issues. We demonstrate how behavioral patterns associated with incorrect responses can, in part, be understood, supporting insights into the different sources of failure on a task. To this end, we make use of sequence mining techniques that leverage the information contained in time-stamped action sequences commonly logged in assessments with interactive tasks for (a) investigating what distinguishes incorrect behavioral patterns from correct ones and (b) identifying subgroups of examinees with similar incorrect behavioral patterns. Analyzing a task from the Programme for the International Assessment of Adult Competencies 2012 assessment, we find incorrect behavioral patterns to be more heterogeneous than correct ones. We identify multiple subgroups of incorrect behavioral patterns, which point toward different levels of effort and lack of different subskills needed for solving the task. Although the analysis focuses on a single task, it uncovers meaningful patterns of major differences in how examinees approach a given task that generalize across multiple tasks. Implications for the construction and analysis of interactive tasks as well as the design of interventions for complex problem-solving skills are derived.
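    The contrast-mining idea, finding action patterns whose prevalence differs between correct and incorrect sequences, can be sketched as follows. The support measure, example data, and threshold are illustrative only, not the study's actual pipeline.

```python
# Sketch: contrast mining of action n-grams between correct and
# incorrect response sequences. All data are invented.

from collections import Counter

def ngram_support(sequences, n=2):
    """Fraction of sequences in which each n-gram of actions occurs."""
    counts = Counter()
    for seq in sequences:
        grams = {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}
        counts.update(grams)
    return {g: c / len(sequences) for g, c in counts.items()}

def distinguishing_ngrams(correct, incorrect, n=2, min_gap=0.5):
    """N-grams whose support differs strongly between the two groups."""
    sup_c, sup_i = ngram_support(correct, n), ngram_support(incorrect, n)
    out = {}
    for g in set(sup_c) | set(sup_i):
        gap = sup_c.get(g, 0.0) - sup_i.get(g, 0.0)
        if abs(gap) >= min_gap:
            out[g] = gap  # positive: typical of correct; negative: of incorrect
    return out

correct = [["open", "sort", "submit"], ["open", "sort", "check", "submit"]]
incorrect = [["open", "submit"], ["open", "click", "submit"]]
result = distinguishing_ngrams(correct, incorrect)
print(result)  # e.g., ("open", "sort") appears only among correct sequences
```

    Subgroups of incorrect patterns could then be formed by clustering sequences on their n-gram profiles.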

    Investigating dynamics in attentive and inattentive responding together with their contextual correlates using a novel mixture IRT model for intensive longitudinal data

    In ecological momentary assessment (EMA), respondents answer brief questionnaires about their current behaviors or experiences several times per day across multiple days. The frequent measurement enables a thorough grasp of the dynamics inherent in psychological traits, but it also increases respondent burden. To lower this burden, respondents may engage in careless and insufficient effort responding (C/IER) and leave data contaminated with responses that do not reflect what researchers want to measure. We introduce a novel approach to investigate C/IER in EMA data. Our approach combines a confirmatory mixture item response theory model separating C/IER from attentive behavior with latent Markov factor analysis. This allows for (1) gauging the occurrence of C/IER and (2) studying transitions among states of different response behaviors as well as their contextual correlates. The approach can be implemented using standard R packages. In an empirical application, we showcase the efficacy of this approach in both pinpointing C/IER instances in EMA and gaining insights into their underlying causes. In a simulation study investigating robustness against unaccounted changes in measurement models underlying attentive responses, the approach proved robust against heterogeneity in loading patterns but not against heterogeneity in the factor structure. Extensions to accommodate the latter are discussed.
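    The mixture idea can be illustrated on the response-time component alone: given invented parameters for a careless and an attentive lognormal response-time distribution, Bayes' rule yields the posterior probability that a single observation is C/IER. The full model described above additionally mixes the item response processes and embeds the states in a latent Markov structure, which this sketch omits.

```python
# Sketch: posterior probability of C/IER given a response time, under a
# two-component lognormal mixture with invented parameter values.

import math

def lognorm_pdf(t, mu, sigma):
    """Density of a lognormal distribution at t > 0."""
    return math.exp(-(math.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (
        t * sigma * math.sqrt(2 * math.pi))

def p_cier(t, pi_cier=0.1, mu_cier=0.0, mu_att=2.0, sigma=0.5):
    """Posterior P(C/IER | response time t) via Bayes' rule."""
    num = pi_cier * lognorm_pdf(t, mu_cier, sigma)
    den = num + (1 - pi_cier) * lognorm_pdf(t, mu_att, sigma)
    return num / den

# Very fast responses are flagged as likely careless, slower ones as attentive.
print(round(p_cier(1.0), 3), round(p_cier(8.0), 3))
```

    In the actual model these responsibilities are estimated jointly with the measurement models rather than plugged in, but the classification logic per observation is the same.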

    Production of a Bose-Einstein condensate of erbium atoms in a quasi-electrostatic dipole trap

    Ultracold gases have been used since the 1980s to explore novel physical phenomena; in this context, atomic Bose-Einstein condensates as well as quantum-degenerate Fermi gases were realized for the first time. Initially, these states could be reached exclusively with alkali atoms. Only in the past decade has it become possible to condense atoms with a more complex internal structure, some of which exhibit a nonzero electronic orbital angular momentum in the ground state, such as the lanthanides erbium and dysprosium. Ultracold gases of such atoms can be manipulated in new ways with far-detuned laser light. Furthermore, they exhibit a strong dipolar character, which dominates the interactions in the ultracold gas and thus gives rise to new interaction phenomena. In this work, a Bose-Einstein condensate of erbium atoms was produced for the first time in the potential of a quasi-electrostatic dipole trap detuned extremely far from the atomic resonances. A central part of this work was the realization of an efficient transfer process from the magneto-optical trap into the quasi-electrostatic dipole trap, in which further evaporative cooling of the Bose gas down to the regime of quantum degeneracy took place. Since the Bose-Einstein condensate can only be observed under very clean ultra-high-vacuum conditions, a vacuum system consisting of an effusion cell, a Zeeman slower, and a main chamber was designed and built in a first step. The effusion cell produces an atomic beam, which, after leaving the cell, is transversely cooled and collimated using the cooling transition at 400.91 nm. Longitudinally, the atomic beam is decelerated by a Zeeman slower that also uses the cooling transition at 400.91 nm. From the atomic beam, a magneto-optical trap is loaded, operated with light at a wavelength of 582.84 nm. For further cooling, the atoms captured in the magneto-optical trap are transferred into a quasi-electrostatic dipole-trap potential formed by the focused beam of a CO2 laser at a wavelength of 10.6 μm. In the subsequent evaporation phase, the phase-space density could be raised above the critical value for the phase transition, and the formation of a Bose-Einstein condensate of erbium atoms was observed. The Bose-Einstein condensate contains about 3 · 10^4 erbium atoms and has a lifetime of about 8 s. One perspective opened up by these results is the creation of novel quantum states of matter, for example the investigation of the fractional quantum Hall effect in extremely strong artificial magnetic fields induced by far-detuned optical Raman light fields.
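    The critical value for the phase transition mentioned above is the standard quantum-degeneracy criterion; as textbook background (general results, not figures from this thesis):

```latex
% Phase-space density criterion for Bose-Einstein condensation of a
% homogeneous ideal gas, with thermal de Broglie wavelength \lambda_{dB}:
\rho = n\,\lambda_{dB}^{3} \ge \zeta(3/2) \approx 2.612,
\qquad
\lambda_{dB} = \frac{h}{\sqrt{2\pi m k_{B} T}}
% For N atoms in a harmonic trap with mean frequency \bar{\omega}:
k_{B} T_{c} \approx \hbar\,\bar{\omega}\,\bigl(N/\zeta(3)\bigr)^{1/3},
\qquad \zeta(3) \approx 1.202
```

    Evaporative cooling raises the phase-space density by preferentially removing the hottest atoms, which is how the experiment crosses this threshold.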

    Regulation of protein homeostasis by the co-chaperone Tpr2

    The maintenance of protein homeostasis is essential for the function of every cell. Molecular chaperones regulate a delicate balance between protein degradation and protein folding. Newly synthesized or misfolded proteins are folded into their native conformation, allowing them to perform their function and preventing the formation of protein aggregates. Irreversibly damaged proteins or protein complexes are marked with a ubiquitin chain as a degradation signal through the interaction of the molecular chaperone Hsc/Hsp70 with the chaperone-associated ubiquitin ligase CHIP. During protein folding and degradation, Hsc/Hsp70 interacts with different co-chaperones. These compete for binding to Hsc/Hsp70 and thereby regulate different functions of the chaperone. The present work set out to analyze the function of the co-chaperone Tpr2. Tpr2 possesses two TPR domains, through which it can bind the molecular chaperones Hsc/Hsp70 and Hsp90 simultaneously. In addition, Tpr2 possesses a J domain that is significantly homologous to the J domain of the co-chaperone Hsp40. During the normal folding process, substrates are transferred from Hsc/Hsp70 to Hsp90. Tpr2 regulates Hsc/Hsp70 function and induces substrate binding by Hsc/Hsp70. As a result, substrates can be transferred back from Hsp90 to Hsc/Hsp70, against the normal folding pathway. Biochemical analyses with Drosophila TPR2 (dTPR2) showed that dTPR2 enhances the chaperone function of Hsc70. C-terminal binding of dTPR2 to Hsc70 led to a significant enhancement of the binding of denatured luciferase by Hsc70. dTPR2 exhibits no intrinsic chaperone activity, as has been shown for a number of other co-chaperones. Thus, dTPR2 does not regulate Hsc/Hsp70 function in a substrate-specific manner but rather assumes a general, conserved function as a retro factor in the Hsc/Hsp70-Hsp90 machinery. In vivo experiments further showed that Tpr2 competes with the ubiquitin ligase CHIP for the C-terminal binding site of Hsc70 and thereby regulates CHIP-mediated degradation processes. This affects, for example, the degradation of the insulin receptor: reduced Tpr2 levels led to destabilization of the insulin receptor in Hek293 cells. Functional analyses in Drosophila melanogaster showed that dTPR2 is essential for maintaining proteostasis. Ubiquitous changes in dTPR2 expression led to lethality at early developmental stages. In particular, the overexpression of dTPR2, which caused lethality in the second larval stage, makes clear how important a finely defined expression level is for the development of the organism. Elevated dTPR2 levels may have inhibited the activation of numerous receptors and kinases. Muscle-specific overexpression of dTPR2 led to a loss of muscle function. The previously described function of Tpr2 suggests that chaperone-mediated degradation pathways essential for muscle maintenance were inhibited. Depletion of dTPR2 in the musculature had no influence on muscle function. Beyond this, however, it could be shown that dTPR2 regulates the lifespan of the flies: muscle-specific changes in dTPR2 expression led to an extension of the lifespan. The data presented in this work demonstrate how important the co-chaperone dTPR2 is for the development and lifespan regulation of Drosophila melanogaster. A defined expression level appears to be of particular importance for the maintenance of proteostasis and muscle function. As a conserved retro factor, Tpr2 regulates the function of the molecular chaperone Hsc/Hsp70. In doing so, Tpr2 induces the spatial coupling of Hsc/Hsp70 and Hsp90 and stabilizes substrate binding by Hsc/Hsp70. In addition, Tpr2 competes with CHIP for the C-terminal binding site of Hsc/Hsp70, through which CHIP-mediated degradation processes can be regulated.

    An in-principle super-polynomial quantum advantage for approximating combinatorial optimization problems via computational learning theory

    It is unclear to what extent quantum algorithms can outperform classical algorithms for problems of combinatorial optimization. In this work, by resorting to computational learning theory and cryptographic notions, we give a fully constructive proof that quantum computers feature a super-polynomial advantage over classical computers in approximating combinatorial optimization problems. Specifically, by building on seminal work by Kearns and Valiant, we provide special instances that are hard for classical computers to approximate up to polynomial factors. Simultaneously, we give a quantum algorithm that can efficiently approximate the optimal solution within a polynomial factor. The quantum advantage in this work is ultimately borrowed from Shor’s quantum algorithm for factoring. We introduce an explicit and comprehensive end-to-end construction for the advantage-bearing instances. For these instances, quantum computers have, in principle, the power to approximate combinatorial optimization solutions beyond the reach of classical efficient algorithms.