125 research outputs found

    How the Selfish Brain Organizes its Supply and Demand

    During acute mental stress, the energy supply to the human brain increases by 12%. To determine how the brain controls this demand for energy, 40 healthy young men participated in two sessions (stress induced by the Trier Social Stress Test and a non-stress intervention). Subjects were randomly assigned to four experimental groups according to the energy provided during or after the stress intervention (rich buffet, meager salad, dextrose infusion, and lactate infusion). Blood samples were taken frequently, and subjects rated their autonomic and neuroglycopenic symptoms on standard questionnaires. We found that stress increased carbohydrate intake from a rich buffet by 34 g (from 149 ± 13 g in the non-stress session to 183 ± 16 g in the stress session; P < 0.05). While these stress-extra carbohydrates increased blood glucose concentrations, they did not increase serum insulin concentrations. The ability to suppress insulin secretion was linked to the sympatho-adrenal stress response. Social stress increased concentrations of epinephrine by 72% (18.3 ± 1.3 vs. 31.5 ± 5.8 pg/ml; P < 0.05), norepinephrine by 148% (242.9 ± 22.9 vs. 601.1 ± 76.2 pg/ml; P < 0.01), ACTH by 184% (14.0 ± 1.3 vs. 39.8 ± 7.7 pmol/l; P < 0.05), cortisol by 131% (5.4 ± 0.5 vs. 12.4 ± 1.3 μg/dl; P < 0.01), and autonomic symptoms by 137% (0.7 ± 0.3 vs. 1.7 ± 0.6; P < 0.05). Exogenous energy supply (regardless of its form, i.e., rich buffet or energy infusions) counteracted the neuroglycopenic state that developed during stress, but it did not dampen the sympatho-adrenal stress responses. We conclude that the brain under stressful conditions demands energy from the body via a mechanism we refer to as “cerebral insulin suppression”, and in doing so it can satisfy its excessive needs.
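The percent increases quoted in this abstract follow directly from the reported group means. A minimal sketch of that arithmetic (using only the rounded means quoted above; deviations of a point or two from the reported percentages are expected, since the paper presumably computed them from unrounded values):

```python
def pct_increase(baseline: float, stress: float) -> float:
    """Percent increase from the non-stress mean to the stress-session mean."""
    return 100.0 * (stress - baseline) / baseline

# Rounded group means as quoted in the abstract (non-stress, stress).
reported = {
    "epinephrine (pg/ml)":    (18.3, 31.5),    # reported: +72%
    "norepinephrine (pg/ml)": (242.9, 601.1),  # reported: +148%
    "ACTH (pmol/l)":          (14.0, 39.8),    # reported: +184%
    "cortisol (ug/dl)":       (5.4, 12.4),     # reported: +131%
}

for marker, (non_stress, stress) in reported.items():
    print(f"{marker}: +{pct_increase(non_stress, stress):.0f}%")
```

Running this reproduces the epinephrine and ACTH figures exactly and lands within about one point of the norepinephrine and cortisol figures, consistent with rounding of the published means.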

    Long-range angular correlations on the near and away side in p–Pb collisions at


    Underlying Event measurements in pp collisions at √s = 0.9 and 7 TeV with the ALICE experiment at the LHC


    Measuring Children’s User Experience with E-Assessments: Implications for a Better Interpretation of UX Evaluation Methods for School-Aged Children

    Electronic assessment (e-assessment) is becoming increasingly popular in modern education systems and is emerging as a major and relevant method for evaluating students. Among the variety of new approaches and tools that have been introduced for e-assessment, the absence of well-defined protocols for checking the fairness and reliability of evaluations for every student remains a major problem. To check the credibility of e-assessments, it is paramount to enhance the understanding of children's User Experience (UX), as learners possess limited cognitive resources and vary substantially across age groups. The interface's usability, characterized by its ease of comprehension, ease of use, and ease of learning, can guarantee a positive UX for children. Children can contribute to the legitimacy of assessment findings only when they are actively enabled to engage with the system and successfully demonstrate their expertise and skills. It is therefore essential to integrate user-centric technology into e-assessments to ensure equitable opportunities for all users. Thus, the primary objective of this doctoral dissertation is to enhance the methodological understanding of UX evaluation methods for young school-aged children in the context of e-assessments. As a first step, we conducted a systematic review (272 full papers) that allowed us to identify an absence of interaction design techniques that can provide an accurate assessment of the fundamental concepts of UX for students between the ages of 6 and 12 (Research Paper 1). Furthermore, we identified the need for methods that can be employed before or after the evaluation setting to prevent any interference with the children's performance during the assessment. When we began our investigation, we opted to use questionnaires as the principal instrument for evaluating children’s UX.
Nevertheless, it became apparent that the scale, which must be carefully designed when aimed at children, particularly those in our age bracket, was restricted in range and thus required more careful assessment to improve its validity and reliability. We therefore decided to dedicate resources to analyzing the scale design as a first action (Research Paper 2) by conducting a cognitive interview study (N = 25) and administering a digital large-scale questionnaire in a real assessment scenario (N = 3,263). After scrutinizing the biases present in this research, we decided to draw on experts’ perspectives to tackle the issues effectively and provide a valuable contribution to integrating knowledge in a multidisciplinary area (Research Paper 3). We derived 10 heuristics on the basis of an evidence-based corpus of 506 guidelines, the evaluation of several domain experts (N = 24), and one heuristic evaluation workshop (N = 2). These heuristics may be used to assess the UX and usability elements of e-assessments among children aged 6 to 12 years. Across our studies, our inquiry found that the methods used to evaluate children's UX needed more careful examination. We advocate for the enhancement and advancement of scales that are both enjoyable and easy for children to use. Furthermore, we suggest that future research efforts incorporate various stakeholder roles for adults and children to promote a paradigm shift by pushing forward participatory design trends. Overall, with this doctoral dissertation, we provide an essential step toward improving the comprehensiveness of current and future approaches for evaluating children's UX in an e-assessment context.

    User Experience challenges for designing and evaluating Computer-Based Assessments for children

    Computer-Based Assessment (CBA), i.e., the use of computers instead of paper and pencil for testing purposes, is now increasingly common, both in education and in the workforce. Along with this trend, several issues regarding the use of computers in assessment can be raised. With respect to CBA, test validity and acceptance appear to be at stake when users interact with a complex assessment system. For instance, individual differences in computer literacy (i.e., the ability to handle technology) might cause different outcomes that are not related to the problem-solving task. Prior investigation has shown that there is a scarcity of research on the User Experience (UX) in the context of CBA, partly due to a focus on adult users. This doctoral thesis aims to adapt and develop new evaluation methods from the Human-Computer Interaction (HCI) field, applied in the context of CBA. The contributions will result in the development of best-practice guidelines for both researchers and practitioners by adopting design and evaluation methods drawn from the field of Child-Computer Interaction (CCI).

    How do pupils experience Technology-Based Assessments? Implications for methodological approaches to measuring the User Experience based on two case studies in France and Luxembourg

    Technology-based assessments (TBAs) are widely used in the education field to examine whether learning goals have been achieved. To design fair and child-friendly TBAs that enable pupils to perform at their best (independent, among other things, of individual differences in computer literacy), we must ensure reliable and valid data collection. By reducing Human-Computer Interaction issues, we provide the best possible assessment conditions and user experience (UX) with the TBA and reduce educational inequalities. Good UX is thus a prerequisite for better data validity. Building on two recent case studies, we investigated how pupils perform TBAs in real-life settings and addressed the context-dependent factors, derived from our observations, that ultimately influence the UX. The first case study was conducted with pupils aged 6 to 7 in three elementary schools in France (n = 61) in collaboration with la direction de l’évaluation, de la prospective et de la performance (DEPP). The second case study was conducted with pupils aged 12 to 16 in four secondary schools in Luxembourg (n = 104) in collaboration with the Luxembourg Centre for Educational Testing (LUCET). This exploratory study focused on collecting various qualitative datasets to identify factors that influence the interaction with the TBA. We also discuss the importance of teachers’ moderation styles and of system-related characteristics, such as the audio protocols of the assessment. This study’s contribution comprises design recommendations and implications for methodological approaches to measuring pupils’ user experience during TBAs.

    Experimenter Effects in Children Using the Smileyometer Scale

    Researchers in social science fields like human-computer interaction face novel challenges concerning the development of methods and tools for evaluating interactive technology with children. One of these challenges relates to the validity and reliability of user experience measurement tools. Scale designs like the Smileyometer have been shown to contain biases, such as children's tendency to rate almost every technology as great. This explorative paper discusses a possible effect of two experimenter styles on the distribution of 6- to 8-year-old pupils' ratings (N = 73) on the Smileyometer. We administered the scale before and after a tablet-based assessment in two schools. Experimenter 1 employed child-directed speech, whereas Experimenter 2 spoke in a monotone. While brilliant (5 out of 5) was the most frequent answer option in all conditions, the mean scores were higher, and their variability lower, across both conditions for Experimenter 2. We discuss a possible experimenter effect in the Smileyometer and its implications for evaluating children’s user experiences.