
    Is My Exercise Partner Similar Enough? Partner Characteristics as a Moderator of the Köhler Effect in Exergames

    Objective: Recent research has shown that the Köhler motivation gain effect (working at a task with a more capable partner when one's performance is indispensable to the group) leads to greater effort in partnered exercise videogame play. The purpose of this article was to examine potential moderators of the Köhler effect by exploring dissimilarities in one's partner's appearance, namely, having an older partner (compared with a same-age partner) and having a heavier partner (compared with a same-weight partner). Subjects and Methods: One hundred fifty-three male and female college students completed a series of plank exercises using the “EyeToy: Kinetic™” for the PlayStation® 2 (Sony, Tokyo, Japan). Participants first completed the exercises individually and, after a rest, completed the same exercises with a virtually present partner. Exercise persistence, subjective effort, self-efficacy beliefs, enjoyment, and intentions to exercise were recorded and analyzed. Results: A significant Köhler motivation gain was observed in all partner conditions (compared with individual controls), such that participants with a partner held the plank exercises longer (P<0.001) and reported higher subjective effort (P<0.01). These results were not moderated by the partner's age or weight, with one exception: males tended to persist longer when paired with an obese partner (P=0.08). Conclusions: These results suggest that differences in age and weight do not attenuate the Köhler effect in exergames and may even strengthen it.
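
    As a rough illustration of how such a moderation analysis could be run (this is not the study's own code), the sketch below fits a linear model with an interaction term; the file name and the columns persistence_gain, partner_type, and sex are hypothetical.

    # Hypothetical sketch: does the partnered-minus-individual gain in plank
    # persistence depend on partner type, participant sex, or their interaction?
    # File and column names are illustrative, not taken from the study.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("plank_sessions.csv")   # one row per participant (hypothetical)

    # Overall Koehler gain across all partner conditions.
    print(df["persistence_gain"].describe())

    # Moderation test: interaction of partner type and sex on the gain.
    model = smf.ols("persistence_gain ~ C(partner_type) * C(sex)", data=df).fit()
    print(model.summary())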

    Open Science in Software Engineering

    Open science describes the movement of making any research artefact available to the public and includes, but is not limited to, open access, open data, and open source. While open science is becoming generally accepted as a norm in other scientific disciplines, in software engineering we are still struggling to adapt open science to the particularities of our discipline, which makes progress in our scientific community cumbersome. In this chapter, we reflect upon the essentials of open science for software engineering, including what open science is, why we should engage in it, and how we should do it. We draw in particular on our experiences as conference chairs implementing open science initiatives and as researchers actively engaging in open science to critically discuss challenges and pitfalls, and to address more advanced topics such as how and under which conditions to share preprints, what infrastructure and licence model to choose, and how to do so within the limitations of different reviewing models, such as double-blind reviewing. Our hope is to help establish a common ground and to contribute to making open science a norm also in software engineering. Comment: Camera-ready version of a chapter published in the book Contemporary Empirical Methods in Software Engineering.

    Collaborative Brain-Computer Interface for Aiding Decision-Making

    We look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach involves the combination of a brain-computer interface with human behavioural responses. To test ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules which weigh the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies for improving decision making.
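
    To make the group-decision idea concrete, here is a minimal sketch (not the authors' exact rules) contrasting a simple majority vote with a vote weighted by a per-observer confidence score, which stands in for the neuro-behavioural feature; all values are made up.

    # Illustrative only: combining individual same/different decisions into a
    # group decision, either by simple majority or by weighting each observer
    # with a hypothetical confidence score (higher = more likely correct).
    import numpy as np

    def majority_vote(decisions):
        """decisions: +1 ('same') or -1 ('different'), one per observer."""
        return int(np.sign(decisions.sum()) or 1)      # ties default to 'same'

    def weighted_vote(decisions, confidences):
        """Weight each observer's decision by an estimated confidence."""
        return int(np.sign((confidences * decisions).sum()) or 1)

    decisions   = np.array([+1, -1, +1, -1, -1])        # five observers
    confidences = np.array([0.9, 0.4, 0.8, 0.3, 0.5])   # hypothetical scores

    print(majority_vote(decisions))                 # -1: 'different' wins 3-2
    print(weighted_vote(decisions, confidences))    # +1: confident observers prevail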

    Automated operative workflow analysis of endoscopic pituitary surgery using machine learning: development and preclinical evaluation (IDEAL stage 0)

    OBJECTIVE: Surgical workflow analysis involves systematically breaking down operations into key phases and steps. Automatic analysis of this workflow has potential uses for surgical training, preoperative planning, and outcome prediction. Recent advances in machine learning (ML) and computer vision have allowed accurate automated workflow analysis of operative videos. In this Idea, Development, Exploration, Assessment, Long-term study (IDEAL) stage 0 study, the authors sought to use Touch Surgery for the development and validation of an ML-powered analysis of phases and steps in the endoscopic transsphenoidal approach (eTSA) for pituitary adenoma resection, a first for neurosurgery. METHODS: The surgical phases and steps of 50 anonymized eTSA operative videos were labeled by expert surgeons. Forty videos were used to train a combined convolutional and recurrent neural network model by Touch Surgery. Ten videos were used for model evaluation (accuracy, F1 score), comparing the phase and step recognition of surgeons to the automatic detection of the ML model. RESULTS: The longest phase was the sellar phase (median 28 minutes), followed by the nasal phase (median 22 minutes) and the closure phase (median 14 minutes). The longest steps were step 5 (tumor identification and excision, median 17 minutes); step 3 (posterior septectomy and removal of sphenoid septations, median 14 minutes); and step 4 (anterior sellar wall removal, median 10 minutes). There were substantial variations within the recorded procedures in terms of video appearances, step duration, and step order, with only 50% of videos containing all 7 steps performed sequentially in numerical order. Despite this, the model was able to output accurate recognition of surgical phases (91% accuracy, 90% F1 score) and steps (76% accuracy, 75% F1 score). CONCLUSIONS: In this IDEAL stage 0 study, ML techniques have been developed to automatically analyze operative videos of eTSA pituitary surgery. This technology has previously been shown to be acceptable to neurosurgical teams and patients. ML-based surgical workflow analysis has numerous potential uses, such as education (e.g., automatic indexing of contemporary operative videos for teaching), improved operative efficiency (e.g., orchestrating the entire surgical team to a common workflow), and improved patient outcomes (e.g., comparison of surgical techniques or early detection of adverse events). Future directions include the real-time integration of Touch Surgery into the live operative environment as an IDEAL stage 1 (first-in-human) study, and further development of the underpinning ML models using larger data sets.
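
    The sketch below shows one plausible shape for a combined convolutional and recurrent video model of this kind in PyTorch; the ResNet-18 backbone, layer sizes, and the 3-phase / 7-step output heads are assumptions for illustration, not the published Touch Surgery model.

    # Illustrative CNN + RNN sketch for per-frame phase and step recognition.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class PhaseStepRecognizer(nn.Module):
        def __init__(self, n_phases=3, n_steps=7, hidden=256):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()               # 512-d frame embeddings
            self.cnn = backbone
            self.rnn = nn.LSTM(512, hidden, batch_first=True)
            self.phase_head = nn.Linear(hidden, n_phases)
            self.step_head = nn.Linear(hidden, n_steps)

        def forward(self, clips):                     # clips: (batch, time, 3, H, W)
            b, t, c, h, w = clips.shape
            feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
            seq, _ = self.rnn(feats)                  # temporal context per frame
            return self.phase_head(seq), self.step_head(seq)

    model = PhaseStepRecognizer()
    phases, steps = model(torch.randn(2, 8, 3, 224, 224))   # dummy clips
    print(phases.shape, steps.shape)                         # (2, 8, 3) and (2, 8, 7)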

    How to Get the Most out of Your Curation Effort

    Large-scale annotation efforts typically involve several experts who may disagree with each other. We propose an approach for modeling disagreements among experts that allows providing each annotation with a confidence value (i.e., the posterior probability that it is correct). Our approach allows computing a certainty level for each individual annotation, given annotator-specific parameters estimated from data. We developed two probabilistic models for performing this analysis, compared these models using computer simulation, and tested each model's actual performance based on a large data set generated by human annotators specifically for this study. We show that even in the worst-case scenario, when all annotators disagree, our approach allows us to significantly increase the probability of choosing the correct annotation. Along with this publication we make publicly available a corpus of 10,000 sentences annotated according to several cardinal dimensions that we introduced in earlier work. Each of the 10,000 sentences was annotated three-fold by a group of eight experts, while a 1,000-sentence subset was further annotated five-fold by five new experts. While the presented data represent a specialized curation task, our modeling approach is general; most data annotation studies could benefit from our methodology.
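
    A stripped-down version of the underlying idea (not the paper's two models) is to combine annotator votes with annotator-specific accuracies via Bayes' rule; the sketch below does this for a binary label with a uniform prior and independent annotators.

    # Simplified sketch: posterior probability that the true label is 1, given
    # each annotator's vote and an estimated per-annotator accuracy.
    import numpy as np

    def label_posterior(votes, accuracies):
        """votes: 0/1 per annotator; accuracies: P(vote == truth) per annotator."""
        votes, acc = np.asarray(votes), np.asarray(accuracies)
        like_1 = np.prod(np.where(votes == 1, acc, 1 - acc))   # P(votes | truth = 1)
        like_0 = np.prod(np.where(votes == 0, acc, 1 - acc))   # P(votes | truth = 0)
        return like_1 / (like_1 + like_0)                      # uniform prior

    # Three annotators disagree; the more reliable pair outweighs the third.
    print(label_posterior([1, 1, 0], [0.9, 0.8, 0.7]))   # ~0.94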

    Big data and data repurposing – using existing data to answer new questions in vascular dementia research

    Introduction: Traditional approaches to clinical research have, as yet, failed to provide effective treatments for vascular dementia (VaD). Novel approaches to the collation and synthesis of data may allow for time- and cost-efficient hypothesis generation and testing, and may have particular utility in helping us understand and treat a complex condition such as VaD. Methods: We present an overview of new uses for existing data to progress VaD research. The overview is the result of consultation with various stakeholders, focused literature review, and learning from the group's experience of successful approaches to data repurposing. In particular, we benefitted from the expert discussion and input of delegates at the 9th International Congress on Vascular Dementia (Ljubljana, 16–18 October 2015). Results: We agreed on key areas of relevance to VaD research: systematic review of existing studies; individual patient-level analyses of existing trials and cohorts; and linking electronic health record data to other datasets. We illustrated each theme with a case study of an existing project that has utilised this approach. Conclusions: There are many opportunities for the VaD research community to make better use of existing data. The volume of potentially available data is increasing, and the opportunities for using these resources to progress the VaD research agenda are exciting. Of course, these approaches come with inherent limitations and biases: bigger datasets are not necessarily better datasets, and maintaining rigour and critical analysis will be key to optimising data use.
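
    As a purely illustrative sketch of the record-linkage theme, the snippet below joins a routinely collected health-record extract to a cohort dataset on a pseudonymised identifier; the file and column names are hypothetical and not drawn from any project described above.

    # Hypothetical example of linking electronic health record data to a cohort.
    import pandas as pd

    ehr = pd.read_csv("ehr_extract.csv")       # e.g., admissions and diagnoses
    cohort = pd.read_csv("vad_cohort.csv")     # e.g., baseline cognition scores

    linked = cohort.merge(ehr, on="pseudo_id", how="left", validate="one_to_many")
    print(linked.head())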

    The wisdom of the crowd playing The Price Is Right

    In The Price Is Right game show, players compete to win a prize by placing bids on its price. We ask whether it is possible to achieve a “wisdom of the crowd” effect by combining the bids to produce an aggregate price estimate that is superior to the estimates of individual players. Using data from the game show, we show that a wisdom of the crowd effect is possible, especially by using models of the decision-making processes involved in bidding. The key insight is that, because of the competitive nature of the game, what people bid is not necessarily the same as what they know. This means better estimates are formed by aggregating latent knowledge than by aggregating observed bids. We use our results to highlight the usefulness of models of cognition and decision-making in studying the wisdom of the crowd, which is often approached only from non-psychological, statistical perspectives.
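
    A toy simulation of the key insight (under illustrative assumptions, not the paper's cognitive model): if players bid a shaded-down version of what they privately believe, averaging raw bids is biased low, whereas averaging beliefs recovered under an assumed shading model tracks the true price more closely.

    # Toy demonstration: aggregating latent knowledge beats aggregating bids.
    import numpy as np

    rng = np.random.default_rng(0)
    true_price, n_players, shade = 1200.0, 4, 0.85   # players bid ~85% of belief

    knowledge = rng.normal(true_price, 150.0, size=n_players)   # latent beliefs
    bids = shade * knowledge                                    # observed bids

    crowd_from_bids = bids.mean()                   # biased low by the shading
    crowd_from_knowledge = (bids / shade).mean()    # invert the assumed shading

    print(round(crowd_from_bids), round(crowd_from_knowledge), true_price)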

    Staged decline of neuronal function in vivo in an animal model of Alzheimer's disease

    The accumulation of amyloid-β in the brain is an essential feature of Alzheimer's disease. However, the impact of amyloid-β accumulation on neuronal dysfunction at the single-cell level in vivo is poorly understood. Here we investigate the progression of amyloid-β load in relation to neuronal dysfunction in the visual system of the APP23×PS45 mouse model of Alzheimer's disease. Using in vivo two-photon calcium imaging in the visual cortex, we demonstrate that a progressive deterioration of neuronal tuning for the orientation of visual stimuli occurs in parallel with the age-dependent increase of the amyloid-β load. Importantly, we find this deterioration only in neurons that are hyperactive during spontaneous activity. This impairment of visual cortical circuit function also correlates with pronounced deficits in visual-pattern discrimination. Together, our results identify distinct stages of decline in sensory cortical performance in vivo as a function of the increasing amyloid-β load.
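
    One common way to quantify how sharply a neuron is tuned for stimulus orientation is an orientation selectivity index computed from trial-averaged responses; the sketch below is a generic illustration and not necessarily the metric used in the study.

    # Generic orientation selectivity index (OSI) from mean responses per orientation.
    import numpy as np

    orientations = np.arange(0, 180, 30)                    # stimulus orientations (deg)
    responses = np.array([0.2, 0.4, 1.0, 0.5, 0.3, 0.2])    # mean dF/F per orientation (made up)

    pref = orientations[np.argmax(responses)]               # preferred orientation
    orth = (pref + 90) % 180                                 # orthogonal orientation
    r_pref = responses[orientations == pref][0]
    r_orth = responses[orientations == orth][0]

    osi = (r_pref - r_orth) / (r_pref + r_orth)   # 1 = sharply tuned, 0 = untuned
    print(pref, round(osi, 2))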

    Adaptive Movement Compensation for In Vivo Imaging of Fast Cellular Dynamics within a Moving Tissue

    In vivo non-linear optical microscopy has been essential to advance our knowledge of how intact biological systems work. It has been particularly enabling for deciphering fast spatiotemporal cellular dynamics in neural networks. The power of the technique stems from its optical sectioning capability, which in turn also limits its application to essentially immobile tissue: only tissue not affected by movement, or in which movement can be physically constrained, can be imaged fast enough to conduct functional studies at high temporal resolution. Here, we show dynamic two-photon Ca2+ imaging in the spinal cord of a living rat on a millisecond time scale, free of motion artifacts, using an optical stabilization system. We describe a fast, non-contact adaptive movement compensation approach, applicable to rough and weakly reflective surfaces, allowing real-time functional imaging from intrinsically moving tissue in live animals. The strategy involves enslaving the position of the microscope objective to that of the tissue surface in real time through optical monitoring and a closed feedback loop. The performance of the system allows for efficient image locking even under random or irregular movements.
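
    The closed-loop principle can be sketched in a few lines: the objective's axial position is repeatedly nudged toward the optically measured position of the tissue surface. The proportional controller and the simulated surface motion below are stand-ins for the real sensor and actuator, not the published system.

    # Conceptual sketch of locking the objective onto a moving tissue surface.
    import numpy as np

    def track_surface(surface, gain=0.6):
        """Return the objective trajectory (microns) following the surface."""
        objective = np.zeros_like(surface)
        for i in range(1, len(surface)):
            error = surface[i - 1] - objective[i - 1]        # optical distance reading
            objective[i] = objective[i - 1] + gain * error   # corrective move
        return objective

    t = np.linspace(0, 2, 400)                         # 2 s sampled at 200 Hz
    surface = 20 * np.sin(2 * np.pi * 3 * t)           # 3 Hz, +/-20 um breathing-like motion
    objective = track_surface(surface)
    print(np.abs(surface - objective).mean())          # mean residual error after locking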

    Minimum follow-up time required for the estimation of statistical cure of cancer patients: verification using data from 42 cancer sites in the SEER database

    BACKGROUND: The commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, using a lognormal distribution of the survival time of those who died of their cancer. We introduce the term threshold year: the follow-up time at which the survival data of patients dying from the specific cancer are almost entirely covered, with less than 2.25% uncovered, which is close enough to cure from that specific cancer. METHODS: Data from the Surveillance, Epidemiology and End Results (SEER) database were tested, using a minimum chi-square method, to determine whether the survival times of cancer patients who died of their disease followed a lognormal distribution. Patients diagnosed from 1973 to 1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years of follow-up was available up to 1999. A total of 49 specific organ sites were tested. The parameters of the lognormal distributions were estimated for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. RESULTS: The cancer-specific survival times of patients who died of their disease were verified to follow lognormal distributions for 42 of the 49 sites. The threshold years validated for statistical cure varied across cancer sites, from 2.6 years for pancreatic cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites matched the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained, because the SEER data do not provide sufficiently long follow-up. CONCLUSION: The present study suggests that one must wait until a certain threshold year before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, highlighting the need for long-term follow-up of these patients.
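
    The threshold-year construction can be sketched directly from the abstract's own assumptions: fit a lognormal to the survival times of patients who died of their cancer and take the point below which all but about 2.25% of those times fall (roughly two standard deviations above the mean on the log scale). The data below are synthetic and purely illustrative.

    # Illustrative threshold-year calculation on synthetic survival times.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    survival_years = rng.lognormal(mean=1.0, sigma=0.8, size=2000)   # deaths only

    shape, loc, scale = stats.lognorm.fit(survival_years, floc=0)
    threshold_year = stats.lognorm.ppf(1 - 0.0225, shape, loc=loc, scale=scale)
    print(round(threshold_year, 1))   # follow-up needed before estimating the cure rate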