
    A new multimodal paradigm for biomarkers longitudinal monitoring: a clinical application to women steroid profiles in urine and blood.

    Most current state-of-the-art strategies to generate individual adaptive reference ranges are designed to monitor one clinical parameter at a time. An innovative methodology is proposed for the simultaneous longitudinal monitoring of multiple biomarkers. The estimation of individual thresholds is performed by applying a Bayesian modeling strategy to a multivariate score integrating several biomarkers (compound concentrations and/or ratios). This multimodal monitoring was applied to data from a clinical study involving 14 female volunteers with normal menstrual cycles receiving testosterone via the transdermal route, in order to test the method's ability to detect testosterone administration. The study samples consisted of urine and blood collected during four weeks of a control phase and four weeks of daily testosterone gel application. When applied to selected urine and serum steroid biomarkers, as well as their combination, integrating multiple biomarkers detected testosterone gel administration with substantially higher sensitivity than the separate follow-up of each biomarker. Among the 175 known positive samples, 38% were identified by the multimodal approach using urine biomarkers, 79% using serum biomarkers and 83% by combining biomarkers from both biological matrices, whereas 10%, 67% and 64%, respectively, were detected using standard unimodal monitoring. The detection of abnormal patterns can thus be improved using multimodal approaches. The combination of urine and serum biomarkers reduced the overall number of false negatives, evidencing a promising complementarity between urine and blood sampling for doping control, as highlighted here for transdermal testosterone preparations. The generation of adaptive, personalized reference ranges in a multimodal setting opens up new opportunities in clinical and anti-doping profiling. The integration of multiple parameters in longitudinal monitoring is expected to provide a more complete evaluation of individual profiles, generating actionable intelligence to further guide sample collection, analysis protocols and decision-making in clinical and anti-doping settings.
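    A minimal sketch of how such individualized, adaptive thresholds for a multivariate biomarker score could be computed is shown below. It assumes a simple normal-normal conjugate Bayesian model; the variable names, weights, prior values and the 99th-percentile flagging rule are illustrative assumptions, not the study's actual model.

```python
# Sketch: adaptive, personalized reference range for a multivariate biomarker score.
# Assumes a normal-normal conjugate model on log-transformed values; all priors,
# weights and thresholds below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0   # assumed population prior for the score
OBS_VAR = 0.25                     # assumed within-subject variance


def multivariate_score(biomarkers: dict[str, float], weights: dict[str, float]) -> float:
    """Combine several biomarker concentrations/ratios into a single score."""
    return sum(weights[name] * np.log(value) for name, value in biomarkers.items())


def update_posterior(prior_mean, prior_var, observations):
    """Conjugate update of an individual's baseline from control-phase samples."""
    n = len(observations)
    post_var = 1.0 / (1.0 / prior_var + n / OBS_VAR)
    post_mean = post_var * (prior_mean / prior_var + np.sum(observations) / OBS_VAR)
    return post_mean, post_var


def is_atypical(score, post_mean, post_var, quantile=0.99):
    """Flag a new sample exceeding the posterior predictive quantile."""
    pred_sd = np.sqrt(post_var + OBS_VAR)
    return score > stats.norm.ppf(quantile, loc=post_mean, scale=pred_sd)


if __name__ == "__main__":
    weights = {"T_over_E": 1.0, "A_over_T": -0.5}        # hypothetical ratios
    control = [multivariate_score({"T_over_E": v, "A_over_T": 2.0}, weights)
               for v in (1.0, 1.1, 0.9, 1.05)]           # control-phase samples
    mean, var = update_posterior(PRIOR_MEAN, PRIOR_VAR, control)
    new = multivariate_score({"T_over_E": 2.4, "A_over_T": 1.1}, weights)
    print("atypical:", is_atypical(new, mean, var))
```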

    Linking recorded data with emotive and adaptive computing in an eHealth environment

    Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it reviews the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them in an overall eHealth context for application and development.

    Transportation mode recognition fusing wearable motion, sound and vision sensors

    We present the first work that investigates the potential of improving the performance of transportation mode recognition by fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers; a sketch of the first scheme follows this abstract. The first scheme makes an ensemble decision with fixed rules including Sum, Product, Majority Voting and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method on the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation. When assessing generalization of the model to unseen data, we show that while performance is reduced - as expected - for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Besides the actual performance increase, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
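    A minimal sketch of the fixed-rule fusion scheme named in the abstract (Sum, Product, Majority Voting, Borda Count) is given below. It combines per-class probability vectors from three hypothetical mono-modal classifiers; the class list matches the SHL activities, while the example probabilities are random placeholders rather than real classifier outputs.

```python
# Sketch: fixed-rule fusion of three mono-modal classifiers (motion, sound, vision).
# The probability vectors are illustrative; only the fusion rules themselves
# correspond to the techniques named in the abstract.
import numpy as np

CLASSES = ["Still", "Walk", "Run", "Bike", "Bus", "Car", "Train", "Subway"]


def fuse(probs: np.ndarray, rule: str) -> int:
    """probs has shape (n_classifiers, n_classes); returns the fused class index."""
    if rule == "sum":
        return int(np.argmax(probs.sum(axis=0)))
    if rule == "product":
        return int(np.argmax(probs.prod(axis=0)))
    if rule == "majority":
        votes = np.argmax(probs, axis=1)                  # each classifier's top class
        return int(np.bincount(votes, minlength=probs.shape[1]).argmax())
    if rule == "borda":
        # Rank classes per classifier (higher probability earns more points).
        ranks = probs.argsort(axis=1).argsort(axis=1)
        return int(np.argmax(ranks.sum(axis=0)))
    raise ValueError(f"unknown rule: {rule}")


if __name__ == "__main__":
    # Rows: motion, sound, vision classifier outputs (softmax-like probabilities).
    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(len(CLASSES)), size=3)
    for rule in ("sum", "product", "majority", "borda"):
        print(rule, "->", CLASSES[fuse(p, rule)])
```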

    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychology are reflected in their behaviour and physiology, hence the recognition of such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherent game-related challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game, and furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.

    Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges

    Today's mobile phones are far from the mere communication devices they were ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. This article provides an overview of the current state of the art in mobile sensing and context prediction, paving the way for full-fledged anticipatory mobile computing. We present a survey of phenomena that mobile phones can infer and predict, and offer a description of machine learning techniques used for such predictions. We then discuss proactive decision making and decision delivery via the user-device feedback loop. Finally, we discuss the challenges and opportunities of anticipatory mobile computing.