    Detecting changes of transportation-mode by using classification data

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects and research publications illustrate the recent increase of interest in the AC area by the medical community. Tele-home healthcare, ambient intelligence (AmI), ubiquitous monitoring, e-learning and virtual communities with emotionally expressive characters for elderly or impaired people are a few of the areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and projects reviewed in this paper attests to an ambitious and optimistic synergetic future for the field of affective medicine.

    Exploiting multimedia in creating and analysing multimedia Web archives

    The data contained on the web and the social web are inherently multimedia and consist of a mixture of textual, visual and audio modalities. Community memories embodied on the web and social web contain a rich mixture of data from these modalities. In many ways, the web is the greatest resource ever created by humankind. However, due to the dynamic and distributed nature of the web, its content changes, appears and disappears on a daily basis. Web archiving provides a way of capturing snapshots of (parts of) the web for preservation and future analysis. This paper provides an overview of techniques we have developed within the context of the EU-funded ARCOMEM (ARchiving COmmunity MEMories) project to allow multimedia web content to be leveraged during the archival process and for post-archival analysis. Through a set of use cases, we explore several practical applications of multimedia analytics within the realm of web archiving, web archive analysis and multimedia data on the web in general.

    Smart aging: utilisation of machine learning and the Internet of Things for independent living

    Smart aging utilises innovative approaches and technology to improve older adults’ quality of life, increasing their prospects of living independently. One of the major concerns for older adults living independently is serious falls: almost a third of people aged over 65 have a fall each year. Dementia, affecting nearly 9% of the same age group, poses another significant issue that needs to be identified as early as possible. Existing fall detection systems based on wearable sensors generate many false alarms, so a more accurate and secure system is necessary. Furthermore, there is a considerable gap in identifying the onset of cognitive impairment through remote monitoring of self-assisted seniors living in their own residences. Applying biometric security improves older adults’ confidence in using IoT and makes it easier for them to benefit from smart aging. Several publicly available datasets are pre-processed to extract distinctive features that address the shortcomings of fall detection, identify the onset of dementia, and enable biometric security for wearable sensors. These key features are used with novel machine learning algorithms to train models for fall detection, dementia-onset identification, and biometric authentication. Applying a quantitative approach, these models are tested and analysed on a held-out test dataset. The fall detection approach proposed in this work, in multimodal mode, achieves an accuracy of 99% in detecting a fall. Additionally, using 13 selected features, a system for detecting early signs of dementia is developed; it achieves an accuracy of 93% in identifying cognitive decline in older adults, using only selected aspects of their daily activities. Furthermore, the ML-based biometric authentication system uses physiological signals, such as ECG and photoplethysmogram, in a fusion mode to identify and authenticate a person, enhancing their privacy and security in a smart aging environment. The benefits offered by the fall detection system, the early detection of signs of dementia, and the biometric authentication system can improve the quality of life of seniors who prefer to live independently.
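
    The abstract does not detail the feature set or classifiers used; purely as a hedged illustration of the kind of pipeline it describes, the sketch below extracts simple statistical features from windows of wearable accelerometer data and trains an off-the-shelf classifier. The feature choices, window handling, and the use of scikit-learn's RandomForestClassifier are assumptions for illustration, not the thesis's actual method.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def extract_features(window):
            # window: (n_samples, 3) array of accelerometer readings in g.
            mag = np.linalg.norm(window, axis=1)      # signal magnitude vector
            return np.array([
                mag.mean(), mag.std(), mag.max(), mag.min(),
                mag.max() - mag.min(),                # impact spikes widen this range
            ])

        def train_fall_detector(windows, labels):
            # windows: list of (n_samples, 3) arrays; labels: 1 = fall, 0 = no fall.
            X = np.stack([extract_features(w) for w in windows])
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return clf.fit(X, labels)

        # At run time, a new window would be classified with:
        #   clf.predict(extract_features(window)[None, :])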

    Improving activity recognition using a wearable barometric pressure sensor in mobility-impaired stroke patients.

    © 2015 Massé et al. Background: Stroke survivors often suffer from mobility deficits. Current clinical evaluation methods, including questionnaires and motor function tests, cannot provide an objective measure of a patient's mobility in daily life. Physical activity performance in daily life can be assessed using unobtrusive monitoring, for example with a single sensor module fixed on the trunk. Existing approaches based on inertial sensors have limited performance, particularly in detecting transitions between different activities and postures, due to the inherent inter-patient variability of kinematic patterns. To overcome these limitations, one possibility is to use additional information from a barometric pressure (BP) sensor. Methods: Our study aims at integrating BP and inertial sensor data into an activity classifier in order to improve the recognition of activities (sitting, standing, walking, lying) and of the corresponding body elevation (during stair climbing or when taking an elevator). Taking into account the trunk elevation changes during postural transitions (sit-to-stand, stand-to-sit), we devised an event-driven activity classifier based on fuzzy logic. Data were acquired from 12 stroke patients with impaired mobility, using a trunk-worn inertial and BP sensor. Events, including walking and lying periods and potential postural transitions, were first extracted. These events were then fed into a double-stage hierarchical Fuzzy Inference System (H-FIS). The first stage processed the events to infer activities, and the second stage improved activity recognition by applying behavioural constraints. Finally, the body elevation was estimated using a pattern-enhancing algorithm applied to the BP signal. The patients were videotaped for reference. The performance of the algorithm was estimated using the Correct Classification Rate (CCR) and F-score. The BP-based classification approach was benchmarked against a previously published fuzzy-logic classifier (FIS-IMU) and a conventional epoch-based classifier (EPOCH). Results: The algorithm's CCR for posture/activity detection was 90.4 %, a 3.3 % and 5.6 % improvement over FIS-IMU and EPOCH, respectively. The proposed classifier essentially benefits from a better recognition of standing activity (70.3 % versus 61.5 % [FIS-IMU] and 42.5 % [EPOCH]), with 98.2 % CCR for body elevation estimation. Conclusion: The monitoring and recognition of daily activities in mobility-impaired stroke patients can be significantly improved using a trunk-fixed sensor that integrates BP, inertial sensors, and an event-based activity classifier.
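
    The paper's H-FIS classifier is not reproduced here, but the contribution of the barometric channel can be sketched: relative trunk elevation is obtainable from pressure via the standard barometric formula, and a smoothed elevation change over a short window can help separate postural transitions and stair climbing from level walking. The sampling rate, window length, and threshold below are illustrative assumptions, not values from the paper.

        import numpy as np

        def pressure_to_altitude(p_pa, p_ref_pa=101325.0):
            # Standard barometric formula: altitude in metres relative to p_ref_pa.
            return 44330.0 * (1.0 - (p_pa / p_ref_pa) ** (1.0 / 5.255))

        def elevation_change(pressure_pa, fs_hz=25, window_s=2.0):
            # Smoothed elevation delta across a buffer of pressure samples.
            alt = pressure_to_altitude(np.asarray(pressure_pa, dtype=float))
            n = max(1, int(window_s * fs_hz))
            smooth = np.convolve(alt, np.ones(n) / n, mode="valid")
            return smooth[-1] - smooth[0]

        # A delta of roughly +/- 0.3 m within a few seconds could flag a
        # sit-to-stand transition or stair/elevator use (illustrative threshold).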

    Multimodality imaging in vivo for preclinical assessment of tumor-targeted doxorubicin nanoparticles.

    This study presents a new multimodal imaging approach that includes high-frequency ultrasound, fluorescence intensity, confocal, and spectral imaging to improve the preclinical evaluation of new therapeutics in vivo. Here we use this approach to assess in vivo the therapeutic efficacy of the novel chemotherapy construct HerDox during and after treatment. HerDox is composed of doxorubicin non-covalently assembled in a virus-like particle targeted to HER2+ tumor cells, causing tumor cell death at an over 10-fold lower dose compared to the untargeted drug while sparing the heart. Whereas our initial proof-of-principle studies on HerDox used tumor growth/shrinkage rates as a measure of therapeutic efficacy, here we show that multimodal imaging deployed during and after treatment can supplement traditional modes of tumor monitoring to further characterize the particle in tissues of treated mice. Specifically, we show that tumor cell apoptosis elicited by HerDox can be monitored in vivo during treatment using high-frequency ultrasound imaging, while in situ confocal imaging of excised tumors shows that HerDox indeed penetrated tumor tissue and can be detected at the subcellular level, including in the nucleus, via Dox fluorescence. In addition, ratiometric spectral imaging of the same tumor tissue enables quantitative discrimination of HerDox fluorescence from autofluorescence in situ. In contrast to standard approaches of preclinical assessment, this new method provides multiple, complementary readouts that may shorten the time required for the initial evaluation of in vivo efficacy, thus potentially reducing the time and cost of translating new drug molecules into the clinic.
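
    The ratiometric step can be pictured as a per-pixel ratio of emission intensities in two spectral bands: pixels dominated by Dox fluorescence exhibit a characteristic band ratio that autofluorescence does not. The toy function below makes that idea concrete; the band roles and threshold are hypothetical and not taken from the paper.

        import numpy as np

        def ratiometric_mask(band_drug, band_ref, ratio_min=1.5, eps=1e-6):
            # band_drug: intensity image in a band where Dox emits strongly.
            # band_ref:  reference band dominated by tissue autofluorescence.
            # Returns a boolean mask of pixels whose band ratio suggests drug signal.
            num = np.asarray(band_drug, dtype=float)
            den = np.asarray(band_ref, dtype=float) + eps
            return (num / den) > ratio_min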

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers are driving a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems. Human Activity Recognition (HAR) is a critical component of AAL systems which enables applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges of accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented set of sensor data and investigates the recognition of human ADLs at multiple granularities: the coarse- and fine-grained action levels. At the coarse-grained level, semantic relationships between the sensor, the object and the ADLs are deduced, whereas at the fine-grained level, object usage above a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle the imprecise/vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties that arise in HAR from factors such as technological failure, object malfunction and human error; existing uncertainty theories and approaches are analysed and, based on the findings, a probabilistic-ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL systems and proposes a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios, respectively; however, the average time taken to segment each sensor event was high, at 3971 ms and 62183 ms for the single- and mixed-activity scenarios. The second study, detecting user actions at the fine-grained level, was evaluated with 30 and 153 fuzzy rules for two fine-grained movements on a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated, through a case study, the extension of single-user AR to multi-user AR by combining discriminative sensors (RFID tags and fingerprint sensors) to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system. Future research towards adopting fog/edge computing paradigms, for higher availability, reduced network traffic/energy and cost, and a decentralised system, is also discussed. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment/ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently accessed through an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
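
    The thesis's semantic, preference-aware segmentation is not reproduced here; as a deliberately naive point of comparison, the sketch below splits a timestamped sensor-event stream wherever the inter-event gap exceeds a threshold. The event fields and the 30-second gap are illustrative assumptions only.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class SensorEvent:
            timestamp: float   # seconds since epoch
            sensor_id: str
            value: str

        def segment_by_gap(events: List[SensorEvent],
                           max_gap_s: float = 30.0) -> List[List[SensorEvent]]:
            # Start a new segment whenever consecutive events are separated
            # by more than max_gap_s seconds.
            segments, current = [], []
            for ev in sorted(events, key=lambda e: e.timestamp):
                if current and ev.timestamp - current[-1].timestamp > max_gap_s:
                    segments.append(current)
                    current = []
                current.append(ev)
            if current:
                segments.append(current)
            return segments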