913 research outputs found

    Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications

    In an era when the market segment of the Internet of Things (IoT) tops the chart in various business reports, the field of medicine is expected to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us and acquire and communicate unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare has to overcome many barriers, such as: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) the communication of the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing the interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection. Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area Network, Body Sensor Network, Edge Computing, Fog Computing, Medical Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment, Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in Smart Healthcare (2017), Springer.
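
    The chapter's fog node performs signal conditioning and analytics such as ECG-based Q, R, S detection locally, but the abstract does not spell out the algorithm. Below is a minimal Pan-Tompkins-style sketch of R-peak detection in Python (NumPy/SciPy) of the kind a Raspberry Pi-class fog node could run; the function name, filter band, and thresholds are illustrative assumptions, not the chapter's implementation.

```python
# Hypothetical sketch of ECG R-peak detection (Pan-Tompkins-style) that a
# fog node (e.g., Raspberry Pi) could run locally; thresholds are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Return sample indices of R peaks in a 1-D ECG signal sampled at fs Hz."""
    # 1) Band-pass 5-15 Hz to emphasize the QRS complex.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) Differentiate and square to accentuate steep slopes.
    squared = np.diff(filtered) ** 2
    # 3) Moving-window integration (~150 ms window).
    win = max(1, int(0.150 * fs))
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 4) Peak picking with a data-dependent amplitude threshold and a
    #    200 ms refractory period between beats.
    peaks, _ = find_peaks(
        integrated,
        height=0.5 * np.mean(integrated) + 0.1 * np.max(integrated),
        distance=int(0.200 * fs),
    )
    return peaks

# Example: heart rate from detected peaks.
# fs = 360; peaks = detect_r_peaks(ecg, fs)
# hr_bpm = 60 * fs / np.median(np.diff(peaks))
```

    In this pattern, only derived quantities such as beat intervals or a heart-rate summary need to be forwarded to the cloud, which is the data-reduction benefit the chapter attributes to the fog layer.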

    Role of Spectral Peaks in Autocorrelation Domain for Robust Speech Recognition

    This paper presents a new front-end for robust speech recognition. The front-end focuses on the spectral features of filtered speech signals in the autocorrelation domain, which is well known for its pole-preserving and noise-separation properties. In this paper, we use the autocorrelation domain as an appropriate candidate for robust feature extraction. The proposed method introduces a novel representation of speech for cases where the speech signal is corrupted by additive noise. In this method, the speech features are computed by reducing additive noise effects via an initial filtering stage, followed by the extraction of autocorrelation spectrum peaks. Robust features based on these peaks are derived by assuming that the corrupting noise is stationary in nature. A task of speaker-independent isolated-word recognition is used to demonstrate the efficiency of these robust features. The cases of white noise and colored noises such as factory, babble, and F16 noise are tested. Experimental results show significant improvement in comparison to the results obtained using traditional front-end methods. Further enhancement is obtained by applying cepstral mean normalization (CMN) to the extracted features.
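
    As a rough illustration of the pipeline described above (frame-wise autocorrelation, peak picking in the autocorrelation spectrum, cepstral features, CMN), here is a hedged Python sketch; the frame length, the simple differencing pre-filter, and the number of cepstral coefficients are assumptions, since the paper's exact filtering stage and parameters are not given in the abstract.

```python
# Minimal sketch (assumed parameters) of autocorrelation-domain spectral-peak
# features with cepstral mean normalization; a first-order differencing filter
# stands in for the paper's unspecified initial filtering stage.
import numpy as np
from scipy.signal import find_peaks
from scipy.fftpack import dct

def autocorr_peak_features(signal, fs, frame_ms=25, hop_ms=10, n_ceps=13):
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame] * np.hamming(frame)
        # Filtering stage (assumed): differencing suppresses slowly varying noise.
        x = np.diff(x, prepend=x[0])
        # One-sided autocorrelation of the frame.
        r = np.correlate(x, x, mode="full")[frame - 1:]
        # Autocorrelation spectrum; noise-robust spectral peaks live here.
        spec = np.abs(np.fft.rfft(r, n=2 * frame))
        peaks, _ = find_peaks(spec)
        # Keep only the spectral peaks, zero elsewhere, then go to cepstra.
        peak_spec = np.zeros_like(spec)
        peak_spec[peaks] = spec[peaks]
        log_spec = np.log(peak_spec + 1e-10)
        feats.append(dct(log_spec, norm="ortho")[:n_ceps])
    feats = np.array(feats)
    return feats - feats.mean(axis=0)  # cepstral mean normalization (CMN)
```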

    Who Said That? Towards a Machine-Prediction-Based Approach to Tursiops truncatus Whistle Localization and Attribution in a Reverberant Dolphinarium

    Dolphin communication research is in an active period of growth. Many researchers expect to find significant communicative capacity in dolphins given their known sociality and large and complex brains. Moreover, given dolphins’ known acoustic sensitivity, serving their well-studied echolocation ability, some researchers have speculated that dolphin communication is mediated in large part by a sophisticated “vocal” language. However, evidence supporting this belief is scarce. Among most dolphin species, a particular tonal class of call, termed the whistle, has been identified as socially important. In particular, for the common bottlenose dolphin, Tursiops truncatus – arguably the focal species of most dolphin cognitive and communication research – research has fixated on “signature whistles,” individually distinctive whistles that seem to convey an individual’s identity to conspecifics, can be mimicked, and can be modulated under certain circumstances in ways that may or may not be communicative. Apart from signature whistles, most studies of dolphin calls concern group-based repertoires of whistles and other, pulse-form call types. However, studies of individual repertoires of non-signature whistles, and of the phenomenon of combined signature and non-signature vocal exchanges among dolphins, are conspicuously rare in the literature, tending to be limited by either extreme subject confinement or sparse attributions of vocalizer identity. Nevertheless, such studies constitute a logical prerequisite to an understanding of the communicative potential of whistles. This absence can be explained by a methodological limitation in the way dolphin sounds are recorded. In particular, no established method exists for recording the whistles of an entire social group of dolphins so as to reliably attribute them to their vocalizers. This thesis proposes a dolphinarium-based system for achieving audio recording with whistle attribution, as well as visual behavioral tracking. Towards achieving the proposed system, I present foundational work involving the installation of permanent hydrophone arrays and cameras in a dolphinarium that enforces strict animal safety regulations. Attributing tonal sounds via sound localization – estimation of a sound’s point of origin based on the physical properties of its propagation – in a highly reverberant environment is a notoriously difficult problem, resistant to many conventional signal processing techniques. This thesis provides evidence of this difficulty, as well as a demonstration of a highly effective machine-learning-based solution to the problem. This thesis also provides miscellaneous hardware and pieces of a computational pipeline towards completion of the fully automated proposed system. Once completed, the proposed system will provide an enormous data stream that will lend itself to large-scale studies of individual repertoires of non-signature whistles and of combined signature and non-signature vocal exchanges among an invariant group of socializing dolphins, representing a unique and necessary achievement in dolphin communication research.
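
    For context on why localization in a reverberant pool is hard, the sketch below implements the classical GCC-PHAT time-delay estimate between two hydrophone channels, the kind of conventional cue that, per the thesis, degrades badly under reverberation and motivates the machine-learning approach; the helper name, interpolation factor, and downstream mapping are assumptions for illustration.

```python
# Classical GCC-PHAT time-delay estimation between two hydrophone channels
# (a conventional baseline, not the thesis' learned attribution model).
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Estimate the delay (in seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    # Phase transform: whiten the cross-spectrum before inverse transform.
    cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

# With delays from several hydrophone pairs and known array geometry, a
# least-squares (or, as in the thesis, a learned) model maps delays to a source
# position; in a reverberant dolphinarium the raw correlation peaks are often
# misleading, which is the difficulty the machine-learning solution addresses.
```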

    Reconstructing Human Motion

    This thesis presents methods for reconstructing human motion in a variety of applications and begins with an introduction to the general motion capture hardware and processing pipeline. Then, a data-driven method for the completion of corrupted marker-based motion capture data is presented. The approach is especially suitable for challenging cases, e.g., if complete marker sets of multiple body parts are missing over a long period of time. Using a large motion capture database, and without the need for extensive preprocessing, the method is able to fix missing markers across different actors and motion styles. The approach can be used with incrementally growing prior databases, as the underlying search technique for similar motions scales well to huge databases. The resulting clean motion database can then be used in the next application: a generic data-driven method for recognizing human full-body actions from live motion capture data originating from various sources. The method queries an annotated motion capture database for similar motion segments and is able to handle temporal deviations from the original motion. The approach is online-capable, works in real time, requires virtually no preprocessing, and is shown to work with a variety of feature sets extracted from the input data, including positional data, sparse accelerometer signals, skeletons extracted from depth sensors, and even video data. Evaluation is done by comparing against a frame-based Support Vector Machine approach on a freely available motion database as well as on a database containing Judo referee signal motions. In the last part, a method to indirectly reconstruct the effects of the human heart's pumping motion from video data of the face is applied in the context of epileptic seizures. These episodes usually feature characteristic heart-rate patterns, such as a significant increase at seizure onset as well as seizure-type-dependent drop-offs near the end. The pulse detection method is evaluated for applicability to seizure detection in a multitude of scenarios, ranging from videos recorded in a controlled clinical environment to patient-supplied videos of seizures filmed with smartphones.
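
    The abstract describes data-driven completion by searching a large database for similar motions. The following Python sketch shows one simple instance of that idea, not the thesis' actual method: missing marker coordinates in a frame are borrowed from the nearest complete pose in a prior database, with the search restricted to the markers that are still observed; the array shapes and the KD-tree lookup are assumptions for illustration.

```python
# Illustrative data-driven gap filling: reconstruct missing markers in a frame
# from the nearest complete pose in a prior motion database.
import numpy as np
from scipy.spatial import cKDTree

def fill_missing_markers(frame, database, missing_idx):
    """
    frame:       (M, 3) marker positions with unknown values at missing_idx
    database:    (N, M, 3) array of complete reference poses
    missing_idx: list of marker indices to reconstruct
    """
    observed_idx = [i for i in range(frame.shape[0]) if i not in set(missing_idx)]
    # Build (or reuse) a KD-tree over the observed-marker coordinates only;
    # this search structure is what must scale to huge databases.
    tree = cKDTree(database[:, observed_idx, :].reshape(len(database), -1))
    _, nn = tree.query(frame[observed_idx].reshape(-1))
    completed = frame.copy()
    # Copy the missing coordinates from the best-matching database pose.
    completed[missing_idx] = database[nn, missing_idx, :]
    return completed
```

    In practice, the thesis additionally handles long gaps spanning multiple body parts and differences between actors and motion styles; the sketch only shows the core nearest-pose lookup.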

    Gaseous time projection chambers for rare event detection: Results from the T-REX project. II. Dark matter

    As part of the T-REX project, a number of R&D and prototyping activities have been carried out during the last years to explore the applicability of Micromegas-read gaseous TPCs in rare event searches like double beta decay (DBD), axion research, and low-mass WIMP searches. While in the companion paper we focus on DBD, in this paper we focus on the results regarding the search for dark matter candidates, both axions and WIMPs. Small ultra-low background Micromegas detectors are used to image the x-ray signal expected in axion helioscopes like CAST at CERN. Background levels as low as $0.8\times 10^{-6}$ c keV$^{-1}$ cm$^{-2}$ s$^{-1}$ have already been achieved in CAST, while values down to $\sim 10^{-7}$ c keV$^{-1}$ cm$^{-2}$ s$^{-1}$ have been obtained in a test bench placed underground in the Laboratorio Subterráneo de Canfranc. Prospects to consolidate and further reduce these values down to $\sim 10^{-8}$ c keV$^{-1}$ cm$^{-2}$ s$^{-1}$ will be described. Such detectors, placed at the focal point of x-ray telescopes in the future IAXO experiment, would allow for a $10^{5}$ better signal-to-noise ratio than CAST, and a search for solar axions with $g_{a\gamma}$ down to a few $10^{-12}$ GeV$^{-1}$, well into unexplored axion parameter space. In addition, a scaled-up version of these TPCs, properly shielded and placed underground, can be competitive in the search for low-mass WIMPs. The TREX-DM prototype, with $\sim$0.300 kg of Ar at 10 bar, or alternatively $\sim$0.160 kg of Ne at 10 bar, and an energy threshold well below 1 keV, has been built to test this concept. We will describe the main technical solutions developed, as well as the results from the commissioning phase on surface. The anticipated sensitivity of this technique might reach $\sim 10^{-44}$ cm$^{2}$ for low-mass ($<10$ GeV) WIMPs, well beyond current experimental limits in this mass range. Comment: Published in JCAP. New version with erratum incorporated (new figure 14).

    A Near-to-Far Learning Framework for Terrain Characterization Using an Aerial/Ground-Vehicle Team

    In this thesis, a novel framework for adaptive characterization of untraversed far terrain in a natural outdoor setting is presented. The system learns the association between the visual appearance of different terrain and the proprioceptive characteristics of that terrain in a self-supervised framework. The proprioceptive characteristics of the terrain are acquired by inertial sensors recording measurements over one-second traversals; these measurements are mapped into the frequency domain and then classified into discrete proprioceptive classes by a clustering technique. These labels are then used as training inputs to the adaptive visual classifier. The visual classifier uses images captured by an aerial vehicle scouting ahead of the ground vehicle and extracts local and global descriptors from image patches. An incremental SVM is applied to the images and training sets as they arrive sequentially. The framework proposed in this thesis has been experimentally validated in an outdoor environment. We compare the results of the adaptive approach with an offline a priori classification approach and obtain an average 12% increase in accuracy in outdoor settings. The adaptive classifier gradually learns the association between the proprioceptive characteristics and visual features of new terrain interactions and modifies its decision boundaries.
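
    Below is a hedged sketch of the near-to-far loop under assumed feature and library choices: one-second IMU traversals are mapped to the frequency domain and clustered into proprioceptive terrain classes, and those labels then train an online linear classifier on visual patch descriptors. scikit-learn's SGDClassifier with a hinge loss stands in here for the thesis' incremental SVM, and the clustering and feature details are illustrative, not the thesis' exact configuration.

```python
# Near-to-far self-supervised terrain learning (illustrative stand-in).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier

def proprioceptive_labels(imu_windows, n_classes=3):
    """imu_windows: (N, T) one-second vertical-acceleration traversals."""
    spectra = np.abs(np.fft.rfft(imu_windows, axis=1))      # frequency domain
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(spectra)

# Online linear classifier (hinge loss ~ linear SVM trained incrementally).
visual_clf = SGDClassifier(loss="hinge")

def update_visual_classifier(patch_features, labels, classes):
    """Call once per newly labeled batch of aerial image-patch descriptors."""
    visual_clf.partial_fit(patch_features, labels, classes=classes)

def predict_far_terrain(patch_features):
    """Classify untraversed terrain patches seen only by the aerial vehicle."""
    return visual_clf.predict(patch_features)
```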

    A computational framework for sound segregation in music signals

    Doctoral thesis in Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200