
    An ethogram of biter and bitten pigs during an ear biting event: first step in the development of a Precision Livestock Farming tool

    Peer-reviewed
    Pigs reared in intensive farming systems are more likely to develop damaging behaviours such as tail and ear biting (EB) because of difficulty in coping with the environment and an inability to perform natural behaviours. However, far less is known about the aetiology of EB behaviour than of tail biting behaviour. Applying new intervention strategies may be key to dealing with this welfare issue. The discipline of Precision Livestock Farming (PLF) allows farmers to improve their management practices through the use of advanced technologies, and exploring the behaviour itself is the first step towards identifying reliable indicators on which such a tool could be built. The aim of this study was therefore to develop an ethogram of biter and bitten pigs during an EB event and to find potential features for a tool that can monitor EB events automatically and continuously. The observational study was carried out on a 300-sow farrow-to-finish commercial farm in Ireland (Co. Cork) during the first and second weaner stages. Three pens per stage holding c. 35 pigs each (six pens in total) were video recorded, and 2.2 h of video per pen were selected for analysis. Two ethograms were developed, one for the biter and one for the bitten pig, to describe their behavioural repertoires. Behaviours were audio-visually labelled using ELAN, and the resulting labels were processed in MATLAB® 2014. From the video data, the duration and frequency of the observed behavioural interactions were quantified. Six behaviours were identified for the biter pig across a total of 710 observed interactions: chewing (215 cases), quick bite (138 cases), pulling ear (97 cases), shaking head (11 cases), gentle manipulation (129 cases) and attempt to EB (93 cases). When the observed behaviour was uncertain, it was classified as doubt (27 cases).
Seven behaviours were identified for the bitten pig in response to the biter's behaviour, divided into four non-vocal behaviours, described as biting (40 cases), head knocking (209 cases), shaking/moving head (225 cases) or moving away (156 cases), and three vocal behaviours, identified as scream (74 cases), grunt (166 cases) and squeal (125 cases). Vocal behaviours were classified using a verified set of features, yielding a precision of 83.2%. A significant difference in duration was found between all behaviours (P < 0.001), except between gentle manipulation and chewing, where no difference in duration was found (P = 0.338). The results illustrate the heterogeneity of EB behaviours, which may be used to better understand this poorly studied damaging behaviour. They also indicate the potential for a PLF tool that automatically and continuously monitors such behaviour on farm by combining the behaviour of the biter pig with the bitten pig's responses.
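The precision figure quoted for the vocal-behaviour classification can be made concrete with a small sketch. The feature set and classifier used in the study are not detailed in the abstract, so the call labels below are purely hypothetical; the sketch only shows how per-class precision is computed from true and predicted labels:

```python
from collections import defaultdict

def per_class_precision(true_labels, predicted_labels):
    """Precision per class: of all calls predicted as class c,
    the fraction that truly belong to class c."""
    predicted_counts = defaultdict(int)  # times each class was predicted
    correct_counts = defaultdict(int)    # times that prediction was right
    for truth, pred in zip(true_labels, predicted_labels):
        predicted_counts[pred] += 1
        if truth == pred:
            correct_counts[pred] += 1
    return {c: correct_counts[c] / predicted_counts[c]
            for c in predicted_counts}

# Hypothetical labels for six pig vocalisations (illustration only)
truth = ["scream", "grunt", "squeal", "grunt", "scream", "grunt"]
pred  = ["scream", "grunt", "grunt",  "grunt", "squeal", "grunt"]
print(per_class_precision(truth, pred))
# grunt was predicted 4 times and correct 3 times -> precision 0.75
```

The overall precision reported in the abstract would then be an average (or pooled version) of such per-class figures.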

    Advanced Signal Processing in Wearable Sensors for Health Monitoring

    Smart wearable devices in miniature form are becoming increasingly widely available, typically as smart watches and other connected devices. Consequently, devices that assist in measurements such as electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), blood pressure (BP), photoplethysmography (PPG), heart rhythm, respiration rate, apnoea and motion detection are becoming more available and play a significant role in healthcare monitoring. The industry is placing great emphasis on making these technologies available on smart devices such as phones and watches. Such measurements are clinically and scientifically useful for real-time monitoring, long-term care, diagnosis and therapy. However, a persistent issue is that recorded data are usually noisy, contain many artefacts and are affected by external factors such as movement and physical condition. To obtain accurate and meaningful indicators, the signal has to be processed and conditioned so that the measurements are free from noise and disturbances. In this context, many researchers have used recent technological advances in wearable sensors and signal processing to develop smart and accurate wearable devices for clinical applications. The processing and analysis of physiological signals is a key issue for these smart wearable devices, and ongoing work in this field includes research on filtering, quality checking, signal transformation and decomposition, feature extraction and, most recently, machine-learning-based methods.
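As a toy illustration of the signal-conditioning step described above (a generic smoothing filter, not the pipeline of any particular device), a centred moving average can suppress high-frequency sensor noise in a sampled physiological signal:

```python
import math
import random

def moving_average(signal, window):
    """Smooth a 1-D signal with a centred moving average;
    edges use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def rmse(a, b):
    """Root-mean-square error between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Synthetic "PPG-like" signal: slow pulse wave plus additive sensor noise
# (sampling rate and noise level are assumptions for the demo)
random.seed(0)
fs = 100  # Hz
clean = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(300)]
noisy = [c + random.gauss(0, 0.3) for c in clean]
smoothed = moving_average(noisy, window=9)

print(rmse(noisy, clean), rmse(smoothed, clean))
```

The smoothed trace sits much closer to the underlying pulse wave than the raw one; real devices use more sophisticated filters, but the goal is the same.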

    Embedding a Grid of Load Cells into a Dining Table for Automatic Monitoring and Detection of Eating Events

    This dissertation describes a “smart dining table” that can detect and measure consumption events. This work is motivated by the growing problem of obesity, which is a global problem and an epidemic in the United States and Europe. Chapter 1 gives a background on the economic burden of obesity and its comorbidities. For the assessment of obesity, we briefly describe the classic dietary assessment tools and discuss their drawbacks and the necessity of more objective, accurate, low-cost, and in-situ automatic dietary assessment tools. We briefly explain the various technologies used for automatic dietary assessment, such as acoustic-, motion-, or image-based systems. This is followed by a literature review of prior work on detecting the weights and locations of objects sitting on a table surface. Finally, we state the novelty of this work. In chapter 2, we describe the construction of a table that uses an embedded grid of load cells to sense the weights and positions of objects. The main challenge is aligning the tops of adjacent load cells to within a tolerance of a few micrometres, which we accomplish using a novel inversion process during construction. Experimental tests found that object weights distributed across 4 to 16 load cells could be measured with 99.97±0.1% accuracy. Testing the surface for flatness at 58 points showed approximately 4.2±0.5 µm deviation among adjacent 2×2 grids of tiles. Through empirical measurements we determined that the table has a signal-to-noise ratio of 40.2 when detecting the smallest expected intake amount (0.5 g) from a normal meal (approximate total weight 560 g), indicating that a tiny amount of intake can be detected well above the noise level of the sensors. In chapter 3, we describe a pilot experiment that tests the capability of the table to monitor eating. Eleven human subjects were video recorded for ground truth while eating a meal on the table using a plate, bowl, and cup.
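The grid-of-load-cells idea can be sketched as follows, assuming each cell reports a weight in grams on a regular grid (the layout and numbers here are invented for illustration, not the table's actual geometry): the total object weight is the sum of the readings, and the object's position can be estimated as the weight-weighted centroid.

```python
def total_weight(grid):
    """Sum of all load-cell readings (grams)."""
    return sum(sum(row) for row in grid)

def centroid(grid):
    """Weight-weighted centre of mass in (row, col) cell units."""
    total = total_weight(grid)
    row_c = sum(i * sum(row) for i, row in enumerate(grid)) / total
    col_c = sum(j * grid[i][j]
                for i in range(len(grid))
                for j in range(len(grid[i]))) / total
    return row_c, col_c

# A 120 g object resting over a 2x2 patch of a hypothetical 4x4 grid
grid = [[0,  0,  0, 0],
        [0, 30, 30, 0],
        [0, 30, 30, 0],
        [0,  0,  0, 0]]
print(total_weight(grid), centroid(grid))  # 120 (1.5, 1.5)
```

Tracking such per-object weights over time is what allows individual bites and drinks to be detected as discrete weight changes.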
To detect consumption events, we describe an algorithm that analyzes the grid of weight measurements in the format of an image. The algorithm segments the image into multiple objects, tracks them over time, and uses a set of rules to detect and measure individual bites of food and drinks of liquid. On average, each meal consisted of 62 consumption events. Event detection accuracy was very high, with an F1-score per subject of 0.91 to 1.0, and an F1-score per container of 0.97 for the plate and bowl and 0.99 for the cup. The experiment demonstrates that our device is capable of detecting and measuring individual consumption events during a meal. Chapter 4 compares the capability of our new tool to monitor eating against previous works that have also monitored table surfaces. We completed a literature search and identified three state-of-the-art methods for comparison. The main limitation of all previous methods is that they used only one load cell for monitoring, so only the total surface weight can be analyzed. To simulate their operation, the weights of our grid of load cells were summed, reducing the 2D data to 1D. Data were prepared according to the requirements of each method. Four metrics were used for the comparison: precision, recall, accuracy, and F1-score. Our method scored the highest in recall, accuracy, and F1-score: compared to all other methods, it scored 13-21% higher for recall, 8-28% higher for accuracy, and 10-18% higher for F1-score. For precision, our method scored 97%, just 1% lower than the highest precision of 98%. In summary, this dissertation describes novel hardware, a pilot experiment, and a comparison against current state-of-the-art tools. We also believe our methods could be used to build a similar surface for other applications besides monitoring consumption.
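The four comparison metrics follow their standard definitions from confusion-matrix counts; as a quick reference (the counts below are invented for illustration, not the study's data):

```python
def detection_metrics(tp, fp, fn, tn):
    """Standard event-detection metrics from confusion-matrix counts:
    true/false positives and false/true negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Hypothetical counts for one meal's detected consumption events
p, r, a, f = detection_metrics(tp=58, fp=2, fn=4, tn=36)
print(p, r, a, f)
```

F1 is the harmonic mean of precision and recall, which is why it is a common single-number summary when both false alarms and misses matter.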

    Epileptic multi-seizure type classification using electroencephalogram signals from the Temple University Hospital Seizure Corpus: A review

    Epilepsy is one of the most significant neurological diseases, affecting about 1% of the world's population. Seizure detection and classification are difficult tasks and remain ongoing challenges in biomedical signal processing to enhance medical diagnosis. This paper presents and highlights the unique frequency and amplitude information found within multiple seizure types, including their morphologies, to aid the development of future seizure classification algorithms. Whilst many published works have reported on seizure detection using electroencephalogram (EEG) signals, there has yet to be an exhaustive review detailing multi-seizure type classification using EEG. Therefore, this paper also includes a detailed review of multi-seizure type classification performance based on the Temple University Hospital Seizure Corpus (TUSZ) dataset, covering both focal versus generalised classification and multi-seizure type classification. Deep learning techniques have a higher overall average performance for focal and generalised classification than machine learning techniques, whereas hybrid deep learning approaches have the highest overall average performance for multi-seizure type classification. Finally, this paper highlights the limitations of the TUSZ dataset and suggests future work, including the curation of a standardised training and testing dataset from the TUSZ, which would allow a proper comparison of classification methods and spur advancement in the field.

    Wearable in-ear pulse oximetry: theory and applications

    Wearable health technology, most commonly in the form of the smart watch, is employed by millions of users worldwide. These devices generally exploit photoplethysmography (PPG), the non-invasive use of light to measure blood volume, to track physiological metrics such as pulse and respiration. Moreover, PPG is commonly used in hospitals in the form of pulse oximetry, which measures light absorbance by the blood at different wavelengths to estimate blood oxygen saturation (SpO2). This thesis aims to demonstrate that, despite its widespread usage over many decades, this sensor still possesses a wealth of untapped value. Through a combination of advanced signal processing and harnessing the ear as a location for wearable sensing, this thesis introduces several novel, high-impact applications of in-ear pulse oximetry and photoplethysmography. The aims of this thesis are accomplished through a three-pronged approach: rapid detection of hypoxia, tracking of cognitive workload and fatigue, and detection of respiratory disease. By means of simultaneous recording of in-ear and finger pulse oximetry at rest and during breath-hold tests, it was found that in-ear SpO2 responds on average 12.4 seconds faster than finger SpO2. This is likely due in part to the ear's close proximity to the brain, making it a priority for oxygenation and thus making wearable in-ear SpO2 a good proxy for core blood oxygen. Next, the low latency of in-ear SpO2 was further exploited in the novel application of classifying cognitive workload. It was found that in-ear pulse oximetry was able to robustly detect tiny decreases in blood oxygen during increased cognitive workload, likely caused by increased brain metabolism. This thesis demonstrates that in-ear SpO2 can be used to accurately distinguish between different levels of an N-back memory task, representing different levels of mental effort.
This concept was further validated through its application to gaming and then extended to the detection of driver fatigue. It was found that features derived from SpO2 and PPG were predictive of absolute steering wheel angle, which acts as a proxy for fatigue. The strength of in-ear PPG for the monitoring of respiration was investigated with respect to the finger, with the conclusion that in-ear PPG exhibits far stronger respiration-induced intensity variations and pulse amplitude variations than the finger. All three respiratory modes were harnessed through multivariate empirical mode decomposition (MEMD) to produce spirometry-like respiratory waveforms from PPG. It was discovered that these PPG-derived respiratory waveforms can be used to detect obstruction to breathing, both through a novel apparatus for the simulation of breathing disorders and through the classification of chronic obstructive pulmonary disease (COPD) in the real world. This thesis establishes in-ear pulse oximetry as a wearable technology with the potential for immense societal impact, with applications from the classification of cognitive workload and the prediction of driver fatigue through to the detection of COPD. The experiments and analysis in this thesis conclusively demonstrate that the widely used pulse oximetry and photoplethysmography sensors possess a wealth of untapped value, in essence teaching the old PPG sensor new tricks.
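The pulse-oximetry principle underlying this work can be sketched as follows. SpO2 is commonly estimated from the "ratio of ratios" of the pulsatile (AC) and baseline (DC) components at red and infrared wavelengths; the linear calibration below (SpO2 ≈ 110 − 25R) is a frequently quoted textbook approximation, not the calibration used in this thesis:

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """R = (AC_red/DC_red) / (AC_ir/DC_ir): the normalised pulsatile
    absorbance at red light relative to infrared."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(r):
    """Approximate empirical calibration; real oximeters use
    device-specific calibration curves."""
    return 110.0 - 25.0 * r

# Illustrative AC/DC amplitudes (made-up numbers)
r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
print(spo2_estimate(r))  # R = 0.5 -> 97.5 (% SpO2)
```

Because oxygenated and deoxygenated haemoglobin absorb red and infrared light differently, R rises as oxygen saturation falls, which is what the linear mapping captures.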

    EventNet: Detecting Events in EEG

    Neurologists often look for various "events of interest" when analyzing EEG. To support them in this task, various machine-learning-based algorithms have been developed. Most of these algorithms treat the problem as classification, independently processing signal segments and thereby ignoring temporal dependencies inherent to events of varying duration. At inference time, the predicted labels for each segment then have to be post-processed to detect the actual events. We propose an end-to-end event detection approach (EventNet), based on deep learning, that works directly with events as learning targets, stepping away from ad-hoc post-processing schemes to turn model outputs into events. We compare EventNet with a state-of-the-art approach for artefact and epileptic seizure detection, two event types with highly variable durations. EventNet shows improved performance in detecting both event types. These results show the power of treating events as direct learning targets instead of using ad-hoc post-processing to obtain them. Our event detection framework can easily be extended to other event detection problems in signal processing, since the deep learning backbone does not depend on any task-specific features. Comment: This work has been submitted to the IEEE for possible publication.
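The ad-hoc post-processing that EventNet is designed to avoid can be illustrated with a minimal sketch: per-segment binary predictions are merged into (start, end) events by grouping consecutive positive segments (the fixed segment length is an assumed parameter, not a detail from the paper):

```python
def segments_to_events(labels, segment_seconds=1.0):
    """Merge runs of positive per-segment labels into
    (start_time, end_time) events, in seconds."""
    events = []
    start = None
    for i, lab in enumerate(labels):
        if lab and start is None:          # event begins
            start = i * segment_seconds
        elif not lab and start is not None:  # event ends
            events.append((start, i * segment_seconds))
            start = None
    if start is not None:                  # event runs to the end
        events.append((start, len(labels) * segment_seconds))
    return events

print(segments_to_events([0, 1, 1, 0, 1, 0]))  # [(1.0, 3.0), (4.0, 5.0)]
```

Such rules (and the thresholds and gap-merging heuristics they usually accumulate) are exactly what an end-to-end event-level model removes.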

    Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena

    Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of research exploring what can be detected with a small, wearable form factor. As a sensing location, the ears are less susceptible to motion artifacts and lie in close proximity to a number of important anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas: (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables offer as a ubiquitous, general-purpose sensing platform.

    Detection and Prediction of Epileptic Seizures


    Towards understanding the role of central processing in release from masking

    People with normal hearing can listen to a desired target sound while filtering out unwanted sounds in the background. However, most patients with hearing impairment struggle in noisy environments, a perceptual deficit which current hearing aids and cochlear implants cannot resolve. Even though peripheral dysfunction of the ears undoubtedly contributes to this deficit, mounting evidence has implicated central processing in the inability to detect sounds in background noise. It is therefore essential to better understand the underlying neural mechanisms by which target sounds are dissociated from competing maskers. This research focuses on two phenomena that help suppress background sounds: 1) dip-listening and 2) directional hearing. When background noise fluctuates slowly over time, both humans and animals can listen in the dips of the noise envelope to detect a target sound, a phenomenon referred to as dip-listening. Detection of the target sound is facilitated by a central neuronal mechanism called envelope locking suppression. At both positive and negative signal-to-noise ratios (SNRs), the presence of target energy can suppress the strength with which neurons in auditory cortex track background sound, at least in anesthetized animals. However, in humans and animals, most of the perceptual advantage gained by listening in the dips of fluctuating noise emerges when the target is softer than the background sound. This raises the possibility that SNR shapes the reliance on different processing strategies, a hypothesis tested here in awake behaving animals. Neural activity of Mongolian gerbils is measured by chronic implantation of silicon probes in the core auditory cortex. Using appetitive conditioning, gerbils detect target tones in the presence of a temporally fluctuating amplitude-modulated background noise, called the masker. Using rate- vs.
timing-based decoding strategies, analysis of single-unit activity shows that both mechanisms can be used for detecting tones at positive SNRs. However, only temporal decoding provides an SNR-invariant readout strategy that is viable at both positive and negative SNRs. In addition to dip-listening, spatial cues can facilitate the dissociation of target sounds from background noise. Specifically, an important cue for computing sound direction is the difference in arrival time of acoustic energy reaching each ear, called the interaural time difference (ITD). ITDs allow localization of low-frequency sounds from left to right inside the listener's head, also called sound lateralization. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here, two prevalent theories of sound localization are observed to make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. In this research, the computation of sound location from ITDs is tested through behavioral experiments on sound lateralization. Four groups of normally hearing listeners lateralize sounds based on ITDs as a function of sound intensity, exposure hemisphere, and stimulus history. Stimuli consist of low-frequency band-limited white noise. Statistical analysis, which partials out overall differences between listeners, is inconsistent with the place-coding scheme of sound localization and supports the hypothesis that human sound localization is instead encoded through a population rate code.