18 research outputs found

    The Forest Service: A Study in Public Land Management. By Glen O. Robinson

      This report provides an overview of climate modeling from a mathematical perspective, particularly with respect to the use of partial differential equations. A visit to the Swedish Meteorological and Hydrological Institute's Rossby Centre for climate research in Norrköping, Sweden, is at the foundation of our investigations. An introduction and a brief history section are followed by a description of the Navier-Stokes equations, which are at the heart of climate-related mathematics, as well as a survey of many of the popular approximations and modeling techniques in use by climate researchers today. Subsequently, a boundary value problem based on the one-dimensional compressible Euler equations is discussed from an analytical as well as a numerical point of view, with particular attention to its well-posedness.
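
    For reference, the one-dimensional compressible Euler equations mentioned above are commonly written in conservation form; this is the standard textbook statement, not necessarily the exact formulation used in the report:

        \begin{aligned}
        \partial_t \rho + \partial_x(\rho u) &= 0,\\
        \partial_t(\rho u) + \partial_x(\rho u^2 + p) &= 0,\\
        \partial_t E + \partial_x\big(u(E + p)\big) &= 0,
        \end{aligned}
        \qquad E = \frac{p}{\gamma - 1} + \tfrac{1}{2}\rho u^2,

    where \rho is density, u velocity, p pressure, and \gamma the ratio of specific heats. A boundary value problem supplements these equations with initial and boundary data, and well-posedness asks whether a solution exists, is unique, and depends continuously on that data.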

    Predictive Modeling of Equine Activity Budgets Using a 3D Skeleton Reconstructed from Surveillance Recordings

    In this work, we present a pipeline to reconstruct the 3D pose of a horse from 4 simultaneous surveillance camera recordings. Our environment poses interesting challenges, such as the limited field of view of the cameras and a relatively closed and small space. The pipeline consists of training a 2D markerless pose estimation model to work on every viewpoint, applying it to the videos, and performing triangulation. We present a numerical evaluation of the results (error analysis), and demonstrate the utility of the reconstructed poses in downstream behavioral prediction tasks. Our analysis of the predictive model for equine behavior showed a bias towards pain-induced horses, which aligns with our understanding of how behavior varies between painful and healthy subjects.
    Comment: 3rd Workshop on CV4Animals: Computer Vision for Animal Behavior Tracking and Modeling (in conjunction with CVPR 2023) [POSTER]
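
    As a rough illustration of the triangulation step, the sketch below performs linear (DLT) triangulation of a single 2D keypoint seen from several calibrated views; the projection matrices and keypoint arrays are hypothetical placeholders rather than the pipeline's actual interfaces.

        import numpy as np

        def triangulate_point(proj_mats, points_2d):
            """Linear (DLT) triangulation of one keypoint.

            proj_mats: list of 3x4 camera projection matrices, one per view.
            points_2d: list of (x, y) pixel coordinates of the same keypoint.
            Returns the 3D point in world coordinates.
            """
            rows = []
            for P, (x, y) in zip(proj_mats, points_2d):
                # Each view contributes two linear constraints on the homogeneous 3D point.
                rows.append(x * P[2] - P[0])
                rows.append(y * P[2] - P[1])
            # The solution is the right singular vector with the smallest singular value.
            _, _, vt = np.linalg.svd(np.stack(rows))
            X = vt[-1]
            return X[:3] / X[3]

        # Hypothetical usage with 4 views, repeated for every joint of the skeleton:
        # point_3d = triangulate_point([P0, P1, P2, P3], [kp0, kp1, kp2, kp3])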

    Dynamics are Important for the Recognition of Equine Pain in Video

    A prerequisite to successfully alleviating pain in animals is recognizing it, which is a great challenge in non-verbal species. Furthermore, prey animals such as horses tend to hide their pain. In this study, we propose a deep recurrent two-stream architecture for the task of distinguishing pain from non-pain in videos of horses. Different models are evaluated on a unique dataset showing horses under controlled trials with moderate pain induction, which was presented in earlier work. Sequential models are experimentally compared to single-frame models, showing the importance of the temporal dimension of the data, and are benchmarked against a veterinary expert classification of the data. We additionally perform baseline comparisons with generalized versions of state-of-the-art human pain recognition methods. While machine-learning-based equine pain detection is a novel field, our results surpass veterinary expert performance and outperform pain detection results reported for other, larger non-human species.
    Comment: CVPR 2019: IEEE Conference on Computer Vision and Pattern Recognition
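
    A minimal sketch of a recurrent two-stream video classifier is given below, assuming a per-frame CNN encoder followed by an LSTM for each stream and fusion by concatenation; the backbone, hidden sizes, and the rendering of optical flow as three-channel images are illustrative assumptions, not the exact architecture of the paper.

        import torch
        import torch.nn as nn
        from torchvision import models

        class RecurrentTwoStream(nn.Module):
            """Per-frame CNN features from an RGB and an optical-flow stream, each fed to
            an LSTM; the last hidden states are fused and classified as pain / no pain."""

            def __init__(self, hidden=256, num_classes=2):
                super().__init__()
                # ResNet-18 trunks without the final classification layer (512-d output).
                self.rgb_cnn = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
                self.flow_cnn = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
                self.rgb_lstm = nn.LSTM(512, hidden, batch_first=True)
                self.flow_lstm = nn.LSTM(512, hidden, batch_first=True)
                self.classifier = nn.Linear(2 * hidden, num_classes)

            def encode(self, cnn, lstm, clip):
                b, t = clip.shape[:2]                        # clip: (batch, time, 3, H, W)
                feats = cnn(clip.flatten(0, 1)).flatten(1)   # (batch * time, 512)
                out, _ = lstm(feats.view(b, t, -1))
                return out[:, -1]                            # last hidden state per clip

            def forward(self, rgb_clip, flow_clip):
                fused = torch.cat([self.encode(self.rgb_cnn, self.rgb_lstm, rgb_clip),
                                   self.encode(self.flow_cnn, self.flow_lstm, flow_clip)], dim=1)
                return self.classifier(fused)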

    Sharing Pain: Using Pain Domain Transfer for Video Recognition of Low Grade Orthopedic Pain in Horses

    Orthopedic disorders are common among horses, often leading to euthanasia that could frequently have been avoided with earlier detection. These conditions often create varying degrees of subtle long-term pain. It is challenging to train a visual pain recognition method with video data depicting such pain, since the resulting pain behavior is likewise subtle, sparsely appearing, and variable, making it difficult even for an expert human labeller to provide accurate ground truth for the data. We show that a model trained solely on a dataset of horses with acute experimental pain (where labeling is less ambiguous) can aid recognition of the more subtle displays of orthopedic pain. Moreover, we present a human expert baseline for the problem, as well as an extensive empirical study of various domain transfer methods and of what the pain recognition method, trained on clean experimental pain, detects in the orthopedic dataset. Finally, this is accompanied by a discussion of the challenges posed by real-world animal behavior datasets and of how best practices can be established for similar fine-grained action recognition tasks. Our code is available at https://github.com/sofiabroome/painface-recognition
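
    One common way to realize such domain transfer is to pretrain on the acute (source) pain recordings, where labels are cleaner, and then optionally fine-tune on a small amount of orthopedic (target) data before evaluating there. The sketch below illustrates that generic recipe with caller-supplied model and data loaders; it is not the paper's specific method, for which the linked repository contains the actual implementations.

        import copy
        import torch

        def transfer_source_to_target(model, source_loader, target_loader,
                                      source_epochs=10, target_epochs=0,
                                      lr=1e-4, device="cuda"):
            """Pretrain on source-domain clips (acute experimental pain), optionally
            fine-tune on target-domain clips (orthopedic pain), and return the model.
            `model` and the loaders are placeholders supplied by the caller."""
            model = copy.deepcopy(model).to(device)
            criterion = torch.nn.CrossEntropyLoss()

            def run(loader, epochs):
                optimizer = torch.optim.Adam(model.parameters(), lr=lr)
                model.train()
                for _ in range(epochs):
                    for clips, labels in loader:
                        optimizer.zero_grad()
                        loss = criterion(model(clips.to(device)), labels.to(device))
                        loss.backward()
                        optimizer.step()

            run(source_loader, source_epochs)      # learn pain features on the cleaner labels
            if target_epochs:
                run(target_loader, target_epochs)  # adapt to the subtler orthopedic pain
            return model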

    Going deeper than tracking: a survey of computer-vision based recognition of animal pain and emotions

    Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go ‘deeper’ than tracking and address automated recognition of animals’ internal states, such as emotions and pain, with the aim of improving animal welfare, making this a timely moment for a systematization of the field. This paper provides a comprehensive survey of computer vision-based research on recognition of pain and emotional states in animals, addressing both facial and bodily behavior analysis. We summarize the efforts that have been presented so far within this topic, classifying them across different dimensions; highlight challenges and research gaps; and provide best practice recommendations for advancing the field, along with future directions for research.

    Learning Spatiotemporal Features in Low-Data and Fine-Grained Action Recognition with an Application to Equine Pain Behavior

    Recognition of pain in animals is important because pain compromises animal welfare and can be a manifestation of disease. This is a difficult task for veterinarians and caretakers, partly because horses, being prey animals, display subtle pain behavior, and because they cannot verbalize their pain. An automated video-based system has large potential to improve the consistency and efficiency of pain predictions. Video recording is desirable for ethological studies because it interferes minimally with the animal, in contrast to more invasive measurement techniques such as accelerometers. Moreover, to be able to say something meaningful about animal behavior, the subject needs to be studied for longer than the exposure of single images. In deep learning, we have not come as far for video as we have for single images, and even more questions remain regarding what types of architectures should be used and what these models are actually learning. Collecting video data with controlled moderate pain labels is both laborious and involves real animals, and the amount of such data should therefore be limited. The low-data scenario, in particular, is under-explored in action recognition, in favor of the ongoing exploration of how well large models can learn large datasets.

    The first theme of the thesis is automated recognition of equine pain. Here, we propose a method for end-to-end equine pain recognition from video, finding, in particular, that the temporal modeling ability of the artificial neural network is important to improve the classification. We surpass veterinarian experts on a dataset with horses undergoing well-defined moderate experimental pain induction. Next, we investigate domain transfer to another type of pain in horses: less defined, longer-acting and lower-grade orthopedic pain. We find that a smaller, recurrent video model is more robust to domain shift on a target dataset than a large, pre-trained 3D CNN, while having equal performance on a source dataset. We also discuss challenges with learning video features on real-world datasets.

    Motivated by questions that have arisen within the application area, the second theme of the thesis is empirical properties of deep video models. Here, we study the spatiotemporal features that are learned by deep video models in end-to-end video classification and propose an explainability method as a tool for such investigations. Further, the question of whether different approaches to frame dependency treatment in video models affect their cross-domain generalization ability is explored through empirical study. We also propose new datasets for light-weight temporal modeling and for investigating texture bias within action recognition.

    Objective Recognition of Human Activity from Accelerometer Data with (More or Less) Deep Neural Networks

    This thesis concerns the application of different artificial neural network architectures to the classification of multivariate accelerometer time series data into activity classes such as sitting, lying down, running, or walking. There is a strong correlation between increased health risks in children and their amount of daily screen time (as reported in questionnaires). The dependency is not clearly understood, since no such dependency is reported when the sedentary (idle) time is measured objectively. Consequently, there is an interest from the medical side in being able to perform such objective measurements. To enable large studies, the measurement equipment should ideally be low-cost and non-intrusive. The report investigates how well these movement patterns can be distinguished given a certain measurement setup and a certain network structure, and how well the networks generalise to noisier data. Recurrent neural networks are given extra attention among the different networks, since they are considered well suited for data of a sequential nature. Close to state-of-the-art results (95% weighted F1-score) are obtained for the tasks with 4 and 5 classes, which is notable since a considerably smaller number of sensors is used than in the previously published results. Another contribution of this thesis is a new labeled dataset with 12 activity categories, consisting of around 6 hours of recordings and comparable in number of samples to benchmarking datasets. The data collection was made in collaboration with the Department of Public Health at Karolinska Institutet.

    Within the scope of this thesis, we test how well movement patterns can be distinguished from accelerometer data using the branch of machine learning known as deep learning, in which deep artificial neural networks approximate a mapping from the domain of sensor data to predefined activity categories such as walking, standing, sitting, or lying down. There is interest from the medical side in measuring physical activity objectively, partly because a correlation has been shown between increased health risks in children and their amount of daily screen time. Such measurements should ideally be made with low-cost, non-invasive equipment in order to enable larger studies. Simpler network architectures, as well as re-implementations of state-of-the-art methods in human activity recognition (HAR), are tested both on a benchmark dataset and on data collected in collaboration with the Department of Public Health at Karolinska Institutet, and results are reported for different choices of classification problems and different numbers of dimensions per measurement point. The results obtained (95% F1-score) on the 4- and 5-class problems are comparable to the best previously published results for activity recognition, which is notable since considerably fewer accelerometers were used here than in the referenced studies. In addition to the reported classification results, this work contributes a newly collected and labeled dataset, KTH-KI-AA, comparable in number of samples to established benchmark datasets in the HAR field.
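
    As an illustration of the kind of recurrent classifier described above, the sketch below applies an LSTM to fixed-length windows of tri-axial accelerometer samples; the window length, layer sizes, and number of activity classes are assumptions for illustration, not the thesis's exact configuration.

        import torch
        import torch.nn as nn

        class AccelLSTM(nn.Module):
            """LSTM classifier for windows of multivariate accelerometer data."""

            def __init__(self, n_channels=3, hidden=64, num_classes=5):
                super().__init__()
                self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden, num_classes)

            def forward(self, x):                 # x: (batch, time, channels)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])      # classify from the last time step

        # Hypothetical usage: 2-second windows sampled at 50 Hz from one tri-axial sensor.
        model = AccelLSTM(n_channels=3, num_classes=5)
        windows = torch.randn(8, 100, 3)          # batch of 8 windows
        logits = model(windows)                   # (8, 5) scores over activity classes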