
    SniffyArt: The Dataset of Smelling Persons

    Smell gestures play a crucial role in the investigation of past smells in the visual arts, yet their automated recognition poses significant challenges. This paper introduces the SniffyArt dataset, consisting of 1941 individuals represented in 441 historical artworks. Each person is annotated with a tightly fitting bounding box, 17 pose keypoints, and a gesture label. By integrating these annotations, the dataset enables the development of hybrid classification approaches for smell gesture recognition. The dataset's high-quality human pose estimation keypoints are achieved by merging five separate sets of keypoint annotations per person. The paper also presents a baseline analysis evaluating the performance of representative algorithms for detection, keypoint estimation, and classification tasks, showcasing the potential of combining keypoint estimation with smell gesture classification. The SniffyArt dataset lays a solid foundation for future research and the exploration of multi-task approaches leveraging pose keypoints and person boxes to advance human gesture and olfactory dimension analysis in historical artworks.
    Comment: 10 pages, 8 figures
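The abstract states that the high-quality keypoints are obtained by merging five separate annotation sets per person, but does not specify the merging rule. The sketch below uses a per-keypoint median as one plausible aggregation; the function name, array shapes, and jittered data are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def merge_keypoint_annotations(annotations):
    """Merge several keypoint annotation sets for one person.

    annotations: array of shape (n_annotators, 17, 2) holding (x, y)
    pixel coordinates for 17 COCO-style pose keypoints. Returns the
    per-keypoint median over annotators, shape (17, 2). The median is
    a hypothetical choice; it is robust to a single outlier annotator.
    """
    annotations = np.asarray(annotations, dtype=float)
    return np.median(annotations, axis=0)

# Five hypothetical annotation sets for one person, slightly jittered
# around a common ground-truth pose.
rng = np.random.default_rng(0)
base = rng.uniform(0, 500, size=(17, 2))
annots = base + rng.normal(0, 2, size=(5, 17, 2))
merged = merge_keypoint_annotations(annots)
print(merged.shape)  # (17, 2)
```

With identical inputs from all annotators, the merge returns those inputs unchanged, which makes the rule easy to sanity-check.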

    InfoStrom: Learning information infrastructures for crisis management in case of medium to large electrical power breakdowns

    One of the most important infrastructures in modern industrialized societies is the electricity network. Due to its fundamental role in many aspects of everyday life, power infrastructures create a strong dependence between power suppliers and customers. Customers take the infrastructure for granted; it remains largely invisible to them as long as it works, but in the case of a breakdown in power supply they become aware of their dependence on electricity. They then join professional actors in the recovery and coping work surrounding the breakdown: maintenance workers of the power provider, police, firefighters, the Red Cross, etc. These institutions are professionalized for dealing with such situations, but the people affected by a power outage also need to be considered as actors.

    Spotting Human Activities and Gestures in Continuous Data Streams

    In this thesis we use algorithms on data from body-worn sensors to detect physical gestures and activities. While gesture recognition is a promising and upcoming alternative for interacting explicitly with computers in a mobile setting, the user’s activity is considered an important part of his or her context, which can help computer applications adapt automatically to the user’s situation. Numerous context-aware applications can be found, ranging from industrial to medical to educational domains. A particular emphasis of this thesis is the recognition of short activities or quick actions, which often occur amid large quantities of irrelevant data. Embedded in different application scenarios, we focus on four challenges in gesture and activity recognition: multiple types and diversity of activities, high variance in performance and user independence, continuous data streams with a large background class, and activity recognition on different levels. We make several contributions to overcome these challenges. We start with a method for activity recognition that uses short, fixed positions of the wrist to extract activities from a continuous data stream: postures are used to recognize short activities in continuous recordings. In order to evaluate the distinctiveness of gestures in continuous recordings of daily life, we present a new approach for the important and challenging problem of user-independent gesture recognition. Beyond the recognition aspects, we pay particular attention to the social acceptability of the evaluated gestures, performing user interviews to find adequate control gestures for five scenarios. Activity recognition is typically challenged by spotting a large number of activities amid irrelevant data in a user-independent manner. We present a model-based approach using joint boosting to enable the automatic discovery of important high-level primitives derived from the human body model.
Subsequently, we systematically analyze the benefit of body-model-derived primitives in different sensor settings for multi-activity recognition. Furthermore, we propose a new body-model-based approach using accelerometer sensors, thereby reducing the sensor requirements significantly. The proposed methods recognize ‘atomic’ activities such as drilling, handshaking, or walking, but they do not scale well to high-level tasks composed of multiple activities: a prohibitive amount of training data would be required to cover the high variability and the large number of possible ways to execute such tasks. To this end, an approach considering temporal constraints encoded in UML diagrams enables reliable recognition of composed activities or high-level tasks without requiring large amounts of training data. We show the validity of the approach by introducing a realistic and challenging data set.
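Spotting short activities amid large amounts of irrelevant background data is the recurring theme of this abstract. As a hedged illustration only (not the thesis's actual method), the sketch below rejects the background class with a simple variance threshold over sliding windows; the window length, step, threshold, and synthetic signal are arbitrary placeholder values.

```python
import numpy as np

def spot_active_windows(signal, win=50, step=25, var_thresh=0.5):
    """Flag sliding windows whose variance exceeds a threshold.

    A crude background (NULL-class) rejection stage: low-variance
    windows are treated as irrelevant data, high-variance windows are
    candidate gesture segments to pass on to a classifier. All
    parameters are illustrative, not taken from the thesis.
    """
    candidates = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        if np.var(seg) > var_thresh:
            candidates.append((start, start + win))
    return candidates

# Synthetic 1-D sensor stream: flat background with one burst of
# activity between samples 200 and 260.
stream = np.zeros(500)
stream[200:260] = np.sin(np.linspace(0, 20, 60)) * 3
print(spot_active_windows(stream))
```

In a real pipeline this stage would run on multi-axis accelerometer data and only the flagged segments would be forwarded to the (more expensive) gesture classifier.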

    THE FISH AND WILDLIFE SERVICE-EXTENSION CONNECTION

    We go back a long, long way! When the U.S. Fish and Wildlife Service (FWS) first established an office to cooperate directly with the Extension System's fish and wildlife component, I was a mere lad of 49. I became involved 4 years later, still a very young man! Now that the program and I have matured, it's a good time to reflect on past accomplishments and associations and to look ahead to a continuing productive relationship.

    Comment on ‘‘Valence-bond theory and the evaluation of electronic energy matrix elements between nonorthogonal Slater determinants’’

    In a recent article [Phys. Rev. A 31, 2107 (1985)] Leasure and Balint-Kurti claim to give a more efficient algorithm than any previously available for determining matrix elements of the Hamiltonian in valence-bond calculations. In fact, an algorithm that differs in no significant way and has the same efficiency has been available since 1972 and has been applied to valence-bond calculations.
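For readers unfamiliar with the machinery at issue: matrix elements between nonorthogonal Slater determinants are evaluated via the Löwdin rules, the simplest of which states that the overlap of two determinants equals the determinant of their mutual orbital overlap matrix. The sketch below illustrates only this overlap rule, not either paper's full Hamiltonian algorithm; the basis size and the randomly generated orbitals are arbitrary.

```python
import numpy as np

def determinant_overlap(A, B):
    """Overlap <Phi_A|Phi_B> of two Slater determinants.

    A, B: (n_basis, n_electrons) matrices whose columns are the
    occupied (real) spin-orbitals expanded in a common orthonormal
    basis. By the Lowdin overlap rule this equals det(A^T B), the
    determinant of the mutual orbital overlap matrix.
    """
    return np.linalg.det(A.T @ B)

# Two determinants built from random orthonormalized orbitals in a
# 6-function basis with 3 electrons each.
rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.normal(size=(6, 3)))  # orthonormal columns
B, _ = np.linalg.qr(rng.normal(size=(6, 3)))
print(determinant_overlap(A, A))  # identical determinants: ~1.0
print(determinant_overlap(A, B))
```

Because the columns of A and B are orthonormal, the singular values of A^T B are cosines of principal angles between the two occupied spaces, so the overlap magnitude never exceeds 1.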

    Development of a machine learning-based method for the analysis of microplastics in environmental samples using µ-Raman spectroscopy

    This research project investigates the potential of machine learning for the analysis of microplastic Raman spectra in environmental samples. Based on a data set of more than 64,000 Raman spectra (10.7% polymer spectra) from 47 environmental or wastewater samples, two deep-learning approaches (a single model and one model per class) using the rectified linear unit (ReLU) as the hidden-layer activation function and the sigmoid function in the output layer were evaluated and compared to human-only annotation. Based on the one-model-per-class algorithm, an approach for human–machine teaming was developed. This method makes it possible to analyze microplastic (polyethylene, polypropylene, polystyrene, polyvinyl chloride, and polyethylene terephthalate) spectra with high recall (≥ 99.4%) and precision (≥ 97.1%). Compared to human-only spectrum annotation, human–machine teaming reduces the researchers’ time required per sample from several hours to less than one hour.
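The abstract names the architecture (ReLU hidden layer, sigmoid output) and the one-model-per-class setup. A minimal forward-pass sketch under those stated assumptions follows; the weights are random placeholders rather than trained parameters, and the class abbreviations, feature count, and hidden-layer size are illustrative choices not taken from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PolymerClassifier:
    """One binary classifier per polymer class (one-model-per-class).

    Matches only the architecture named in the abstract: a ReLU
    hidden layer followed by a sigmoid output unit. A real model
    would be trained on labelled Raman spectra; here the weights are
    random placeholders for illustration.
    """

    def __init__(self, n_features, n_hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def predict_proba(self, spectra):
        # spectra: (n_samples, n_features) preprocessed Raman spectra.
        h = relu(spectra @ self.W1 + self.b1)
        return sigmoid(h @ self.W2 + self.b2).ravel()

# One independent model per polymer class, each scoring the same
# (synthetic) spectrum.
classes = ["PE", "PP", "PS", "PVC", "PET"]
models = {c: PolymerClassifier(n_features=500, seed=i)
          for i, c in enumerate(classes)}
spectrum = np.random.default_rng(42).normal(size=(1, 500))
scores = {c: float(m.predict_proba(spectrum)[0])
          for c, m in models.items()}
print(scores)
```

In a human–machine teaming setup along the lines the abstract describes, per-class scores like these would pre-sort spectra so that human annotators only review the uncertain cases.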