    Reconocimiento automático de la actividad de vacunos en pastoreo (Automatic recognition of grazing cattle activity)

    Using collars, pedometers, or activity tags to record cattle behavior over short periods (e.g., 24 h) is expensive, which makes the development of low-cost, easy-to-use technologies relevant. Similar to smartphone apps for human activity recognition, which analyze data from embedded triaxial accelerometer sensors, we developed an Android app to record activity in cattle. Four main steps were followed: a) data acquisition for model training, b) model training, c) app deployment, and d) app utilization. For data acquisition, we developed a system with three components: two smartphones (one on the cow and one for the observer) and a Google Firebase account for data storage. For model training, the generated database was used to train a recurrent neural network, and training performance was assessed with a confusion matrix. For all observed activities, the trained model achieved high prediction accuracy (> 96 %). The trained model was then deployed as an Android app using the TensorFlow API. Finally, three cell phones (LG gm730) were used to test the app and record the activity of six Holstein cows (3 lactating and 3 non-lactating). Direct, non-systematic observations of the animals were made to contrast with the activities recorded by the device. Our results show consistency between the direct observations and the activity recorded by our Android app.
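
    The pipeline this abstract describes (windowed triaxial accelerometer data, a recurrent network, then an on-device model) can be sketched as below. This is a minimal illustration, assuming a 128-sample window, a single LSTM layer, and four hypothetical activity classes; the paper does not specify its exact architecture or TensorFlow calls, and the random arrays stand in for the Firebase-collected recordings.

```python
# Hypothetical sketch: train a small recurrent network on windowed triaxial
# accelerometer data and export it for an Android app via TensorFlow Lite.
# Window length, class names, and architecture are illustrative assumptions,
# not the paper's exact configuration.
import numpy as np
import tensorflow as tf

WINDOW = 128  # assumed samples per window
CLASSES = ["grazing", "ruminating", "resting", "walking"]  # assumed labels

# Placeholder data standing in for the collected recordings:
# (n_windows, WINDOW, 3) accelerometer windows and integer labels.
X = np.random.randn(256, WINDOW, 3).astype("float32")
y = np.random.randint(0, len(CLASSES), size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 3)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Convert to a .tflite model that an Android app can run on-device.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("cattle_activity.tflite", "wb") as f:
    f.write(tflite_model)
```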

    Physical activity recognition by utilising smartphone sensor signals

    Human physical motion activity identification has many potential applications in various fields, such as medical diagnosis, military sensing, sports analysis, and human-computer security interaction. With the recent advances in smartphones and wearable technologies, it has become common for such devices to have embedded motion sensors able to sense even small body movements. This study collected data for six activities from 60 participants across two different days, recorded by the gyroscope and accelerometer sensors of a modern smartphone. The paper investigates to what extent different activities can be identified with machine learning algorithms, using approaches such as majority algorithmic voting. Further analyses reveal which time- and frequency-domain features were best able to identify individuals' motion activity types. Overall, the proposed approach achieved a classification accuracy of 98% in identifying four different activities: walking, walking upstairs, walking downstairs, and sitting (on a chair) while the subject is calm and doing a typical desk-based activity.
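
    A brief sketch of the kinds of time- and frequency-domain window features such a study compares may help. The window length, the 50 Hz sampling rate, and the specific feature set below are illustrative assumptions, not the paper's published feature list.

```python
# Illustrative sketch: time- and frequency-domain features computed over a
# fixed-length accelerometer/gyroscope window. Feature choices are assumed.
import numpy as np

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 6) -> accel x/y/z + gyro x/y/z."""
    feats = []
    for axis in window.T:
        # Time-domain features: mean, spread, peak, interquartile range.
        feats += [axis.mean(), axis.std(), np.abs(axis).max(),
                  np.percentile(axis, 75) - np.percentile(axis, 25)]
        # Frequency-domain features from the magnitude spectrum.
        spectrum = np.abs(np.fft.rfft(axis - axis.mean()))
        freqs = np.fft.rfftfreq(len(axis), d=1 / 50)  # assumed 50 Hz rate
        feats += [spectrum.argmax() * freqs[1],       # dominant frequency
                  (spectrum ** 2).sum() / len(axis)]  # spectral energy
    return np.array(feats)

demo = window_features(np.random.randn(128, 6))
print(demo.shape)  # (36,) -> 6 features per channel x 6 channels
```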

    Transparent Authentication Utilising Gait Recognition

    Securing smartphones has become inevitable due to their massive popularity and their storage of, and access to, sensitive information. The gatekeeper of securing the device is authenticating the user. Among the many solutions proposed, gait recognition has been suggested as a reliable yet non-intrusive authentication approach, enabling both security and usability. While several studies exploring mobile-based gait recognition have taken place, they have been mainly preliminary, with methodological restrictions that limited the number of participants, samples, and types of features; in addition, prior studies depended on small datasets collected in controlled experimental environments with few activities. The absence of real-world datasets led to individuals being verified incorrectly. This thesis seeks to overcome these weaknesses and provide a comprehensive evaluation, including an analysis of smartphone-based motion sensors (accelerometer and gyroscope) and of the variability of feature vectors during differing activities across a multi-day collection involving 60 participants. This was framed as two experiments involving five types of activities: standard, fast, with-a-bag, downstairs, and upstairs walking. The first experiment explores classification performance to understand whether a single classifier or a multi-algorithmic approach provides a better level of performance. The second experiment investigates the feature vector (comprising up to 304 unique features) to understand how its composition affects performance, with a more restricted minimal feature set included for comparison. Performance on the controlled dataset exceeded prior work under both same-day and cross-day methodologies (e.g., for the regular walk activity, best EERs of 0.70% and 6.30% for the same-day and cross-day scenarios respectively). Moreover, the multi-algorithmic approach achieved a significant improvement over the single-classifier approach, offering a more practical way of managing feature-vector variability. An activity recognition model was then applied to a real-life gait dataset containing a larger number of gait samples from 44 users (7-10 days per user). A human physical motion activity identification model was built to classify a given individual's activity signal into one of a set of predefined classes. On this basis, the thesis implements a novel real-world gait recognition system that recognises the subject from a smartphone-based real-world dataset, and investigates whether such authentication technologies can recognise the genuine user while rejecting an impostor. Experiments on the real dataset offered a promising level of security, particularly when majority-voting techniques were applied; the proposed multi-algorithmic approach proved more reliable and performed relatively well in practice on live user data across multiple activities, improving both the security and the transparency of the system on a smartphone. Overall, the experiments showed an EER of 7.45% for a single classifier on the all-activities dataset, while the multi-algorithmic approach achieved EERs of 5.31%, 6.43% and 5.87% for normal, fast, and combined normal-and-fast walking respectively, using both accelerometer- and gyroscope-based features: a significant improvement over the single-classifier approach. Ultimately, the evaluation of the smartphone-based gait authentication system over a long period under realistic scenarios revealed that it could provide secure and appropriate activity identification and user authentication.
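
    To make the quoted EER figures concrete, the sketch below shows one standard way to compute an equal error rate from genuine and impostor match scores, as the operating point where the false accept rate and false reject rate coincide. The score distributions are synthetic placeholders and scikit-learn's roc_curve is a convenience choice; this is not the thesis's evaluation code.

```python
# Hedged sketch: computing an equal error rate (EER) from genuine and
# impostor match scores. Score distributions here are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)   # scores for the true user
impostor = rng.normal(0.4, 0.15, 500)  # scores for other users

labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
scores = np.concatenate([genuine, impostor])

fpr, tpr, thresholds = roc_curve(labels, scores)
fnr = 1 - tpr
eer_idx = np.argmin(np.abs(fpr - fnr))  # point where FAR ~= FRR
print(f"EER ~= {(fpr[eer_idx] + fnr[eer_idx]) / 2:.2%}")
```

    Majority voting across classifiers or consecutive samples, as applied in the thesis, would alter the score distributions fed into such a computation before the EER is read off.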

    Adversarial Cross-Domain Action Recognition with Co-Attention

    Action recognition has been a widely studied topic with a heavy focus on supervised learning involving sufficient labeled videos. However, the problem of cross-domain action recognition, where training and testing videos are drawn from different underlying distributions, remains largely under-explored. Previous methods directly employ techniques for cross-domain image recognition, which tend to suffer from a severe temporal misalignment problem. This paper proposes a Temporal Co-attention Network (TCoN), which matches the distributions of temporally aligned action features between source and target domains using a novel cross-domain co-attention mechanism. Experimental results on three cross-domain action recognition datasets demonstrate that TCoN improves significantly on both previous single-domain and cross-domain methods under the cross-domain setting. (Comment: AAAI 2020)
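
    The generic cross-attention mechanism underlying temporal co-attention can be sketched in a few lines: each target segment re-expresses itself as an attention-weighted mixture of source segments (and vice versa), so that features are compared after temporal alignment. The sketch below illustrates the general mechanism only; it is not the authors' TCoN implementation, and the feature matrices are random placeholders.

```python
# Minimal sketch of cross-domain co-attention over segment-level features.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attend(source: np.ndarray, target: np.ndarray):
    """source: (n_s, d), target: (n_t, d) segment-level features."""
    affinity = target @ source.T / np.sqrt(source.shape[1])  # (n_t, n_s)
    # Target segments re-expressed as attention-weighted source segments,
    # and source segments re-expressed over target segments.
    target_aligned = softmax(affinity, axis=1) @ source    # (n_t, d)
    source_aligned = softmax(affinity.T, axis=1) @ target  # (n_s, d)
    return source_aligned, target_aligned

src = np.random.randn(8, 64)   # 8 source-video segments
tgt = np.random.randn(10, 64)  # 10 target-video segments
s_al, t_al = co_attend(src, tgt)
print(s_al.shape, t_al.shape)  # (8, 64) (10, 64)
```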

    Game Theory Solutions in Sensor-Based Human Activity Recognition: A Review

    Human Activity Recognition (HAR) tasks automatically identify human activities from sensor data, with numerous applications in healthcare, sports, security, and human-computer interaction. Despite significant advances in HAR, critical challenges remain. Game theory has emerged as a promising way to address such challenges in machine learning problems, including HAR, yet there is little research applying game-theoretic solutions to HAR problems. This review explores the potential of game theory as a solution for HAR tasks and bridges the gap between game theory and HAR research by suggesting novel game-theoretic approaches for HAR problems. The contributions of this work include exploring how game theory can improve the accuracy and robustness of HAR models, investigating how game-theoretic concepts can optimize recognition algorithms, and comparing game-theoretic approaches with existing HAR methods. The objective is to provide insight into the potential of game theory as a solution for sensor-based HAR and to contribute to the development of more accurate and efficient recognition systems in future research.
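
    As one concrete example of the kind of game-theoretic concept such a review points toward, Shapley values can attribute a HAR model's accuracy to individual sensors by averaging each sensor's marginal contribution over all orders in which sensors could join a coalition. The sketch below uses a hypothetical accuracy table for sensor coalitions; in practice each value would come from training on that sensor subset.

```python
# Illustrative sketch: exact Shapley values attributing HAR accuracy to
# sensors. The coalition-accuracy table below is an assumed placeholder.
from itertools import permutations

SENSORS = ("accel", "gyro", "magnetometer")

# Assumed accuracy of a classifier trained on each sensor subset.
value = {
    frozenset(): 0.0,
    frozenset({"accel"}): 0.80,
    frozenset({"gyro"}): 0.70,
    frozenset({"magnetometer"}): 0.55,
    frozenset({"accel", "gyro"}): 0.90,
    frozenset({"accel", "magnetometer"}): 0.84,
    frozenset({"gyro", "magnetometer"}): 0.75,
    frozenset(SENSORS): 0.93,
}

def shapley(sensor: str) -> float:
    """Average marginal contribution of `sensor` over all join orders."""
    orders = list(permutations(SENSORS))
    total = 0.0
    for order in orders:
        before = frozenset(order[:order.index(sensor)])
        total += value[before | {sensor}] - value[before]
    return total / len(orders)

for s in SENSORS:
    print(s, round(shapley(s), 3))  # the three values sum to 0.93
```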

    Cross-modal learning from visual information for activity recognition on inertial sensors

    The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for human activity recognition (HAR) from wearable inertial sensor data. Labeled data is scarce because sensor data collection is expensive and annotation is time-consuming and error-prone. As a result, public inertial HAR datasets are small in terms of number of subjects, activity classes, hours of recorded data, and variation in recorded environments. Machine learning models developed using these small datasets are effectively blind to the diverse expressions of activities performed by wide-ranging populations in the real world, and progress in wearable inertial sensing is held back by this bottleneck for activity understanding. But just as Internet-scale text, image, and audio data have pushed their respective pattern recognition fields to systems reliable enough for everyday use, easy access to large quantities of data can push forward the field of inertial HAR, and by extension wearable sensing. To this end, this thesis pioneers the idea of exploiting the visual modality as a source domain for cross-modal learning, such that data and knowledge can be transferred across to benefit the target domain of inertial HAR. This thesis makes three contributions to inertial HAR through cross-modal approaches. First, to overcome the barrier of expensive inertial data collection and annotation, we contribute a novel pipeline that automatically extracts virtual accelerometer data from videos of human activities, which are readily annotated and accessible in large quantities. Second, we propose acquiring transferable representations about activities from HAR models trained using large quantities of visual data, to enrich the development of inertial HAR models. Finally, the third contribution exposes HAR models to the challenging setting of zero-shot learning; we propose mechanisms that leverage cross-modal correspondence to enable inference on previously unseen classes. Unlike prior approaches, this body of work pushes forward the state of the art in HAR not by exhausting resources concentrated in the inertial domain, but by exploiting an existing, resourceful, intuitive, and informative source: the visual domain. These contributions represent a new line of cross-modal thinking in inertial HAR and suggest important future directions for inertial-based wearable sensing research.
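
    The core idea of the first contribution, extracting a virtual accelerometer stream from visual data, can be illustrated by double-differentiating a tracked keypoint trajectory. This is a hedged sketch under simplifying assumptions (a 30 fps trajectory already extracted, central finite differences, light smoothing); it is not the thesis's actual pipeline, and the trajectory below is synthetic.

```python
# Hypothetical sketch: a "virtual" accelerometer derived from a tracked
# keypoint trajectory by double finite differencing. Keypoint extraction
# and calibration are out of scope; the trajectory here is synthetic.
import numpy as np

FPS = 30.0  # assumed video frame rate

def virtual_accelerometer(positions: np.ndarray, fps: float = FPS):
    """positions: (n_frames, 3) tracked keypoint coordinates (e.g., wrist).
    Returns (n_frames - 2, 3) acceleration via central finite differences."""
    dt = 1.0 / fps
    accel = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt ** 2
    # Light smoothing, since differentiation amplifies tracking noise.
    kernel = np.ones(5) / 5
    return np.stack([np.convolve(a, kernel, mode="same") for a in accel.T],
                    axis=1)

t = np.linspace(0, 4, 121)
traj = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                 0.1 * t], axis=1)  # synthetic wrist-like trajectory
print(virtual_accelerometer(traj).shape)  # (119, 3)
```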