60 research outputs found

    STUDY OF HAND GESTURE RECOGNITION AND CLASSIFICATION

    Get PDF
    The goal is to recognize different hand gestures and achieve efficient classification of static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices such as the Kinect, hand movement sensors, surface electrodes, and accelerometers. The captured gestures are processed with recognition algorithms including multivariate fuzzy decision trees, hidden Markov models (HMMs), dynamic time warping, latent regression forests, support vector machines, and surface electromyography. Movements made with one or both hands are captured under proper illumination conditions, and the data are processed to handle occlusions and close finger interactions, so that the correct gesture is identified, classified, and intermittent gestures are ignored. Real-time hand gesture recognition requires robust algorithms such as HMMs to detect only the intended gesture. Classified gestures are then evaluated for effectiveness against standard training and test datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays an important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
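    Dynamic time warping, one of the matching techniques the abstract lists, compares a captured gesture trajectory against stored templates. A minimal sketch, using invented 1-D trajectories and template names (not data from the study):

```python
# Illustrative sketch only: classic dynamic time warping (DTW) used for
# template-based gesture classification. Sequences and labels are made up.

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def classify_gesture(trajectory, templates):
    """Return the template label with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))
```

    Because DTW warps the time axis, a gesture performed slightly faster or slower than its template still matches, which is why it is commonly paired with HMMs for dynamic gestures.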

    Autonomous Systems: Indoor Drone Navigation

    Full text link
    Drones are a promising technology for autonomous data collection and indoor sensing. In situations where human-controlled UAVs may not be practical or dependable, such as in uncharted or dangerous locations, autonomous UAVs offer flexibility, cost savings, and reduced risk. The system creates a simulated quadcopter capable of travelling autonomously in an indoor environment using the Gazebo simulation tool and the ROS navigation framework Navigation2 (Nav2). While Nav2 has successfully demonstrated autonomous navigation for terrestrial robots and vehicles, the same has not yet been accomplished for unmanned aerial vehicles. The goal is to use the SLAM Toolbox for ROS and the Nav2 navigation framework to construct a simulated drone that can move autonomously in an indoor (GPS-denied) environment.
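    At its core, Nav2 plans collision-free paths over an occupancy grid produced by SLAM. A toy version of that planning step, on a hand-made grid that is entirely hypothetical and unrelated to the actual Nav2 implementation:

```python
# Illustrative sketch only: breadth-first shortest path on an occupancy grid
# (1 = obstacle, 0 = free), mimicking the global-planning step a navigation
# stack like Nav2 performs. Grid and coordinates are invented examples.
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None
```

    A real planner additionally inflates obstacles by the robot's footprint and, for a drone, must extend the grid to three dimensions.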

    Comparative study on Judgment Text Classification for Transformer Based Models

    Full text link
    This work uses various NLP models to predict the winner of a particular judgment by means of text extraction and summarization from a judgment document. Such documents are useful in legal proceedings: they can be used for citations and precedent references in lawsuits, strengthening the argument of the party using them. When it comes to precedent, it is necessary to review a large number of documents in order to collect legal points relevant to the case. However, reviewing these documents takes a long time because of their complex wording and size. This work presents a comparative study of 6 different self-attention-based transformer models and how they perform when tweaked with 4 different activation functions. The models are trained on 200 judgment contexts, and their results are evaluated on several benchmark parameters. The models ultimately reach a confidence level of up to 99% when predicting the judgment. This can be used to retrieve a relevant judgment document without spending too much time searching for and reading cases in full. Comment: 28 pages with 9 figures.
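    All six compared models share the same core mechanism: scaled dot-product self-attention. A minimal sketch with tiny hand-made vectors (real transformers learn separate query/key/value projections rather than using the raw inputs, as done here for brevity):

```python
# Illustrative sketch only: scaled dot-product self-attention, the building
# block of the transformer models compared in the study. Toy vectors; Q=K=V.
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(vectors):
    """Each output is a softmax-weighted mix of all input vectors."""
    d = len(vectors[0])
    outputs = []
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        outputs.append([sum(w * v[i] for w, v in zip(weights, vectors))
                        for i in range(d)])
    return outputs
```

    The activation-function variations the study explores would sit in the feed-forward sublayer that follows this attention step in each transformer block.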

    Cashew dataset generation using augmentation and RaLSGAN and a transfer learning based tinyML approach towards disease detection

    Full text link
    Cashew is one of the most extensively consumed nuts in the world and is also a cash crop. A tree may generate a substantial yield within a few months and has a lifetime of around 70 to 80 years. Yet, alongside these benefits, its cultivation faces certain constraints. Apart from parasites and algae, anthracnose is the most common disease affecting the trees. The dense structure of the cashew tree makes the disease harder to diagnose than in short crops. Hence, we present a dataset that exclusively consists of healthy and diseased cashew leaves and fruits. The dataset is augmented with RGB color transformations to highlight diseased regions, photometric and geometric augmentations, and RaLSGAN-generated images to enlarge the initial collection and boost performance in real-time situations when working with a constrained dataset. Further, transfer learning is used to test the classification efficiency of the dataset with algorithms such as MobileNet and Inception. TensorFlow Lite is used to deploy these models for real-time disease diagnosis using drones. Several post-training optimization strategies are applied and their memory footprints compared. The models deliver high accuracy (up to 99%) with reduced memory and latency, making them ideal for resource-constrained applications.
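    Geometric augmentation of the kind the abstract describes can be as simple as flipping and rotating each image. A minimal sketch on an image held as a nested list of pixel values (the tiny 2x3 "image" in the test is a made-up example, not dataset content):

```python
# Illustrative sketch only: two basic geometric augmentations (horizontal
# flip and 90-degree clockwise rotation) on a row-major pixel grid.

def hflip(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

def rotate90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]
```

    In practice libraries apply many such transforms with random parameters per epoch, so a constrained dataset like the cashew collection yields far more effective training samples.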

    Cultivating Insight: Detecting Autism Spectrum Disorder through Residual Attention Network in Facial Image Analysis

    Get PDF
    Revolutionizing Autism Spectrum Disorder Identification through Deep Learning: Unveiling Facial Activation Patterns. In this study, our primary objective is to harness the power of deep learning algorithms for the precise identification of individuals with autism spectrum disorder (ASD) solely from facial image datasets. Our investigation centers around the utilization of face activation patterns, aiming to uncover novel insights into the distinctive facial features of ASD patients. To accomplish this, we meticulously examined facial imaging data from a global and multidisciplinary repository known as the Autism Face Imaging Data Exchange. Autism spectrum disorder is characterized by inherent social deficits and manifests in a spectrum of diverse symptomatic scenarios. Recent data from the Centers for Disease Control (CDC) underscores the significance of this disorder, indicating that approximately 1 in 54 children are impacted by ASD, according to estimations from the CDC's Autism and Developmental Disabilities Monitoring Network (ADDM). Our research delved into the intricate functional connectivity patterns that objectively distinguish ASD participants, focusing on their facial imaging data. Through this investigation, we aimed to uncover the latent facial patterns that play a pivotal role in the classification of ASD cases. Our approach introduces a novel module that enhances the discriminative potential of standard convolutional neural networks (CNNs), such as ResNet-50, thus significantly advancing the state-of-the-art. Our model achieved an impressive accuracy rate of 99% in distinguishing between ASD patients and control subjects within the dataset. Our findings illuminate the specific facial expression domains that contribute most significantly to the differentiation of ASD cases from typically developing individuals, as inferred from our deep learning methodology. 
    To validate our approach, we conducted real-time video testing on diverse children, achieving an accuracy of 99.90% and an F1 score of 99.67%. Through this work, we offer a cutting-edge approach to ASD identification and contribute to the understanding of the underlying facial activation patterns that could transform the diagnostic landscape of autism spectrum disorder.
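    The residual-attention idea the study builds on re-weights features with a learned mask while a residual connection preserves the original signal, i.e. out = x * (1 + M(x)). A toy stand-in (the feature vector and the sigmoid mask below are invented for illustration, not the paper's ResNet-50-based model):

```python
# Illustrative sketch only: residual attention re-weighting,
# out_i = f_i * (1 + sigmoid(w_i * f_i)). Toy features and weights.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def residual_attention(features, mask_weights):
    """Amplify features by an attention mask without ever zeroing them out."""
    return [f * (1.0 + sigmoid(w * f)) for f, w in zip(features, mask_weights)]
```

    The "(1 + mask)" form matters: since the mask is bounded in (0, 1), attention can emphasize discriminative regions but cannot erase the backbone's features, which keeps deep stacks trainable.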

    A UK survey of COVID‐19 related social support closures and their effects on older people, people with dementia, and carers

    Get PDF
    Abstract Objectives: The aim of this national survey was to explore the impact of COVID‐19 public health measures on access to social support services and the effects of service closures on the mental well‐being of older people and those affected by dementia. Methods: A UK‐wide online and telephone survey was conducted with older adults, people with dementia, and carers between April and May 2020. The survey captured demographic and postcode data, social support service usage before and after COVID‐19 public health measures, current quality of life, depression, and anxiety. Multiple linear regression analysis was used to explore the relationship between variations in social support service usage and anxiety and well‐being. Results: 569 participants completed the survey (61 people with dementia, 285 unpaid carers, and 223 older adults). Paired-samples t‐tests and χ2‐tests showed that mean weekly hours of social support service usage and the number of people accessing various services were significantly reduced after the COVID‐19 measures. Multiple regression analyses showed that larger variations in social support service hours significantly predicted increased anxiety in people with dementia and older adults, and lower mental well‐being in unpaid carers and older adults. Conclusions: Being unable to access social support services due to COVID‐19 contributed to worse quality of life and anxiety in those affected by dementia and in older adults across the UK. Social support services need to be enabled to continue providing support in adapted formats, especially in light of continued public health restrictions for the foreseeable future. This article is protected by copyright. All rights reserved.
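    The multiple linear regression used here fits coefficients by ordinary least squares. A self-contained sketch of that fitting step via the normal equations; the tiny dataset in the test is invented for demonstration, not survey data:

```python
# Illustrative sketch only: ordinary least squares by solving the normal
# equations (X^T X) beta = X^T y with Gauss-Jordan elimination.
# X must include a column of ones for the intercept.

def ols_fit(X, y):
    """Return the fitted coefficient vector beta."""
    k = len(X[0])
    # Augmented matrix [X^T X | X^T y].
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)]
         + [sum(row[i] * yi for row, yi in zip(X, y))] for i in range(k)]
    for col in range(k):
        # Partial pivoting for numerical stability.
        pivot = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(k):
            if r != col:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]
```

    In the survey analysis, the predictors would be the change in weekly service hours plus covariates, and y the anxiety or well-being score; statistical packages add standard errors and p-values on top of this fit.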

    COVID-19-related social support service closures and mental well-being in older adults and those affected by dementia: a UK longitudinal survey

    Get PDF
    Background: The COVID-19 pandemic has had a major impact on the delivery of social support services. This might be expected to particularly affect older adults and people living with dementia (PLWD), and to reduce their well-being. Aims: To explore how social support service use by older adults, carers and PLWD, and their mental well-being, changed over the first 3 months after the pandemic outbreak. Methods: Unpaid dementia carers, PLWD and older adults took part in a longitudinal online or telephone survey conducted between April and May 2020, and at two subsequent timepoints 6 and 12 weeks after baseline. Participants were asked about their social support service usage in a typical week prior to the pandemic (at baseline) and in the past week at each of the three timepoints. They also completed measures of depression, anxiety and mental well-being. Results: 377 participants had complete data at all three timepoints. Social support service usage dropped shortly after lockdown measures were imposed at timepoint 1 (T1), then increased again by T3. Access to paid care was least affected by COVID-19. Cases of anxiety dropped significantly across the study period, while cases of depression rose. Well-being increased significantly for older adults and PLWD from T1 to T3. Conclusions: Access to social support services has been significantly affected by the pandemic and is recovering only slowly. With mental well-being affected differently across groups, support needs to be put in place to maintain well-being in these vulnerable groups during the ongoing pandemic.
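    Before-versus-after comparisons in longitudinal surveys like this one are commonly tested with a paired-samples t statistic. A minimal sketch; the before/after hours below are invented for demonstration:

```python
# Illustrative sketch only: paired-samples t statistic
# (mean of within-person differences divided by its standard error).
import math

def paired_t(before, after):
    """t statistic for paired observations; positive if values dropped."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

    The resulting t value is then compared against the t distribution with n - 1 degrees of freedom to obtain a p-value, a step statistical packages handle.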

    Clonage gestuel expressif (Expressive gesture cloning)

    No full text
    Virtual environments allow human beings to be represented by virtual humans, or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person, and expressivity parameters have been defined in earlier work for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent and temporal extent. Then, we conducted a perceptual user study on the relevance of expressivity for recognizing individual humans. We animated a virtual agent using the expressivity estimated from individual humans, and asked users whether they could recognize the individual human behind each animation. We found that when gestures are repeated in the animation, users perceive this as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant cue for cloning, and it can be used as another element in the development of a virtual clone that represents a person.
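    Naive estimators for the three expressivity parameters can be sketched as follows, taking a trajectory as a list of (t, x, y, z) samples. These simple definitions and the example data are hypothetical stand-ins, not the thesis's actual algorithms:

```python
# Illustrative sketch only: crude estimators for repetition, spatial extent
# and temporal extent from a wrist trajectory of (t, x, y, z) samples.
import math

def spatial_extent(traj):
    """Diagonal of the bounding box swept by the wrist."""
    xs, ys, zs = ([p[i] for p in traj] for i in (1, 2, 3))
    return math.sqrt((max(xs) - min(xs)) ** 2
                     + (max(ys) - min(ys)) ** 2
                     + (max(zs) - min(zs)) ** 2)

def temporal_extent(traj):
    """Duration of the gesture (last timestamp minus first)."""
    return traj[-1][0] - traj[0][0]

def repetition_count(traj, axis=1):
    """Count direction reversals along one axis as a repetition measure."""
    vals = [p[axis] for p in traj]
    signs = [1 if b > a else -1 for a, b in zip(vals, vals[1:]) if b != a]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)
```

    Estimated values like these can then drive an embodied conversational agent's animation parameters, which is the cloning pipeline the perceptual study evaluates.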

    Testing Weibull as a viable statistical strength distribution for Nacre

    No full text
    Nacre, a composite layer present in seashells, exhibits a remarkable combination of toughness, strength, and stiffness through its brick-and-mortar microstructure, acting as a template for novel materials. Strength, one of nacre's important properties, is highly variable due to the distribution of the underlying material's properties as well as various defects present in its microstructure. Presently, researchers assume the weakest-link hypothesis and consequently use the Weibull distribution to model this variability. However, this assumption is theoretically unproven for biological materials such as nacre, and extrapolating it to predict rare but catastrophic behaviour would be incorrect. In this article, the suitability of the Weibull distribution to account for the variability of strength in nacre is tested using multi-scale models developed with the finite element method (FEM). Through Monte Carlo based numerical experiments and Renormalization Group (RG) based arguments, it is shown that the weakest-link hypothesis, which is commonly used to justify the use of the Weibull distribution, does not seem to hold for nacre. Micromechanics and non-local homogenization based distributions, such as the one suggested by Luo and Bazant (2019), might be more appropriate for accurate extrapolation to the low-probability tail.
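    The weakest-link hypothesis states that a structure of n elements fails at the strength of its weakest element; for i.i.d. Weibull element strengths this makes the system strength Weibull again, with the scale rescaled by n^(-1/m). A Monte Carlo toy of that textbook result (the modulus m and scale s are made-up values, not fitted nacre parameters), which is exactly the property the article finds violated in nacre:

```python
# Illustrative sketch only: weakest-link Monte Carlo with Weibull elements.
# System strength = min of n element strengths, sampled by inverse-CDF.
import math
import random

def weibull_cdf(x, m, s):
    """P(strength <= x) for Weibull modulus m and scale s."""
    return 1.0 - math.exp(-((x / s) ** m))

def chain_strength(n, m, s, rng):
    """One weakest-link sample: minimum of n i.i.d. Weibull strengths."""
    return min(s * (-math.log(1.0 - rng.random())) ** (1.0 / m)
               for _ in range(n))
```

    Under the hypothesis, P_fail,system(x) = 1 - (1 - F(x))^n, so the median system strength shifts predictably with n; deviations from that scaling are the kind of evidence the article's FEM-based experiments look for.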