
    A MICROCONTROLLER-DRIVEN ENTRANCE GATE TO COMBAT RESPIRATORY VIRUS SPREAD

    Respiratory illnesses, including COVID-19, remain a significant public health concern worldwide. Public health organizations therefore recommend preventative measures, such as wearing masks and practicing good hand hygiene, to help control the spread of respiratory illnesses. However, existing preventive measures may not be fully effective in ensuring compliance, particularly across varied economic conditions, leading to continued risks of respiratory virus spread in public spaces. To address this challenge, this study proposes a microcontroller-driven system designed to monitor and regulate entry into public spaces, aiming to reduce the transmission of respiratory illnesses. The system employs a camera, a temperature sensor, and an ultrasonic sensor to detect face mask usage, measure body temperature, and track the distance of hands from the sensor for automatic handwashing. The deep learning-based face mask detector achieved accuracy, precision, recall, and F1 scores of 0.90, 0.89, 0.89, and 0.89, respectively, and body temperature was measured with 99.18% accuracy. The system has the potential to enhance public safety significantly. The automatic door-opening feature, triggered only when a person is wearing a mask, has a normal body temperature, and has washed their hands, adds to the system's efficacy. The system's ability to detect and respond to non-compliance with safety measures can help promote adherence to public health guidelines and reduce the risk of infection. This study's findings demonstrate the developed system's high potential to contribute to public safety in the era of respiratory viruses.
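    The gate-opening logic described in the abstract admits a compact sketch. The threshold values below are illustrative assumptions, not figures from the study:

```python
# Hypothetical sketch of the entry-gate decision described above.
# MAX_TEMP_C is an assumed fever cutoff; the paper's actual threshold
# and sensor-fusion details may differ.
MAX_TEMP_C = 37.5  # assumed cutoff in degrees Celsius


def should_open_gate(mask_detected: bool, body_temp_c: float,
                     hands_washed: bool) -> bool:
    """Open the door only when all three safety checks pass:
    mask worn, no fever, and hands washed at the automatic station."""
    if not mask_detected:
        return False
    if body_temp_c > MAX_TEMP_C:
        return False
    return hands_washed
```

    In a real deployment the three inputs would come from the camera's mask classifier, the temperature sensor, and the ultrasonic handwashing trigger, respectively.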

    Solar-Powered Deep Learning-Based Recognition System of Daily Used Objects and Human Faces for Assistance of the Visually Impaired

    This paper introduces a novel low-cost solar-powered wearable assistive technology (AT) device whose aim is to provide continuous, real-time object recognition to ease the finding of objects for visually impaired (VI) people in daily life. The system consists of three major components: a miniature low-cost camera, a system on module (SoM) computing unit, and an ultrasonic sensor. The first is worn on the user’s eyeglasses and acquires real-time video of the nearby space. The second is worn as a belt and runs deep learning-based methods and spatial algorithms which process the video coming from the camera, performing object detection and recognition. The third assists in positioning the objects found in the surrounding space. The developed device provides audible descriptive sentences as feedback to the user, involving the objects recognized and their position referenced to the user’s gaze. After a proper power consumption analysis, a wearable solar harvesting system, integrated with the developed AT device, was designed and tested to extend the energy autonomy in the different operating modes and scenarios. Experimental results obtained with the developed low-cost AT device have demonstrated accurate and reliable real-time object identification, with an 86% correct recognition rate and a 215 ms average image-processing time (in the high-speed SoM operating mode). The proposed system is capable of recognizing the 91 objects offered by the Microsoft Common Objects in Context (COCO) dataset, plus several custom objects and human faces. In addition, a simple and scalable methodology for using image datasets and training Convolutional Neural Networks (CNNs) is introduced to add objects to the system and increase its repertory. It is also demonstrated that comprehensive trainings involving 100 images per targeted object achieve 89% recognition rates, while fast trainings with only 12 images achieve acceptable recognition rates of 55%.
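    The audible-feedback step can be illustrated with a minimal sketch. Dividing the frame into thirds and the sentence phrasing are assumptions for illustration, not the paper's actual spatial algorithm:

```python
# Hypothetical sketch: turn detections into a descriptive sentence
# referenced to the user's gaze. Each detection is (label, x_center_px,
# distance_m), where distance would come from the ultrasonic sensor.
def describe(detections, frame_width):
    parts = []
    for label, x, dist in detections:
        if x < frame_width / 3:
            side = "to your left"
        elif x > 2 * frame_width / 3:
            side = "to your right"
        else:
            side = "ahead of you"
        parts.append(f"{label} {side}, about {dist:.1f} metres away")
    return "; ".join(parts)
```

    The resulting string would then be passed to a text-to-speech engine running on the SoM.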

    An Approach to the Use of Depth Cameras for Weed Volume Estimation

    The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for plant structure characterization in several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor make it possible to capture the details of plants. A dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency between the 3D depth images and the soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with the volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes through the construction of a new system integrating these sensors and the development of algorithms to properly process the information they provide.
    The Spanish Ministry of Economy and Competitiveness has provided support for this research via projects AGL2014-52465-C4-3-R and AGL2014-52465-C4-1-R, and the Bosch Foundation. We acknowledge support by the CSIC Open Access Publication Initiative through its Unit of Information Resources for Research (URICI).
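    The dual height/RGB methodology can be sketched as follows. The excess-green index is a standard vegetation index used here as a stand-in for the paper's RGB segmentation, and the height and greenness thresholds are illustrative assumptions:

```python
import numpy as np


def excess_green(rgb):
    """Excess-green vegetation index, ExG = 2G - R - B,
    for an (N, 3) array of RGB values in [0, 1]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return 2 * g - r - b


def split_crop_weed(points, rgb, crop_height=0.3, exg_thresh=0.1):
    """Separate crop and weed points in a reconstructed cloud.
    points: (N, 3) xyz with z = height above soil (metres, assumed).
    Tall green points are labelled crop (maize); short green points,
    which height selection alone cannot isolate from soil, are weeds."""
    green = excess_green(rgb) > exg_thresh
    crop = green & (points[:, 2] >= crop_height)
    weed = green & (points[:, 2] < crop_height)
    return crop, weed
```

    Non-green, low points fall in neither mask and are treated as soil microrelief.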

    Non-destructive technologies for fruit and vegetable size determination - a review

    Here, we review different methods for non-destructive horticultural produce size determination, focusing on electronic technologies capable of measuring fruit volume. The usefulness of produce size estimation is justified and a comprehensive classification system of the existing electronic techniques to determine dimensional size is proposed. The different systems identified are compared in terms of their versatility, precision and throughput. There is general agreement that online measurement of axes, perimeter and projected area has now been achieved. Nevertheless, rapid and accurate volume determination of irregular-shaped produce, as needed for density sorting, has only become available in the past few years. An important application of density measurement is soluble solids content (SSC) sorting. If the range of SSC in the batch is narrow and a large number of classes are desired, accurate volume determination becomes important. A good alternative for fruit three-dimensional surface reconstruction, from which volume and surface area can be computed, is the combination of height profiles from a range sensor with a two-dimensional object image boundary from a solid-state camera (brightness image) or from the range sensor itself (intensity image). However, one of the most promising technologies in this field is 3-D multispectral scanning, which combines multispectral data with 3-D surface reconstruction.
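    Computing volume from a range sensor's height profile reduces to integrating height over the object's footprint. A minimal sketch, assuming a calibrated height map (in cm) that is zero outside the fruit boundary and a known per-pixel ground area:

```python
import numpy as np


def volume_from_height_map(heights, pixel_area_cm2):
    """Approximate object volume (cm^3) by summing per-pixel
    height columns: V = sum(h_ij) * pixel_area. Assumes heights
    are measured above the belt and zeroed outside the boundary."""
    return float(np.sum(heights) * pixel_area_cm2)


def density_g_per_cm3(mass_g, volume_cm3):
    """Density for SSC-related sorting, given weighed mass."""
    return mass_g / volume_cm3
```

    Combined with a weigh cell, this is the kind of volume-plus-mass pipeline that density sorting requires.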

    Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform

    In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, a thermal infrared camera, and a coffee-can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time-domain measurements, the Dynamic Wavelet Fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process, for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.
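    The time-frequency feature-extraction step can be illustrated with a crude STFT-based stand-in (the paper itself uses the DWFP, a different wavelet-based representation); the frame length, hop, and number of bands below are arbitrary assumptions:

```python
import numpy as np


def stft_features(signal, frame=256, hop=128, n_bands=8):
    """Reduce a noisy 1-D time signal to a small feature vector:
    windowed magnitude STFT, averaged over time, then pooled into
    n_bands frequency-band energies. A simplified stand-in for the
    DWFP + feature-selection pipeline described above."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))       # shape (T, F)
    mean_spectrum = spec.mean(axis=0)                # average over time
    bands = np.array_split(mean_spectrum, n_bands)   # pool into bands
    return np.array([b.mean() for b in bands])       # length n_bands
```

    A vector this small can then be fed to any off-the-shelf statistical classifier to separate the vehicle classes.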