
    Remote Detection of Body Temperature Using the Infrared Array Sensor Grid-EYE

    Monitoring physical and health conditions in everyday life has become particularly important during the COVID-19 pandemic. Body temperature is an important indicator of the pathophysiological state of the human body, and far-infrared thermal images can capture body surface temperature. This paper presents remote detection of body temperature using an infrared array sensor, the Grid-EYE. Object detection was implemented with a Convolutional Neural Network (CNN) based on the You Only Look Once (YOLO) model. Using the Grid-EYE, the system can detect fever in humans, dogs, and cats. The experimental results show that the proposed system can be used in an indoor remote monitoring system to detect heat stroke in a room.
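
    As a rough illustration of the sensing side, the sketch below flags a possible fever from a single 8x8 Grid-EYE temperature frame; the 37.5 °C threshold, the minimum cluster size, and the synthetic frame are assumptions rather than values from the paper, and the YOLO-based detection stage is omitted.

```python
# Minimal sketch (not the paper's implementation): flag elevated skin
# temperature in one 8x8 Grid-EYE frame. Threshold and blob size are assumed.
import numpy as np

FEVER_C = 37.5        # assumed alert threshold (deg C)
MIN_PIXELS = 2        # assumed minimum number of hot pixels to suppress noise

def fever_alert(frame_8x8: np.ndarray) -> bool:
    """Return True if enough pixels exceed the fever threshold."""
    hot = frame_8x8 >= FEVER_C
    return int(hot.sum()) >= MIN_PIXELS

if __name__ == "__main__":
    # Synthetic frame standing in for a real Grid-EYE reading.
    frame = np.full((8, 8), 24.0)
    frame[3:5, 4:6] = 38.1          # warm face-sized blob
    print("fever suspected:", fever_alert(frame))
```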

    Fusion of thermal and visible imagery for effective detection and tracking of salient objects in videos

    In this paper, we present an efficient approach to detect and track salient objects in videos. In general, a color visible image in red-green-blue (RGB) space has better distinguishability in human visual perception, yet it suffers from illumination noise and shadows. In contrast, a thermal image is less sensitive to these noise effects, though its distinguishability varies with environmental conditions. Fusing the two modalities therefore provides an effective solution. First, a background model is extracted, followed by background subtraction for foreground detection in the visible images. Meanwhile, adaptive thresholding is applied for foreground detection in the thermal domain, as human subjects tend to be warmer and thus brighter than the background. To deal with occlusion, prediction-based forward and backward tracking are employed to identify separate objects even when foreground detection fails. The proposed method is evaluated on OTCBVS, a publicly available color-thermal benchmark dataset. Promising results show that the proposed fusion-based approach can successfully detect and track multiple human subjects.
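
    The core of the fusion idea can be sketched with OpenCV as below; this is not the authors' exact pipeline, and the MOG2 background model, the adaptive-threshold settings, and the OR fusion rule are assumptions, with inputs assumed to be registered 8-bit grayscale frames.

```python
# Sketch of visible/thermal foreground fusion (illustrative, not the paper's
# exact method). Inputs: registered 8-bit grayscale visible and thermal frames.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def fused_foreground(visible_gray: np.ndarray, thermal_gray: np.ndarray) -> np.ndarray:
    # Background subtraction in the visible domain (drop shadow pixels at 127).
    fg_vis = bg_model.apply(visible_gray)
    _, fg_vis = cv2.threshold(fg_vis, 127, 255, cv2.THRESH_BINARY)
    # Adaptive thresholding in the thermal domain: warm bodies are brighter
    # than the local background (negative C keeps pixels above the local mean).
    fg_thr = cv2.adaptiveThreshold(thermal_gray, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, -5)
    # Fuse the two foreground masks (simple union as an assumed fusion rule).
    return cv2.bitwise_or(fg_vis, fg_thr)
```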

    Activity Recognition in Residential Spaces with Internet of Things Devices and Thermal Imaging

    In this paper, we design algorithms for indoor activity recognition and 3D thermal model generation using thermal and RGB images captured from external sensors within an Internet of Things (IoT) setup. Indoor activity recognition deals with two sub-problems: human activity recognition and household activity recognition. Household activity recognition includes recognizing electrical appliances and their heat radiation with the help of thermal images. A FLIR ONE PRO camera is used to capture RGB-thermal image pairs for a scene. The duration and pattern of activities are also determined using an iterative algorithm to explore kitchen-safety situations. For more accurate monitoring of hazardous events such as a stove gas leak, a 3D reconstruction approach is proposed to determine the temperature of all points in the 3D space of a scene. The 3D thermal model is obtained using the stereo RGB and thermal images of a particular scene. Accurate results are observed for activity detection, and a significant improvement in temperature estimation is recorded for the 3D thermal model compared to the 2D thermal image. Results from this research can find applications in home automation, heat automation in smart homes, and energy management in residential spaces.
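
    A minimal sketch of the kitchen-safety idea, assuming timestamped thermal frames in degrees Celsius: the stove region, temperature threshold, and unattended-time limit below are illustrative placeholders, not the paper's parameters or its iterative algorithm.

```python
# Illustrative sketch: raise an alert when a stove region stays hot too long
# in a stream of thermal frames. ROI, threshold, and time limit are assumed.
import numpy as np

STOVE_ROI = (slice(40, 80), slice(60, 120))   # assumed row/col window
HOT_C = 70.0                                  # assumed "burner on" temperature
MAX_UNATTENDED_S = 20 * 60                    # assumed 20-minute limit

def monitor(frames_with_time):
    """frames_with_time: iterable of (timestamp_s, thermal_frame_degC)."""
    hot_since = None
    for t, frame in frames_with_time:
        if np.max(frame[STOVE_ROI]) >= HOT_C:
            hot_since = t if hot_since is None else hot_since
            if t - hot_since >= MAX_UNATTENDED_S:
                yield ("ALERT", t)            # stove has been hot for too long
        else:
            hot_since = None                  # reset once the stove cools down
```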

    A Deep Learning-Based Tool for Face Mask Detection and Body Temperature Measurement

    Due to the COVID-19 pandemic, wearing a mask and maintaining normal body temperature in crowded areas such as workplaces have become obligatory. In this paper, a deep learning-based tool for automatic mask detection and temperature measurement at the entrance of workplaces was developed to save the cost of manual supervision and reduce human contact for safety reasons. Using Python, image and video processing techniques for face and object detection are used to process input from a webcam. A deep learning model, MobileNetV2, was used to build the face mask detector. A non-contact thermal sensor, the MLX90614, together with an Arduino, was employed to measure body temperature. The mask detection results and temperature measurements are displayed on a Graphical User Interface (GUI). In addition, an Internet of Things (IoT) function was implemented that sends high-temperature alerts to smartphones. The model achieves an accuracy of about 98%. The developed system has a limitation: when other objects are used to cover the mouth and nose, they may still be classified as masks. However, compared to commercially available mask detection systems, it gives correct results when a hand is used to pretend that a mask is being worn.
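
    A minimal sketch of a MobileNetV2-based mask/no-mask classifier head in Keras is shown below, assuming 224x224 RGB face crops; the layer sizes and training settings are illustrative, not the authors' exact configuration, and the MLX90614/Arduino temperature path is omitted.

```python
# Sketch of a transfer-learning mask classifier on top of MobileNetV2.
# Hyperparameters are assumptions, not the paper's reported configuration.
import tensorflow as tf

def build_mask_detector(num_classes: int = 2) -> tf.keras.Model:
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                       # train only the new head
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```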

    Depth estimation of inner wall defects by means of infrared thermography

    There are two common methods for interpreting infrared thermography data: qualitative and quantitative. Under certain conditions the first is sufficient, but accurate interpretation requires the second. This report proposes a method to quantitatively estimate defect depth at the inner wall of a petrochemical furnace. The finite element method (FEM) is used to model the multilayer wall and to simulate the temperature distribution caused by the defect. Five informative parameters are proposed for depth estimation: the maximum temperature over the defect area (Tmax-def), the average temperature at the right edge of the defect (Tavg-right), the average temperature at the left edge of the defect (Tavg-left), the average temperature at the top edge of the defect (Tavg-top), and the average temperature over the sound area (Tavg-so). An Artificial Neural Network (ANN) was trained on these parameters to estimate the defect depth. Two ANN architectures, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, were trained for various defect depths and used to estimate both controlled and testing data. The results show 100% depth-estimation accuracy for the controlled data; for the testing data, accuracy was above 90% for the MLP network and above 80% for the RBF network. These results confirm that the proposed informative parameters are useful for estimating defect depth and that ANNs can be used for quantitative interpretation of thermography data.
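
    As a sketch of the estimation step, the snippet below regresses defect depth from the five informative temperatures with a small MLP in scikit-learn; the synthetic feature ranges and network size are assumptions, not the paper's FEM-generated data or its exact MLP/RBF architectures.

```python
# Minimal sketch: MLP regression of defect depth from the five informative
# temperature parameters. The data below is synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: Tmax-def, Tavg-right, Tavg-left, Tavg-top, Tavg-so (deg C).
X = rng.uniform(200.0, 400.0, size=(200, 5))
y = rng.uniform(5.0, 50.0, size=200)            # defect depth (mm), synthetic

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X, y)
print("predicted depths (mm):", model.predict(X[:3]))
```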

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials, intended to enhance the understanding that people and ubiquitous technologies have of their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. The approach works effectively without ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft feature sets for different materials. We evaluated the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions. Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
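
    One preprocessing step such a system plausibly needs can be sketched as follows: normalizing each thermal texture patch so the classifier sees relative spatial temperature patterns rather than absolute temperatures. The min-max scheme below is an assumption, not necessarily the paper's normalization.

```python
# Sketch of per-patch normalization for thermal texture classification.
# The [0, 1] min-max scaling here is an assumed, illustrative choice.
import numpy as np

def normalize_thermal_patch(patch_degc: np.ndarray) -> np.ndarray:
    """Scale a 2-D array of temperatures to [0, 1] per patch."""
    lo, hi = float(patch_degc.min()), float(patch_degc.max())
    if hi - lo < 1e-6:                  # flat patch: no texture information
        return np.zeros_like(patch_degc, dtype=np.float32)
    return ((patch_degc - lo) / (hi - lo)).astype(np.float32)
```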

    Volcanic Hot-Spot Detection Using SENTINEL-2: A Comparison with MODIS−MIROVA Thermal Data Series

    In satellite thermal remote sensing, the new generation of sensors with high-spatial-resolution SWIR data opens the door to improved constraint of thermal phenomena related to volcanic processes, with strong implications for monitoring applications. In this paper, we describe a new hot-spot detection algorithm developed for SENTINEL-2/MSI data that combines spectral indices on the SWIR bands 8a, 11, and 12 (at 20-meter resolution) with a spatial and statistical analysis of clusters of alerted pixels. The algorithm is able to detect hot-spot-contaminated pixels (S2Pix) in a wide range of environments and for several types of volcanic activity, showing high accuracy, with averaged omission and commission rates of about 1% and 94%, respectively, underlining strong reliability on a global scale. The S2-derived thermal trends, retrieved at eight key-case volcanoes, are then compared with the Volcanic Radiative Power (VRP) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) and processed by the MIROVA (Middle InfraRed Observation of Volcanic Activity) system over an almost four-year period, January 2016 to October 2019. The presented data indicate an overall excellent correlation between the two thermal signals, highlighting the higher sensitivity of SENTINEL-2 to subtle, low-temperature thermal signals. Moreover, for each case we explore the specific relationship between S2Pix and VRP, showing how different volcanic processes (i.e., lava flows, domes, lakes and open-vent activity) produce distinct patterns in the size and intensity of the thermal anomaly. These promising results indicate that the algorithm presented here could be applied for volcanic monitoring purposes and integrated into operational systems. Moreover, the combination of high-resolution (S2/MSI) and moderate-resolution (MODIS) thermal time series constitutes a breakthrough for future multi-sensor hot-spot detection systems, with increased monitoring capabilities that are useful for communities that interact with active volcanoes.
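
    The band-ratio idea behind SWIR hot-spot detection can be sketched as below: flag pixels whose band-12 reflectance is high relative to bands 11 and 8a. The specific index forms and thresholds are placeholders for illustration only, not the values or the cluster/statistical analysis of the published algorithm.

```python
# Illustrative SWIR band-ratio hot-spot mask for SENTINEL-2-like reflectances.
# Thresholds r12_11 and r12_8a are assumed placeholders, not published values.
import numpy as np

def hotspot_mask(b8a: np.ndarray, b11: np.ndarray, b12: np.ndarray,
                 r12_11: float = 1.2, r12_8a: float = 1.4) -> np.ndarray:
    """Return a boolean mask of candidate hot-spot pixels."""
    eps = 1e-6                                  # avoid division by zero
    idx1 = b12 / (b11 + eps)                    # band 12 vs band 11 ratio
    idx2 = b12 / (b8a + eps)                    # band 12 vs band 8a ratio
    return (idx1 > r12_11) & (idx2 > r12_8a)
```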

    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst various approaches proposed to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160

    Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition

    This paper presents a comparative study of two different methods based on fusion and polar transformation of visual and thermal images. We investigate how to handle the challenges of face recognition, including pose variations, changes in facial expression, partial occlusion, variations in illumination, rotation through different angles, and change in scale. To overcome these obstacles we implemented and thoroughly examined two fusion techniques through rigorous experimentation. In the first method, a log-polar transformation is applied to the images obtained after fusing the visual and thermal images, whereas in the second method fusion is applied to the individually log-polar-transformed visual and thermal images. After this step, whichever form the fused image takes, Principal Component Analysis (PCA) is applied to reduce its dimensionality. Log-polar-transformed images can handle the complications introduced by scaling and rotation. The main objective of fusion is to produce a fused image that provides more detailed and reliable information, overcoming the drawbacks of the individual visual and thermal face images. Finally, the reduced fused images are classified using a multilayer perceptron neural network. The experiments use the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database of thermal and visual face images. The second method shows better performance, with a maximum correct recognition rate of 95.71% and an average of 93.81%. Comment: Proceedings of IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (IEEE CIBIM 2011), Paris, France, April 11 - 15, 201
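
    A sketch of the second scheme's front end is given below: log-polar transform the registered visual and thermal face images, fuse them, and reduce dimensionality with PCA before feeding a classifier such as an MLP. The averaging fusion rule and PCA size are assumptions rather than the authors' settings.

```python
# Illustrative front end for log-polar fusion + PCA (not the authors' code).
# Inputs are assumed to be registered grayscale face images of equal size.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def log_polar(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    return cv2.warpPolar(img.astype(np.float32), (w, h), center,
                         min(h, w) / 2.0,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)

def fuse_and_reduce(visual_faces, thermal_faces, n_components: int = 50):
    """Fuse log-polar visual/thermal pairs by averaging, then apply PCA."""
    fused = [0.5 * log_polar(v) + 0.5 * log_polar(t)
             for v, t in zip(visual_faces, thermal_faces)]
    X = np.stack([f.ravel() for f in fused])
    pca = PCA(n_components=min(n_components, len(X)))
    return pca.fit_transform(X), pca
```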