
    Deep learning and internet of things for beach monitoring: An experimental study of beach attendance prediction at Castelldefels beach

    Smart seaside cities can fully exploit the capabilities brought by the Internet of Things (IoT) and artificial intelligence to improve the efficiency of city services in traditional smart city applications: smart home, smart healthcare, smart transportation, smart surveillance, smart environment, cyber security, etc. However, smart coastal cities are characterized by a specific application domain, namely beach monitoring. Beach attendance prediction is a beach monitoring application of particular importance to coastal managers, who must plan beach services such as security, rescue, health and environmental assistance. In this paper, an experimental study that uses IoT data and deep learning to predict the number of beach visitors at Castelldefels beach (Barcelona, Spain) was developed. Images of Castelldefels beach were captured by a video monitoring system, and image recognition software was used to estimate beach attendance. A deep learning algorithm (deep neural network) was then developed to predict beach attendance. The experimental results demonstrate the feasibility of Deep Neural Networks (DNNs) for beach attendance prediction. For each beach, an occupancy class was estimated from the number of beach visitors. The proposed model outperforms other machine learning models (decision tree, k-nearest neighbors, and random forest) and can successfully classify seven beach occupancy levels, with a Mean Absolute Error (MAE) of 0.03 and accuracy, precision, recall and F1-score of 92.7%, 92.9%, 92.7%, and 92.7%, respectively.
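    As a rough sketch of how such a seven-level occupancy classifier could be built and compared against the baselines named above (this is not the authors' code; the feature set and data below are synthetic placeholders):

```python
# Minimal sketch, not the authors' code: a dense network classifying
# IoT-derived features into seven occupancy levels, compared against the
# baseline models named in the abstract. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(0)
# Hypothetical features: e.g. hour of day, day of week, temperature, and the
# attendance estimate produced by the video/image-recognition pipeline.
X = rng.normal(size=(2000, 4))
y = rng.integers(0, 7, size=2000)  # seven occupancy levels, 0..6

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "dnn": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    "decision_tree": DecisionTreeClassifier(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "random_forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # MAE over the ordinal class labels, mirroring the abstract's metrics.
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}, "
          f"accuracy={accuracy_score(y_te, pred):.3f}")
```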

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances how people and ubiquitous technologies understand their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures; a deep neural network classifies these textures into material types. The approach works effectively without the need for ambient light sources or direct contact with materials, and the use of a deep learning network removes the need to handcraft feature sets for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions. (Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems)
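    The paper uses a deep neural network over thermal textures; its exact architecture is not reproduced here. Below is a minimal, assumed sketch of a small convolutional classifier mapping single-channel thermal patches to the 32 material classes, with illustrative input size and layer widths:

```python
# Assumed architecture sketch, not the paper's exact network: a small CNN
# classifying single-channel thermal texture patches into 32 material types.
import torch
import torch.nn as nn

class ThermalTextureNet(nn.Module):
    def __init__(self, num_classes: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a dummy batch of 64x64 thermal patches.
model = ThermalTextureNet()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 32])
```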

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. In multimodal interaction with a smart environment, however, the user also displays characteristics that show how he or she, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote meeting participants can take part in meeting activities, and offer some observations on translating these research results to smart home environments.
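    To make such a profile concrete, here is a minimal sketch of a data structure that combines the preference-style fields with a captured physical representation; every field name is an illustrative assumption, not taken from the paper:

```python
# Minimal sketch of the kind of user profile described above; every field
# name is an illustrative assumption, not taken from the paper.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PhysicalState:
    """Physical representation obtained by multimodal capture (video, audio)."""
    position: tuple[float, float, float]  # location in the room
    gaze_target: str | None = None        # e.g. "whiteboard", "participant_2"
    gesture: str | None = None            # e.g. "pointing", "nodding"

@dataclass
class UserProfile:
    preferences: dict[str, str] = field(default_factory=dict)
    interests: list[str] = field(default_factory=list)
    interaction_history: list[str] = field(default_factory=list)
    physical: PhysicalState | None = None  # updated as the user is captured

profile = UserProfile(
    interests=["meetings"],
    physical=PhysicalState(position=(1.0, 0.5, 0.0), gesture="pointing"),
)
print(profile.physical.gesture)
```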

    Computer Analysis of Architecture Using Automatic Image Understanding

    In the past few years, computer vision and pattern recognition systems have become increasingly powerful, expanding the range of tasks enabled by machine vision. Here we show that computer analysis of building images can support quantitative analysis of architecture and quantify similarities between the architectural styles of different cities. Images of buildings from 18 cities in three countries were acquired using Google StreetView and used to train a machine vision system to identify the location of the imaged building from the image's visual content. Experimental results show that the analysis can automatically identify the geographical location of a StreetView image. More importantly, the algorithm was able to group the cities and countries and provide a phylogeny of the similarities between architectural styles as captured by StreetView images. These results demonstrate that computer vision and pattern recognition algorithms can perform the complex cognitive task of analyzing images of buildings, and can be used to measure and quantify visual similarities and differences between architectural styles. This experiment provides a new paradigm for studying architecture, based on a quantitative approach that can complement traditional manual observation and analysis. The source code used for the analysis is open and publicly available.
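    One plausible reading of such a pipeline (not the published source code) is sketched below: train a classifier to predict the city from image features, treat between-city confusion as a similarity signal, and build a dendrogram, i.e. the "phylogeny", from it. Features and labels here are synthetic stand-ins:

```python
# Toy sketch of the classify-then-cluster paradigm; not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
n_cities, n_per_city = 6, 200
# Hypothetical stand-in for image descriptors of building photographs.
centers = rng.normal(size=(n_cities, 32))
X = np.vstack([c + rng.normal(scale=2.0, size=(n_per_city, 32)) for c in centers])
y = np.repeat(np.arange(n_cities), n_per_city)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=1, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
cm = confusion_matrix(y_te, clf.predict(X_te),
                      labels=np.arange(n_cities)).astype(float)

# Cities the classifier confuses more often are treated as more similar;
# symmetrize the confusion matrix and convert similarity to distance.
sim = (cm + cm.T) / 2.0
dist = sim.max() - sim
np.fill_diagonal(dist, 0.0)
condensed = dist[np.triu_indices(n_cities, k=1)]  # condensed form for scipy
tree = linkage(condensed, method="average")
print(dendrogram(tree, no_plot=True)["ivl"])  # leaf order of the "phylogeny"
```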