
    Crowd-based cognitive perception of the physical world: Towards the internet of senses

    This paper introduces a possible architecture and discusses the research directions for the realization of the Cognitive Perceptual Internet (CPI), which is enabled by the convergence of wired and wireless communications, traditional sensor networks, mobile crowd-sensing, and machine learning techniques. The CPI concept stems from the fact that mobile devices, such as smartphones and wearables, are becoming an outstanding means for zero-effort world-sensing and digitalization thanks to their pervasive diffusion and the increasing number of embedded sensors. Data collected by such devices provide unprecedented insights into the physical world that can be inferred through cognitive processes, thus originating a digital sixth sense. In this paper, we describe how the Internet can behave like a sensing brain, thus evolving into the Internet of Senses, with network-based cognitive perception and action capabilities built upon mobile crowd-sensing mechanisms. The new concept of the hyper-map is envisioned as an efficient geo-referenced repository of knowledge about the physical world. Such knowledge is acquired and augmented through heterogeneous sensors, multi-user cooperation, and distributed learning mechanisms. Furthermore, we indicate the possibility of accommodating proactive sensors, in addition to common reactive sensors such as cameras, antennas, thermometers, and inertial measurement units, by exploiting massive antenna arrays at millimeter-waves to enhance mobile terminals' perception capabilities as well as the range of new applications. Finally, we distill some insights about the challenges arising in the realization of the CPI, corroborated by preliminary results, and we depict a futuristic scenario where the proposed Internet of Senses becomes a reality.

    Improving object detection by exploiting semantic relations between objects

    In collaboration with the Universitat de Barcelona (UB) and the Universitat Rovira i Virgili (URV).
    Object detection is a fundamental and challenging problem in computer vision. Detecting the objects visible in an image can give us a good understanding and description of the image. The extracted information can later be used to improve the results of other computer vision tasks such as activity recognition, content-based image retrieval, scene recognition, and more. As technology and internet connections become more accessible, billions of people upload photos and videos every day. To make use of this enormous amount of data, we need to be able to extract information from these images in a quick yet reliable way. Convolutional neural networks (CNNs) have enabled enormous progress in object detection and classification in recent years and have already established themselves as the state-of-the-art approach for these problems. In this work, we try to improve object detection performance by employing a CNN approach able to exploit object co-occurrences in natural images. Real-world scenes often exhibit a coherent composition of objects in terms of co-occurrence probability. For instance, in a restaurant we typically see dishes, bottles, and glasses. We aim to use this type of knowledge as a cue for disambiguating object labels in a detection task.
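    The abstract above does not spell out how co-occurrence knowledge would be applied, but one simple way to use it as a label-disambiguation cue is to blend each detection's confidence with the average co-occurrence compatibility between its label and the other labels detected in the same image. The sketch below illustrates this idea only; the class names, matrix values, and blending rule are invented for the example and are not taken from the paper.

    ```python
    # Illustrative sketch: re-scoring detector outputs with co-occurrence
    # statistics. All values below are made up for demonstration.

    # Pairwise compatibility between class labels, e.g. estimated from
    # label co-occurrence frequencies in a training set.
    COOCCURRENCE = {
        ("dish", "glass"): 0.9,
        ("dish", "bottle"): 0.8,
        ("glass", "bottle"): 0.7,
        ("dish", "surfboard"): 0.05,
    }

    def compat(a, b):
        """Symmetric lookup with a neutral default for unseen pairs."""
        return COOCCURRENCE.get((a, b), COOCCURRENCE.get((b, a), 0.5))

    def rescore(detections, alpha=0.3):
        """Blend each detection's score with the mean compatibility of its
        label with the other labels detected in the image.

        detections: list of (label, score) pairs from a base detector.
        alpha: weight given to the contextual term.
        """
        rescored = []
        for i, (label, score) in enumerate(detections):
            others = [l for j, (l, _) in enumerate(detections) if j != i]
            if others:
                context = sum(compat(label, o) for o in others) / len(others)
            else:
                context = 0.5  # no context available, stay neutral
            rescored.append((label, (1 - alpha) * score + alpha * context))
        return rescored

    dets = [("dish", 0.85), ("glass", 0.60), ("surfboard", 0.55)]
    print(rescore(dets))
    ```

    With these illustrative numbers, the contextually plausible "glass" is boosted by its strong compatibility with "dish", while the out-of-context "surfboard" is pushed down, which is the kind of disambiguation effect the abstract describes.
    
    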

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, user-machine interaction, and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social, and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds, and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experiences are impacted by properties of the real environment, motivates the use of ambient IoT devices, wireless sensors and actuators placed in the surrounding environment, for the measurement and optimization of environment properties. In this book chapter we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
    Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse

    Internet of Things Architectures, Technologies, Applications, Challenges, and Future Directions for Enhanced Living Environments and Healthcare Systems: A Review

    The Internet of Things (IoT) is an evolution of the Internet and has been gaining increased attention from researchers in both academic and industrial environments. Successive technological enhancements make the development of intelligent systems with a high capacity for communication and data collection possible, providing several opportunities for numerous IoT applications, particularly healthcare systems. Despite all the advantages, there are still several open issues that represent the main challenges for IoT, e.g., accessibility, portability, interoperability, information security, and privacy. IoT provides important characteristics to healthcare systems, such as availability, mobility, and scalability, that offer an architectural basis for numerous high-technology healthcare applications, such as real-time patient monitoring, environmental and indoor quality monitoring, and ubiquitous and pervasive information access that benefits health professionals and patients. Constant scientific innovation makes it possible to develop IoT devices through countless services for sensing, data fusing, and logging capabilities that lead to several advancements for enhanced living environments (ELEs). This paper reviews the current state of the art on IoT architectures for ELEs and healthcare systems, with a focus on the technologies, applications, challenges, opportunities, open-source platforms, and operating systems. Furthermore, this document synthesizes the existing body of knowledge and identifies common threads and gaps that open up new significant and challenging future research directions.

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences using MAR devices to provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss the important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike.
    Peer reviewed