
    WSN and RFID integration to support intelligent monitoring in smart buildings using hybrid intelligent decision support systems

    Real-time monitoring of context-aware activities in the environment is becoming a standard part of service delivery in a wide range of domains (child and elderly care and supervision, logistics, circulation, and others). The safety of people, goods and premises depends on a prompt reaction to potential hazards identified at an early stage so that appropriate control actions can be engaged. This requires capturing real-time data that is either processed locally at the device level or communicated to backend systems for real-time decision making. This research examines the integration of wireless sensor network and radio frequency identification technology in smart homes to support advanced safety systems deployed upstream of safety and emergency response. These systems are based on hybrid intelligent decision support systems configured in a multi-distributed architecture, enabled by the wireless communication of detection and tracking data, to support intelligent real-time monitoring in smart buildings. This paper first introduces the concept of wireless sensor network and radio frequency identification integration, showing the various options for distributing tasks between radio frequency identification and hybrid intelligent decision support systems. This integration is then illustrated in a multi-distributed system architecture that identifies motion and controls access in a smart building using a room capacity model for occupancy and evacuation, access rights, and a navigation map automatically generated by the system. The solution shown in the case study is based on a virtual layout of the smart building, implemented using the capabilities of the building information model and the hybrid intelligent decision support system. This work was supported by the Saudi High Education Ministry and Brunel University (UK).
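
    As a rough illustration of the kind of access-control decision such an architecture might make, the Python sketch below combines an RFID-detected identity, per-room access rights, and a simple room-capacity model for occupancy. All names and data are hypothetical assumptions for illustration; this is not the paper's system.

    from dataclasses import dataclass, field

    @dataclass
    class Room:
        name: str
        capacity: int                      # maximum safe occupancy for evacuation planning
        occupants: set = field(default_factory=set)

    def grant_access(tag_id: str, room: Room, access_rights: dict) -> bool:
        """Return True if the holder of the RFID tag may enter the room."""
        allowed_rooms = access_rights.get(tag_id, set())
        if room.name not in allowed_rooms:
            return False                   # no right to enter this room
        if len(room.occupants) >= room.capacity:
            return False                   # room already at capacity
        room.occupants.add(tag_id)         # update occupancy used by the capacity model
        return True

    # Example usage with made-up identifiers
    rights = {"TAG-042": {"lab-1", "lobby"}}
    lab = Room(name="lab-1", capacity=2)
    print(grant_access("TAG-042", lab, rights))   # True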

    Distributed and adaptive location identification system for mobile devices

    Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterparts. It would be of great convenience to mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack such capability and potentially face several critical technical challenges, such as increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user privacy concerns. To this end, this paper presents a novel mechanism with the potential to overcome most (if not all) of the abovementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a mobile blind device can potentially utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free: it uses the received signal strength to periodically and collaboratively measure and update the radio frequency characteristics of the operating environment and to estimate the distances to the reference nodes. Trilateration is then used by the blind device to identify its own location, similar to the approach used in GPS-based systems. Simulation and empirical testing ascertained that the proposed approach can potentially serve as the core of localization in future indoor and GPS-obstructed environments.
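
    To make the distance-estimation and trilateration step concrete, here is a minimal Python sketch. The log-distance path-loss parameters (rss_at_1m, path_loss_exp) are illustrative assumptions rather than the collaboratively updated values described in the paper, and the least-squares fix stands in for the GPS-like trilateration performed by the blind device.

    import numpy as np

    def rss_to_distance(rss_dbm, rss_at_1m=-45.0, path_loss_exp=2.5):
        """Log-distance path-loss model: estimate distance (m) from RSS (dBm)."""
        return 10 ** ((rss_at_1m - rss_dbm) / (10.0 * path_loss_exp))

    def trilaterate(anchors, distances):
        """Least-squares position fix from >= 3 reference nodes and estimated distances."""
        anchors = np.asarray(anchors, dtype=float)
        d = np.asarray(distances, dtype=float)
        # Linearize by subtracting the first circle equation from the others
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (d[0] ** 2 - d[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos  # estimated (x, y)

    # Example: three reference nodes at known positions, RSS readings from each
    refs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    rss = [-65.0, -72.0, -70.0]                      # dBm, illustrative values
    dists = [rss_to_distance(r) for r in rss]
    print(trilaterate(refs, dists))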

    Mobile qualified electronic signatures and certification on demand

    Despite a legal framework being in place for several years, the market share of qualified electronic signatures is disappointingly low. Mobile signatures provide a new and promising opportunity for the deployment of an infrastructure for qualified electronic signatures. We analyze two possible signing approaches (server-based and client-based signatures) and conclude that SIM-based signatures are the most secure and convenient solution. However, using the SIM card as a secure signature creation device (SSCD) raises new challenges, because it would contain the user's private key as well as the subscriber identification. Combining both functions in one card raises the question of who will have control over the keys and certificates. We propose a protocol called Certification on Demand (COD) that separates certification services from subscriber identification information and allows consumers to choose their appropriate certification services and service providers based on their needs. We also present some of the constraints that still have to be addressed before qualified mobile signatures are possible.
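
    The sketch below illustrates the general idea behind client-based signing, in which the private key stays on the secure signature creation device and only the signature leaves it. It uses the third-party Python cryptography package with ECDSA over P-256 as an illustrative choice; it does not implement the proposed COD protocol or any SIM-card specifics.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Key pair generated and kept on the (simulated) SSCD, e.g. the SIM card
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    document = b"Contract text to be signed"
    signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

    # A relying party verifies with the certified public key; how that key is
    # certified is exactly what a scheme like Certification on Demand addresses.
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))  # raises on failure
    print("signature valid")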

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives like object detection, activity recognition, user-machine interaction and so on. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among other aspects, the most commonly used features, methods, challenges and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    A multimodal smartphone interface for active perception by visually impaired

    The widespread availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, embedded in a hand-held device. In particular, it presents a proof of concept for a multimodal interface to control the orientation of a smartphone's camera, while the device is held by a person, using a combination of vocal messages, 3D sounds and vibrations. The solution, which is to be evaluated experimentally by users, will enable further research in the area of active vision with a human in the loop, with potential application to mobile assistive devices for indoor navigation by visually impaired people.
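
    As a rough illustration of how such multimodal feedback could be selected, the Python sketch below maps a hypothetical signed pan error between the camera's current and desired orientation to a vibration, 3D-sound, or vocal cue. The thresholds and cue names are assumptions for illustration, not the interface described in the paper.

    def orientation_feedback(error_deg: float) -> str:
        """Choose a non-visual cue from the signed pan error (degrees)."""
        if abs(error_deg) < 5.0:
            return "vibrate_short"                                        # on target: confirmation pulse
        if abs(error_deg) < 20.0:
            return "tone_left" if error_deg < 0 else "tone_right"         # 3D sound panned toward the correction
        return "say: turn left" if error_deg < 0 else "say: turn right"   # vocal message for large errors

    for err in (-35.0, -12.0, 3.0, 25.0):
        print(err, "->", orientation_feedback(err))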