111 research outputs found

    Towards a cloud‑based automated surveillance system using wireless technologies

    Cloud Computing can bring multiple benefits to Smart Cities. It permits the easy creation of centralized knowledge bases, enabling multiple embedded systems (such as sensor or control devices) to share a collaborative intelligence. In addition, thanks to its vast computing power, complex tasks can be run on low-spec devices simply by offloading computation to the cloud, with the additional advantage of saving energy. In this work, the cloud's capabilities are exploited to implement and test a cloud-based surveillance system. Using a shared, 3D symbolic world model, different devices have complete knowledge of all the elements, people and intruders in a certain open area or inside a building. The implementation of a volumetric, 3D, object-oriented, cloud-based world model (including semantic information) is, as far as we know, novel. Very simple devices (Orange Pi) can send RGBD streams (from Kinect cameras) to the cloud, where all the processing is distributed and carried out thanks to the cloud's inherent scalability. A proof-of-concept experiment is conducted in this paper in a testing lab with multiple cameras connected to the cloud over 802.11ac wireless technology. Our results show that this kind of surveillance system is feasible today, and trends indicate that it can be improved in the short term to produce a high-performance surveillance system using low-spec devices. In addition, this proof of concept shows that many interesting opportunities and challenges arise, for example, when mobile watch robots and fixed cameras act as a team to carry out complex collaborative surveillance strategies. Ministerio de Economía y Competitividad TEC2016-77785-P; Junta de Andalucía P12-TIC-130

    Biometric Spoofing: A JRC Case Study in 3D Face Recognition

    Based on newly available and affordable off-the-shelf 3D sensing, processing and printing technologies, the JRC has conducted a comprehensive study on the feasibility of spoofing 3D and 2.5D face recognition systems with low-cost, self-manufactured models. This report presents a systematic and rigorous evaluation of the real risk posed by such an attack approach, complemented by a test campaign. The work accomplished and presented in this report covers theories, methodologies, state-of-the-art techniques and evaluation databases, and also aims at providing an outlook into the future of this extremely active field of research. JRC.G.6-Digital Citizen Security

    A smart home environment to support safety and risk monitoring for the elderly living independently

    The elderly prefer to live independently despite their vulnerability to age-related challenges. Constant monitoring is required in cases where the elderly are living alone. The home can be a dangerous environment for the elderly living independently due to adverse events that can occur at any time. The potential risks can be categorised as injury in the home, home environmental risks, and inactivity due to unconsciousness. The main research objective was to develop a Smart Home Environment (SHE) that can support risk and safety monitoring for the elderly living independently. An unobtrusive and low-cost SHE solution was implemented using a Raspberry Pi 3 Model B, a Microsoft Kinect sensor and an Aeotec 4-in-1 Multisensor. The Aeotec Multisensor was used to measure temperature, motion, lighting and humidity in the home. Data from the multisensor was collected using OpenHAB as the smart home operating system. The information was processed on the Raspberry Pi 3, and push notifications were sent when risk situations were detected. An experimental evaluation was conducted to determine the accuracy with which the prototype SHE detected abnormal events; each evaluation script was run five times. The results show that the prototype has an average accuracy, sensitivity and specificity of 94%, 96.92% and 88.93% respectively. The sensitivity shows that the chance of the prototype missing a risk situation is 3.08%, and the specificity shows that the chance of incorrectly classifying a non-risk situation is 11.07%. The prototype does not require any interaction on the part of the elderly. Relatives and caregivers can remotely monitor the elderly person living independently via the mobile application or a web portal. The total cost of the equipment used was below R3000.
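    The reported sensitivity and specificity follow the standard confusion-matrix definitions, and the quoted miss and false-alarm chances are simply their complements. A minimal sketch of that arithmetic, using made-up event counts rather than the study's actual data:

```python
# Confusion-matrix metrics as used to evaluate the prototype SHE.
# The counts in the example below are hypothetical placeholders,
# not figures taken from the study.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of real risk events the system detected (true-positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-risk situations correctly left unflagged (true-negative rate)."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Overall fraction of correct classifications."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical run: 63 detected risks, 2 missed, 40 correct non-alerts, 5 false alarms.
sens = sensitivity(63, 2)       # ~0.9692
spec = specificity(40, 5)       # ~0.8889
miss_rate = 1 - sens            # chance of missing a risk situation
false_alarm_rate = 1 - spec     # chance of flagging a non-risk situation
```

    The paper's "chance of missing a risk situation" (3.08%) is exactly 100% minus the 96.92% sensitivity, and likewise the 11.07% false-classification chance is 100% minus the 88.93% specificity.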

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, there are a number of manufacturing applications involving complex tasks and inconstant components which prohibit the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds, where the robot performs simple tasks that require high repeatability while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system will operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface will require effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components.
    The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI will be developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.

    Raspberry Based Hand Gesture Recognition Using Haar Cascade and Local Binary Pattern Histogram

    Many companies and even public institutions currently use photo-taking for attendance. However, this strategy is still considered ineffective, since employees can fake their attendance by taking their own photos and leaving them at their desks. An alternative that can complement the current face detection method is therefore needed, so that employee attendance can be monitored directly. One such method is hand gesture detection. This research aims to detect hand gestures made by employees to verify whether they really came to work, which makes the chance of manipulation using a photo or fake GPS quite small. For hand gesture recognition, this study utilized the Local Binary Pattern Histogram algorithm. The hand gesture image was first taken using a Raspberry Pi camera and then processed by the device to examine whether it matches the registered ID. The results showed that ID recognition using hand gestures is feasible; the number recognition covers the numbers 1 to 10. The test results showed that over 5 trials, the average time required for reading hand gestures was 9.2 seconds on a laptop and 14.2 seconds on the Raspberry Pi, as the laptop's performance is higher than that of the Raspberry Pi. The system cannot yet distinguish which hand is read first, so numbers composed of the same digits, such as 81 and 18, are considered identical.
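    The Local Binary Pattern step underlying the LBPH recognizer can be sketched in a few lines: each pixel is replaced by an 8-bit code formed by comparing it with its eight neighbours, and the histogram of those codes becomes the texture descriptor that is matched against the registered ID. A minimal pure-Python illustration of the operator (not the authors' implementation, and ignoring refinements such as spatial grids or uniform patterns):

```python
# Basic 3x3 Local Binary Pattern: threshold the 8 neighbours of each
# interior pixel against the centre value and pack the results into an
# 8-bit code, clockwise from the top-left neighbour.

def lbp_code(img, y, x):
    """8-bit LBP code for the interior pixel at (y, x) of a 2-D grey image."""
    center = img[y][x]
    # Neighbour offsets, clockwise starting at the top-left neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels; this is
    the descriptor an LBPH recognizer compares between images."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

    On a uniform patch every neighbour equals the centre, so every interior pixel yields the code 255; textured regions spread mass across the histogram, which is what makes the descriptor discriminative.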

    Biometric antispoofing methods: A survey in face recognition

    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. J. Galbally, S. Marcel and J. Fierrez, "Biometric Antispoofing Methods", IEEE Access, vol. 2, pp. 1530-1552, Dec. 2014. In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by means of presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview on the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality.
    The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases and also aims at providing an outlook into the future of this very active field of research. This work was supported in part by the CAM under Project S2009/TIC-1485, in part by the Ministry of Economy and Competitiveness through the Bio-Shield Project under Grant TEC2012-34881, in part by the TABULA RASA Project under Grant FP7-ICT-257289, in part by the BEAT Project under Grant FP7-SEC-284989 through the European Union, and in part by the Cátedra Universidad Autónoma de Madrid-Telefónica

    Human Computer Interaction and Emerging Technologies

    The INTERACT Conferences are an important platform for researchers and practitioners in the field of human-computer interaction (HCI) to showcase their work. They are organised biennially by the International Federation for Information Processing (IFIP) Technical Committee on Human-Computer Interaction (IFIP TC13), an international committee of 30 member national societies and nine Working Groups. INTERACT is truly international in its spirit and has attracted researchers from several countries and cultures. With an emphasis on inclusiveness, it works to lower the barriers that prevent people in developing countries from participating in conferences. As a multidisciplinary field, HCI requires interaction and discussion among diverse people with different interests and backgrounds. The 17th IFIP TC13 International Conference on Human-Computer Interaction (INTERACT 2019) took place during 2-6 September 2019 in Paphos, Cyprus. The conference was held at the Coral Beach Hotel Resort and was co-sponsored by the Cyprus University of Technology and Tallinn University, in cooperation with ACM and ACM SIGCHI. This volume contains the Adjunct Proceedings of the 17th INTERACT Conference, comprising a series of selected papers from workshops, the Student Design Consortium and the Doctoral Consortium. The volume follows the INTERACT conference tradition of submitting adjunct papers after the main publication deadline, to be published by a University Press with a connection to the conference itself. In this case, both the Adjunct Proceedings Chair of the conference, Dr Usashi Chatterjee, and the lead Editor of this volume, Dr Fernando Loizides, work at Cardiff University, which is the home of Cardiff University Press.