2,156 research outputs found

    Wi-Fi Sensing: Applications and Challenges

    Full text link
    Wi-Fi technology has strong potential in indoor and outdoor sensing applications; several important features make it an appealing option compared to other sensing technologies. This paper presents a survey of different applications of Wi-Fi-based sensing systems, such as elderly people monitoring, activity classification, gesture recognition, people counting, through-the-wall sensing, behind-the-corner sensing, and many others. The challenges and interesting future directions are also highlighted.

    Spatially Aware Computing for Natural Interaction

    Get PDF
    Spatial information refers to the location of an object in a physical or digital world. It also includes the relative position of an object with respect to other objects around it. In this dissertation, three systems are designed and developed, all of which apply spatial information in different fields. The ultimate goal is to increase user-friendliness and efficiency in those applications by utilizing spatial information. The first system is a novel Web page data extraction application, which takes advantage of 2D spatial information to discover structured records in a Web page. The extracted information is useful for re-organizing the layout of a Web page to fit mobile browsing. The second application utilizes the 3D spatial information of a mobile device within a large paper-based workspace to implement interactive paper that combines the merits of paper documents and mobile devices. This application can overlay digital information on top of a paper document based on the location of a mobile device within a workspace. The third application further integrates 3D spatial information with sound detection to realize an automatic camera management system. This application automatically controls multiple cameras in a conference room, and creates an engaging video by intelligently switching camera shots among meeting participants based on their activities. Evaluations have been made of all three applications, and the results are promising. In summary, this dissertation comprehensively explores the usage of spatial information in various applications to improve usability.
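The first system's use of 2D layout is the kind of heuristic that can be sketched in a few lines: elements whose bounding boxes share a left edge and repeat down the page are likely members of one structured record list. The boxes and the alignment threshold below are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Sketch: cluster page elements into repeated records by 2D alignment.
# Elements are (x, y, width, height) bounding boxes; a real system would
# read these from a rendered DOM. Purely illustrative.

def group_by_left_edge(boxes, tol=5):
    """Group boxes whose left edges are within `tol` pixels of each other."""
    groups = {}
    for box in sorted(boxes, key=lambda b: b[0]):
        x = box[0]
        # Reuse an existing group key if one is close enough.
        key = next((k for k in groups if abs(k - x) <= tol), x)
        groups.setdefault(key, []).append(box)
    return groups

def likely_record_columns(boxes, min_repeats=3, tol=5):
    """Columns with at least `min_repeats` aligned boxes suggest a record list."""
    return [col for col in group_by_left_edge(boxes, tol).values()
            if len(col) >= min_repeats]

boxes = [(10, 40, 300, 20), (12, 80, 300, 20), (9, 120, 300, 20),
         (400, 40, 80, 20)]  # three aligned rows plus one outlier
columns = likely_record_columns(boxes)  # -> one column of three boxes
```

A production extractor would also check vertical spacing regularity and visual similarity, but left-edge alignment alone already separates the repeated rows from the outlier here.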

    Multi-sensor fusion for human-robot interaction in crowded environments

    Get PDF
    For challenges associated with the ageing population, robot assistants are becoming a promising solution. Human-Robot Interaction (HRI) allows a robot to understand the intention of humans in an environment and react accordingly. This thesis proposes HRI techniques to facilitate the transition of robots from lab-based research to real-world environments. The HRI aspects addressed in this thesis are illustrated in the following scenario: an elderly person, engaged in conversation with friends, wishes to attract a robot's attention. This composite task consists of many problems. The robot must detect and track the subject in a crowded environment. To engage with the user, it must track their hand movement. Knowledge of the subject's gaze would ensure that the robot doesn't react to the wrong person. Understanding the subject's group participation would enable the robot to respect existing human-human interaction. Many existing solutions to these problems are too constrained for natural HRI in crowded environments. Some require initial calibration or static backgrounds. Others deal poorly with occlusions, illumination changes, or real-time operation requirements. This work proposes algorithms that fuse multiple sensors to remove these restrictions and increase the accuracy over the state-of-the-art. 
The main contributions of this thesis are: a hand and body detection method, with a probabilistic algorithm for their real-time association when multiple users and hands are detected in crowded environments; an RGB-D sensor-fusion hand tracker, which increases position and velocity accuracy by combining a depth-image based hand detector with Monte-Carlo updates using colour images; a sensor-fusion gaze estimation system, combining IR and depth cameras on a mobile robot to give better accuracy than traditional visual methods, without the constraints of traditional IR techniques; and a group detection method, based on sociological concepts of static and dynamic interactions, which incorporates real-time gaze estimates to enhance detection accuracy.
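The "Monte-Carlo updates" in the hand tracker refer to particle filtering; a toy one-dimensional version can illustrate the idea of fusing two measurement sources. The noise model, weights, and measurement values below are invented for illustration and are not the thesis's actual tracker.

```python
import random

# Toy Monte-Carlo (particle filter) update in one dimension, in the spirit
# of fusing a depth-based detection with colour-image evidence. Weighting
# and noise values are illustrative assumptions only.

def update(particles, measurement, noise=0.5):
    """Weight particles by closeness to the measurement, then resample."""
    weights = [1.0 / (abs(p - measurement) + noise) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample with replacement, proportionally to weight.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
particles = update(particles, 4.0)   # e.g. a depth-detector measurement
particles = update(particles, 4.2)   # e.g. a colour-image measurement
estimate = sum(particles) / len(particles)  # concentrates near ~4
```

Each sensor contributes its own weighting step, so complementary modalities (depth for position, colour for appearance) sharpen the same particle set.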

    Robotic Cameraman for Augmented Reality based Broadcast and Demonstration

    Get PDF
    In recent years, a number of large enterprises have gradually begun to use various Augmented Reality technologies to improve how audiences view their products. Among them, the creation of an immersive virtual interactive scene through projection has received extensive attention; this technique is known as projection SAR, short for projection spatial augmented reality. However, because existing projection-SAR systems are immobile and have a limited working range, they are difficult to adopt in daily life. This thesis therefore proposes a technically feasible optimization scheme so that projection SAR can be practically applied to AR broadcasting and demonstrations. Building on the three main techniques required by state-of-the-art projection SAR applications, this thesis presents a novel mobile projection SAR cameraman for AR broadcasting and demonstration. Firstly, by combining a CNN scene-parsing model with multiple contour extractors, the proposed contour extraction pipeline can detect the optimal contour information even in non-HD or blurred images. This algorithm reduces the dependency on high-quality visual sensors and addresses the low contour extraction accuracy of motion-blurred images. Secondly, a plane-based visual mapping algorithm is introduced to overcome the difficulty of visual mapping in low-texture scenes. Finally, a complete process for designing the projection SAR cameraman robot is introduced. This part solves three main problems in mobile projection-SAR applications: (i) a new method for marking contours on the projection model is proposed to replace the model rendering process; by combining contour features and geometric features, users can easily identify objects on a colourless model. (ii) a camera initial pose estimation method is developed based on visual tracking algorithms, which can register the start pose of the robot to the whole scene in Unity3D.
(iii) a novel data transmission approach is introduced to establish a link between the external robot and the robot in the Unity3D simulation workspace. This lets the robotic cameraman simulate its trajectory in the Unity3D simulation workspace and project the correct virtual content. The proposed mobile projection SAR system adds to both the academic value and the practicality of existing projection SAR techniques. First, it solves the problem of limited working range: when the system runs in a large indoor scene, it can follow the user and project dynamic interactive virtual content automatically instead of requiring more visual sensors. Second, it creates a more immersive experience for the audience, since the user can employ more body gestures and richer virtual-real interactive play. Lastly, a mobile system does not require up-front frameworks, is cheaper, and offers the public an innovative choice for indoor broadcasting and exhibitions.
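Registering the robot's start pose to the scene, as in contribution (ii), amounts to a rigid transform from the robot's local frame into the scene frame. A 2D simplification can be sketched as follows; the pose values are made-up numbers, and the real system works in Unity3D's 3D coordinates.

```python
import math

# Sketch: map points from the robot's local frame into the scene frame,
# given a registered start pose (x, y, heading). A 2D simplification of
# pose registration in a 3D engine such as Unity3D.

def to_scene(point, start_pose):
    px, py = point
    sx, sy, theta = start_pose
    # Rotate by the start heading, then translate by the start position.
    wx = sx + px * math.cos(theta) - py * math.sin(theta)
    wy = sy + px * math.sin(theta) + py * math.cos(theta)
    return wx, wy

start = (2.0, 3.0, math.pi / 2)            # registered at (2, 3), facing +y
scene_point = to_scene((1.0, 0.0), start)  # one metre ahead of the robot
```

Once the start pose is registered, every subsequent odometry reading can be pushed through the same transform, which is what keeps the simulated cameraman and the projected content aligned.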

    Research on a modified RANSAC and its applications to ellipse detection from a static image and motion detection from active stereo video sequences

    Get PDF
    Degree system: new; Report number: Kō 3091; Degree type: Doctor of International Information and Telecommunication Studies; Conferral date: 2010/2/24; Waseda degree number: Shin 535

    Programming Robots by Demonstration using Augmented Reality

    Get PDF
    The world is living through the fourth industrial revolution, Industry 4.0, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some tasks are too complex or too expensive to be fully automated; it would be more efficient if the machine could work with the human, not only sharing the same workspace but also acting as a useful collaborator. A possible solution to this problem lies in human-robot interaction systems: understanding the applications where they can usefully be deployed and the challenges they face. In this context, better interaction between machines and operators can bring multiple benefits, such as less, better, and easier training, a safer environment for the operator, and the capacity to solve problems more quickly. The focus of this dissertation is relevant because it is necessary to learn and implement the technologies that contribute most to simpler and more efficient work in industry. This dissertation proposes the development of an industrial prototype of a human-machine interaction system based on Extended Reality (XR), whose objective is to enable an industrial operator without any programming experience to program a collaborative robot using the Microsoft HoloLens 2. The system is divided into two parts: the tracking system, which records the operator's hand movement, and the translator of the programming-by-demonstration system, which builds the program sent to the robot to execute the task. The monitoring and supervision system runs on the Microsoft HoloLens 2, programmed with the Unity platform and Visual Studio. The core of the programming-by-demonstration system was developed in Robot Operating System (ROS). The robots included in this interface are the Universal Robots UR5 (collaborative robot) and the ABB IRB 2600 (industrial robot). Moreover, the interface was built so that other robots can easily be added.
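One concrete step in programming by demonstration is turning the dense stream of tracked hand positions into a sparse list of robot waypoints. A minimal distance-based thinning pass can sketch this; the 5 cm threshold and the sample coordinates are illustrative assumptions, not values from the dissertation.

```python
import math

# Sketch: thin a dense stream of tracked hand positions (metres) into
# sparse robot waypoints, keeping a sample only when it has moved far
# enough from the last kept one. Threshold is an illustrative assumption.

def thin_waypoints(samples, min_dist=0.05):
    kept = [samples[0]]
    for p in samples[1:]:
        if math.dist(p, kept[-1]) >= min_dist:
            kept.append(p)
    return kept

samples = [(0.00, 0.0, 0.0), (0.01, 0.0, 0.0), (0.06, 0.0, 0.0),
           (0.07, 0.0, 0.0), (0.12, 0.0, 0.0)]
waypoints = thin_waypoints(samples)  # jitter below 5 cm is dropped
```

Downsampling like this keeps the demonstrated path's shape while producing a target list short enough to send to a motion planner.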

    Improving the Security of Mobile Devices Through Multi-Dimensional and Analog Authentication

    Get PDF
    Mobile devices are ubiquitous in today's society, and the usage of these devices for secure tasks like corporate email, banking, and stock trading grows by the day. The first, and often only, defense against attackers who get physical access to the device is the lock screen: the authentication task required to gain access. To date, mobile devices have languished under insecure authentication schemes like PINs, Pattern Unlock, and biometrics, or slow ones like alphanumeric passwords. This work addresses the design and creation of five proof-of-concept authentication schemes that seek to increase the security of mobile authentication without compromising memorability or usability. These schemes demonstrate Multi-Dimensional Authentication, a method of using data from unrelated dimensions of information, and Analog Authentication, a method utilizing continuous rather than discrete information. Security analysis shows that these schemes can be designed to exceed the security strength of alphanumeric passwords, resist shoulder-surfing in all but the worst-case scenarios, and offer significantly fewer hotspots than existing approaches. Usability analysis, including data collected from user studies of each of the five schemes, shows promising entry times, in some cases on par with existing PIN or Pattern Unlock approaches, and comparable qualitative ratings to existing approaches. Memorability results demonstrate that the psychological advantages these schemes exploit can lead to real-world improvements in recall, in some instances near-perfect recall after two weeks, significantly exceeding the recall rates of similarly secure alphanumeric passwords.
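Claims about "exceeding the security strength of alphanumeric passwords" are usually stated in terms of keyspace entropy, which is straightforward to compute (this is the standard formula, not a calculation taken from the thesis):

```python
import math

# Keyspace entropy in bits: log2(symbols ** length) = length * log2(symbols).
# A scheme exceeds an 8-character alphanumeric password when its keyspace
# entropy is larger than that password's ~47.6 bits.

def entropy_bits(symbols, length):
    return length * math.log2(symbols)

pin4 = entropy_bits(10, 4)      # 4-digit PIN: ~13.3 bits
alnum8 = entropy_bits(62, 8)    # 8 chars of a-z, A-Z, 0-9: ~47.6 bits
```

The gap between those two numbers is why a multi-dimensional scheme only needs a modest number of extra information dimensions per entry to close on, and then pass, alphanumeric strength.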

    Gestural Human-Machine-Interface (HMI) for an autonomous wheelchair for kids

    Get PDF
    This Master's thesis builds on a mobility support platform for children. The general architecture of the platform is described in previous works. The platform consists of different nodes covering all functions: power supply, power electronics, control and navigation, interaction with the environment, and the human-machine interface. This thesis focuses on the PC node, which is based on a computer running Linux and characterised by the use of Robot Operating System (ROS). On this basis sits the gestural human-machine interface developed in this work. An Intel Realsense D435 RGBD camera is integrated into the existing system, as the application requires both RGB and depth images. The information provided by the camera is accessed through the ROS packages offered by the camera manufacturer. Person detection is then carried out using a neural network trained for object detection based on Tensorflow. From the network's detections, the position of each detected person is obtained, transforming the position in the image plane to a location in the application's virtual environment. In addition, filtering and tracking techniques are applied to improve this localisation. Finally, a gesture recognition system is implemented that allows easy selection of the user who wants to interact with the platform and the execution of a given application. In this work, the chosen application is a navigation strategy called Follow Me, in which the platform follows the user as it navigates the environment. The application is integrated within the ROS environment, making it compatible with the rest of the platform's functions. Máster Universitario en Ingeniería Industrial (M141
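Transforming a detected person's image-plane position into a 3D location is, with a depth camera, a pinhole back-projection. The sketch below shows the standard model; the intrinsics and pixel values are placeholders, not the D435's actual calibration.

```python
# Sketch: back-project a pixel with a depth reading into a 3D point using
# the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
# Intrinsics here are placeholder values, not real D435 calibration.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Hypothetical person detection at pixel (420, 240), two metres away.
point = deproject(420, 240, 2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

In practice the depth value is read from the depth image at the detection's centre pixel, and the resulting 3D point is what the filtering and tracking stage smooths over time.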
