
    Autonomous Navigation for an Unmanned Aerial Vehicle by the Decomposition Coordination Method

    This paper introduces a new approach to the navigation problem of Unmanned Aerial Vehicles (UAVs) by studying their rotational and translational dynamics and then solving the nonlinear model with the Decomposition-Coordination method. The objective is to reach a destination goal by means of an autonomously computed optimal path obtained through an optimal control sequence. Solving such complex systems often requires a great amount of computation. However, the approach considered herein is based on the Decomposition-Coordination principle, which allows the nonlinearity to be treated at a local level, thus offering a low computing time. The stability of the method is discussed, with sufficient conditions for convergence. A numerical application is given to consolidate the theoretical results.
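    The paper's own formulation is not reproduced here, but the decomposition-coordination principle it invokes can be illustrated on a toy problem: split a coupled problem into local subproblems, each handling its own (possibly nonsmooth) term, and let a coordination variable enforce consistency. The sketch below uses an ADMM-style iteration on min (x-3)^2 + |z| subject to x = z; all names and values are illustrative, not the paper's code.

```python
# Decomposition-coordination sketch (ADMM-style) for
#   minimize (x - 3)^2 + |z|   subject to x = z
# Each subproblem is solved locally in closed form;
# the dual variable u coordinates them toward agreement.

def solve(rho=1.0, iters=200):
    x = z = u = 0.0
    for _ in range(iters):
        # local subproblem 1: smooth quadratic term
        x = (2 * 3.0 + rho * (z - u)) / (2.0 + rho)
        # local subproblem 2: nonsmooth term, soft-thresholding
        v = x + u
        z = max(abs(v) - 1.0 / rho, 0.0) * (1.0 if v >= 0 else -1.0)
        # coordination step: penalise disagreement between x and z
        u += x - z
    return x, z

x, z = solve()
print(round(x, 3), round(z, 3))  # both approach the optimum 2.5
```

    Treating each nonlinearity inside its own subproblem is what keeps the per-iteration cost low, which mirrors the "local level" argument made in the abstract.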

    Detection and modelling of staircases using a wearable depth sensor

    In this paper we deal with the perception task of a wearable navigation assistant. Specifically, we have focused on the detection of staircases because of the important role they play in indoor navigation, due to the multi-floor reaching possibilities they bring and the lack of security they cause, especially for those who suffer from visual deficiencies. We use the depth-sensing capacities of modern RGB-D cameras to segment and classify the different elements that make up the scene, and then carry out the stair detection and modelling algorithm to retrieve all the information that might interest the user, i.e., the location and orientation of the staircase, the number of steps, and the step dimensions. Experiments prove that the system is able to perform in real time and works even under partial occlusions of the stairway.
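    The paper's full pipeline segments planes from depth data; one core idea behind its step modelling can be sketched independently: once points on horizontal surfaces are isolated, individual steps fall out of clustering the point heights. The snippet below is a minimal illustration on synthetic data, not the authors' implementation.

```python
import numpy as np

# Synthetic stand-in for depth points on horizontal surfaces:
# three steps of ~0.18 m rise, each observed with sensor noise.
rng = np.random.default_rng(0)
step_height = 0.18  # metres, a typical riser height
heights = np.concatenate([
    rng.normal(i * step_height, 0.005, 200) for i in range(3)
])

def count_steps(heights, gap=0.05):
    """Sort the heights and start a new step wherever consecutive
    samples jump by more than `gap` metres."""
    h = np.sort(heights)
    return int(1 + np.sum(np.diff(h) > gap))

n = count_steps(heights)
print(n)  # 3 steps recovered from the synthetic cloud
```

    A real system would first fit the floor plane and rectify the scene so that "height" is measured along gravity, as the abstract describes.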

    Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics

    Drones are vital for urban emergency search and rescue (SAR) due to the challenges of navigating dynamic environments with obstacles such as buildings and wind. This paper presents a method that combines multi-objective reinforcement learning (MORL) with a convolutional autoencoder to improve drone navigation in urban SAR. The approach uses MORL to achieve multiple goals and the autoencoder for cost-effective wind simulations. By utilizing imagery data of urban layouts, the drone can autonomously make navigation decisions, optimize paths, and counteract wind effects without traditional sensors. Tested on a New York City model, this method enhances drone SAR operations in complex urban settings.
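    The paper's MORL/autoencoder pipeline is not reproduced here, but the multi-objective piece can be illustrated with the simplest scalarisation scheme: collapse a per-action vector of objective values (e.g. goal progress vs. wind cost) into a scalar with mission-specific weights. This is a hedged sketch with made-up numbers, not the paper's algorithm.

```python
import numpy as np

# Illustrative multi-objective action selection by linear
# scalarisation. Each candidate action has a vector of objective
# values: [goal progress, -wind cost].
q_values = np.array([
    [0.9, -0.8],   # fast but fights the wind
    [0.6, -0.1],   # slower, wind-friendly
    [0.2, -0.05],  # very conservative
])

def select_action(q_values, weights):
    """Collapse each objective vector to a scalar and take the
    argmax; `weights` encodes the mission's priorities."""
    return int(np.argmax(q_values @ weights))

a_balanced = select_action(q_values, np.array([1.0, 1.0]))  # wind matters
a_speedy = select_action(q_values, np.array([1.0, 0.2]))    # speed first
print(a_balanced, a_speedy)  # 1 0
```

    Changing the weights flips the chosen action, which is the basic mechanism that lets a MORL agent trade goal progress against wind effort.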

    PlaceRaider: Virtual Theft in Physical Spaces with Smartphones

    As smartphones become more pervasive, they are increasingly targeted by malware. At the same time, each new generation of smartphone features increasingly powerful onboard sensor suites. A new strain of sensor malware has been developing that leverages these sensors to steal information from the physical environment (e.g., researchers have recently demonstrated how malware can listen for spoken credit card numbers through the microphone, or feel keystroke vibrations using the accelerometer). Yet the possibilities of what malware can see through a camera have been understudied. This paper introduces a novel visual malware called PlaceRaider, which allows remote attackers to engage in remote reconnaissance and what we call virtual theft. Through completely opportunistic use of the phone's camera and other sensors, PlaceRaider constructs rich, three-dimensional models of indoor environments. Remote burglars can thus download the physical space, study the environment carefully, and steal virtual objects from it (such as financial documents, information on computer monitors, and personally identifiable information). Through two human-subject studies we demonstrate the effectiveness of using mobile devices as powerful surveillance and virtual theft platforms, and we suggest several possible defenses against visual malware.

    Targeted Learning: A Hybrid Approach to Social Robot Navigation

    Empowering robots to navigate in a socially compliant manner is essential for the acceptance of robots moving in human-inhabited environments. Previously, roboticists have developed classical navigation systems with decades of empirical validation to achieve safety and efficiency. However, the many complex factors of social compliance make classical navigation systems hard to adapt to social situations, where no amount of tuning enables them to be both safe (people are too unpredictable) and efficient (the frozen robot problem). With recent advances in deep learning approaches, the common reaction has been to entirely discard classical navigation systems and start from scratch, building a completely new learning-based social navigation planner. In this work, we find that this reaction is unnecessarily extreme: using a large-scale real-world social navigation dataset, SCAND, we find that classical systems can be used safely and efficiently in a large number of social situations (up to 80%). We therefore ask if we can rethink this problem by leveraging the advantages of both classical and learning-based approaches. We propose a hybrid strategy in which we learn to switch between a classical geometric planner and a data-driven method. Our experiments on both SCAND and two physical robots show that the hybrid planner achieves better social compliance across a variety of metrics, compared to using either the classical or the learning-based approach alone.
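    The switching strategy described above can be sketched with stub planners and a stand-in for the learned switch: run the classical planner by default, and hand over to the learned policy only when the scene looks too "social" for it. Everything here (the difficulty score, the threshold, the planner names) is illustrative, not the paper's code.

```python
# Sketch of the hybrid idea: classical planner by default,
# learned planner when the scene exceeds a social-difficulty
# threshold. The difficulty score stands in for a learned switch.

def classical_planner(scene):
    return "geometric_path"

def learned_planner(scene):
    return "learned_path"

def social_difficulty(scene):
    """Stand-in for a learned classifier; here, just crowd density."""
    return scene["num_people"] / max(scene["free_space_m2"], 1e-6)

def hybrid_plan(scene, threshold=0.5):
    if social_difficulty(scene) > threshold:
        return learned_planner(scene)
    return classical_planner(scene)

p_easy = hybrid_plan({"num_people": 1, "free_space_m2": 20})
p_crowded = hybrid_plan({"num_people": 8, "free_space_m2": 10})
print(p_easy, p_crowded)  # geometric_path learned_path
```

    The design point matches the abstract's finding: since the classical planner already handles most situations well, the learned component only needs to cover the remaining hard cases.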

    Web-based indoor positioning system using QR-codes as markers

    Location tracking has become an important tool in our daily life. Outdoor location tracking is readily supported by GPS; however, the technology for tracking a smart-device user's indoor position is not at the same level of maturity. AR technology can track a user's indoor location by scanning an AR marker with a smart device, but due to several limitations (capacity, error tolerance, etc.) AR markers are not widely adopted and therefore do not serve as good tracking markers. This paper investigates the research question of whether a QR code can replace the AR marker as the tracking marker for detecting a smart-device user's indoor position. The paper addresses this question by reviewing the background of QR code and AR technology. According to this review, the QR code is a suitable choice for a tracking marker: compared to the AR marker, the QR code has better capacity, higher error tolerance, and wider adoption. Moreover, a web application has been implemented as an experiment to support the research question. It uses a QR code as a tracking marker for AR technology, which builds a 3D model on the QR code; the position of the user can then be estimated from the 3D model. The paper discusses the experimental results by comparing a pre-fixed target user position with the measured position for three different QR code samples. The limitations of the experiment and ideas for improvement are also discussed. According to the experiment, the research question is answered affirmatively: a combination of QR code and AR technology can deliver a satisfying indoor location result for a smart-device user.
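    The thesis' web implementation is not reproduced here, but the geometric core of marker-based positioning can be sketched with the pinhole camera model: if the QR code's physical side length is known, the user's distance to it follows from the code's apparent size in the image. The focal length and marker sizes below are assumed example values.

```python
# Illustrative pinhole-camera sketch (not the system described
# above): distance to a marker of known physical size from its
# apparent size in pixels.

def distance_to_marker(focal_px, marker_m, marker_px):
    """distance = f * real_size / apparent_size (pinhole model)."""
    return focal_px * marker_m / marker_px

# Assumed values: 800 px focal length, 10 cm QR code that spans
# 80 px in the captured image.
d = distance_to_marker(focal_px=800.0, marker_m=0.10, marker_px=80.0)
print(d)  # 1.0 metre to the QR code
```

    A full system would recover the complete 6-DoF pose from the four QR corner points rather than just the range, but the same known-size principle drives it.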

    Detection and modelling of staircases with an RGB-D sensor for personal assistance

    The ability to move effectively through the environment comes naturally to most people, but it is not easy under some circumstances, such as for people with visual impairments or when moving through especially complex or unknown environments. Our long-term goal is to create a wearable augmented-assistance system to help those who face such circumstances, aided by cameras integrated into the assistant. In this work we have focused on the detection module, leaving the remaining modules, such as the interface between the detection and the user, to other works. A personal guidance system must keep its user away from hazards, but it should also be able to recognise certain features of the environment in order to interact with them. In this work we address the detection of one of the most common structures a person may have to use in daily life: staircases. Finding staircases is doubly beneficial, since it not only helps avoid possible falls but also indicates to the user the possibility of reaching another floor of the building. To achieve this we use an RGB-D sensor, placed on the subject's chest, which simultaneously captures synchronised colour and depth information of the scene. The algorithm takes advantage of the depth data to find the floor and thus orient the scene as it appears to the user. A segmentation and classification process then extracts the segments corresponding to "floor", "walls", and "horizontal planes", plus a residual class whose members are all considered "obstacles".
    Next, the stair-detection algorithm determines whether the horizontal planes are steps forming a staircase and orders them hierarchically. If a staircase has been found, the modelling algorithm provides all the information useful to the user: how the staircase is positioned with respect to them, how many steps are visible, and their approximate dimensions. In short, this work presents a new algorithm for assisting human navigation in indoor environments, whose main contribution is a stair detection and modelling algorithm that determines all the information most relevant to the subject. Experiments have been carried out on video recordings in different environments, achieving good results in both accuracy and response time. Our results have also been compared with those of other publications, showing that we not only match the efficiency of the state of the art but also contribute a series of improvements. In particular, our algorithm is the first capable of obtaining the dimensions of the stairs even with obstacles partially occluding the view, such as people going up or down. This work resulted in a publication accepted at the Second Workshop on Assistive Computer Vision and Robotics at ECCV, presented on 12 September 2014 in Zurich, Switzerland.

    SOCIALGYM 2.0: Simulator for Multi-Agent Social Robot Navigation in Shared Human Spaces

    We present SocialGym 2, a multi-agent navigation simulator for social robot research. Our simulator models multiple autonomous agents, replicating real-world dynamics in complex environments, including doorways, hallways, intersections, and roundabouts. Unlike traditional simulators that concentrate on single robots with basic kinematic constraints in open spaces, SocialGym 2 employs multi-agent reinforcement learning (MARL) to develop optimal navigation policies for multiple robots with diverse, dynamic constraints in complex environments. Built on the PettingZoo MARL library and the Stable Baselines3 API, SocialGym 2 offers an accessible Python interface that integrates with a navigation stack through ROS messaging. SocialGym 2 can be easily installed and is packaged in a Docker container, and it provides the capability to swap and evaluate different MARL algorithms, as well as to customize observation and reward functions. We also provide scripts that allow users to create their own environments, and we have conducted benchmarks using various social navigation algorithms, reporting a broad range of social navigation metrics. Project hosted at: https://amrl.cs.utexas.edu/social_gym/index.html
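    SocialGym 2 builds on PettingZoo's multi-agent interface; the general shape of such an interface (dict-keyed observations, rewards, and done flags per agent) can be shown with a dependency-free toy environment. This stub is illustrative only and is not SocialGym 2's actual API.

```python
# Dependency-free stub showing the shape of a multi-agent loop
# in the style of PettingZoo's parallel API (dicts keyed by agent).

class ToyMultiAgentEnv:
    def __init__(self, n_agents=2, goal=5):
        self.agents = [f"robot_{i}" for i in range(n_agents)]
        self.goal = goal

    def reset(self):
        self.pos = {a: 0 for a in self.agents}
        return dict(self.pos)

    def step(self, actions):
        # actions: dict of agent -> +1/0/-1 move along a corridor
        for a, act in actions.items():
            self.pos[a] += act
        obs = dict(self.pos)
        rewards = {a: 1.0 if self.pos[a] == self.goal else 0.0
                   for a in self.agents}
        dones = {a: self.pos[a] == self.goal for a in self.agents}
        return obs, rewards, dones

env = ToyMultiAgentEnv()
obs = env.reset()
for _ in range(5):
    obs, rewards, dones = env.step({a: 1 for a in env.agents})
print(obs, rewards)  # both agents reach the goal after 5 steps
```

    A MARL trainer such as those wrapped by Stable Baselines3 consumes exactly this kind of per-agent dict flow, which is what makes swapping algorithms in such a simulator straightforward.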

    Accuracy and precision of agents orientation in an indoor positioning system using multiple infrastructure lighting spotlights and a PSD sensor

    In indoor localization there are applications in which the orientation of the agent to be located is as important as its position. In this paper we present the results of orientation estimation from a local positioning system based on position-sensitive device (PSD) sensors and the visible light emitted by the illumination of the room in which the agent is located. The orientation estimation requires that the PSD sensor receive signals from either 2 or 4 light sources simultaneously. As shown in the article, the error in determining the rotation angle of the agent with the on-board sensor is less than 0.2° for two emitters. With 4 light sources, all three Euler rotation angles are determined, with mean measurement errors smaller than 0.35° for the x- and y-axes and 0.16° for the z-axis. The accuracy of the measurement has been evaluated experimentally in a room with a 2.5 m-high ceiling, over an area of 2.2 m2, using geodetic measurement tools to establish the reference ground-truth values. Funding: Junta de Comunidades de Castilla-La Mancha.
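    The two-emitter case described above admits a simple geometric sketch: with two ceiling emitters whose baseline direction in the room frame is known, the sensor's rotation about the vertical axis is the angle between the imaged spot baseline and that known direction. The function and values below are an illustration under that assumption, not the paper's estimator.

```python
import math

# Illustrative yaw estimation from two light spots on a planar
# sensor: the angle of the spot baseline minus the known emitter
# baseline angle in the room frame gives the sensor's rotation.

def yaw_from_spots(spot_a, spot_b, baseline_angle_room):
    dx = spot_b[0] - spot_a[0]
    dy = spot_b[1] - spot_a[1]
    return math.atan2(dy, dx) - baseline_angle_room

# Emitters lie along the room's x-axis (baseline angle 0); their
# spots appear rotated by 30 degrees on the sensor.
ang = math.radians(30)
spot_a = (0.0, 0.0)
spot_b = (math.cos(ang), math.sin(ang))
yaw = yaw_from_spots(spot_a, spot_b, 0.0)
print(round(math.degrees(yaw), 1))  # 30.0
```

    Recovering all three Euler angles, as the 4-emitter configuration in the paper does, additionally requires the out-of-plane geometry of the spot pattern rather than a single in-plane angle.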