
    Distributed and adaptive location identification system for mobile devices

    Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterpart. It would be of great convenience to mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack this capability and potentially face several critical technical challenges, including increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user-privacy concerns. To this end, this paper presents a novel mechanism with the potential to overcome most (if not all) of the abovementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a blind mobile device can potentially utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free: it uses the received signal strength to periodically and collaboratively measure and update the radio-frequency characteristics of the operating environment and thereby estimate the distances to the reference nodes. The blind device then uses trilateration to identify its own location, similar to a GPS-based system. Simulation and empirical testing ascertained that the proposed approach can potentially be the core of future localization systems for indoor and GPS-obstructed environments.
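The RSS-to-distance and trilateration steps described in this abstract can be sketched as follows. This is a minimal illustration, assuming the standard log-distance path-loss model and a linearized least-squares trilateration; the parameter values (`rss_at_1m`, `path_loss_exp`) and function names are illustrative, not the paper's actual implementation.

```python
import math

def rss_to_distance(rss_dbm, rss_at_1m=-40.0, path_loss_exp=2.0):
    # Log-distance path-loss model: d = 10 ** ((P0 - RSS) / (10 * n)),
    # where P0 is the RSS at 1 m and n is the path-loss exponent.
    # Both parameters are placeholders; the paper estimates them online.
    return 10 ** ((rss_at_1m - rss_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    # Linearize the circle equations by subtracting the last anchor's
    # circle, then solve the resulting 2-unknown least-squares system.
    (x3, y3), d3 = anchors[-1], distances[-1]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[:-1], distances[:-1]):
        rows.append((2 * (x3 - xi), 2 * (y3 - yi)))
        rhs.append(di * di - d3 * d3 - xi * xi + x3 * x3 - yi * yi + y3 * y3)
    # Normal equations for the 2x2 system.
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * b for r, b in zip(rows, rhs))
    t2 = sum(r[1] * b for r, b in zip(rows, rhs))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With three or more reference nodes and exact distances this recovers the position exactly; with noisy RSS-derived distances it returns the least-squares estimate.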

    Exploring the Use of Wearables to Enable Indoor Navigation for Blind Users

    One of the challenges that people with visual impairments (VI) have to confront daily is navigating independently through foreign or unfamiliar spaces. Navigating through unfamiliar spaces without assistance is very time consuming and leads to lower mobility. Especially in indoor environments, where the use of GPS is impossible, this task becomes even harder. However, advancements in mobile and wearable computing pave the path to new, inexpensive assistive technologies that can make the lives of people with VI easier. Wearable devices have great potential for assistive applications for users who are blind, as they typically feature a camera and support hands- and eye-free interaction. Smart watches and heads-up displays (HUDs), in combination with smartphones, can provide a basis for the development of advanced algorithms capable of providing inexpensive solutions for navigation in indoor spaces. New interfaces are also introduced, making the interaction between users who are blind and mobile devices more intuitive. This work presents a set of new systems and technologies created to help users with VI navigate indoor environments. The first system presented is an indoor navigation system for people with VI that operates by using sensors found in mobile devices and virtual maps of the environment. The second system presented helps users navigate large open spaces with minimum veering. Next, a study is conducted to determine the accuracy of pedometry based on different body placements of the accelerometer sensors. Finally, a gesture detection system is introduced that facilitates communication between the user and mobile devices by using sensors in wearable devices.
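The pedometry study mentioned above hinges on detecting steps in accelerometer traces. A minimal threshold-crossing step counter is sketched below, assuming samples of total acceleration in m/s²; the threshold and debounce gap are illustrative placeholders, not values from the study.

```python
import math

def count_steps(accel, threshold=11.0, min_gap=3):
    # accel: list of (ax, ay, az) samples in m/s^2.
    # A step is counted on each upward crossing of `threshold` by the
    # acceleration magnitude, with at least `min_gap` samples between
    # consecutive steps to debounce a single heel strike.
    steps = 0
    last_step = -min_gap
    prev_mag = 0.0
    for i, (ax, ay, az) in enumerate(accel):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag >= threshold and prev_mag < threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
        prev_mag = mag
    return steps
```

In practice the usable threshold depends heavily on sensor placement (wrist, pocket, head-mounted), which is exactly the variable the study examines.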

    The Role of Situation Awareness Metrics in the Assessment of Indoor Orientation Assistive Technologies that Aid Blind Individuals in Unfamiliar Indoor Environments

    The importance of raising users' situation awareness has proven to be an important factor in the successful use of systems that involve mission-critical tasks. Indoor Orientation Assistive Technology (OAT) that supports blind individuals is one such system that needs to be oriented toward supporting users' situation awareness: in the tasks this system involves, blind individuals try to maintain their spatial understanding of the environment. Current evaluation methods for Orientation Assistive Technology that aids blind travelers within indoor environments rely on performance metrics. When enhancing such systems, evaluators conduct qualitative studies to learn where to focus their efforts. The main purpose of this thesis is to investigate the use of an objective method to assess blind travelers' situation awareness when traveling through unfamiliar indoor environments. We investigate the use of in-task probes using the Situation Awareness Global Assessment Technique (SAGAT) and a post-task self-reported questionnaire using the Situation Awareness Rating Technique (SART). The goal of this metric is to provide an objective method that can highlight design areas needing improvement when evaluating such systems. We also investigate the relationship between users' situation awareness and their confidence, satisfaction, and stress levels.
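For reference, the SART questionnaire mentioned above combines three rated dimensions into a single score. A minimal sketch of the standard combination follows; the 1-7 rating scale is the common convention in the SART literature, and this is not code from the thesis itself.

```python
def sart_score(demand, supply, understanding):
    # SART's standard combination of its three aggregate dimensions,
    # each typically rated on a 1-7 scale:
    #   SA = Understanding - (Demand - Supply)
    # Higher attentional demand lowers the score; greater attentional
    # supply and better understanding raise it.
    return understanding - (demand - supply)
```

SAGAT, by contrast, is scored from the accuracy of answers to freeze-probe questions, so it has no comparable closed-form formula.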

    Acoustic Echo Estimation using the model-based approach with Application to Spatial Map Construction in Robotics


    Cooperative strategies for the detection and localization of odorants with robots and artificial noses

    This research work addresses the design of a robotic platform oriented towards the implementation of bio-inspired cooperative search strategies. In particular, the design processes of both the electronics and the hardware have been focused on the real-world validation of algorithms capable of tackling search problems with uncertainty, such as the search for odor sources with spatio-temporal variability. These kinds of problems can be solved more efficiently with swarms formed by a considerable number of robots, and thus the proposed platform makes use of low-cost components. This has been possible by combining standardized elements (such as the Arduino controller board and other integrated sensors) with custom parts that can be manufactured with a 3D printer, following the open-source hardware philosophy. The design requirements include energy efficiency (in order to maximize the working time of the robots), positioning capability within the search environment, and multi-sensor integration (incorporating an artificial nose; luminosity, distance, humidity, and temperature sensors; and an electronic compass). The use of an efficient wireless communication strategy based on ZigBee is also addressed. The developed system, named GNBot, has been validated both for its energy efficiency and for its combined capabilities of autonomous spatial positioning and detection of ethanol-based odor sources.
    The presented platform (formed by the GNBot, its GNBoard electronics, and the software abstraction layer built in Python) will thus simplify the implementation and evaluation of various strategies for the detection, search, and monitoring of odorants, with conveniently standardized robot swarms equipped with artificial noses and other multimodal sensors.

    Development of a ground robot for indoor SLAM using Low‐Cost LiDAR and remote LabVIEW HMI

    The simultaneous localization and mapping (SLAM) problem is crucial to autonomous navigation and robot mapping. The main purpose of this thesis is to develop a ground robot that implements SLAM to test the performance of the low-cost RPLiDAR A1M8 by DFRobot. The HectorSLAM package, available in ROS, was used with a Raspberry Pi to implement SLAM and build maps. These maps are sent to a remote desktop via TCP/IP communication to be displayed on a LabVIEW HMI, where the user can also control the robot. The LabVIEW HMI, and the project in its entirety, is intended to be as easy as possible for a layman to use, with many processes automated to make this possible. The quality of the maps created by HectorSLAM and the RPLiDAR was evaluated both qualitatively and quantitatively to determine how useful the low-cost LiDAR can be for this application. It is hoped that the apparatus developed in this project will be used with drones in the future for 3D mapping.
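HectorSLAM's internals go well beyond an abstract, but the core map-building idea it relies on, ray-casting each LiDAR beam into an occupancy grid so that cells along the beam are marked free and the beam endpoint occupied, can be sketched generically as below. This is an illustration of the technique, not the HectorSLAM algorithm itself; the sparse-dict grid representation and parameter names are assumptions.

```python
import math

def bresenham(x0, y0, x1, y1):
    # Integer grid cells on the line from (x0, y0) to (x1, y1).
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def update_grid(grid, pose, scan, resolution=0.05):
    # grid: dict mapping (ix, iy) -> occupancy in {0 free, 1 occupied}.
    # pose: (x, y, theta) of the robot in meters/radians.
    # scan: list of (beam_angle, range) pairs from the LiDAR.
    x, y, th = pose
    ix0, iy0 = int(x / resolution), int(y / resolution)
    for angle, rng in scan:
        hx = x + rng * math.cos(th + angle)
        hy = y + rng * math.sin(th + angle)
        ix1, iy1 = int(hx / resolution), int(hy / resolution)
        for cell in bresenham(ix0, iy0, ix1, iy1)[:-1]:
            grid[cell] = 0        # cells traversed by the beam are free
        grid[(ix1, iy1)] = 1      # the beam endpoint hit an obstacle
    return grid
```

A real implementation (HectorSLAM included) accumulates log-odds per cell instead of overwriting with hard 0/1 values, and must first estimate the pose by scan matching; both are omitted here for brevity.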

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or older people, due to low vision, cognitive impairments, or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues like graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but it is necessary to understand the limitations of existing data representations first. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace, or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate, and more. By creating innovative and inclusive user feedback, users will be more likely to engage and interact with new devices and their own data.

    Guest Orientation, Assistance, and Telepresence Robot

    The project focused on a mobile research platform with the autonomous navigation components and sensors vital to autonomous interaction with its environment. The goal of this project was to create such a mobile robotic platform, which would in turn be capable of acting as a fully autonomous tour guide for the WPI campus. The project combined the robust capabilities of a Segway Robotic Mobility Platform with the cutting-edge adaptability of the Robot Operating System software framework. The robot will work in conjunction with school staff to provide video tour information as part of an enhanced tour experience. The project is a highly visible representation of WPI's unique MQP program and its ability to prepare engineers capable of solving real-world problems.