118 research outputs found

    Real-time indoor assistive localization with mobile omnidirectional vision and cloud GPU acceleration

    Full text link
    In this paper we propose a real-time assistive localization approach to help blind and visually impaired people navigate an indoor environment. The system consists of a mobile vision front end with a portable panoramic lens mounted on a smartphone, and a remote image-feature database of the scene on a GPU-enabled server. Compact and effective omnidirectional image features are extracted and represented on the smartphone front end, and then transmitted to the server in the cloud. These features from a short video clip are used to search the database of the indoor environment via image-based indexing to find the location of the current view within the database, which is associated with floor plans of the environment. A median-filter-based multi-frame aggregation strategy is used for single-path modeling, and a 2D multi-frame aggregation strategy based on the candidates' distribution densities is used for multi-path environmental modeling to provide a final location estimate. To deal with the high computational cost of searching a large database in a realistic navigation application, data-parallelism and task-parallelism properties are identified in the database indexing process, and computation is accelerated using multi-core CPUs and GPUs. A user-friendly HCI designed particularly for the visually impaired is implemented on an iPhone, which also supports system configuration and scene modeling for new environments. Experiments on a database of an eight-floor building demonstrate the capacity of the proposed system, with real-time response (14 fps) and robust localization results.
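
    As a concrete illustration of the aggregation step, the sketch below applies a sliding median filter to per-frame location candidates. It is a minimal sketch, assuming the database indexing returns one (x, y) floor-plan coordinate per frame; the function names and the `window` parameter are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def aggregate_single_path(per_frame_estimates, window=5):
        """Median-filter a sequence of per-frame (x, y) location estimates.

        per_frame_estimates: list of (x, y) floor-plan coordinates, one per
        video frame, as returned by image-based database indexing.
        Returns a single smoothed (x, y) estimate for the clip.
        """
        pts = np.asarray(per_frame_estimates, dtype=float)
        # A sliding median suppresses isolated mismatches (outlier frames
        # that happened to match the wrong database location).
        smoothed = np.array([
            np.median(pts[max(0, i - window + 1): i + 1], axis=0)
            for i in range(len(pts))
        ])
        # Final location estimate for the short clip.
        return np.median(smoothed, axis=0)
    ```

    The median, unlike the mean, is unaffected by a small number of grossly wrong matches, which is why it suits this kind of per-frame candidate filtering.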

    Vision-based Assistive Indoor Localization

    Full text link
    An indoor localization system is of significant importance to the visually impaired in their daily lives, helping them localize themselves and navigate an indoor environment. In this thesis, a vision-based indoor localization solution is proposed and studied, with algorithms and implementations that maximize the use of the visual information surrounding the user for optimal localization across multiple stages. The contributions of the work include the following: (1) A novel combination of a daily-used smartphone with a low-cost lens (GoPano) provides an economical, portable, and robust indoor localization service for visually impaired people. (2) New omnidirectional features (omni-features) extracted from 360-degree field-of-view images are proposed to represent visual landmarks of indoor positions, and are then used as online query keys when a user asks for localization services. (3) A scalable and lightweight computation and storage solution is implemented by transferring the large database and the computationally heavy querying procedure to the cloud. (4) Real-time query performance of 14 fps is achieved over a Wi-Fi connection by identifying and implementing both data and task parallelism using many-core NVIDIA GPUs. (5) Refined localization via 2D-to-3D and 3D-to-3D geometric matching, and automatic path planning for efficient environmental modeling using architectural AutoCAD floor plans. This dissertation first describes the assistive indoor localization problem, its detailed connotations, and the overall methodology. Related work in indoor localization and automatic path planning for environmental modeling is then surveyed. After that, the framework of omnidirectional-vision-based indoor assistive localization is introduced, followed by multiple refined localization strategies such as the 2D-to-3D and 3D-to-3D geometric matching approaches. Finally, conclusions and a few promising future research directions are provided.
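
    The 2D-to-3D geometric matching stage in contribution (5) can be illustrated with a standard perspective-n-point (PnP) solve. The sketch below uses OpenCV's `cv2.solvePnPRansac` as a stand-in, under the assumption that 2D image features have already been matched to 3D points derived from the floor-plan model; it is a generic illustration, not the dissertation's own implementation.

    ```python
    import cv2
    import numpy as np

    def refine_pose_2d_to_3d(image_points, model_points, K):
        """Estimate camera pose from 2D image features matched to 3D model
        points (e.g., points lifted from an AutoCAD floor plan).

        image_points: (N, 2) pixel coordinates of matched features.
        model_points: (N, 3) corresponding 3D coordinates in the map frame.
        K: (3, 3) camera intrinsic matrix.
        Returns (R, t): camera rotation matrix and translation vector.
        """
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(model_points, dtype=np.float32),
            np.asarray(image_points, dtype=np.float32),
            np.asarray(K, dtype=np.float32),
            distCoeffs=None,  # assume undistorted (or pre-rectified) images
        )
        if not ok:
            raise RuntimeError("PnP failed: too few consistent matches")
        R, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation matrix
        return R, tvec
    ```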

    Information-theoretic environment modeling for mobile robot localization

    Full text link
    To enhance robotic computational efficiency without degrading accuracy, it is imperative to fit the right and exact amount of information, in its simplest form, to the investigated task. This thesis conforms to this reasoning in environment model building and robot localization. It puts forth an approach to building maps and localizing a mobile robot efficiently in unknown, unstructured, and moderately dynamic environments. For this, the environment is modeled on an information-theoretic basis, more specifically in terms of its transmission property. The presented environment model, which does not adhere to classical geometric modeling, succeeds in solving environment disambiguation effectively. The proposed solution lays out a two-level hierarchical structure for localization. The structure makes use of extracted features, which are stored at two different resolutions in a single hybrid feature map. This enables dual coarse-topological and fine-geometric localization modalities. The first level in the hierarchy describes the environment topologically, where a defined set of places is described by a probabilistic feature representation. A conditional-entropy-based criterion is proposed to quantify the transinformation between the feature and place domains. This criterion provides a double benefit: it prunes the high-dimensional feature space while selecting the most discriminative features, which overcomes environment-aliasing problems. Features with the highest transinformation are filtered and compressed to form a coarse-resolution feature map (codebook). Localization at this level is conducted through place matching. At the second level of the hierarchy, the map is viewed in high resolution, as consisting of non-compressed entropy-processed features. These features are additionally tagged with their position information. Given the topological place identified by the first level, fine localization at the second level is executed using feature triangulation. To enhance the triangulation accuracy, redundant features are used and two metric evaluation criteria are employed: one for detecting dynamic features and mismatches, and another for feature selection. The proposed approach and methods have been tested in realistic indoor environments using a vision sensor and Scale-Invariant Feature Transform (SIFT) local feature extraction. The experiments demonstrate that an information-theoretic modeling approach is highly efficient in attaining combined accuracy and computational efficiency in localization. They also show that the approach is capable of modeling environments with a high degree of unstructuredness, perceptual aliasing, and dynamic variation (illumination conditions; scene dynamics). The merit of this modeling approach is that environment features are evaluated quantitatively, while qualitative conclusions are generated about feature selection and performance in the robot localization task. In this way, the accuracy of localization can be adapted to the available resources. The experimental results also show that the hybrid topological-metric map provides sufficient information to localize a mobile robot at two scales, independent of the robot's motion model. The codebook exhibits fast and accurate topological localization at significant compression ratios. The hierarchical localization framework demonstrates robustness and optimized space and time complexity, which in turn provides scalability to large environments and suitability for real-time deployment.
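
    Read as the mutual information between a feature's occurrence and the place label, the transinformation criterion can be sketched as below. This is a minimal illustration assuming binary feature indicators and discrete place labels; all names are hypothetical.

    ```python
    import numpy as np

    def mutual_information(feature_present, place_labels):
        """I(F; P) between a binary feature indicator and discrete places."""
        mi = 0.0
        for f in (0, 1):
            for p in np.unique(place_labels):
                joint = np.mean((feature_present == f) & (place_labels == p))
                pf = np.mean(feature_present == f)   # marginal P(F=f)
                pp = np.mean(place_labels == p)       # marginal P(P=p)
                if joint > 0:
                    mi += joint * np.log2(joint / (pf * pp))
        return mi

    def select_codebook(features, place_labels, k):
        """Keep the k features with highest transinformation to the places.

        features: (n_observations, n_features) binary occurrence matrix.
        Returns indices of the selected features (the coarse codebook).
        """
        scores = [mutual_information(features[:, j], place_labels)
                  for j in range(features.shape[1])]
        return np.argsort(scores)[::-1][:k]
    ```

    Features scoring high under this criterion are exactly those whose presence most reduces uncertainty about the place, which is what makes them useful against perceptual aliasing.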

    The Future of Humanoid Robots

    Get PDF
    This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines and will have the ability to blend perfectly into an environment already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience, and machine learning. The book is designed to be accessible and practical, with an emphasis on information useful to those working in robotics, cognitive science, artificial intelligence, computational methods, and other fields of science directly or indirectly related to the development and use of future humanoid robots. The editor of the book has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the content of the book.

    A Mobile Cyber-Physical System Framework for Aiding People with Visual Impairment

    Full text link
    It is a challenging problem for researchers and engineers in the assistive technology (AT) community to provide suitable solutions for visually impaired people (VIPs) that meet their orientation, navigation, and mobility (ONM) needs. Given the spectrum of assistive technologies currently available for aiding VIPs with ONM, our literature review and survey have shown that there is a reluctance to adopt these technological solutions in the VIP community. Motivated by these findings, we think it critical to re-examine and rethink the approaches that have been taken. It is our belief that we need to take a different and innovative approach to solving this problem. We propose an integrated mobile cyber-physical system framework (MCPSF) with an 'agent' and a 'smart environment' to address VIPs' ONM needs in urban settings. For example, one of the essential needs of VIPs is to make street navigation easier and safer for them as pedestrians. In a busy city neighborhood, crossing a street is problematic for VIPs: knowing whether it is safe, knowing when to cross, and being sure to stay on the path without colliding or interfering with objects and people. These remain issues keeping VIPs from a truly independent lifestyle. In this dissertation, we propose a framework based on mobile cyber-physical systems (MCPS) to address VIPs' ONM needs. The concept of mobile cyber-physical systems is intended to bridge the physical space we live in with a cyberspace filled with unique information coming from IoT (Internet of Things) devices that are part of smart-city infrastructure. The devices in the IoT may be embedded in different kinds of physical structures. People with vision loss or other special needs may have difficulty comprehending or perceiving signals directly in the physical space, but they can make such connections in cyberspace. These cyber connections and real-time information exchanges will enable and enhance their interactions in the physical space and help them better navigate city streets and street crossings. As part of the dissertation work, we designed and implemented a proof-of-concept prototype with essential functions to aid VIPs with their ONM needs. We believe our research and prototype experience open a new approach to further research that might extend ONM functions beyond our prototype toward potential commercial product development.
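
    Purely as an illustration of the kind of cyber-layer exchange described above (the dissertation's actual message protocol is not specified here), a smart crosswalk signal might publish its state as a small JSON message that the agent on the VIP's device turns into spoken guidance. The fields, thresholds, and intersection name below are all hypothetical.

    ```python
    import json

    def handle_signal_message(payload: bytes) -> str:
        """Turn a (hypothetical) smart-crosswalk broadcast into guidance text.

        Example payload published by an IoT crossing signal:
          {"intersection": "Main & 5th", "walk": true, "seconds_left": 12}
        """
        msg = json.loads(payload)
        if msg["walk"] and msg["seconds_left"] >= 8:
            return (f"Walk sign is on at {msg['intersection']}: "
                    f"{msg['seconds_left']} seconds to cross.")
        if msg["walk"]:
            return (f"Walk sign is ending at {msg['intersection']}: "
                    f"wait for the next cycle.")
        return f"Do not cross at {msg['intersection']}."

    # Example: the returned string would be handed to a text-to-speech
    # engine running on the agent device.
    print(handle_signal_message(
        b'{"intersection": "Main & 5th", "walk": true, "seconds_left": 12}'))
    ```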

    Deep Learning Localization for Self-driving Cars

    Get PDF
    Identifying the location of an autonomous car with the help of visual sensors can be a good alternative to traditional approaches like the Global Positioning System (GPS), which is often inaccurate or unavailable due to insufficient network coverage. Recent research in deep learning has produced excellent results in different domains, leading to the proposition of this thesis, which uses deep learning to solve the problem of localization in smart cars with visual data. Deep Convolutional Neural Networks (CNNs) were used to train models on visual data corresponding to unique locations throughout a geographic area. To evaluate the performance of these models, multiple datasets were created from Google Street View as well as manually, by driving a golf cart around the campus while collecting GPS-tagged frames. The efficacy of the CNN models was also investigated across different weather/light conditions. Validation accuracies as high as 98% were obtained from some of these models, showing that this method has the potential to act as an alternative or aid to traditional GPS-based localization for cars. The root-mean-square (RMS) precision of Google Maps is often between 2 and 10 m, whereas the precision required for the navigation of self-driving cars is between 2 and 10 cm. Empirically, this precision has been achieved with the help of different error-correction systems on GPS feedback. The proposed method was able to achieve an approximate localization precision of 25 cm without the help of any external error-correction system.
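
    A minimal sketch of the location-as-classification setup described above, in PyTorch. The architecture, the class count, and all names are assumptions for illustration, not the thesis's exact model.

    ```python
    import torch
    import torch.nn as nn

    class LocationCNN(nn.Module):
        """Classify a camera frame into one of N discrete locations."""
        def __init__(self, num_locations: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Linear(64 * 4 * 4, num_locations)

        def forward(self, x):  # x: (batch, 3, H, W) GPS-tagged frames
            return self.classifier(self.features(x).flatten(1))

    model = LocationCNN(num_locations=50)        # e.g. 50 GPS-tagged places
    logits = model(torch.randn(8, 3, 128, 128))  # dummy batch of frames
    predicted = logits.argmax(dim=1)             # most likely place per frame
    ```

    Treating localization as classification over a fixed set of places is what bounds the achievable precision to the spacing of the GPS-tagged locations used as classes.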

    Body-relative navigation guidance using uncalibrated cameras

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 89-97) and index. The ability to navigate through the world is an essential human capability. In a variety of situations, people do not have the time, the opportunity, or the capability to learn the layout of an environment before visiting it. Examples include soldiers entering an unknown building, firefighters responding to an emergency, or a visually impaired person walking through a city. In the absence of an external source of localization (such as GPS), the system must rely on internal sensing to provide navigation guidance to the user. To address real-world situations, the method must provide spatially extended, temporally consistent navigation guidance through cluttered and dynamic environments. While recent research has largely focused on metric methods based on calibrated cameras, the work presented in this thesis demonstrates a novel approach to navigation using uncalibrated cameras. During the first visit to the environment, the method builds a topological representation of the user's exploration path, which we refer to as the place graph. The method then provides navigation guidance from any place to any other in the explored environment. On one hand, a localization algorithm determines the location of the user in the graph. On the other hand, a rotation guidance algorithm provides a directional cue toward the next graph node in the user's body frame. Our method makes few assumptions about the environment except that it contains descriptive visual features. It requires no intrinsic or extrinsic camera calibration, relying instead on a method that learns the correlation between user rotation and feature correspondence across cameras. We validate our approach using several ground-truth datasets. In addition, we show that our approach is capable of guiding a robot equipped with a local obstacle-avoidance capability through real, cluttered environments. Finally, we validate our system with nine untrained users through several kilometers of indoor environments. By Olivier Koch, Ph.D.
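
    The place graph and node-to-node guidance can be sketched as a plain breadth-first search over an unweighted graph; this is a generic illustration of the idea, not the thesis's implementation, and the node names are invented.

    ```python
    from collections import deque

    def next_node_toward(goal, current, place_graph):
        """Return the neighbor of `current` on a shortest path to `goal`.

        place_graph: dict mapping each place-graph node to the nodes visited
        adjacently during exploration (undirected edges). The rotation-
        guidance stage would then cue the user toward the returned node.
        """
        if goal == current:
            return current
        parent = {current: None}
        queue = deque([current])
        while queue:
            node = queue.popleft()
            if node == goal:
                # Walk back to the first step out of `current`.
                while parent[node] != current:
                    node = parent[node]
                return node
            for nbr in place_graph[node]:
                if nbr not in parent:
                    parent[nbr] = node
                    queue.append(nbr)
        return None  # goal not reachable in the explored graph

    # Example place graph recorded during a first exploration:
    g = {"lobby": ["hall"], "hall": ["lobby", "lab", "stairs"],
         "lab": ["hall"], "stairs": ["hall"]}
    print(next_node_toward("lab", "lobby", g))  # -> "hall"
    ```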

    Electronic Imaging & the Visual Arts. EVA 2013 Florence

    Get PDF
    Important information technology topics are presented: multimedia systems, databases, data protection, and access to content. Particular attention is given to digital images (2D, 3D) in cultural institutions (museums, libraries, palaces, monuments, archaeological sites). The main parts of the conference proceedings cover: Strategic Issues; EC Projects and Related Networks & Initiatives; the International Forum on "Culture & Technology"; 2D-3D Technologies & Applications; Virtual Galleries, Museums, and Related Initiatives; and Access to Culture Information. Three workshops address International Cooperation, Innovation and Enterprise, and Creative Industries and Cultural Tourism.