
    Indoor navigation for the visually impaired : enhancements through utilisation of the Internet of Things and deep learning

    Wayfinding and navigation are essential aspects of independent living that heavily rely on the sense of vision. Walking through a complex building requires knowing one's exact location in order to find a suitable path to the desired destination, avoiding obstacles and monitoring orientation and movement along the route. People who do not have access to sight-dependent information, such as that provided by signage, maps and environmental cues, can face challenges in achieving these tasks independently. They can rely on assistance from others or maintain their independence by using assistive technologies and the resources provided by smart environments. Several solutions have adapted technological innovations to address indoor navigation over the last few years. However, there is still no complete solution that meets the navigation requirements of visually impaired (VI) people, and no single technology can resolve all the navigation difficulties they face. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence and enhance the journeys of VI people in indoor settings with the proposed framework, using a smartphone. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components include Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Ortho-PATH, which generates a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user. In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work carries out a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav provides the first pathfinding approach of its kind, generating paths that reflect the needs of VI people: the approach designs a path alongside walls while avoiding obstacles, and this research benchmarks it against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
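    The abstract does not give the details of the Ortho-PATH algorithm; as a rough illustration of the general idea it describes (a collision-free path over a grid-based indoor space that prefers running alongside walls), the following is a minimal sketch using an A*-style search with an extra cost for cells that do not touch a wall. The grid encoding, cost values, and function names are assumptions for illustration, not the thesis's actual method.

```python
import heapq

def neighbors(cell, grid):
    """4-connected free neighbours of a grid cell (0 = free, 1 = obstacle/wall)."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
            yield (nr, nc)

def wall_adjacent(cell, grid):
    """True if the cell touches a wall/obstacle cell or the grid boundary."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])) or grid[nr][nc] == 1:
            return True
    return False

def wall_following_path(grid, start, goal, open_space_penalty=3):
    """A* over a 0/1 occupancy grid; steps into cells away from walls cost extra,
    so the returned path hugs walls where possible while avoiding obstacles."""
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for nxt in neighbors(cell, grid):
            step = 1 if wall_adjacent(nxt, grid) else 1 + open_space_penalty
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None  # no collision-free route found

# Toy floor plan: the path keeps to the free cells along the outer wall.
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(wall_following_path(grid, (0, 0), (4, 4)))
```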

    Augmented reality device for first response scenarios

    A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment and to provide capability for user tracking. Areas of applicability primarily include first response scenarios, with possible applications in the maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to noninvasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system, the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualization of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality (AR) techniques, incorporating a video see-through Head Mounted Display (HMD) and a finger-bending sensor glove. (Augmented reality is a field of computer research dealing with the combination of real-world and computer-generated data; most current AR research concerns live video imagery that is digitally processed and augmented with computer-generated graphics, with advanced work including motion tracking, fiducial marker recognition using machine vision, and controlled environments containing sensors and actuators. Source: Wikipedia.) This dissertation is a compound document containing both a paper copy and a CD; the CD requires Adobe Acrobat, Microsoft Office, and Windows Media Player or RealPlayer.
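    The dissertation's own marker system and registration pipeline are not specified in the abstract; as a rough sketch of fiducial-marker-based registration of user position, the following uses OpenCV's ArUco module (API shown for OpenCV 4.7 or later) with hypothetical camera intrinsics and marker size. This is an illustration of the technique, not the system's actual implementation.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics; in practice these come from calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE = 0.10  # marker side length in metres (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def locate_user(frame):
    """Detect fiducial markers in a camera frame and estimate the pose of each
    marker relative to the camera via solvePnP; these poses can then be used to
    register the wearer's position against labeled locations in the building."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    poses = {}
    if ids is None:
        return poses
    # 3D corner coordinates of a square marker in its own frame
    # (ordering matches ArUco: top-left, top-right, bottom-right, bottom-left).
    half = MARKER_SIZE / 2.0
    obj_points = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]], dtype=np.float32)
    for marker_id, c in zip(ids.flatten(), corners):
        ok, rvec, tvec = cv2.solvePnP(obj_points,
                                      c.reshape(4, 2).astype(np.float32),
                                      camera_matrix, dist_coeffs)
        if ok:
            poses[int(marker_id)] = (rvec, tvec)  # marker pose in camera coordinates
    return poses
```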

    Remote Collaborative BIM-based Mixed Reality Approach for Supporting Facilities Management Field Tasks

    Facilities Management (FM) day-to-day tasks require suitable methods to facilitate work orders and improve performance through better collaboration between the office and the field. Building Information Modeling (BIM) provides opportunities to support collaboration and to improve the efficiency of Computerized Maintenance Management Systems (CMMSs) by sharing building information between different applications and users throughout the lifecycle of the facility. However, manual retrieval of building element information can be challenging and time-consuming for field workers during FM operations. Mixed Reality (MR) is a visualization technique that can improve visual perception of the facility by superimposing 3D virtual objects and textual information on top of the view of real-world building objects. The objectives of this research are: (1) investigating an automated method to capture and record task-related data (e.g., defects) with respect to a georeferenced BIM model and share them directly with the remote office based on the field worker's point of view in mobile situations; (2) investigating the potential of using MR, BIM, and sensory data for FM tasks to provide improved visualization and perception that satisfy the needs of the facility manager at the office and the field workers, with less visual and mental disturbance; and (3) developing an effective method for interactive visual collaboration to improve FM field tasks. This research discusses the development of a collaborative BIM-based MR approach to support facilities field tasks. The research framework integrates multisource facilities information, BIM models, and hybrid tracking in an MR-based setting to retrieve information based on time (e.g., inspection schedule) and the location of the field worker, visualize inspection and maintenance operations, and support remote collaboration and visual communication between the field worker and the manager at the office. The field worker uses an Augmented Reality (AR) application installed on his/her tablet. The manager at the office uses an Immersive Augmented Virtuality (IAV) application installed on a desktop computer. Based on the field worker's location, as well as the inspection or maintenance schedule, the field worker is assigned work orders and instructions from the office. Other sensory data (e.g., infrared thermography) can provide additional layers of information by augmenting the actual view of the field worker and supporting him/her in making effective decisions about existing and potential problems while communicating with the office in an Interactive Virtual Collaboration (IVC) mode. The contributions of this research are: (1) developing an MR framework for facilities management comprising a field AR module and an office IAV module, which can be used independently or combined through remote IVC; (2) developing visualization methods for MR, including the virtual hatch and multilayer views, to enhance visual depth and context perception; (3) developing methods for AR and IAV modeling, including BIM-based data integration and customization suitable for each MR method; and (4) enhancing indoor tracking for AR FM systems by developing a hybrid tracking method. To investigate the applicability of the research method, a prototype system called Collaborative BIM-based Markerless Mixed Reality Facility Management System (CBIM3R-FMS) is developed and tested in a case study. The usability testing and validation show that the proposed methods have high potential to improve FM field tasks.
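    The abstract describes assigning work orders to the field worker from his or her tracked location and the inspection or maintenance schedule. A minimal sketch of that retrieval logic follows, with a hypothetical work-order data structure and illustrative element GUIDs and dates; it is not the CBIM3R-FMS data model.

```python
from dataclasses import dataclass
from datetime import date
from math import dist

@dataclass
class WorkOrder:
    order_id: str
    element_guid: str   # GUID of the BIM element the task refers to
    location: tuple     # (x, y, z) of the element in the georeferenced model, metres
    due: date
    instructions: str

def orders_for_field_worker(worker_position, today, work_orders, radius=10.0):
    """Return work orders whose BIM element lies within `radius` metres of the
    tracked field worker position and whose scheduled date has been reached,
    closest tasks first."""
    nearby = [wo for wo in work_orders
              if dist(worker_position, wo.location) <= radius and wo.due <= today]
    return sorted(nearby, key=lambda wo: (dist(worker_position, wo.location), wo.due))

# Illustrative data only.
orders = [
    WorkOrder("WO-101", "2O2Fr$t4X7Zf8NOew3FNr2", (12.0, 4.5, 3.0),
              date(2024, 5, 1), "Inspect AHU filter"),
    WorkOrder("WO-102", "1xS3BCk291UvhgP2dvNsgp", (55.0, 20.0, 6.0),
              date(2024, 5, 3), "Check thermostat calibration"),
]
print(orders_for_field_worker((10.0, 5.0, 3.0), date(2024, 5, 2), orders))
```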

    Proceedings. 9th 3DGeoInfo Conference 2014, [11-13 November 2014, Dubai]

    Scientific disciplines such as geology, geophysics, and reservoir exploration intrinsically use 3D geo-information in their models and simulations. However, 3D geo-information is also urgently needed in many traditionally 2D planning areas such as civil engineering, city and infrastructure modeling, architecture, and environmental planning. Altogether, 3DGeoInfo is an emerging technology that will greatly influence the market within the next few decades. The 9th International 3DGeoInfo Conference aims at bringing together international state-of-the-art researchers and practitioners to facilitate the dialogue on emerging topics in the field of 3D geo-information. The conference in Dubai offers an interdisciplinary forum for sub- and above-surface 3D geo-information researchers and practitioners dealing with data acquisition, modeling, management, maintenance, visualization, and analysis of 3D geo-information.

    Creating cohesive video with the narrative-informed use of ubiquitous wearable and imaging sensor networks

    Thesis (Ph.D.) by Mathew Laibowitz, Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Includes bibliographical references (p. 222-231). In today's digital era, elements of anyone's life can be captured, by themselves or others, and be instantly broadcast. With little or no regulation on the proliferation of camera technology and the increasing use of video for social communication, entertainment, and education, we have undoubtedly entered the age of ubiquitous media. A world permeated by connected video devices promises a more democratized approach to mass-media culture, enabling anyone to create and distribute personalized content. While these advancements present a plethora of possibilities, they are not without potential negative effects, particularly with regard to privacy, ownership, and the general decrease in quality associated with minimal barriers to entry. This dissertation presents a first-of-its-kind research platform designed to investigate the world of ubiquitous video devices in order to confront inherent problems and create new media applications. The system takes a novel approach to the creation of user-generated documentary video by augmenting a network of video cameras integrated into the environment with on-body sensing. The distributed video camera network can record the entire life of anyone within its coverage range, and it is shown that it almost instantly records more audio and video than can be viewed without prohibitive human resource cost. This drives the need for a mechanism to automatically understand the raw audiovisual information in order to create a cohesive video output that is understandable, informative, and/or enjoyable to its human audience. We address this need with the SPINNER system. As humans, we are inherently able to transform disconnected occurrences and ideas into cohesive narratives as a method to understand, remember, and communicate meaning. The design of the SPINNER application and ubiquitous sensor platform is informed by research into narratology, that is, how stories are created from fragmented events. The SPINNER system maps low-level sensor data from the wearable sensors to higher-level social signal and body language information. This information is used to label the raw video data. The SPINNER system can then build a cohesive narrative by stitching together the appropriately labeled video segments. The results from three test runs are shown, each resulting in one or more automatically edited video pieces. The creation of these videos is evaluated through review by their intended audience and by comparing the system to a human trying to perform similar actions. In addition, the mapping of the wearable sensor data to meaningful information is evaluated by comparing the calculated results to those from human observation of the actual video.
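    The abstract outlines a pipeline in which wearable sensor features are mapped to higher-level labels that tag video segments, which are then stitched into a narrative. A minimal sketch of that idea follows; the feature names, thresholds, labels, and narrative template are illustrative placeholders, not the actual SPINNER mappings.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    camera_id: str
    start: float   # seconds
    end: float
    labels: set    # higher-level labels derived from wearable sensor data

def labels_from_sensors(accel_energy, speech_ratio):
    """Map low-level wearable features to coarse activity/social labels.
    Thresholds are arbitrary placeholders for illustration."""
    labels = {"active" if accel_energy > 0.5 else "still"}
    if speech_ratio > 0.3:
        labels.add("talking")
    return labels

def stitch_narrative(segments, template=("still", "talking", "active")):
    """Pick one labeled segment per narrative beat, in temporal order,
    to assemble a simple cohesive sequence from the raw footage."""
    cut, t = [], 0.0
    for beat in template:
        candidates = [s for s in segments if beat in s.labels and s.start >= t]
        if not candidates:
            continue  # skip beats with no matching footage
        chosen = min(candidates, key=lambda s: s.start)
        cut.append(chosen)
        t = chosen.end
    return cut

segments = [
    Segment("cam3", 0.0, 8.0, labels_from_sensors(0.2, 0.1)),
    Segment("cam1", 9.0, 20.0, labels_from_sensors(0.4, 0.6)),
    Segment("cam2", 21.0, 30.0, labels_from_sensors(0.9, 0.2)),
]
print(stitch_narrative(segments))
```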