195 research outputs found

    An intelligent multi-floor mobile robot transportation system in life science laboratories

    In this dissertation, a new intelligent multi-floor transportation system based on mobile robots is presented to connect distributed laboratories in a multi-floor environment. Within the system, new indoor mapping and localization methods are presented, a hybrid path-planning approach is proposed, and an automated door-management system is introduced. In addition, a hybrid strategy with an innovative floor-estimation method is implemented to handle elevator operations. Finally, the presented system controls the working processes of the related sub-systems. Experiments demonstrate the efficiency of the presented system.

    Motion Compatibility for Indoor Localization

    Indoor localization -- a device's ability to determine its location within an extended indoor environment -- is a fundamental enabling capability for mobile context-aware applications. Many proposed applications assume localization information from GPS, or from WiFi access points. However, GPS fails indoors and in urban canyons, and current WiFi-based methods require an expensive, and manually intensive, mapping, calibration, and configuration process performed by skilled technicians to bring the system online for end users. We describe a method that estimates indoor location with respect to a prior map consisting of a set of 2D floorplans linked through horizontal and vertical adjacencies. Our main contribution is the notion of "path compatibility," in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for agreement with the prior map. Path compatibility is encoded in an HMM-based matching model, from which the method recovers the user's location trajectory from the low-level motion estimates. To recognize user motions, we present a motion labeling algorithm, extracting fine-grained user motions from sensor data of handheld mobile devices. We propose "feature templates," which allow the motion classifier to learn the optimal window size for a specific combination of a motion and a sensor feature function. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our motion labeling algorithm classifies user motions with 94.5% accuracy, and our trajectory matching algorithm can recover the user's location to within 5 meters on average after one minute of movements from an unknown starting location. Prior information, such as a known starting floor, further decreases the time required to obtain a precise location estimate.
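The path-compatibility idea lends itself to a compact illustration. The sketch below (illustrative only, not the authors' implementation; the route graph, motion labels, and probabilities are all invented) runs Viterbi decoding of a low-level motion sequence against a three-edge map, so that "upstairs" motions snap the trajectory onto the stairway edge:

```python
from math import log

# Hypothetical route graph: each edge has a type that motions must agree with.
EDGES = {"hall_A": "flat", "stairs_AB": "stairs", "hall_B": "flat"}
TRANSITIONS = {               # which edge may follow which (map adjacency)
    "hall_A": ["hall_A", "stairs_AB"],
    "stairs_AB": ["stairs_AB", "hall_B"],
    "hall_B": ["hall_B"],
}

def emission(motion, edge_type):
    """Likelihood that a low-level motion is observed on an edge type."""
    compatible = {("walking", "flat"), ("upstairs", "stairs")}
    return 0.9 if (motion, edge_type) in compatible else 0.1

def viterbi(motions):
    """Most likely edge sequence (user trajectory) for a motion sequence."""
    prev = {e: log(1.0 / len(EDGES)) + log(emission(motions[0], t))
            for e, t in EDGES.items()}
    back = []
    for m in motions[1:]:
        cur, ptr = {}, {}
        for e, t in EDGES.items():
            # Best predecessor among edges that can transition into e.
            best, arg = max((prev[p], p) for p in EDGES if e in TRANSITIONS[p])
            cur[e] = best + log(emission(m, t))
            ptr[e] = arg
        prev, back = cur, back + [ptr]
    state = max(prev, key=prev.get)          # best final edge
    path = [state]
    for ptr in reversed(back):               # backtrack
        state = ptr[state]
        path.append(state)
    return path[::-1]

print(viterbi(["walking", "walking", "upstairs", "upstairs", "walking"]))
# expected: ['hall_A', 'hall_A', 'stairs_AB', 'stairs_AB', 'hall_B']
```

Even though every observation individually is ambiguous, the map's adjacency constraints force the decoded trajectory through the stairway, which is the essence of the compatibility argument.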

    Enabling smart city resilience: Post-disaster response and structural health monitoring

    The concept of Smart Cities has been introduced to categorize a vast area of activities to enhance the quality of life of citizens. A central feature of these activities is the pervasive use of Information and Communication Technologies (ICT), helping cities to make better use of limited resources. Indeed, the ASCE Vision for Civil Engineering in 2025 (ASCE 2007) portends a future in which engineers will rely on and leverage real-time access to a living database, sensors, diagnostic tools, and other advanced technologies to ensure that informed decisions are made. However, these advances in technology take place against a backdrop of the deterioration of infrastructure, in addition to natural and human-made disasters. Moreover, recent events constantly remind us of the tremendous devastation that natural and human-made disasters can wreak on society. As such, emergency response procedures and resilience are among the crucial dimensions of any Smart City plan. The U.S. Department of Homeland Security (DHS) has recently launched plans to invest $50 million to develop cutting-edge emergency response technologies for Smart Cities. Furthermore, after significant disasters have taken place, it is imperative that emergency facilities and evacuation routes, including bridges and highways, be assessed for safety. The objective of this research is to provide a new framework that uses commercial off-the-shelf (COTS) devices such as smartphones, digital cameras, and unmanned aerial vehicles to enhance the functionality of Smart Cities, especially with respect to emergency response and civil infrastructure monitoring/assessment. To achieve this objective, this research focuses on post-disaster victim localization and assessment, first responder tracking and event localization, and vision-based structural monitoring/assessment, including the use of unmanned aerial vehicles (UAVs). 
This research constitutes a significant step toward the realization of Smart City Resilience. National Science Foundation Grant No. 1030454; Association of American Railroads

    Next Generation Emergency Call System with Enhanced Indoor Positioning

    The emergency call systems in the United States and elsewhere are undergoing a transition from the PSTN-based legacy system to a new IP-based system. The new system is referred to as the Next Generation 9-1-1 (NG9-1-1) or NG112 system. We have built a prototype NG9-1-1 system which features media convergence and data integration that are unavailable in the current emergency calling system. The most important piece of information in the NG9-1-1 system is the caller's location. The caller's location is used for routing the call to the appropriate call center. The emergency responders use the caller's location to find the caller. Therefore, it is essential to determine the caller's location as precisely as possible to minimize delays in emergency response. Delays in response may result in loss of lives. When a person makes an emergency call outdoors using a mobile phone, the Global Positioning System (GPS) can provide the caller's location accurately. Indoor positioning, however, presents a challenge. GPS does not generally work indoors because satellite signals do not penetrate most buildings. Moreover, there is an important difference between determining location outdoors and indoors. Unlike outdoors, vertical accuracy is very important in indoor positioning because an error of a few meters will send emergency responders to a different floor in a building, which may cause a significant delay in reaching the caller. This thesis presents a way to augment our NG9-1-1 prototype system with a new indoor positioning system. The indoor positioning system focuses on improving the accuracy of vertical location. Our goal is to provide floor-level accuracy with minimum infrastructure support. Our approach is to use a user's smartphone to trace her vertical movement inside buildings. We utilize multiple sensors available in today's smartphones to enhance positioning accuracy. This thesis makes three contributions.
First, we present a hybrid architecture for floor localization with emergency calls in mind. The architecture combines beacon-based infrastructure and sensor-based dead reckoning, striking a balance between accurately determining a user's location and minimizing the required infrastructure. Second, we present the elevator module for tracking a user's movement in an elevator. The elevator module addresses three core challenges that make it difficult to accurately derive displacement from acceleration. Third, we present the stairway module which determines the number of floors a user has traveled on foot. Unlike previous systems that track users' footsteps, our stairway module uses a novel landing counting technique. Additionally, this thesis presents our work on designing and implementing an NG9-1-1 prototype system. We first demonstrate how emergency calls from various call origination devices are identified, routed to the proper Public Safety Answering Point (PSAP) based on the caller's location, and terminated by the call taker software at the PSAP. We then show how text communications such as Instant Messaging and Short Message Service can be integrated into the NG9-1-1 architecture. We also present GeoPS-PD, a polygon simplification algorithm designed to improve the performance of location-based routing. GeoPS-PD reduces the size of a polygon, which represents the service boundary of a PSAP in the NG9-1-1 system.
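The core difficulty the elevator module addresses, deriving displacement from acceleration, can be sketched in a few lines. The following is a minimal illustration of double integration, not the thesis implementation (which must also handle sensor drift and noise); the ride profile, 10 Hz sample rate, and 3.5 m floor height are assumed values:

```python
# Vertical displacement in an elevator from vertical acceleration, by
# trapezoidal double integration. Illustrative sketch with assumed values.

def displacement(accel, dt):
    """Trapezoidal double integration: acceleration -> velocity -> position."""
    v = d = 0.0
    prev_a = accel[0]
    for a in accel[1:]:
        v_new = v + 0.5 * (prev_a + a) * dt   # integrate acceleration
        d += 0.5 * (v + v_new) * dt           # integrate velocity
        v, prev_a = v_new, a
    return d

# Synthetic ride at 10 Hz: 1 s accelerating, 6 s cruising, 1 s decelerating.
dt = 0.1
accel = [1.0] * 10 + [0.0] * 60 + [-1.0] * 10

d = displacement(accel, dt)
floors = round(d / 3.5)        # assumed floor height of 3.5 m
print(round(d, 2), floors)     # about 6.6 m of travel -> 2 floors
```

On real sensor data, a small bias in the accelerometer integrates into a quadratically growing position error, which is why the thesis frames drift as a core challenge rather than a detail.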

    Indoor localization using place and motion signatures

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013. This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from department-submitted PDF version of thesis. Includes bibliographical references (p. 141-153). Most current methods for 802.11-based indoor localization depend on either simple radio propagation models or exhaustive, costly surveys conducted by skilled technicians. These methods are not satisfactory for long-term, large-scale positioning of mobile devices in practice. This thesis describes two approaches to the indoor localization problem, which we formulate as discovering user locations using place and motion signatures. The first approach, organic indoor localization, combines the idea of crowd-sourcing, encouraging end-users to contribute place signatures (location RF fingerprints) in an organic fashion. Based on prior work on organic localization systems, we study algorithmic challenges associated with structuring such organic location systems: the design of localization algorithms suitable for organic localization systems, qualitative and quantitative control of user inputs to "grow" an organic system from the very beginning, and handling the device heterogeneity problem, in which different devices have different RF characteristics. In the second approach, motion compatibility-based indoor localization, we formulate the localization problem as trajectory matching of a user motion sequence onto a prior map. Our method estimates indoor location with respect to a prior map consisting of a set of 2D floor plans linked through horizontal and vertical adjacencies. To enable the localization system, we present a motion classification algorithm that estimates user motions from the sensors available in commodity mobile devices.
We also present a route network generation method, which constructs a graph representation of all user routes from legacy floor plans. Given these inputs, our HMM-based trajectory matching algorithm recovers user trajectories. The main contribution is the notion of path compatibility, in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for metric/topological/semantic agreement with the prior map. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our method can recover the user's location to within several meters in one to two minutes after a "cold start."

by Jun-geun Park. Ph.D.
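As a toy illustration of the place-signature side of this work, the sketch below matches a WiFi scan against user-contributed fingerprints by RSS distance. The access points, rooms, and signal values are invented, and the thesis's actual algorithms additionally handle input quality control and device heterogeneity, which this sketch ignores:

```python
# Nearest-fingerprint place matching over user-contributed RF signatures.
# Illustrative only: APs, rooms, and RSS values are made up.

def fp_distance(scan, ref):
    """RMS difference in RSS (dBm) over access points seen in both scans."""
    common = scan.keys() & ref.keys()
    if not common:
        return float("inf")
    return (sum((scan[ap] - ref[ap]) ** 2 for ap in common) / len(common)) ** 0.5

def locate(scan, database):
    """Return the place whose stored signature is closest to the scan."""
    return min(database, key=lambda place: fp_distance(scan, database[place]))

# Hypothetical user-contributed signatures: AP -> mean RSS in dBm.
db = {
    "room_101": {"ap1": -45, "ap2": -70},
    "room_205": {"ap1": -80, "ap2": -50},
}
print(locate({"ap1": -48, "ap2": -72}, db))  # prints room_101
```

The device heterogeneity problem mentioned above shows up here directly: two phones in the same spot report systematically different RSS values, so a raw distance like this one degrades unless the offsets are calibrated out.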

    The IPIN 2019 Indoor Localisation Competition - Description and Results

    The IPIN 2019 Competition, the sixth in a series of IPIN competitions, was held at the CNR Research Area of Pisa (IT), integrated into the program of the IPIN 2019 Conference. It included two on-site real-time Tracks and three off-site Tracks. The four Tracks presented in this paper were set in the same environment, made of two buildings close together for a total usable area of 1000 m² outdoors and 6000 m² indoors over three floors, with a total path length exceeding 500 m. IPIN competitions, based on the EvAAL framework, have aimed at comparing the accuracy performance of personal positioning systems in fair and realistic conditions: past editions of the competition were carried out in big conference settings, university campuses and a shopping mall. Positioning accuracy is computed while the person carrying the system under test walks at normal walking speed, uses lifts, and goes up and down stairs or briefly stops at given points. Results presented here are a showcase of state-of-the-art systems tested side by side in real-world settings as part of the on-site real-time competition Tracks. Results for off-site Tracks allow a detailed and reproducible comparison of the most recent positioning and tracking algorithms in the same environment as the on-site Tracks.
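To make the scoring concrete: EvAAL-style evaluation typically ranks systems by the third quartile of the point-wise error, with floor mistakes folded in as a fixed per-floor penalty. The sketch below uses a 15 m penalty and invented sample points purely for illustration; treat both as assumptions rather than official IPIN 2019 figures:

```python
import math

def evaal_error(est, truth, floor_penalty=15.0):
    """Per-point error: horizontal distance plus a penalty per wrong floor."""
    dx, dy = est[0] - truth[0], est[1] - truth[1]
    return math.hypot(dx, dy) + floor_penalty * abs(est[2] - truth[2])

def third_quartile(errors):
    """75th percentile by the nearest-rank method."""
    s = sorted(errors)
    return s[math.ceil(0.75 * len(s)) - 1]

# Illustrative evaluation points: (x, y, floor) estimates vs. ground truth.
ests  = [(0.0, 0.0, 0), (3.0, 4.0, 0), (1.0, 0.0, 1), (0.0, 2.0, 0)]
truth = [(0.0, 0.0, 0), (0.0, 0.0, 0), (1.0, 0.0, 0), (0.0, 0.0, 0)]
errors = [evaal_error(e, t) for e, t in zip(ests, truth)]
print(third_quartile(errors))  # prints 5.0
```

Using a quartile rather than the mean rewards systems that are consistently good over the whole path instead of merely good on average, and the floor penalty reflects how costly a wrong-floor fix is in practice.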

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving and driver assistance, by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications, and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms, and, therefore, monitoring is not limited to confined areas, and extends to wherever the subject may travel including indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera and accelerometer based fall-detection on Samsung Galaxy S™ 4. We fuse these two sensor modalities to have a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative-entropy within the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera.
This algorithm provides a more accurate step count than using only accelerometer data in smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings by using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured from vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
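For contrast with the camera-based step counter described above, the accelerometer baseline it is compared against is commonly implemented as peak detection on the acceleration magnitude. The sketch below is such a baseline, not code from the thesis; the threshold and the synthetic trace are made-up illustration values:

```python
# Baseline step counter: count peaks in acceleration magnitude (m/s^2).
# Illustrative sketch; threshold and trace are assumed values.

def count_steps(magnitude, threshold=10.5):
    """Count local maxima of acceleration magnitude above a threshold."""
    steps = 0
    for prev, cur, nxt in zip(magnitude, magnitude[1:], magnitude[2:]):
        if cur > threshold and cur > prev and cur >= nxt:
            steps += 1
    return steps

# Synthetic magnitude trace around gravity (9.8 m/s^2) with three step peaks.
trace = [9.8, 10.0, 11.2, 10.0, 9.6, 9.9, 11.0, 10.1, 9.7, 10.2, 11.5, 9.9, 9.8]
print(count_steps(trace))  # prints 3
```

A fixed threshold like this is sensitive to where the phone is carried, which is one reason a camera-based counter can outperform it across body locations.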