
    Locating the LCROSS Impact Craters

    The Lunar CRater Observations and Sensing Satellite (LCROSS) mission impacted a spent Centaur rocket stage into a permanently shadowed region near the lunar south pole. The Shepherding Spacecraft (SSC) separated ~9 hours before impact and performed a small braking maneuver in order to observe the Centaur impact plume, looking for evidence of water and other volatiles, before impacting itself. This paper describes the registration of imagery of the LCROSS impact region from the mid- and near-infrared cameras onboard the SSC, as well as from the Goldstone radar. The Centaur impact features, positively identified in the first two and consistent with a feature in the third, are interpreted as a 20 m diameter crater surrounded by a 160 m diameter ejecta region. The images are registered to Lunar Reconnaissance Orbiter (LRO) topographical data, which allows determination of the impact location. This location is compared with the impact location derived from ground-based tracking and propagation of the spacecraft's trajectory, and with locations derived from two hybrid imagery/trajectory methods. The four methods give a weighted average Centaur impact location of -84.6796°, -48.7093°, with a 1σ uncertainty of 115 m along latitude and 44 m along longitude, just 146 m from the target impact site. Meanwhile, the trajectory-derived SSC impact location is -84.719°, -49.61°, with a 1σ uncertainty of 3 m along the Earth vector and 75 m orthogonal to that, 766 m from the target location and 2.803 km south-west of the Centaur impact. We also detail the Centaur impact angle and SSC instrument pointing errors. Six high-level LCROSS mission requirements are shown to be met by wide margins. We hope that these results facilitate further analyses of the LCROSS experiment data and follow-up observations of the impact region. (Comment: Accepted for publication in Space Science Reviews. 24 pages, 9 figures.)
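
    The weighted-average impact location quoted above is the kind of quantity obtained by combining several independent estimates with inverse-variance weighting. Below is a minimal illustrative sketch of that combination in Python; the offsets, uncertainties, and function name are hypothetical placeholders, not values or code from the paper.

        import numpy as np

        def inverse_variance_mean(estimates, sigmas):
            """Combine independent estimates of one coordinate into a
            weighted mean and its 1-sigma uncertainty (weights = 1/sigma^2)."""
            estimates = np.asarray(estimates, dtype=float)
            weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            mean = np.sum(weights * estimates) / np.sum(weights)
            sigma = np.sqrt(1.0 / np.sum(weights))
            return mean, sigma

        # Hypothetical along-latitude offsets (metres from a reference point)
        # from four independent location methods, with 1-sigma uncertainties.
        offsets = [120.0, 95.0, 140.0, 110.0]
        sigmas = [150.0, 200.0, 300.0, 180.0]
        print(inverse_variance_mean(offsets, sigmas))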

    A Survey of Positioning Systems Using Visible LED Lights

    As the Global Positioning System (GPS) cannot provide satisfactory performance in indoor environments, indoor positioning technology, which uses indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light-emitting diodes (LEDs) has been deemed a promising candidate in heterogeneous wireless networks that may collaborate with radio-frequency (RF) wireless networks. In particular, light fidelity (Li-Fi) has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of such systems are discussed in depth, and the relevant positioning algorithms and designs are classified and elaborated. The paper undertakes a thorough investigation of current LED-based indoor positioning systems and compares their performance across several aspects, such as test environment, accuracy, and cost. It also presents indoor hybrid positioning systems that combine VLC with other technologies (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, the paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
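
    As a concrete illustration of one positioning principle commonly covered in such surveys, the sketch below estimates LED-to-receiver distance from received signal strength under a simplified Lambertian line-of-sight channel model and then trilaterates a 2-D position by linear least squares. All parameter values, names, and model simplifications are assumptions for illustration, not taken from the paper.

        import numpy as np

        def rss_to_distance(p_rx, p_tx, m=1.0, area=1e-4, h=2.5):
            """Invert a simplified Lambertian line-of-sight channel model,
            P_rx = P_tx * (m+1) * A * h^(m+1) / (2*pi*d^(m+3)),
            to estimate the LED-to-receiver distance d (receiver assumed to
            face straight up, with a fixed vertical gap h to the LED)."""
            return (p_tx * (m + 1) * area * h ** (m + 1)
                    / (2 * np.pi * p_rx)) ** (1.0 / (m + 3))

        def trilaterate(anchors, dists):
            """Least-squares 2-D position from >= 3 LED anchor positions and
            horizontal distances, linearised by subtracting the last anchor's
            range equation from the others."""
            anchors = np.asarray(anchors, float)
            dists = np.asarray(dists, float)
            a_n, d_n = anchors[-1], dists[-1]
            A = 2.0 * (anchors[:-1] - a_n)
            b = (d_n ** 2 - dists[:-1] ** 2
                 + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(a_n ** 2))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        # Hypothetical ceiling LEDs (x, y in metres) and horizontal distances;
        # the distances would normally come from rss_to_distance() per LED.
        leds = [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]]
        ranges = [2.83, 2.83, 2.83, 2.83]   # receiver near the room centre
        print(trilaterate(leds, ranges))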

    A Real-Time Video-based Eye Tracking Approach for Driver Attention Study

    Knowing the driver's point of gaze has significant potential to enhance driving safety, as eye movements can be used as an indicator of a driver's attention state; however, the primary obstacle to integrating eye gaze into today's large-scale, real-world driving attention studies is the availability of a reliable, low-cost eye-tracking system. In this paper, we investigate such a real-time system for collecting a driver's eye gaze in real-world driving environments. A novel eye-tracking approach based on a low-cost head-mounted eye tracker is proposed. Our approach first detects the corneal reflection and pupil edge points, and then fits an ellipse to these points. The proposed approach works across different illumination and driving conditions with a simple, inexpensive head-mounted eye tracker, so it can be widely used in large-scale experiments. The experimental results show that our approach can reliably estimate eye position with an average accuracy of 0.34 degrees of visual angle in indoor experiments and 2-5 degrees in real driving environments.
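
    The pipeline described (detect the corneal reflection and the pupil's edge points, then fit an ellipse) can be sketched with OpenCV as below. The threshold values, blur size, and image path are illustrative assumptions, not the authors' implementation.

        import cv2
        import numpy as np

        def fit_pupil_ellipse(gray, pupil_thresh=40, glint_thresh=230):
            """Rough dark-pupil tracking: locate the bright corneal reflection
            (glint), threshold the dark pupil blob, and fit an ellipse to its
            largest contour. Returns the glint centroid and the pupil ellipse
            ((cx, cy), (w, h), angle), or None where detection fails."""
            blurred = cv2.GaussianBlur(gray, (7, 7), 0)

            # Bright corneal reflection: centroid of pixels above glint_thresh.
            ys, xs = np.where(blurred >= glint_thresh)
            glint = (xs.mean(), ys.mean()) if xs.size else None

            # Dark pupil: binary mask of pixels below pupil_thresh.
            _, mask = cv2.threshold(blurred, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            if not contours:
                return glint, None
            largest = max(contours, key=cv2.contourArea)
            if len(largest) < 5:              # fitEllipse needs >= 5 points
                return glint, None
            return glint, cv2.fitEllipse(largest)

        # Example usage with an assumed eye-camera frame "eye.png".
        frame = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
        if frame is not None:
            glint, ellipse = fit_pupil_ellipse(frame)
            print("glint:", glint, "pupil ellipse:", ellipse)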

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, which minimizes intrusiveness and accommodates the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from a limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras, and robotic arm. Given the purpose of the system, the calibration accuracy must be kept at the millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica with commodity devices, for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-scanning 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of the proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by the latency introduced during data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated as the optimization of a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
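
    As an illustration of the latency-compensation idea, the sketch below shows one way a one-step-ahead predictor with a smoothing coefficient in [0, 1] might look: a constant-velocity extrapolation blended with the latest command. The update rule, variable names, and values are assumptions for illustration, not the dissertation's exact formulation.

        import numpy as np

        def one_step_ahead(history, alpha=0.5):
            """Predict the next operator command from the last two samples:
            extrapolate linearly, then blend with the latest sample using a
            smoothing coefficient alpha in [0, 1] (alpha = 0 -> no prediction,
            alpha = 1 -> full constant-velocity extrapolation)."""
            x_prev, x_curr = np.asarray(history[-2]), np.asarray(history[-1])
            extrapolated = x_curr + (x_curr - x_prev)   # constant-velocity guess
            return (1.0 - alpha) * x_curr + alpha * extrapolated

        # Hypothetical 2-D command stream (e.g. joystick positions over time).
        commands = [[0.0, 0.0], [0.1, 0.0], [0.2, 0.05]]
        print(one_step_ahead(commands, alpha=0.7))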

    Optical and hyperspectral image analysis for image-guided surgery


    Keyframe-based monocular SLAM: design, survey, and future directions

    Get PDF
    Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common for some time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific strategies adopted in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM, namely the issues of illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
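
    For readers unfamiliar with the keyframe-based methodology, the sketch below illustrates a typical keyframe-insertion test of the kind such systems use (thresholds on feature overlap with the last keyframe and on elapsed frames). The thresholds and structure are illustrative assumptions, not drawn from any specific system in the survey.

        def should_insert_keyframe(n_tracked, n_in_last_kf, frames_since_kf,
                                   min_tracked=50, overlap_ratio=0.9, max_gap=20):
            """Typical keyframe-insertion test in keyframe-based monocular SLAM:
            add a keyframe when tracking is still healthy but the current frame's
            overlap with the last keyframe has dropped, or too many frames have
            passed since the last keyframe."""
            if n_tracked < min_tracked:
                return False          # tracking too weak; relocalise instead
            weak_overlap = n_tracked < overlap_ratio * n_in_last_kf
            stale = frames_since_kf > max_gap
            return weak_overlap or stale

        # Example: 120 features tracked now, last keyframe saw 200, 25 frames ago.
        print(should_insert_keyframe(120, 200, 25))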