700 research outputs found

    NASA Office of Aeronautics and Space Technology Summer Workshop. Volume 3: Navigation, guidance and control panel

    User technology requirements are identified in relation to the technology advancement needed for future space missions in the areas of navigation, guidance, and control. Emphasis is placed on: a 50% reduction in mission support cost through autonomous operation, a ten-fold increase in mission output through improved pointing and control, and a hundred-fold increase in human productivity in space through large-scale teleoperator applications.

    Investigation on space navigation techniques based on optical sensors

    Space navigation deals with the determination of the kinematic state (position, velocity, attitude) of a spacecraft. The kinematic state can be obtained from the output of suitable sensors by means of appropriate computation. Among these sensors, optoelectronic devices – already in use – are attracting increasing interest and finding new applications. Their success rests on improved performance, reduced cost (also driven by the strong commercial growth of parent terrestrial products), and the availability of the computational resources needed to process images efficiently. Interest in this sensing technology will clearly continue in the foreseeable future, and its use will spread to all classes of space platforms, including small CubeSats. This thesis investigates several aspects of the use of optoelectronic sensors – hence the word "techniques" in the title – on board spacecraft.

    The focus is first on the star tracker, regarded as the most accurate (and most expensive) of the attitude sensors and considered the flagship of space optoelectronic instruments, with complex hardware and demanding computational requirements. Star trackers are optoelectronic instruments that provide the attitude of a satellite through star observations: the spacecraft attitude is estimated by measuring star coordinates in the body reference frame and comparing these "observed" coordinates with the "known" star directions stored in the on-board star catalogue. Many studies have sought to improve attitude estimation accuracy and to define faster algorithms suitable for on-board implementation. Three different methods are presented for this purpose: TRIAD, the q-method, and QUEST. The two initial chapters are therefore devoted to star tracker basics and to a review of attitude determination techniques.

    Chapter 3 then presents a more original contribution concerning the calibration of star trackers, a topic of high current interest because it greatly affects hardware cost. Instead of a long, expensive test campaign at a dedicated facility, a simpler and faster two-step process – a "raw", preparatory phase at the production site followed by a final, possibly autonomous, accurate calibration once in orbit – can produce valuable results. In this way the effort at the factory is reduced and the performance is evaluated directly in orbit. Moreover, small deviations in the equipment occurring during the most critical phase, i.e. launch, can still be corrected before the real measurement campaign begins. This approach becomes especially attractive for instruments built in large batches for large Low Earth Orbit constellations. The third chapter describes the calibration process (on-ground and on-orbit) and reports the simulations and the findings for the proposed technique.

    Some more general discussion introduces the last part of the dissertation (chapter 4). Space probes increasingly explore the solar system, out to faraway planets. Orbit determination of these probes, based on radio tracking from Earth, clearly becomes less accurate as the distance from Earth grows. Above all, the time required for telemetry/navigation data downlink and telecommand uplink also increases with distance from Earth, so real-time manoeuvres and operations become impossible. As an example, the time needed to send a telecommand or to receive telemetry in the Rosetta mission was about 20 minutes once the probe reached its target. When a spacecraft is close to a planetary target (or another celestial body, including comets and asteroids), optical navigation – in use since the experiments on the Mariner 6 and 7 missions to Mars (1969) – can nowadays ensure accurate estimates of the relative kinematics and makes it possible to compute manoeuvres on board, autonomously and in real time. This technique, based on imaging and on comparison with already known data such as previously captured images, celestial catalogues, or ephemerides, supports the determination of the complete kinematic state of the spacecraft relative to the target. It is, in fact, similar to the attitude determination traditionally carried out by star trackers, where the spacecraft's orientation is computed from a priori information contained in the star catalogue. This conceptual similarity – an imaging process followed by comparison with stored information – raises the question of whether the star tracker's and the proximity camera's functions can be performed by the same on-board hardware. The availability of a universal optical navigation sensor, sharing a large part of its expensive components, could be an enabling technology for more effective space exploration. The aim of this part of the work is to investigate and analyse the feasibility of such a universal sensor, which is attracting growing interest. The main issue is the identification of a sensor configuration – for example, starting from multi-head star trackers with different optics and focal lengths – and of algorithms that improve star tracker performance and enable this dual use. This identification relies on correct modelling of the sensor behaviour. Combining star trackers and proximity cameras as position/attitude sensors would obviously reduce costs and – probably more important at the current, preliminary stage of this approach – provide a back-up solution in case of failure, thanks to a possible, even non-optimal, redundancy. Furthermore, the interest of this study is not limited to deep-space missions and may be extended to other vehicles that currently use star trackers and cameras, such as planetary rovers.

    The first part of chapter 4 presents a typical optical navigation system and the method used to estimate the kinematic parameters. The discussion then focuses on the use of the star tracker as a back-up for, or in place of, the navigation camera during the main phases of a mission: cruise, approach, and fly-by or descent to the target. A simple case study, relevant to a low-altitude lunar orbit, is reported and its results are presented and discussed. In that simulation the star tracker computes the position of the spacecraft with respect to the planet's inertial reference frame using a landmark catalogue, in the same way as it estimates attitude. The capability of a multi-head star tracker to estimate the relative position of the spacecraft with respect to a target, thereby acting as a navigation camera, opens the path to a universal optical sensor. The use of such a sensor will certainly be limited to specific mission phases, owing to the limitations of the optics and to the detection threshold of the detector; the approach to deep-space celestial bodies (asteroids, distant planets) can be considered a possible application regime. At least in these specific phases, the proposed solution has the potential to reduce costs and/or to offer redundancy in case of failure of part of the instruments. The analysis of this extended application of the star tracker is therefore of considerable interest for future deep-space missions. Moreover, the interest of the study is not limited to interplanetary navigation and can be extended – by means of multiple heads or specific filters – to other vehicles that currently use star trackers and cameras, such as planetary rovers.
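    As a minimal illustration of the attitude determination step described in this abstract (star directions measured in the body frame compared against catalogue directions), the following Python sketch implements the classic two-observation TRIAD algorithm. It is a textbook version with assumed frame conventions and variable names, not the implementation developed in the thesis; the q-method and QUEST refine the same idea by optimally weighting many star observations at once.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude determination (textbook form).

    b1, b2 : directions observed in the spacecraft body frame
    r1, r2 : corresponding reference (catalogue) directions in the inertial frame
    Returns the attitude matrix A such that v_body = A @ v_inertial.
    """
    b1, b2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)
    r1, r2 = r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2)

    # Build an orthonormal triad in each frame, anchored on the first
    # (presumed most accurate) observation.
    tb1 = b1
    tb2 = np.cross(b1, b2); tb2 /= np.linalg.norm(tb2)
    tb3 = np.cross(tb1, tb2)

    tr1 = r1
    tr2 = np.cross(r1, r2); tr2 /= np.linalg.norm(tr2)
    tr3 = np.cross(tr1, tr2)

    Mb = np.column_stack((tb1, tb2, tb3))
    Mr = np.column_stack((tr1, tr2, tr3))
    return Mb @ Mr.T

# Quick self-check with a known rotation (90 degrees about the z-axis).
A_true = np.array([[0.0, 1.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
A_est = triad(A_true @ r1, A_true @ r2, r1, r2)
assert np.allclose(A_est, A_true)
```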

    The Industrial Track of EuroVR 2018: Proceedings of the 15th Annual EuroVR Conference

    A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations

    Automated satellite proximity operations are an increasingly relevant area of mission operations for the US Air Force, with the potential to significantly enhance space situational awareness (SSA). Simultaneous localization and mapping (SLAM) is a computer vision method of constructing and updating a 3D map while keeping track of the location and orientation of the imaging agent inside the map. The main objective of this research effort is to design a monocular SLAM method customized for the space environment. The method developed in this research will be implemented in an indoor proximity operations simulation laboratory. A run-time analysis is performed, showing near real-time operation. The method is verified by comparing SLAM results to truth vertical-rotation data from a CubeSat air-bearing testbed. This work enables control and testing of simulated proximity operations hardware in a laboratory environment. Additionally, this research lays the foundation for autonomous satellite proximity operations with unknown targets and minimal additional size, weight, and power requirements, creating opportunities for numerous mission concepts not previously available.
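    For context on what a single monocular relative-pose step involves, the sketch below uses OpenCV feature matching and essential-matrix decomposition to recover the rotation and a scale-free translation between two frames. It is a generic visual-odometry building block under assumed inputs (two grayscale images and a camera intrinsics matrix K), not the SLAM method developed in this research.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate relative rotation R and unit-scale translation t between two
    monocular frames via ORB matching and essential-matrix decomposition."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches; recoverPose applies the cheirality check.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is recovered only up to scale in the monocular case
```

    In a purely monocular setting the translation is observable only up to scale, which is one reason full SLAM pipelines also maintain a map of triangulated landmarks rather than chaining two-view estimates.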

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for the immediate development of specific automation technology as defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

    PULSAR: Testing the Technologies for On-Orbit Assembly of a Large Telescope

    The EU project PULSAR (Prototype of an Ultra Large Structure Assembly Robot) carried out a feasibility analysis for a potential mission that could demonstrate robotic technology for autonomous assembly of a large space telescope. The project performed the analysis using two hardware demonstrators: one showing the assembly of five segmented mirror tiles with a robotic manipulator, and another showing extended mobility for assembling a large structure in low-gravity conditions. The hardware demonstrators were complemented by a simulation analysis demonstrating the operation of a fully integrated system and addressing the challenges, especially in the field of attitude and orbital control. The techniques developed in the project support the path toward In-Space Servicing, Assembly and Manufacturing (ISAM).

    Practical investigations in robot localization using ultra-wideband sensors

    Robot navigation is rudimentary compared to the capabilities of humans and animals to move about their environments. One of the core processes of navigation is localization, the problem of answering where one is at the present time. Robot localization is the science of using various sensors to inform a robot of where it is within its environment. Ultra-wideband (UWB) radio is one such sensor technology that can return absolute position information. The algorithm used to accomplish this is known as multilateration, which uses a collection of distance measurements between multiple robot-tag and environment-anchor pairs to calculate the tag's position. UWB is especially suited to the task of returning precise distance measurements because it can generate and detect short-duration, high-amplitude pulses. Decawave Ltd. has created a UWB integrated circuit to perform ranging and a suite of products to support this technology. Claimed and verified accuracies using this implementation are on the order of 10 cm. This thesis describes various experiments carried out using Decawave technology for robot localization. The progression of the chapters starts with commercial product verification, moves to the development and testing, in various environments, of an open-source driver package for the Robot Operating System (ROS), then to the development of a novel phase difference of arrival (PDoA) sensor for three-dimensional robot localization without a UWB anchor mesh, and concludes with future research directions and the commercialization potential of UWB. This thesis is designed as a compilation of all that the author has learned through primary and secondary research over the past three years of investigation. The primary contributions are: 1. A modular ROS UWB driver framework and a series of ROS bags for offline experimentation with multilateration algorithms. 2. A robust ROS framework for comparing motion capture system (MoCap) ground truth versus sensor data for rigorous statistical analysis and characterization of multiple sensors. 3. Development of a novel UWB PDoA sensor array and data model to allow 3D localization of a target from a single point without the deployment of an antenna mesh.
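    As a concrete sketch of the multilateration step described above, the following Python function estimates a tag position from ranges to known anchors using Gauss-Newton least squares. The anchor layout, noise level, and function names are illustrative assumptions; this is not the Decawave firmware or the ROS driver package discussed in the thesis.

```python
import numpy as np

def multilaterate(anchors, ranges, initial_guess=None, iterations=10):
    """Estimate a tag position from range measurements to known anchor
    positions via Gauss-Newton least squares (illustrative sketch)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Start from the anchor centroid unless a guess is supplied.
    x = anchors.mean(axis=0) if initial_guess is None else np.asarray(initial_guess, float)
    for _ in range(iterations):
        diffs = x - anchors                    # anchor -> tag vectors
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        jac = diffs / dists[:, None]           # d(range)/d(position)
        residual = ranges - dists
        step, *_ = np.linalg.lstsq(jac, residual, rcond=None)
        x = x + step
    return x

# Example: four anchors at different heights (non-coplanar, so the 3D
# solution is unique) and a tag at (1.0, 2.0, 0.5) m.
rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0, 2.0],
                    [5.0, 0.0, 3.0],
                    [5.0, 5.0, 2.5],
                    [0.0, 5.0, 3.5]])
tag_true = np.array([1.0, 2.0, 0.5])
measured = np.linalg.norm(anchors - tag_true, axis=1) + rng.normal(0.0, 0.05, 4)
print(multilaterate(anchors, measured))  # close to tag_true
```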

    DGNSS-Vision Integration for Robust and Accurate Relative Spacecraft Navigation

    Relative spacecraft navigation based on the Global Navigation Satellite System (GNSS) has already been performed successfully in low Earth orbit (LEO). Very high accuracy, of the order of a millimetre, has been achieved in post-processing using carrier-phase differential GNSS (CDGNSS) and recovering the integer number of wavelengths (the ambiguity) between the GNSS transmitters and the receiver. However, the performance achievable on board, in real time, above LEO and the GNSS constellation would be significantly lower due to limited computational resources, weaker signals, and worse geometric dilution of precision (GDOP). At the same time, monocular vision provides lower accuracy than CDGNSS when there is significant spacecraft separation, and it degrades further for larger baselines and wider fields of view (FOVs). In order to increase the robustness, continuity, and accuracy of a real-time on-board GNSS-based relative navigation solution in GNSS-degraded environments such as geosynchronous and high Earth orbits, we propose a novel navigation architecture based on a tight fusion of carrier-phase GNSS observations and monocular vision-based measurements, which enables fast autonomous relative pose estimation of cooperative spacecraft even in the case of high GDOP and low GNSS visibility, where the GNSS signals are degraded, weak, or cannot be tracked continuously. In this paper we describe the architecture and implementation of a multi-sensor navigation solution and validate the proposed method in simulation. We use a dataset of images synthetically generated according to a chaser/target relative motion in geostationary Earth orbit (GEO) and realistic carrier-phase and code-based GNSS observations simulated at the receiver positions in the same orbits. We demonstrate that our fusion solution provides higher accuracy, higher robustness, and faster ambiguity resolution in degraded GNSS signal conditions, even when using high-FOV cameras.
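    To make the carrier-phase differential GNSS observable concrete, the sketch below forms double-differenced carrier phases and solves for the baseline between two receivers, assuming a short baseline, already-fixed integer ambiguities, and negligible residual atmospheric and clock errors. The function names, wavelength constant, and linearisation about the base receiver are illustrative assumptions; the paper's architecture instead fuses such observations tightly with monocular vision measurements in a navigation filter.

```python
import numpy as np

GPS_L1_WAVELENGTH = 0.19029367  # metres (assumed L1 carrier)

def dd_baseline(sat_pos, phase_rover, phase_base, dd_ambiguities,
                base_pos, ref_idx=0, wavelength=GPS_L1_WAVELENGTH):
    """Estimate the rover-minus-base baseline from double-differenced
    carrier phases with fixed integer ambiguities (illustrative sketch).

    sat_pos        : (N, 3) satellite positions [m], N >= 4
    phase_rover    : (N,) carrier phases at the rover [cycles]
    phase_base     : (N,) carrier phases at the base [cycles]
    dd_ambiguities : (N-1,) fixed DD integer ambiguities [cycles],
                     ordered as the non-reference satellites
    base_pos       : (3,) known base receiver position [m]
    """
    sat_pos = np.asarray(sat_pos, dtype=float)
    base_pos = np.asarray(base_pos, dtype=float)

    # Unit line-of-sight vectors from the base receiver to each satellite.
    los = sat_pos - base_pos
    u = los / np.linalg.norm(los, axis=1, keepdims=True)

    others = [i for i in range(len(sat_pos)) if i != ref_idx]

    # Single differences (rover - base), then double differences against
    # the reference satellite, all in cycles; clocks cancel in the DD.
    sd = np.asarray(phase_rover, float) - np.asarray(phase_base, float)
    dd = sd[others] - sd[ref_idx]

    # Remove the fixed ambiguities and convert to metres.
    y = wavelength * (dd - np.asarray(dd_ambiguities, float))

    # Short-baseline linearisation: DD range ~ -(u_j - u_ref) . baseline
    H = -(u[others] - u[ref_idx])
    baseline, *_ = np.linalg.lstsq(H, y, rcond=None)
    return baseline
```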

    I-Light Symposium 2005 Proceedings

    I-Light was made possible by a special appropriation by the State of Indiana. The research described at the I-Light Symposium has been supported by numerous grants from several sources. Any opinions, findings and conclusions, or recommendations expressed in the 2005 I-Light Symposium Proceedings are those of the researchers and authors and do not necessarily reflect the views of the granting agencies. Indiana University Office of the Vice President for Research and Information Technology; Purdue University Office of the Vice President for Information Technology and CI