
    Autonomous Space Surveillance for Arbitrary Domains

    Space is becoming increasingly congested, and accurately tracking satellites is paramount for the continued safe operation of both manned and unmanned space missions. In addition to new spacecraft launches, satellite break-up events and collisions generate large amounts of orbital debris, dramatically increasing the number of orbiting objects with each such event. To prevent collisions and protect both life and property in orbit, accurate knowledge of the positions of orbiting objects is necessary. Space Domain Awareness (SDA), used interchangeably with Space Situational Awareness (SSA), is the name given to the daunting task of tracking all orbiting objects. In addition to the myriad objects from low Earth orbit (LEO) to geostationary orbit (GEO), there are a growing number of spacecraft in cislunar space, expanding the task of cataloguing and tracking space objects to the whole of the Earth-Moon system. This research proposes a series of algorithms for autonomous SSA of Earth-orbiting and cislunar objects. The algorithms are autonomous in the sense that once a set of raw measurements (images, in this case) is input, no human-in-the-loop input is required to produce an orbit estimate. The research has two main components: an image-processing and satellite-detection component, and a dynamics-modeling component for three-body relative motion. In the image-processing component, resident space objects (RSOs), which are satellites or orbiting debris, are identified in optical images. Two methods of identifying RSOs in a set of images are presented. The first autonomously builds a template image to match a constellation of satellites and then matches RSOs across a set of images. The second uses optical flow, differentiating between stars and RSOs by their image velocities. 
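    The core idea of the second method, separating stars from RSOs by their image velocities, can be sketched in a few lines. This is a minimal illustration, not the thesis's actual algorithm: it assumes per-object pixel velocities have already been extracted (e.g. by optical flow on tracked centroids), and it flags as RSO candidates the objects whose motion deviates from the common apparent motion of the star field. The threshold and the robust median statistics are illustrative choices.

```python
import numpy as np

def classify_rsos(velocities, threshold=2.0):
    """Split detections into stars and RSO candidates by image velocity.

    velocities : (N, 2) array of per-object pixel velocities between frames.
    Stars share a common apparent motion, so the median velocity is a robust
    estimate of the star field's drift; objects whose residual from that
    common motion is large (relative to the residuals' median spread) are
    flagged as RSO candidates.
    """
    v = np.asarray(velocities, dtype=float)
    common = np.median(v, axis=0)               # robust estimate of star motion
    residual = np.linalg.norm(v - common, axis=1)
    scale = np.median(residual) + 1e-9          # robust spread of the residuals
    return residual > threshold * scale

# Example: five star-like detections plus one fast mover.
vels = np.array([[1.0, 0.1], [1.1, 0.0], [0.9, 0.1],
                 [1.0, 0.0], [1.05, 0.05], [6.0, -3.0]])
print(classify_rsos(vels))   # only the last object stands out
```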
Once RSOs have been detected, measurements are generated from the detected RSO locations to estimate the orbit of the observed object. The orbit-determination component includes multiple methods capable of handling both Earth-orbiting and cislunar observations: batch least squares and unscented Kalman filtering for Earth-orbiting objects, and, for cislunar objects, a novel application of a particle swarm optimizer (PSO). The PSO algorithm ingests a set of measurements and attempts to match a set of virtual particle measurements to the truth measurements. The PSO orbit-determination method is tested using both MATLAB and Python implementations. The second main component of this research develops a novel linear dynamics model of relative motion for satellites in cislunar space. A set of novel linear relative equations of motion is developed with a semi-analytical matrix exponential method. The motion models are tested on various cislunar orbit geometries for both the elliptical restricted three-body problem (ER3BP) and the circular restricted three-body problem (CR3BP) through MATLAB simulations. The linear solution method's accuracy is compared to the non-linear equations of relative motion and is seen to hold to meter-level accuracy in deputy position for a variety of orbits and time spans. Two applications of the linearized motion models are then developed. The first defines a differential corrector to compute closed relative-motion trajectories in a relative three-body frame. The second uses the matrix exponential solution of the linearized equations of relative motion to develop a method of initial relative orbit determination (IROD) for the CR3BP.
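The PSO measurement-matching idea can be illustrated with a toy sketch. This is not the thesis's implementation: the "state" and the linear measurement model below are hypothetical stand-ins, chosen only to show how a particle swarm drives the residual between virtual and truth measurements to zero; the real problem propagates candidate orbits through three-body dynamics instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the orbit-determination setting: a hidden true
# state generates the truth measurements, and each particle's candidate
# state generates virtual measurements for comparison.
x_true = np.array([3.0, -2.0])
H = rng.normal(size=(6, 2))      # toy linear measurement model (made up)
z_true = H @ x_true              # truth measurements

def cost(x):
    """Sum of squared residuals between virtual and truth measurements."""
    return float(np.sum((H @ x - z_true) ** 2))

def pso(cost, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best particle swarm optimizer."""
    pos = rng.uniform(-10, 10, size=(n, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + attraction to personal best + attraction to global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([cost(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

print(pso(cost))   # converges near x_true = [3, -2]
```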

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras starting from their working principle, then the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
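    As a minimal illustration of the event representation the survey describes, a stream of (time, location, polarity) tuples rather than frames, the sketch below (not from the survey itself) accumulates the events falling inside a time window into a signed frame. This is one of the simplest event-processing steps; the coordinate convention and window choice here are illustrative.

```python
import numpy as np

def accumulate_events(events, shape, t0, t1):
    """Render an event stream into a signed frame.

    events : iterable of (t, x, y, p) tuples, where p in {+1, -1} is the sign
             of the per-pixel brightness change the sensor reported at time t.
    shape  : (height, width) of the sensor array.
    Only events with t0 <= t < t1 are accumulated; each event adds its
    polarity to its pixel, so the frame shows where (and in which direction)
    brightness changed within the window.
    """
    frame = np.zeros(shape, dtype=np.int32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += p
    return frame

# Three events on a 4x4 sensor; the third falls outside the time window.
evts = [(0.001, 1, 2, +1), (0.002, 1, 2, +1), (0.050, 3, 0, -1)]
img = accumulate_events(evts, (4, 4), 0.0, 0.010)
print(img[2, 1])   # → 2
```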

    Robust convex optimisation techniques for autonomous vehicle vision-based navigation

    This thesis investigates new convex optimisation techniques for motion and pose estimation. Numerous computer vision problems can be formulated as optimisation problems, which are generally solved either via linear techniques using the singular value decomposition or via iterative methods under an L2-norm minimisation. Linear techniques offer a closed-form solution that is simple to implement, but the quantity being minimised is not geometrically or statistically meaningful. Conversely, L2 algorithms rely on iterative estimation, minimising a cost function with algorithms such as Levenberg-Marquardt, Gauss-Newton, gradient descent or conjugate gradient. These cost functions are geometrically interpretable and can be statistically optimal under an assumption of Gaussian noise. However, in addition to their sensitivity to initial conditions, these algorithms are often slow and bear a high probability of getting trapped in a local minimum or producing infeasible solutions, even for small noise levels. In light of the above, this thesis focuses on developing new techniques for finding globally optimal solutions within a convex optimisation framework. Convex optimisation techniques in motion estimation have recently revealed considerable advantages: convex optimisation guarantees a global minimum, and the cost function is geometrically meaningful. Moreover, robust optimisation is a recent approach to optimisation under uncertain data. In recent years the need to cope with uncertain data has become especially acute, particularly in real-world applications. In such circumstances, robust optimisation aims to recover an optimal solution whose feasibility is guaranteed for any realisation of the uncertain data. 
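    To make the contrast concrete, the snippet below sketches the kind of iterative L2 minimisation the thesis refers to: a one-parameter Gauss-Newton fit (a toy example, not from the thesis) that refines an initial guess by repeatedly linearising the residuals via the normal equations. Unlike a convex formulation, its success depends on the quality of the initial guess.

```python
import numpy as np

# Toy problem: fit y = exp(a * t) to noiseless data by iterative refinement.
t = np.linspace(0.0, 1.0, 20)
a_true = 1.5
y = np.exp(a_true * t)

def gauss_newton(a0, iters=20):
    """Minimise sum((y - exp(a*t))^2) over the scalar a, starting from a0."""
    a = a0
    for _ in range(iters):
        r = y - np.exp(a * t)          # residual vector
        J = -(t * np.exp(a * t))       # Jacobian of the residuals w.r.t. a
        a = a - (J @ r) / (J @ J)      # Gauss-Newton (normal-equation) step
    return a

print(gauss_newton(a0=0.5))           # converges to ~1.5 from a nearby start
```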
Many researchers avoid modelling uncertainty, owing to the added complexity of constructing a robust optimisation model and to a lack of knowledge about the nature of these uncertainties and, especially, their propagation. In this thesis, by contrast, robust convex optimisation that estimates the uncertainties at every step is investigated for the motion estimation problem. First, a solution using convex optimisation coupled with the recursive least squares (RLS) algorithm and a robust H∞ filter is developed for motion estimation. In another solution, uncertainties and their propagation are incorporated into a robust L∞ convex optimisation framework for monocular visual motion estimation; here, robust least squares is combined with a second-order cone program (SOCP). A technique to improve the accuracy and robustness of the fundamental matrix is also investigated: it uses the covariance intersection approach to fuse feature-location uncertainties, which leads to more consistent motion estimates. Loop-closure detection is crucial for improving the robustness of navigation algorithms: after long navigation in an unknown environment, detecting that a vehicle has returned to a previously visited location provides an opportunity to increase the accuracy and consistency of the estimate. In this context, an efficient appearance-based method for visual loop-closure detection is developed, based on the combination of a Gaussian mixture model with a KD-tree data structure. Deploying this technique for loop-closure detection, a robust L∞ convex pose-graph optimisation solution for unmanned aerial vehicle (UAV) monocular motion estimation is introduced as well. In the literature, most proposed solutions formulate pose-graph optimisation as a least-squares problem, minimising a cost function with iterative methods. 
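The covariance intersection fusion mentioned above has a compact standard form that can be sketched directly. The example below is a generic illustration, not the thesis's implementation: the two feature-location estimates are made up, and the intersection weight is chosen by a simple grid search minimising the trace of the fused covariance (one common criterion).

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=51):
    """Fuse two estimates whose cross-correlation is unknown.

    Covariance intersection forms P = (w*inv(P1) + (1-w)*inv(P2))^-1 and
    x = P @ (w*inv(P1)@x1 + (1-w)*inv(P2)@x2), guaranteeing a consistent
    fused estimate for any unknown correlation. The weight w in [0, 1] is
    picked here by grid search to minimise trace(P).
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_omega):
        info = w * I1 + (1.0 - w) * I2
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two feature-location estimates, each confident along a different axis.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.2]), np.diag([4.0, 1.0])
xf, Pf = covariance_intersection(x1, P1, x2, P2)
print(xf, np.trace(Pf))   # fused estimate lies between the two inputs
```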
In this work, robust convex optimisation under the L∞ norm is adopted, which efficiently corrects the UAV's pose after loop-closure detection. To round out the work in this thesis, a system for cooperative monocular visual motion estimation with multiple aerial vehicles is proposed. The cooperative motion estimation employs state-of-the-art approaches for optimisation, individual motion estimation and registration. Three-view geometry algorithms in a convex optimisation framework are deployed on board the monocular vision system of each vehicle. In addition, vehicle-to-vehicle relative pose estimation is performed with a novel robust registration solution in a global optimisation framework. In parallel, and as a complementary solution for the relative pose, a robust non-linear H∞ solution is designed to fuse measurements from the UAVs' on-board inertial sensors with the visual estimates. The suggested contributions have been exhaustively evaluated in a number of real-image experiments in the laboratory, using monocular vision systems and range-imaging devices. In this thesis, we propose several solutions towards the goal of robust visual motion estimation using convex optimisation. We show that the convex optimisation framework may be extended to include uncertainty information in order to achieve robust and optimal solutions, and we observe that convex optimisation is a practical and very appealing alternative to linear techniques and iterative methods.

    Project Tech Top study of lunar, planetary and solar topography Final report

    Data acquisition techniques for information on lunar, planetary, and solar topography.