
    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Get PDF
    Context: Image processing and computer vision are rapidly becoming more commonplace, and the amount of information about a scene, such as 3D geometry, that can be obtained from an image, or from multiple images of the scene, is steadily increasing thanks to rising sensor resolutions, the wide availability of imaging sensors, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers and GPS receivers to be included alongside imaging devices at a consumer level. Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques. Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating data from the orientation sensor and excluding its use. Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs, whereas a method described in the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer-vision methods also produced a result, the sensor-based method was found to be the least accurate. Conclusion: The results show that using an orientation sensor alongside an imaging device can improve both the accuracy and the reliability of calculations of the scene's geometry; however, noise from the orientation sensor can limit this accuracy, and further research would be needed to determine the magnitude of this problem and methods of mitigation
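
    As an illustration of the hybrid idea, the sketch below assembles a fundamental matrix when the relative rotation is taken as known (for example, read from the orientation sensor), so that only the translation direction has to be recovered from point matches. The intrinsics K1 and K2, the rotation R and the matched pixel coordinates are assumed inputs; this is a minimal sketch of the general technique, not the thesis's exact algorithm.

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix so that skew(v) @ u equals np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_known_rotation(K1, K2, R, pts1, pts2):
    """Given a relative rotation R (e.g. from an orientation sensor) and
    matched pixel coordinates pts1, pts2 (Nx2, N >= 2), solve the epipolar
    constraint, which is linear in the translation direction t, and
    assemble F = K2^-T [t]x R K1^-1."""
    x1 = np.linalg.inv(K1) @ np.column_stack([pts1, np.ones(len(pts1))]).T
    x2 = np.linalg.inv(K2) @ np.column_stack([pts2, np.ones(len(pts2))]).T
    # x2^T [t]x (R x1) = 0  ->  ((R x1) x x2) . t = 0, one row per match
    A = np.cross((R @ x1).T, x2.T)
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                        # null vector = translation direction (up to sign)
    F = np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)
    return F / np.linalg.norm(F)      # F is only defined up to scale
```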

    Precision Pointing Control System (PPCS) system design and analysis

    Get PDF
    The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to orient up to six independent gimbaled experiment platforms to a design-goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized, earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24-hour geosynchronous, with a design-goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination, where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control, where gimbal orientation is controlled open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target
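
    A minimal sketch of the open-loop pointing function described above: the target line of sight, known in inertial space, is rotated into body axes using the determined attitude and converted into two gimbal angles. The axis convention (azimuth about body z, boresight along body x at zero angles) is an illustrative assumption, not the PPCS definition.

```python
import numpy as np

def open_loop_gimbal_angles(R_body_from_inertial, target_dir_inertial):
    """Open-loop two-axis gimbal command: rotate the target line of sight
    into body axes and convert to azimuth/elevation gimbal angles.
    Assumes azimuth is measured about body +z and the boresight lies along
    body +x at zero gimbal angles (illustrative convention only)."""
    d = R_body_from_inertial @ (target_dir_inertial / np.linalg.norm(target_dir_inertial))
    azimuth = np.arctan2(d[1], d[0])                   # rotation about body z
    elevation = np.arcsin(np.clip(d[2], -1.0, 1.0))    # rotation toward body z
    return np.degrees(azimuth), np.degrees(elevation)
```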

    Application of inertial instruments for DSN antenna pointing and tracking

    Get PDF
    The feasibility of using inertial instruments to determine the pointing attitude of the NASA Deep Space Network antennas is examined. The objective is to obtain 1 mdeg pointing knowledge in both blind pointing and tracking modes to facilitate operation of the Deep Space Network 70 m antennas at 32 GHz. A measurement system employing accelerometers, an inclinometer, and optical gyroscopes is proposed. The initial pointing attitude is established by determining the direction of the local gravity vector using the accelerometers and the inclinometer, and the Earth's spin axis using the gyroscopes. Pointing during long-term tracking is maintained by integrating the gyroscope rates and augmenting these measurements with knowledge of the local gravity vector. A minimum-variance estimator is used to combine measurements to obtain the antenna pointing attitude. A key feature of the algorithm is its ability to recalibrate accelerometer parameters during operation. A survey of available inertial instrument technologies is also given
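
    The initial-attitude step described above (gravity direction from the accelerometers and inclinometer, Earth spin axis from the gyroscopes) can be illustrated with a TRIAD-style construction. The sketch below is a generic version of that idea, not the minimum-variance estimator proposed in the report; all argument names are assumptions.

```python
import numpy as np

def triad_attitude(g_body, omega_body, g_ref, omega_ref):
    """TRIAD-style attitude determination: build orthonormal triads from the
    measured gravity and Earth-rate vectors in the instrument frame and from
    their known directions in the reference frame, then form the rotation
    matrix mapping reference axes to instrument axes."""
    def triad(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(t1, v2)
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    Mb = triad(g_body, omega_body)    # triad from the measurements
    Mr = triad(g_ref, omega_ref)      # triad from the reference directions
    return Mb @ Mr.T                  # rotation: reference frame -> instrument frame
```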

    High Accuracy Tracking of Space-Borne Non-Cooperative Targets

    Get PDF

    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Full text link
    Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground based mobile or robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human readable way. Comment: Published in MDPI Sensors, 30 October 201
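
    The core of the method, maximising a reprojection likelihood over the 6D offset, has the structure sketched below. For brevity the sketch uses a generic pinhole projection and a plain least-squares solver in place of the paper's line-scan projection model and MCMC uncertainty estimation; the function names and pose conventions are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.optimize import least_squares

def calibrate_camera_offset(nav_poses, world_pts, pixel_obs, K, x0=np.zeros(6)):
    """Estimate the rigid camera-to-navigation-frame offset by minimising the
    reprojection error of labelled calibration points.
    nav_poses: list of (R_nav, t_nav) world->body transforms, one per point;
    world_pts: Nx3 triangulated points;  pixel_obs: Nx2 labelled pixels;
    x0: [rx, ry, rz, tx, ty, tz] initial offset guess (rotation vector + translation)."""
    def residuals(x):
        R_off, t_off = R.from_rotvec(x[:3]).as_matrix(), x[3:]
        res = []
        for (R_nav, t_nav), X, uv in zip(nav_poses, world_pts, pixel_obs):
            Xb = R_nav @ X + t_nav            # world point in body frame
            Xc = R_off @ Xb + t_off           # body frame -> camera frame
            u = K @ (Xc / Xc[2])              # pinhole projection (stand-in model)
            res.extend(u[:2] - uv)
        return res
    sol = least_squares(residuals, x0)
    return sol.x                              # estimated 6D offset
```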

    Development of a Low-Cost 6 DOF Brick Tracking System for Use in Advanced Gas-Cooled Reactor Model Tests

    Get PDF
    This paper presents the design of a low-cost, compact instrumentation system to enable six degree of freedom motion tracking of acetal bricks within an experimental model of a cracked Advanced Gas-Cooled Reactor (AGR) core. The system comprises optical and inertial sensors and capitalises on the advantages offered by data fusion techniques. The optical system tracks LED indicators, allowing a brick to be accurately located even in cluttered images. The LED positions are identified using a geometrical correspondence algorithm, which was optimised to be computationally efficient for shallow movements, and complex camera distortions are corrected using a versatile Incident Ray-Tracking calibration. Then, a Perspective-Ray-based Scaled Orthographic projection with Iteration (PRSOI) algorithm is applied to each LED position to determine the six degree of freedom pose. Results from experiments show that the system achieves a low Root Mean Squared (RMS) error of 0.2296 mm in x, 0.3943 mm in y, and 0.0703 mm in z. Although providing an accurate measurement solution, the optical tracking system has a low sample rate and requires the line of sight to be maintained throughout each test. To increase the robustness, accuracy, and sampling frequency of the system, the optical system can be augmented with an Inertial Measurement Unit (IMU). This paper presents a method to integrate the optical system and IMU data by accurately timestamping data from each set of sensors and aligning the two coordinate axes. Once miniaturised, the developed system will be used to track smaller components within the AGR models that cannot be tracked with current instrumentation, expanding reactor core modelling capabilities
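
    One ingredient of the optical/IMU integration described above is bringing the two timestamped streams onto a common time base before fusion. The sketch below simply resamples IMU angular rates onto the lower-rate optical timestamps, assuming both clocks share units and differ only by a known constant offset; it illustrates the idea rather than the system's implementation, and the names are assumptions.

```python
import numpy as np

def align_imu_to_optical(t_optical, t_imu, imu_rates, clock_offset=0.0):
    """Resample IMU angular-rate samples onto the (lower-rate) optical
    timestamps so the two streams can be fused sample-for-sample.
    t_optical, t_imu: 1D arrays of timestamps;  imu_rates: Nx3 rates;
    clock_offset: constant offset added to the IMU clock."""
    t_imu_corrected = t_imu + clock_offset
    # Linear interpolation of each rate axis at the optical sample times
    return np.column_stack([
        np.interp(t_optical, t_imu_corrected, imu_rates[:, axis])
        for axis in range(imu_rates.shape[1])
    ])
```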

    Homography-Based State Estimation for Autonomous Exploration in Unknown Environments

    Get PDF
    This thesis presents the development of vision-based state estimation algorithms to enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. These state estimation algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates the camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to perform sensor fusion using measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires the processing of images from a monocular camera to detect and track planar feature points, followed by the computation of homography parameters. The state estimation algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. The state estimation algorithms are implemented using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but accumulates drift errors over time due to the relative nature of the homography-based position measurement
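
    The homography measurement at the heart of the filter can be pictured with OpenCV: a homography is fitted to tracked planar points and decomposed into candidate rotations and translations. This minimal sketch omits the SURF tracking, the EKF fusion and the disambiguation of the four decomposition solutions, and is not the thesis's implementation.

```python
import cv2
import numpy as np

def homography_motion(pts_prev, pts_curr, K):
    """Recover candidate camera motions from tracked planar feature points
    via the homography relationship.
    pts_prev, pts_curr: Nx2 arrays of matched image points on a plane;
    K: 3x3 camera intrinsic matrix."""
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    # Each (R, t/d, n) triple is one physically possible motion hypothesis;
    # the correct one is usually selected with cheirality or prior knowledge.
    return list(zip(rotations, translations, normals))
```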

    Visual-Inertial Mapping with Non-Linear Factor Recovery

    Full text link
    Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. To estimate the motion and geometry from a set of images, large baselines are required; because of that, most systems operate on keyframes that have large time intervals between each other. Inertial data, on the other hand, quickly degrades with the duration of the intervals, and after several seconds of integration it typically contains little useful information. In this paper, we propose to extract relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that make an optimal approximation of the information on the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable, and improve the robustness and the accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches
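
    The final mapping step, combining recovered factors with loop-closing constraints in a single non-linear least-squares problem, can be pictured with the toy 2D pose graph below. The (x, y, yaw) parametrisation and the factor format are simplifying assumptions; the paper's visual-inertial bundle adjustment is considerably richer than this sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose_graph(initial_poses, factors):
    """Tiny 2D pose-graph optimisation: relative-pose factors (e.g. recovered
    from VIO) and loop-closure factors are combined in one least-squares
    problem.  Poses are (x, y, yaw);
    factors: list of (i, j, dx, dy, dyaw, weight) relative constraints."""
    def residuals(flat):
        poses = flat.reshape(-1, 3)
        res = []
        for i, j, dx, dy, dyaw, w in factors:
            xi, yi, ti = poses[i]
            xj, yj, tj = poses[j]
            c, s = np.cos(ti), np.sin(ti)
            # predicted relative pose of node j expressed in the frame of node i
            pdx = c * (xj - xi) + s * (yj - yi)
            pdy = -s * (xj - xi) + c * (yj - yi)
            pdt = (tj - ti - dyaw + np.pi) % (2 * np.pi) - np.pi
            res += [w * (pdx - dx), w * (pdy - dy), w * pdt]
        res += list(poses[0])          # gauge fixing: anchor the first pose
        return res
    sol = least_squares(residuals, np.asarray(initial_poses, float).ravel())
    return sol.x.reshape(-1, 3)
```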

    Fusion of Imaging and Inertial Sensors for Navigation

    Get PDF
    The motivation of this research is to address the limitations of satellite-based navigation by fusing imaging and inertial systems. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presenting a transformation technique to detect features within an image. Given a set of features, a statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This coupling of the imaging and inertial sensors at a deep level is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory using an extended Kalman filter. After accomplishing a proper calibration, the image-aided inertial navigation algorithm is then tested using a combination of simulation and ground tests using both tactical and consumer-grade inertial sensors. While limitations of the Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions
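
    The fusion of feature matches with inertially propagated states in the extended Kalman filter follows the standard measurement-update pattern sketched below. The state layout, measurement function h and Jacobian H are left abstract, so this is a generic illustration rather than the dissertation's filter.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R_meas):
    """Generic extended Kalman filter measurement update, of the kind used to
    fuse image-feature measurements with an inertially propagated state.
    x, P: prior state and covariance;  z: measurement (e.g. matched feature
    coordinates);  h: measurement function;  H: its Jacobian evaluated at x;
    R_meas: measurement noise covariance."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R_meas                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```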