
    Method of optimization of the fundamental matrix by technique speeded up robust features application of different stress images

    The purpose of determining the fundamental matrix (F) is to define the epipolar geometry relating two 2D images of the same scene or video sequence so that the 3D scene can be recovered. The problem we address in this work is the estimation of the localization error and the processing time. We start by comparing the following feature extraction techniques: Harris, features from accelerated segment test (FAST), scale-invariant feature transform (SIFT), and speeded-up robust features (SURF), with respect to the number of detected points and correct matches under different image changes. We then merge the best candidates selected by an objective function that groups the descriptors by image region in order to calculate F. Next, we apply the normalized eight-point algorithm, which also automatically eliminates outliers, to find the optimal solution F. Our optimization approach is tested on real images with different scene variations. The simulation results show good accuracy: the computation time of F does not exceed 900 ms, and the projection error is at most 1 pixel, regardless of the modification.
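The normalized eight-point step at the core of this pipeline can be sketched as follows. This is a minimal numpy illustration on synthetic, noise-free correspondences; the SURF extraction, region-wise descriptor grouping, and outlier elimination described in the abstract are not reproduced here, and all camera numbers are invented for the demonstration:

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: translate to centroid, scale so the
    mean distance from the origin is sqrt(2)."""
    centroid = pts.mean(axis=0)
    d = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1]])
    ptsh = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ptsh.T).T, T

def eight_point(p1, p2):
    """Normalized eight-point estimate of F (x2^T F x1 = 0)."""
    q1, T1 = normalize_points(p1)
    q2, T2 = normalize_points(p2)
    A = np.column_stack([q2[:, 0] * q1[:, 0], q2[:, 0] * q1[:, 1], q2[:, 0],
                         q2[:, 1] * q1[:, 0], q2[:, 1] * q1[:, 1], q2[:, 1],
                         q1[:, 0], q1[:, 1], np.ones(len(q1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)           # null vector of A
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt   # enforce rank 2
    F = T2.T @ F @ T1                        # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic check: project random 3D points into two cameras
# and verify the epipolar constraint x2^T F x1 ~= 0.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5])
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([[0.5], [0.0], [0.05]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
Xh = np.hstack([X, np.ones((20, 1))])
x1 = (P1 @ Xh.T).T
x1 = x1[:, :2] / x1[:, 2:]
x2 = (P2 @ Xh.T).T
x2 = x2[:, :2] / x2[:, 2:]
F = eight_point(x1, x2)
x1h = np.hstack([x1, np.ones((20, 1))])
x2h = np.hstack([x2, np.ones((20, 1))])
residual = np.abs(np.einsum('ij,jk,ik->i', x2h, F, x1h))
```

With exact correspondences the residual is at numerical-precision level; with real SURF matches, the outlier test mentioned in the abstract would be applied on these residuals.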

    Study of Computational Image Matching Techniques: Improving Our View of Biomedical Image Data

    Image matching techniques have proven necessary in various fields of science and engineering, with many new methods and applications introduced over the years. In this PhD thesis, several computational image matching methods are introduced and investigated for improving the analysis of various biomedical image data. These improvements include the use of matching techniques for enhancing visualization of cross-sectional imaging modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), denoising of retinal Optical Coherence Tomography (OCT), and high-quality 3D reconstruction of surfaces from Scanning Electron Microscope (SEM) images. This work greatly improves the interpretation of image data, with far-reaching consequences for basic sciences research. The thesis starts with a general notion of the problem of image matching, followed by an overview of the topics covered. It then introduces and investigates several applications of image matching/registration in biomedical image processing: a) registration-based slice interpolation, b) fast mesh-based deformable image registration, and c) simultaneous rigid registration and Robust Principal Component Analysis (RPCA) for speckle-noise reduction of retinal OCT images. Moving toward a different notion of image matching/correspondence, the problem of view synthesis and 3D reconstruction is considered next, with a focus on 3D reconstruction of microscopic samples from 2D images captured by SEM. Starting from sparse feature-based matching techniques, an extensive analysis is provided of several well-known feature detector/descriptor techniques, namely ORB, BRIEF, SURF, and SIFT, for the problem of multi-view 3D reconstruction. This chapter contains qualitative and quantitative comparisons that reveal the shortcomings of sparse feature-based techniques.
    This is followed by the introduction of a novel framework using sparse-dense matching/correspondence for high-quality 3D reconstruction of SEM images. As will be shown, the proposed framework results in better reconstructions than state-of-the-art sparse feature-based techniques. Even though the proposed framework produces satisfactory results, there is room for improvement. Improvements become more necessary when dealing with more complex microscopic samples imaged by SEM, as well as in cases with large displacements between corresponding points in micrographs. Therefore, based on the proposed framework, a new approach is proposed for high-quality 3D reconstruction of microscopic samples. While for simpler microscopic samples the performance of the two proposed techniques is comparable, the new technique results in a more truthful reconstruction of highly complex samples. The thesis concludes with an overview and pointers regarding future directions of the research, using both multi-view and photometric techniques for 3D reconstruction of SEM images.
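As context for the sparse feature-based comparisons (ORB, BRIEF, SURF, SIFT), the matching step all of these detector/descriptor pipelines share, nearest-neighbour descriptor matching with Lowe's ratio test, can be sketched in numpy. This is a generic illustration, not code from the thesis, and it uses float descriptors with Euclidean distance (binary descriptors such as ORB/BRIEF would use Hamming distance instead):

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.75):
    """Brute-force nearest-neighbour matching with Lowe's ratio test.

    d1, d2: float descriptor arrays of shape (n1, k) and (n2, k),
    e.g. 128-D SIFT vectors. Returns (i, j) index pairs whose best
    match is clearly better than the second-best, rejecting
    ambiguous correspondences."""
    # Pairwise Euclidean distances between every descriptor in d1 and d2.
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j1, j2 = np.argsort(row)[:2]      # best and second-best neighbour
        if row[j1] < ratio * row[j2]:     # ratio test
            matches.append((i, j1))
    return matches

# Sanity check on synthetic descriptors: d1 is a shuffled, slightly
# noisy copy of d2, so every match should recover the permutation.
rng = np.random.default_rng(1)
d2 = rng.normal(size=(30, 64))
perm = rng.permutation(30)
d1 = d2[perm] + 1e-6 * rng.normal(size=(30, 64))
matches = match_descriptors(d1, d2)
```

In a multi-view reconstruction pipeline, such match sets feed the fundamental/essential matrix estimation and triangulation stages.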

    Development of a calibration pipeline for a monocular-view structured illumination 3D sensor utilizing an array projector

    Commercial off-the-shelf digital projection systems are commonly used in active structured-illumination photogrammetry of macro-scale surfaces due to their relatively low cost, accessibility, and ease of use. Such projectors can be described by an inverse pinhole model, and the calibration pipeline for a 3D sensor using pinhole devices in a projector-camera configuration is well established. Recently, there have been advances in projection systems offering projection speeds greater than those of conventional off-the-shelf digital projectors; however, these systems are chip-less, have no projection lens, and therefore cannot be calibrated using established techniques based on the pinhole assumption. This work utilizes such unconventional projection systems, known as array projectors, which contain not one but multiple projection channels, each projecting a temporal sequence of illumination patterns; none of the channels implements a digital projection chip or a projection lens. To work around the calibration problem, previous realizations of a 3D sensor based on an array projector required a stereo-camera setup, with triangulation taking place between the two pinhole-modelled cameras. However, a monocular setup is desirable, as a single-camera configuration reduces cost, weight, and form factor. This study presents a novel calibration pipeline that realizes a single-camera setup. A generalized intrinsic calibration process without model assumptions was developed that directly samples the illumination frustum of each array projection channel. An extrinsic calibration process was then created that determines the pose of the single camera through a downhill simplex optimization initialized by particle swarm. Lastly, a method to store the intrinsic calibration with the aid of an easily realizable calibration jig was developed for re-use in arbitrary measurement camera positions, so that the intrinsic calibration does not have to be repeated.
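The extrinsic step, downhill simplex (Nelder-Mead) refinement of a camera pose against a reprojection cost, can be illustrated with a standard pinhole model. This is only a sketch of the optimization idea: the sensor in the paper is not pinhole-modelled, the particle-swarm initialization is replaced here by a hand-picked starting guess, and all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def rodrigues(r):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_cost(pose, K, X, x_obs):
    """Sum of squared pixel errors for pose = (rvec, tvec), 6 parameters."""
    R, t = rodrigues(pose[:3]), pose[3:]
    x = (K @ (X @ R.T + t).T).T
    x = x[:, :2] / x[:, 2:]
    return ((x - x_obs) ** 2).sum()

# Synthetic setup (illustrative values, not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (15, 3)) + np.array([0, 0, 5])
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
true_pose = np.array([0.10, -0.05, 0.02, 0.10, -0.20, 0.30])
R, t = rodrigues(true_pose[:3]), true_pose[3:]
x_cam = (K @ (X @ R.T + t).T).T
x_obs = x_cam[:, :2] / x_cam[:, 2:]

# Downhill simplex refinement from a perturbed initial guess.
x0 = true_pose + 0.01
res = minimize(reprojection_cost, x0, args=(K, X, x_obs),
               method='Nelder-Mead',
               options={'maxiter': 20000, 'xatol': 1e-10, 'fatol': 1e-12})
```

Nelder-Mead needs no gradients, which is why derivative-free pose refinement of this kind pairs naturally with a stochastic global initializer such as particle swarm.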

    Estimating intrinsic camera parameters from the fundamental matrix using an evolutionary approach

    Calibration is the process of computing the intrinsic (internal) camera parameters from a series of images. Normally, calibration is done by placing predefined targets in the scene or by using special camera motions, such as rotations. If neither of these restrictions holds, the process is called autocalibration because it is done automatically, without user intervention. Using autocalibration, it is possible to create 3D reconstructions from a sequence of uncalibrated images without relying on a formal camera calibration process. The fundamental matrix describes the epipolar geometry between a pair of images and can be calculated directly from 2D image correspondences. We show that autocalibration from a set of fundamental matrices can be transformed into a global minimization problem with a suitable cost function. We use a stochastic optimization approach taken from the field of evolutionary computing to solve this problem. A number of experiments performed on published and standardized data sets show the effectiveness of the approach. The basic assumption of this method is that the internal (intrinsic) camera parameters remain constant throughout the image sequence, that is, the images are taken by the same camera without varying quantities such as the focal length. We show that for the autocalibration of the focal length and aspect ratio, the evolutionary method achieves results comparable to published methods, but is simpler to implement and efficient enough to handle larger image sequences.
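The paper's exact cost function is not reproduced here. As a generic illustration of the idea, the following sketch minimizes the well-known singular-value criterion (for the correct intrinsics K, the essential matrix E = KᵀFK has two equal non-zero singular values) over the focal length alone, using SciPy's differential evolution as the stochastic optimizer; the geometry and all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution

def singular_value_cost(f, F, pp):
    """Singular-value autocalibration cost: build K from a candidate
    focal length f (known principal point pp), form E = K^T F K, and
    measure the normalized gap between its two largest singular values.
    The gap vanishes at the true focal length."""
    K = np.array([[f, 0, pp[0]], [0, f, pp[1]], [0, 0, 1.0]])
    s = np.linalg.svd(K.T @ F @ K, compute_uv=False)
    return (s[0] - s[1]) / s[1]

# Synthetic ground truth: one fundamental matrix from a known camera.
f_true, pp = 800.0, (320.0, 240.0)
K = np.array([[f_true, 0, pp[0]], [0, f_true, pp[1]], [0, 0, 1.0]])
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([1.0, 0.2, 0.1])
tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
E = tx @ R                      # essential matrix [t]_x R
Kinv = np.linalg.inv(K)
F = Kinv.T @ E @ Kinv           # corresponding fundamental matrix

# Evolutionary (stochastic) search for the focal length.
res = differential_evolution(lambda v: singular_value_cost(v[0], F, pp),
                             bounds=[(300, 2000)], seed=0, tol=1e-12)
```

With several fundamental matrices, the costs are simply summed, and further intrinsics (aspect ratio, principal point) become extra dimensions of the same search.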

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection, and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular localization choice for many MAV platforms today, it suffers from inaccurate estimation around large structures and is completely unavailable in remote areas and indoor scenarios. Among the alternative sensing mechanisms, cameras are an attractive onboard sensor due to the richness of the information they capture, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to perform a task rather than just one. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution, and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path-planning frameworks. Although these were created as two separate steps, the ideal application would contain both as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, this dissertation first investigates methods to perform feature-based localization, with the possibility of fusing two types of localization data: one computed onboard each MAV, and the other coming from relative measurements between the vehicles.
    Feature-based methods were preferred over direct methods because tangible data packets can more easily be transferred between vehicles, and because feature data requires minimal data transfer compared to full images. Inspired by techniques from multiple-view geometry and structure from motion, this localization algorithm presents a decentralized, full 6-degree-of-freedom pose estimation method, complete with a consistent fusion methodology that obtains robust estimates only at discrete instants and thus does not require constant communication between vehicles. This method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path, but also for ensuring that this path allows for successful localization throughout. As joint multi-robot planning can be computationally intractable, planning was divided into two steps from a vision-aware perspective. As the first step toward improving localization performance is having access to a better map of features, a next-best-multi-view algorithm was developed that computes the best viewpoints for multiple vehicles to improve an existing sparse reconstruction. This algorithm contains a cost function with vision-based heuristics that determines the quality of the images expected from any set of viewpoints; the cost is minimized through an efficient evolutionary strategy known as Covariance Matrix Adaptation (CMA-ES), which can handle very high-dimensional sample spaces.
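The sample-evaluate-recombine loop underlying this kind of evolutionary strategy can be conveyed with a deliberately simplified sketch. This is not CMA-ES (which additionally adapts a full covariance matrix and its step size from the evolution path) but an isotropic (mu/mu, lambda) strategy on a toy cost; the decay factor and population sizes are arbitrary illustrative choices:

```python
import numpy as np

def isotropic_es(cost, x0, sigma=1.0, lam=20, iters=200, seed=0):
    """Simplified (mu/mu, lambda) evolution strategy: sample lam
    candidates around the current mean, recombine the best mu into
    the new mean, and shrink the step size geometrically.
    CMA-ES would instead adapt a full covariance matrix."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, float)
    mu = lam // 4
    for _ in range(iters):
        pop = mean + sigma * rng.normal(size=(lam, mean.size))
        order = np.argsort([cost(p) for p in pop])
        mean = pop[order[:mu]].mean(axis=0)   # recombine the elite
        sigma *= 0.97                          # crude step-size decay
    return mean

# Toy usage: minimize a shifted sphere function in 3D.
target = np.array([3.0, -2.0, 1.0])
est = isotropic_es(lambda x: ((x - target) ** 2).sum(), np.zeros(3))
```

The appeal for viewpoint selection is the same as in the dissertation's setting: the cost (here a sphere, there a vision-based heuristic over candidate viewpoints) only needs to be evaluable, not differentiable, and the dimensionality can grow with the number of vehicles.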
    In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed, which includes similar vision heuristics in an information-gain-based framework in order to drive individual vehicles toward areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated in simulation.
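For readers unfamiliar with sampling-based planners, the tree-growing core that VA-RRT* builds on can be sketched as a bare-bones RRT. This is not RRT* (no rewiring or cost propagation) and contains none of the vision heuristics of VA-RRT*; the 2D world, disc obstacle, and tuning constants are invented for illustration:

```python
import numpy as np

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, iters=5000, seed=0):
    """Bare-bones RRT in a 10x10 2D world: grow a tree from `start` by
    steering toward random samples (with 10% goal bias), and return the
    node path once a sample lands within `goal_tol` of `goal`."""
    rng = np.random.default_rng(seed)
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parent = [start], [0]
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else rng.uniform(0, 10, 2)
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)  # steer
        if not is_free(new):                  # collision check
            continue
        nodes.append(new)
        parent.append(i)
        if np.linalg.norm(new - goal) < goal_tol:
            path = [len(nodes) - 1]           # walk back to the root
            while path[-1] != 0:
                path.append(parent[path[-1]])
            return [nodes[k] for k in reversed(path)]
    return None

# Toy usage: plan around a disc obstacle of radius 2 centred at (5, 5).
obstacle = np.array([5.0, 5.0])
is_free = lambda p: np.linalg.norm(p - obstacle) > 2.0
path = rrt((1, 1), (9, 9), is_free)
```

RRT* additionally rewires the tree toward minimum-cost paths; VA-RRT*, as described above, further biases that cost with vision-based information gain.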