
    Blur aware metric depth estimation with multi-focus plenoptic cameras

    While a traditional camera captures only one point of view of a scene, a plenoptic (or light-field) camera captures spatial and angular information in a single snapshot, enabling depth estimation from a single acquisition. In this paper, we present a new metric depth estimation algorithm using only raw images from a multi-focus plenoptic camera. The proposed approach is especially suited to the multi-focus configuration, in which several micro-lenses with different focal lengths are used. The main goal of our blur-aware depth estimation (BLADE) approach is to improve disparity estimation for defocused stereo images by integrating both correspondence and defocus cues. We thus leverage blur information where it was previously considered a drawback. We explicitly derive an inverse projection model including the defocus blur, providing depth estimates up to a scale factor. A method to calibrate the inverse model is then proposed. We thus take depth scaling into account to achieve precise and accurate metric depth estimates. Our results show that introducing defocus cues improves depth estimation. We demonstrate the effectiveness of our framework and depth scaling calibration on relative depth estimation setups and on complex real-world 3D scenes with ground truth acquired by a 3D lidar scanner. (Comment: 21 pages, 12 figures, 3 tables)
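    The defocus cue this abstract leverages can be illustrated with the standard thin-lens model, in which the blur-circle diameter is a known function of depth once the lens is focused at a given distance. The sketch below is generic thin-lens geometry, not the paper's full multi-focus plenoptic projection model; all parameter names are illustrative.

```python
def blur_diameter(z, f, aperture, z_focus):
    """Thin-lens defocus blur-circle diameter for an object at depth z.

    f (focal length), aperture (lens diameter), z_focus (focus distance)
    and z are all in metres; the return value is the blur diameter on the
    sensor in metres. The blur vanishes at the focus distance and grows
    as the object departs from it, which is what makes defocus usable as
    a depth cue alongside disparity.
    """
    return aperture * f * abs(z - z_focus) / (z * (z_focus - f))
```

    Because the mapping from depth to blur is monotonic on each side of the focus plane, an observed blur radius constrains depth up to the front/back ambiguity, which a second cue (here, correspondence) can resolve.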

    Image Simulation in Remote Sensing

    Remote sensing is actively researched in environmental, military, and urban-planning applications, through technologies such as monitoring of natural climate phenomena on the Earth, land-cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade image quality or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an image that could not be acquired at the required time. The proposed methodologies provide economical utility in the generation of image training material and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on image simulation at high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Over the last decade, 3D range-scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge, cohesive point clouds representing multiple objects, occluded features, and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research presented in this thesis improves the information extraction pipeline through novel algorithms for consistent-density scanning and automated information extraction for building interiors. A restricted density-based scan-planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of the data sets. The research further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data), and specific shape attributes (occluded boundaries) with better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that can handle cohesive point clouds through adaptive simplification and accurate layout extraction without generating an intermediate model. Three further information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented that quickly identifies objects in the scanned scene using a robust hue, saturation, and value (HSV) color model for better scene understanding.
    A hierarchical clustering algorithm is developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects. Shape-adaptive parameters help to segment planar as well as complex interiors, while combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters in geometrically similar regions. Finally, a progressive, scan-line-based side-ratio constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
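    The rapid HSV-based clustering stage described above can be sketched as a simple hue-binning pass over colored points. This is a deliberately minimal stand-in for the thesis's algorithm; the bin count, saturation threshold, and point format are all assumptions made here for illustration.

```python
import colorsys

def hue_clusters(points, n_bins=12, min_sat=0.2):
    """Group colored 3D points into coarse clusters by hue.

    points: iterable of (x, y, z, r, g, b) tuples with r, g, b in [0, 1].
    Low-saturation (grey-ish) points carry little hue information, so
    they go into a separate 'achromatic' bin instead of a hue bin.
    """
    clusters = {}
    for x, y, z, r, g, b in points:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        key = 'achromatic' if s < min_sat else int(h * n_bins) % n_bins
        clusters.setdefault(key, []).append((x, y, z))
    return clusters
```

    Working in HSV rather than RGB makes the grouping more robust to illumination changes, since brightness variation mostly moves the V channel while leaving hue stable.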

    Optical In-Process Measurement Systems

    Information is key, which means that measurements are key. For this reason, this book provides a unique insight into state-of-the-art research on optical measurement systems. Optical systems are fast and precise, and the ongoing challenge is to enable optical principles for in-process measurement. Presented within this book is a selection of promising optical measurement approaches for real-world applications.

    3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems

    The aim of this thesis is the development of new concepts for environmental 3D reconstruction in automotive surround-view systems, where information about the vehicle's surroundings is displayed to the driver to assist in parking and low-speed manoeuvring. The proposed driving assistance system represents a multi-disciplinary challenge, combining techniques from both computer vision and computer graphics. This work comprises all necessary steps, from sensor setup and image acquisition up to 3D rendering, in order to provide a comprehensive visualization for the driver. Visual information is acquired by means of standard surround-view cameras with fisheye optics covering large fields of view around the ego vehicle. Stereo vision techniques are applied to these cameras in order to recover 3D information, which is finally used as input for image-based rendering. New camera setups are proposed that improve the 3D reconstruction around the whole vehicle according to different criteria. A prototype realization provides a qualitative measure of the results achieved and proves the feasibility of the proposed concept.
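    A common first-order model for the wide-field fisheye optics mentioned above is the equidistant projection, r = f·θ, where θ is the angle of the incoming ray from the optical axis. The sketch below uses this idealized model; a real surround-view camera would need a calibrated polynomial distortion model, and the symbols here are illustrative.

```python
import math

def equidistant_project(X, Y, Z, f):
    """Project a 3D camera-frame point with an equidistant fisheye model.

    Unlike the pinhole model (r = f*tan(theta)), the equidistant model
    r = f*theta stays finite even at 90 degrees off-axis, which is why
    fisheye lenses can cover fields of view approaching a hemisphere.
    Returns image-plane coordinates (u, v) in the same units as f.
    """
    theta = math.atan2(math.hypot(X, Y), Z)   # angle from optical axis
    phi = math.atan2(Y, X)                    # azimuth around the axis
    r = f * theta                             # radial image distance
    return r * math.cos(phi), r * math.sin(phi)
```

    A point at 90 degrees off-axis (Z = 0) still maps to the finite radius f·π/2, whereas a pinhole projection would send it to infinity; this is the property that lets four such cameras cover the full surroundings of a vehicle.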

    A New Method of Wavelength Scanning Interferometry for Inspecting Surfaces with Multi-Side High-Sloped Facets

    With the development of modern advanced manufacturing technologies, the requirements for ultra-precision structured surfaces are increasing rapidly, both for high-value-added products and for scientific research. Examples of components incorporating such structures include brightness enhancement film (BEF) and optical gratings. In addition, specially designed structured surfaces, namely metamaterials, can exhibit desirable coherence, angular, or spatial characteristics that natural materials do not possess; this promising field attracts substantial funding and investment. However, owing to a lack of effective means of inspecting structured surfaces, the manufacturing process relies heavily on the experience of fabrication operators adopting an expensive trial-and-error approach, resulting in scrap rates of up to 50-70% of the manufactured items. Overcoming this challenge is therefore increasingly valuable. This thesis proposes a novel methodology to tackle the challenge: an apparatus comprising multiple measurement probes acquires a dataset for each facet of the structured surface, and the acquired datasets are then blended together based on the relative location of the probes, which is obtained via system calibration. The method relies on wavelength scanning interferometry (WSI), which can achieve areal measurement with axial resolutions approaching the nanometre without mechanical scanning of either the sample or the optics, unlike comparable techniques such as coherence scanning interferometry (CSI). This lack of mechanical scanning opens up the possibility of using a multi-probe optical system to provide simultaneous measurement over multiple adjacent fields of view. The thesis presents a proof-of-principle demonstration of a dual-probe wavelength scanning interferometry (DPWSI) system capable of measuring near-right-angle V-groove structures in a single measurement acquisition.
    The optical system comprises dual probes with orthogonal measurement planes. For a given probe, a range of V-groove angles is measurable, limited by the acceptance angle of the objective lenses employed. This range can be expanded further by designing equivalent probe heads with varying angular separation, and more complicated structured surfaces can be inspected by increasing the number of probes. The fringe analysis algorithms for WSI are discussed in detail, several improvements are proposed, and experimental validation is conducted. The scheme for calibrating the DPWSI system and obtaining the relative location between the probes, so as to recover the whole topography, is implemented and presented in full. An appraisal of the DPWSI system is also carried out using a multi-step diamond-turned specimen and a sawtooth brightness enhancement film (BEF). The results show that the proposed method can inspect near-right-angle V-groove structures with submicrometre-scale vertical resolution and micrometre-level lateral resolution.
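    One standard WSI fringe-analysis route, consistent with the description above, treats the interference signal at each pixel as I(k) = A + B·cos(2kh) over the wavenumber scan k = 2π/λ, and recovers the height (optical path difference) h from the slope of the unwrapped phase versus k. The sketch below is an illustrative implementation of that generic route, not the specific algorithm developed in the thesis.

```python
import numpy as np

def height_from_wsi(intensity, k):
    """Recover surface height from a wavelength-scan interferogram.

    intensity: samples of I(k) = A + B*cos(2*k*h) at wavenumbers k (rad/m).
    The analytic signal is formed by zeroing negative frequencies, its
    unwrapped phase follows 2*k*h, and a linear fit of phase against k
    yields 2h as its slope.
    """
    analytic = intensity - intensity.mean()    # remove the DC term A
    spec = np.fft.fft(analytic)
    spec[len(spec) // 2:] = 0                  # keep positive frequencies
    phase = np.unwrap(np.angle(np.fft.ifft(spec)))
    trim = len(k) // 20                        # discard edge artefacts
    slope = np.polyfit(k[trim:-trim], phase[trim:-trim], 1)[0]
    return slope / 2.0                         # phase = 2*k*h
```

    Because the height comes from the slope across the whole scan rather than from a single fringe, the unambiguous measurement range is much larger than a single wavelength, which is what makes WSI attractive for stepped and high-sloped surfaces.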

    Random access spectral imaging

    A salient goal of spectral imaging is to record a so-called hyperspectral data-cube, consisting of two spatial dimensions and one spectral dimension. Traditional approaches are based on time-sequential scanning in either the spatial or the spectral dimension: spatial scanning passes a fixed aperture over a scene in the manner of a raster scan, while spectral scanning is generally based on a tuneable filter, where a series of narrow-band images of a fixed field of view is recorded and assembled into the data-cube. Such techniques are suitable only when the scene in question is static or changes more slowly than the scan rate. Dynamic scenes require a time-resolved (snapshot) spectral imaging technique. Such techniques acquire the whole data-cube in a single measurement but require a trade-off between spatial and spectral resolution; these trade-offs prevent current snapshot techniques from achieving resolutions on par with time-sequential ones. Any snapshot device needs an optical architecture that gathers light from the scene and maps it to the detector in a way that allows the spatial and spectral components to be de-multiplexed to reconstruct the data-cube. This requirement drives the decreased resolution of snapshot devices, since it becomes a problem of mapping a 3D data-cube onto a 2D detector. The sheer volume of data in the data-cube also presents a processing challenge, particularly for real-time processing. This thesis describes a prototype snapshot spectral imaging device that employs a random-spatial-access technique to record spectra only from the regions of interest in the scene, thus maximising integration time and minimising data volume and recording rate. The aim of this prototype is to demonstrate how a particular optical architecture can remove the effect of some of the bottlenecks mentioned above.
    Underpinning the basic concept is the fact that in most practical scenes the spectrally interesting information is concentrated in relatively few pixels. The prototype system uses random spatial access to multiple points in the scene considered to be of greatest interest, enabling time-resolved, high-resolution spectrometry to be performed simultaneously at points across the full field of view. The enabling technology for the prototype was a digital micromirror device (DMD), an array of switchable mirrors used to create a two-channel system: one channel led to a conventional imaging camera, the other to a spectrometer. The DMD acted as a dynamic aperture to the spectrometer and could open and close slits in any part of the spectrometer aperture, while the imaging channel guided the selection of points of interest in the scene. An extensive geometric calibration was performed to determine the relationships between the DMD and the two channels of the system. Two demonstrations of the prototype are given in this thesis: a dynamic biological scene, and a static scene sampled using statistical sampling methods enabled by the system's dynamic aperture. The dynamic scene consisted of red blood cells in motion while undergoing de-oxygenation, which resulted in a change in their spectrum; ten red blood cells were tracked across the scene, and the expected spectral change was observed. For the second example, the prototype was modified for Raman spectroscopy by adding laser illumination, and a mineral sample was scanned to test statistical sampling methods. These methods exploited the re-configurable aperture of the system to sample the scene using blind random sampling and a grid-based sampling approach; other spectral imaging systems have a fixed aperture and cannot operate such sampling schemes.
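    The dynamic-aperture idea described above can be sketched as building a binary DMD mask that routes chosen scene points to the spectrometer channel while the remaining mirrors stay tilted toward the imaging camera. This is a toy model of the prototype's concept; a real system needs the DMD-to-camera geometric calibration the thesis describes, and the mask geometry and names here are illustrative.

```python
def dmd_mask(rows, cols, points, slit_w=1):
    """Build a binary DMD mirror mask opening slits at chosen pixels.

    points: (row, col) scene locations to route to the spectrometer (1);
    all other mirrors (0) stay tilted toward the imaging camera. slit_w
    widens each slit horizontally to pass more light at the cost of
    spectral resolution.
    """
    mask = [[0] * cols for _ in range(rows)]
    for r, c in points:
        for dc in range(-slit_w // 2 + 1, slit_w // 2 + 1):
            if 0 <= c + dc < cols:          # clip slits at the edges
                mask[r][c + dc] = 1
    return mask
```

    Recording only these slit positions is what bounds the data volume: the readout scales with the number of selected points rather than with the full spatial dimensions of the data-cube.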

    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    When capturing a non-cooperative satellite during an on-orbit servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel are required in order to guide the vessel's robotic arm towards the satellite. The main objective of this research is the development of a machine-vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features: circles, lines, and points. It merges data from two monocular cameras and three different algorithms (one for each type of geometric feature) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (used to attach the satellite to its launch vehicle) and that the cameras are mounted on the robot end-effector, which carries the capture tool used to grapple the satellite. The three algorithms are based on a feature extraction and detection scheme that identifies the geometric features in the camera images belonging to the satellite, whose geometry is assumed to be known. Since the projection of a circle onto the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the centre of the interface ring and its normal vector from the corresponding detected ellipse on the image plane. Sensor and data fusion is performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function and reduce the re-projection error of the detected features, which reduces the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system and gives the final estimated pose.
    The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end-effector where the cameras are mounted. Virtual and real experiments using a full-scale realistic satellite mock-up and a 7-DOF robotic manipulator were performed to evaluate the system's performance, under two different lighting conditions and in three scenarios, each with a different set of features. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm, and the total rotation error is between 2 deg and 3 deg, when the target is 0.7 m from the end-effector.
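    The pose-solver step above minimises re-projection error with a conjugate gradient method. As a simplified stand-in, the sketch below refines only the translation of a known point set using a Gauss-Newton step on pinhole re-projection residuals: a different optimizer than the thesis uses, with a numerical Jacobian for brevity and illustrative names throughout.

```python
import numpy as np

def project(points, t, f):
    """Pinhole projection of target-frame points translated by t."""
    p = points + t
    return f * p[:, :2] / p[:, 2:3]

def refine_translation(points, observed, t0, f, iters=20, eps=1e-6):
    """Minimise re-projection error over translation via Gauss-Newton.

    points:   (N, 3) known 3D feature positions in the target frame.
    observed: (N, 2) detected image coordinates of those features.
    Each iteration solves the linearised least-squares problem
    J * delta = r for the update, with J estimated by finite differences.
    """
    t = np.asarray(t0, float)
    for _ in range(iters):
        r = (project(points, t, f) - observed).ravel()   # residuals
        J = np.empty((r.size, 3))
        for i in range(3):                               # numerical Jacobian
            dt = np.zeros(3)
            dt[i] = eps
            J[:, i] = ((project(points, t + dt, f) - observed).ravel() - r) / eps
        t = t - np.linalg.lstsq(J, r, rcond=None)[0]     # Gauss-Newton step
    return t
```

    In a full system this refined estimate would be one measurement feeding the extended Kalman filter, which fuses it with the ellipse-detection output over time.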