
    Automatic Detection of Calibration Grids in Time-of-Flight Images

    It is convenient to calibrate time-of-flight cameras by established methods, using images of a chequerboard pattern. The low resolution of the amplitude image, however, makes it difficult to detect the board reliably. Heuristic detection methods, based on connected image components, perform very poorly on this data. An alternative, geometrically principled method is introduced here, based on the Hough transform. The projection of a chequerboard is represented by two pencils of lines, which are identified as oriented clusters in the gradient data of the image. A projective Hough transform is applied to each of the two clusters, in axis-aligned coordinates. The range of each transform is properly bounded, because the corresponding gradient vectors are approximately parallel. Each of the two transforms contains a series of collinear peaks, one for every line in the given pencil. This pattern is easily detected by sweeping a dual line through the transform. The proposed Hough-based method is compared to the standard OpenCV detection routine, by application to several hundred time-of-flight images. It is shown that the new method detects significantly more calibration boards, over a greater variety of poses, without any overall loss of accuracy. This conclusion is based on an analysis of both geometric and photometric error.
    Comment: 11 pages, 11 figures, 1 table
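The first step the abstract describes is separating the board's two pencils of lines as oriented clusters in the image gradients. A minimal NumPy sketch of that clustering step (illustrative only, not the authors' implementation; the threshold and circular-mean heuristic are assumptions):

```python
import numpy as np

def split_gradient_pencils(img):
    """Split strong image gradients into two roughly-orthogonal
    orientation clusters, one per pencil of chequerboard lines."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Orientation modulo pi: opposite gradient directions belong to
    # the same line, so work with doubled angles (circular statistics).
    theta = np.mod(np.arctan2(gy, gx), np.pi)
    strong = mag > 0.5 * mag.max()
    mean2 = np.arctan2(np.sin(2 * theta[strong]).sum(),
                       np.cos(2 * theta[strong]).sum()) / 2.0
    # Angular distance to the dominant orientation, modulo pi.
    diff = np.abs(np.mod(theta - mean2 + np.pi / 2, np.pi) - np.pi / 2)
    cluster_a = strong & (diff < np.pi / 4)    # near-dominant orientation
    cluster_b = strong & ~(diff < np.pi / 4)   # the orthogonal pencil
    return cluster_a, cluster_b
```

Each cluster would then feed its own axis-aligned projective Hough transform, whose range stays bounded precisely because the gradients within a cluster are approximately parallel.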

    Boosted Random Ferns for Object Detection

    In this paper we introduce Boosted Random Ferns (BRFs) to rapidly build discriminative classifiers for learning and detecting object categories. At the core of our approach we use standard random ferns, but we introduce four main innovations that let us bring ferns from an instance to a category level while retaining efficiency. First, we define binary features on the histogram-of-oriented-gradients domain (as opposed to the intensity domain), allowing for a better representation of intra-class variability. Second, both the positions where ferns are evaluated within the sliding window and the locations of the binary features within each fern are not chosen completely at random; instead, we use a boosting strategy to pick the most discriminative combination of them. This is further enhanced by our third contribution, which is to adapt the boosting strategy to enable sharing of binary features among different ferns, yielding high recognition rates at a low computational cost. Finally, we show that training can be performed online, for sequentially arriving images. Overall, the resulting classifier can be trained very efficiently, densely evaluated at all image locations in about 0.1 seconds, and provides detection rates similar to competing approaches that require significantly slower processing. We demonstrate the effectiveness of our approach by thorough experimentation on publicly available datasets, comparing against the state of the art on tasks of both 2D detection and 3D multi-view estimation.
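A standard random fern, the building block the abstract starts from, is just a fixed set of binary tests whose bit string indexes a per-class histogram learned by counting. A minimal sketch (all sizes, the pairwise-comparison test, and the Laplace smoothing are illustrative assumptions; the paper's ferns operate on HOG features and add boosting on top):

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomFern:
    """One fern: n_bits random pairwise comparisons over a feature
    vector; each leaf keeps a smoothed class-count histogram."""
    def __init__(self, n_bits, dim, n_classes):
        self.pairs = rng.integers(0, dim, size=(n_bits, 2))
        self.hist = np.ones((2 ** n_bits, n_classes))  # Laplace smoothing

    def leaf(self, x):
        # Binary tests: compare random pairs of feature entries.
        bits = (x[self.pairs[:, 0]] > x[self.pairs[:, 1]]).astype(int)
        return int(bits @ (1 << np.arange(len(bits))))

    def train(self, X, y):
        for x, c in zip(X, y):
            self.hist[self.leaf(x), c] += 1

    def log_posterior(self, x):
        h = self.hist[self.leaf(x)]
        return np.log(h / h.sum())
```

In the boosted variant described above, the test positions and feature locations would be selected by boosting rather than drawn uniformly, and features would be shared across ferns.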

    Low-Dispersion Integrated Michelson Interferometer on Silicon-on-Insulator for Optical Coherence Tomography

    We present an integrated silicon Michelson interferometer for OCT, fabricated with wafer-scale deep-UV lithography. The silicon waveguides of the interferometer are designed with a GVD of less than 50 ps/(nm·km). The footprint of the device is 0.5 mm × 3 mm. The effect of sidewall roughness of the silicon waveguides has been observed, and possible solutions are discussed.
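To see why a GVD below 50 ps/(nm·km) matters over a millimetre-scale path, the group-delay spread follows from Δτ = |D|·L·Δλ. A back-of-envelope check (the path length and source bandwidth below are illustrative assumptions, not values from the abstract beyond the footprint):

```python
# Group-delay spread for a dispersive waveguide: dtau = |D| * L * dlam.
D = 50e-12 / (1e-9 * 1e3)   # 50 ps/(nm·km) converted to s/m^2
L = 3e-3                    # assumed ~3 mm on-chip path (device is 3 mm long)
dlam = 80e-9                # assumed 80 nm OCT source bandwidth
dtau = D * L * dlam         # group-delay spread in seconds
print(f"delay spread ~ {dtau * 1e15:.1f} fs")
```

A spread of this order (tens of femtoseconds) is small compared with the coherence time of a broadband OCT source, which is why such a low-dispersion design preserves axial resolution.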

    A Vision-Based Automatic Safe Landing-Site Detection System

    An automatic safe landing-site detection system is proposed for aircraft emergency landing, based on visible information acquired by aircraft-mounted cameras. Emergency landing is an unplanned event in response to emergency situations. If, as is unfortunately usually the case, there is no airstrip or airfield that can be reached by the un-powered aircraft, a crash landing or ditching has to be carried out. Identifying a safe landing-site is critical to the survival of passengers and crew. Conventionally, the pilot chooses the landing-site visually by looking at the terrain through the cockpit. The success of this vital decision greatly depends on external environmental factors that can impair human vision, and on the pilot's flight experience, which can vary significantly among pilots. Therefore, we propose a robust, reliable and efficient detection system that is expected to alleviate the negative impact of these factors. In this study, we focus on the detection mechanism of the proposed system and assume that image enhancement for increased visibility and image stitching for a larger field-of-view have already been performed on terrain images acquired by aircraft-mounted cameras. Specifically, we first propose a hierarchical elastic horizon detection algorithm to identify ground in the image. Then the terrain image is divided into non-overlapping blocks, which are clustered according to a roughness measure. Adjacent smooth blocks are merged to form potential landing-sites whose dimensions are measured with principal component analysis and geometric transformations. If the dimensions of a candidate region exceed the minimum requirement for safe landing, the potential landing-site is considered a safe candidate and highlighted on the human-machine interface. Finally, the pilot makes the final decision by confirming one of the candidates, also considering other factors such as wind speed and direction.
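The block-roughness step described above can be sketched in a few lines: tile the terrain image into non-overlapping blocks, score each block, and flag smooth blocks as landing-site candidates. The standard deviation used here as the roughness measure, and the block size and threshold, are illustrative assumptions; the paper's actual measure may differ:

```python
import numpy as np

def smooth_blocks(terrain, block=16, thresh=5.0):
    """Return a boolean grid, one entry per non-overlapping block,
    marking blocks smooth enough to be landing-site candidates."""
    h, w = terrain.shape
    h, w = h - h % block, w - w % block          # crop to whole blocks
    tiles = terrain[:h, :w].reshape(h // block, block, w // block, block)
    roughness = tiles.std(axis=(1, 3))           # one score per block
    return roughness < thresh                    # True = smooth candidate
```

Adjacent True entries would then be merged into connected regions, whose extent is measured (e.g. via PCA, as in the abstract) against the minimum safe-landing dimensions.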

    Comparison of Different Neural Networks for Iris Recognition: A Review

    Biometrics is the science of verifying the identity of an individual through physiological measurements or behavioral traits. Since biometric identifiers are permanently associated with the user, they are more reliable than token- or knowledge-based authentication methods. Among all biometric modalities, the iris has emerged as a popular choice due to its variability, stability and security. In this paper, we present various iris recognition techniques and their neural-network learning algorithms. Implementations of these techniques can be standardized on dedicated architectures and learning algorithms. It has been observed that the SOM has stronger adaptive capacity and robustness. The HSOM, which is based on Hamming distance, improves accuracy over the LSOM. The SANN model is more suitable for measuring shape similarity, while cascaded FFBPNNs are a more reliable and efficient method for iris recognition.
    Keywords: Biometrics, Iris recognition, Artificial Neural Networks

    An LBP based Iris Recognition System using Feed Forward Back Propagation Neural Network

    An iris recognition system using LBP feature extraction with a feed-forward back-propagation neural network is presented. Accurate iris localization and segmentation are essential for feature extraction, so in the proposed work the Hough circular transform (HCT) is used to segment the iris region from the eye images. The Local Binary Pattern (LBP) technique is then used to extract features from the segmented iris region, and a feed-forward back-propagation neural network serves as the classifier, operating in the usual two phases: training and testing. LBP is a straightforward and very efficient feature operator which labels the pixels of an iris image by thresholding the neighbourhood of each pixel and encoding the result as a binary number. Due to its discriminative power and computational simplicity, the LBP feature extractor has become a popular approach in various recognition systems. The proposed method decreases both the FAR and the FRR, and increases system performance on the given dataset. The average accuracy of the proposed iris recognition system exceeds 97%.
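The LBP operator described above can be written compactly: each pixel is labelled by thresholding its eight 3×3 neighbours against the centre value and reading the resulting bits as a byte. A minimal sketch (the neighbour ordering and the use of `>=` at the centre are conventional choices, not necessarily those of this paper):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour, radius-1 LBP codes for the image interior."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit  # one bit per neighbour test
    return code
```

A histogram of these codes over the segmented iris region would then form the feature vector fed to the neural-network classifier.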

    High-Q Tuneable 10-GHz Bragg Resonator for Oscillator Applications

    This paper describes the design, simulation, and measurement of a tuneable 9.365-GHz aperiodic Bragg resonator. The resonator utilizes an aperiodic arrangement of non-quarter-wavelength (non-λ/4) low-loss alumina plates (εr = 9.75, loss tangent of 1×10−5 to 2×10−5) mounted in a cylindrical metal waveguide. Tuning is achieved by varying the length of the center section of the cavity; a multi-element bellows/probe assembly is presented for this purpose, and a tuning range of 130 MHz (1.39%) is demonstrated. The insertion loss S21 varies from −2.84 to −12.03 dB, while the unloaded Q varies from 43 788 to 122 550 over this tuning range. At 10 of the 13 measurement points, the unloaded Q exceeds 100 000 and the insertion loss is above −7 dB. Two modeling techniques are discussed: a simple ABCD circuit model for rapid simulation and optimization, and a 2.5-D field solver, which is used to plot the field distribution inside the cavity.
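Unloaded-Q figures like those quoted above are conventionally de-embedded from the measured loaded Q and the insertion loss at resonance via the two-port relation Q_u = Q_L / (1 − |S21|). A quick sketch of that relation (the loaded-Q value below is hypothetical, not a number from the paper):

```python
import math

def unloaded_q(q_loaded, s21_db):
    """Unloaded Q of a two-port transmission resonator from its loaded Q
    and on-resonance insertion loss in dB."""
    s21 = 10 ** (s21_db / 20.0)       # dB -> linear voltage ratio
    return q_loaded / (1.0 - s21)

# e.g. a hypothetical loaded Q of 30_000 at the quoted -2.84 dB loss:
print(round(unloaded_q(30_000, -2.84)))
```

Note the trade-off visible in the measurements: lower insertion loss (stronger coupling) at one end of the tuning range coincides with the lower unloaded-Q estimates, since 1 − |S21| shrinks as |S21| approaches unity.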