2,797 research outputs found

    Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery

    In this paper we discuss the potential and challenges of SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie-point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine several established handcrafted similarity measures. For the experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible, with 3D positioning accuracies in the meter domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging.
    Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data fusion, stereogrammetry
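
    Although the paper combines its own set of handcrafted measures under a specific epipolar-like constraint, the general idea can be illustrated with a small Python sketch. The fusion weights, patch size, and the choice of normalized cross-correlation plus mutual information below are illustrative assumptions, not the authors' exact configuration.

        # Sketch: score candidate SAR positions inside an epipolar-like search window
        # against an optical template by fusing two handcrafted similarity measures.
        import numpy as np

        def normalized_cross_correlation(a, b):
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float(np.mean(a * b))

        def mutual_information(a, b, bins=32):
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        def match_in_window(optical_patch, sar_image, window, patch=21, w_ncc=0.5, w_mi=0.5):
            """Return the best-scoring (row, col) among candidates along the epipolar-like curve."""
            half = patch // 2
            best_score, best_pos = -np.inf, None
            for r, c in window:
                cand = sar_image[r - half:r + half + 1, c - half:c + half + 1]
                if cand.shape != optical_patch.shape:
                    continue  # candidate too close to the image border
                score = (w_ncc * normalized_cross_correlation(optical_patch, cand)
                         + w_mi * mutual_information(optical_patch, cand))
                if score > best_score:
                    best_score, best_pos = score, (r, c)
            return best_pos, best_score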

    Biometric security: A novel ear recognition approach using a 3D morphable ear model

    Biometrics is a critical component of cybersecurity that identifies persons by verifying their behavioral and physical traits. In biometric-based authentication, each individual can be correctly recognized based on intrinsic behavioral or physical features such as the face, fingerprint, iris, and ears. This work proposes a novel approach for human identification using 3D ear images. In conventional methods, the probe image is usually registered against each gallery image using computationally heavy registration algorithms, which makes the recognition process too time-consuming to be practical. Therefore, this work proposes a recognition pipeline that avoids one-to-one registration between probe and gallery. First, a deep learning-based algorithm is used for ear detection in 3D side-face images. Second, a statistical ear model, known as the 3D morphable ear model (3DMEM), is constructed and used as a feature extractor on the detected ear images. Finally, a novel recognition algorithm named you morph once (YMO) is proposed for human recognition; it reduces computational time by eliminating one-to-one registration between probe and gallery and only computes the distance between the parameters stored for the gallery and those of the probe. The experimental results show the significance of the proposed method for real-time applications.
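
    The core of the described speed-up is that recognition happens in the 3DMEM parameter space rather than through pairwise registration. A minimal sketch of that matching step follows; the fitting step that produces the coefficients is abstracted away, and the Euclidean distance, 40-dimensional parameter vectors, and random placeholder values are assumptions made for illustration.

        # Sketch: gallery identities are stored as fitted 3DMEM coefficients,
        # and recognition is a nearest-neighbour search in that parameter space.
        import numpy as np

        def recognize(probe_params, gallery):
            """Return (identity, distance) of the gallery entry closest to the probe parameters."""
            best_id, best_dist = None, np.inf
            for identity, params in gallery.items():
                dist = float(np.linalg.norm(probe_params - params))  # distance in parameter space
                if dist < best_dist:
                    best_id, best_dist = identity, dist
            return best_id, best_dist

        # Illustration with placeholder parameter vectors (40 coefficients per subject).
        rng = np.random.default_rng(0)
        gallery = {f"subject_{i}": rng.normal(size=40) for i in range(5)}
        probe = gallery["subject_3"] + 0.01 * rng.normal(size=40)
        print(recognize(probe, gallery))  # expected: ("subject_3", small distance)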

    Geometric Cross-Modal Comparison of Heterogeneous Sensor Data

    In this work, we address the problem of cross-modal comparison of aerial data streams. A variety of simulated automobile trajectories are sensed using two different modalities: full-motion video, and radio-frequency (RF) signals received by detectors at various locations. The information represented by the two modalities is compared using self-similarity matrices (SSMs) corresponding to time-ordered point clouds in feature spaces of each of these data sources; we note that these feature spaces can be of entirely different scale and dimensionality. Several metrics for comparing SSMs are explored, including a cutting-edge time-warping technique that can simultaneously handle local time warping and partial matches, while also controlling for the change in geometry between feature spaces of the two modalities. We note that this technique is quite general, and does not depend on the choice of modalities. In this particular setting, we demonstrate that the cross-modal distance between SSMs corresponding to the same trajectory type is smaller than the cross-modal distance between SSMs corresponding to distinct trajectory types, and we formalize this observation via precision-recall metrics in experiments. Finally, we comment on promising implications of these ideas for future integration into multiple-hypothesis tracking systems.
    Comment: 10 pages, 13 figures, Proceedings of IEEE Aeroconf 201
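
    The key construction is that each modality's time-ordered feature sequence is reduced to a self-similarity matrix, which removes the dependence on the dimensionality of the underlying feature space. The sketch below illustrates only that reduction and compares two SSMs with a plain normalized Frobenius distance; the partial time-warping metric used in the paper is not reproduced here, and the synthetic video/RF feature tracks are placeholders.

        # Sketch: build SSMs for two modalities of different dimensionality and compare them.
        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def self_similarity_matrix(features):
            """features: (T, d) time-ordered point cloud; returns a scale-normalized (T, T) SSM."""
            ssm = squareform(pdist(features, metric="euclidean"))
            return ssm / (ssm.max() + 1e-12)

        def ssm_distance(ssm_a, ssm_b):
            """Frobenius distance between equally sampled SSMs (a simplification of the paper's metric)."""
            assert ssm_a.shape == ssm_b.shape, "resample the sequences to a common length first"
            return float(np.linalg.norm(ssm_a - ssm_b))

        # Illustration: a 2-D video-feature track and a 5-D RF-feature track of the same trajectory.
        t = np.linspace(0, 2 * np.pi, 100)
        video_feats = np.stack([np.cos(t), np.sin(t)], axis=1)          # (100, 2)
        rf_feats = np.stack([np.cos(t)] * 3 + [np.sin(t)] * 2, axis=1)  # (100, 5)
        print(ssm_distance(self_similarity_matrix(video_feats), self_similarity_matrix(rf_feats)))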

    3D Subject-Atlas Image Registration for Micro-Computed Tomography Based Characterization of Drug Delivery in the Murine Cochlea

    A wide variety of hearing problems can potentially be treated with local drug delivery systems capable of delivering drugs directly to the cochlea over extended periods of time. Developing and testing such systems requires accurate quantification of drug concentration over time. A variety of techniques have been proposed for both direct and indirect measurement of drug pharmacokinetics; direct techniques are invasive, whereas many indirect techniques are imprecise because they rely on assumptions about the relationship between physiological response and drug concentrations. One indirect technique, however, is capable of quantifying drug pharmacokinetics very precisely: micro-computed tomography (micro-CT) can provide a non-invasive way to measure the concentration of a contrast agent in the cochlea over time. In this thesis, we propose a systematic approach for analyzing micro-CT images to measure concentrations of the contrast agent ioversol in the mouse cochlea. This approach requires segmenting and classifying the intra-cochlear structures in micro-CT images, which is done via 3D atlas-subject registration to a published atlas of the mouse cochlea. Labels of each intra-cochlear structure in the atlas are propagated through the registration transformation to the corresponding structures in the micro-CT images. Pixel intensities are extracted from three key intra-cochlear structures, the scala tympani (ST), scala vestibuli (SV), and scala media (SM), in the micro-CT images, and these intensities are mapped to concentrations using a linear model between solution concentration and image intensity that is determined in a previous calibration step. To localize this analysis, the ST, SV, and SM are divided into several discrete components, and the concentration in each component is estimated using a weighted average with weights determined by solving a nonhomogeneous Poisson equation with Dirichlet boundary conditions on the component boundaries. We illustrate this entire system on a series of micro-CT images of an anesthetized mouse that includes a baseline scan (with no contrast agent) and a series of scans after injection of ioversol into the cochlea.
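
    The intensity-to-concentration step described above reduces to fitting and then inverting a linear calibration model. A minimal sketch is shown below, assuming a least-squares fit; the calibration concentrations, intensities, and sampled voxel values are placeholders rather than measured data.

        # Sketch: fit intensity = slope * concentration + offset from calibration data,
        # then invert it to estimate contrast-agent concentration from image intensities.
        import numpy as np

        def fit_linear_calibration(known_concentrations, measured_intensities):
            """Least-squares fit of the linear intensity model; returns (slope, offset)."""
            slope, offset = np.polyfit(known_concentrations, measured_intensities, deg=1)
            return slope, offset

        def intensity_to_concentration(intensities, slope, offset):
            """Invert the calibrated linear model to estimate concentration per voxel."""
            return (np.asarray(intensities) - offset) / slope

        # Placeholder calibration phantom: concentrations (mg/mL) vs. mean image intensity.
        slope, offset = fit_linear_calibration([0.0, 10.0, 20.0, 40.0], [100.0, 350.0, 610.0, 1120.0])
        st_voxel_intensities = np.array([480.0, 505.0, 530.0])  # intensities sampled from the ST label
        print(intensity_to_concentration(st_voxel_intensities, slope, offset))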

    Locally Orderless Registration

    Image registration is an important tool for medical image analysis and is used to bring images into the same reference frame by warping the coordinate field of one image such that some similarity measure is minimized. We study similarity in image registration in the context of Locally Orderless Images (LOI), which is the natural way to study density estimates and reveals the three fundamental scales: the measurement scale, the intensity scale, and the integration scale. This paper has three main contributions. Firstly, we rephrase a large set of popular similarity measures into a common framework, which we refer to as Locally Orderless Registration, and which makes full use of the features of local histograms. Secondly, we extend the theoretical understanding of the local histograms. Thirdly, we use our framework to compare two state-of-the-art intensity density estimators for image registration, the Parzen Window (PW) and the Generalized Partial Volume (GPV), and we demonstrate their differences on a popular similarity measure, Normalized Mutual Information (NMI). We conclude that complicated similarity measures such as NMI may be evaluated almost as fast as simple measures such as Sum of Squared Distances (SSD), regardless of the choice of PW or GPV. Also, GPV is an asymmetric measure, and PW is our preferred choice.
    Comment: submitted
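
    As a concrete reference point for the histogram-based measures discussed above, the sketch below computes Normalized Mutual Information from a joint intensity histogram. A plain hard-binned histogram stands in for the Parzen-window and generalized-partial-volume estimators compared in the paper, and the bin count is an arbitrary choice.

        # Sketch: NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
        import numpy as np

        def normalized_mutual_information(fixed, moving, bins=32):
            joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)

            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log(p))

            return float((entropy(px) + entropy(py)) / entropy(pxy.ravel()))

        # Identical images give the maximal value (2); unrelated noise gives a value near 1.
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        print(normalized_mutual_information(img, img))
        print(normalized_mutual_information(img, rng.random((64, 64))))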

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    On Body Mass Index Analysis from Human Visual Appearance

    In the past few decades, overweight and obesity have spread widely, reaching epidemic proportions. A person is generally classified as overweight based on body mass index (BMI). Beyond serving as a measure of body fat, BMI is also a risk factor for many diseases, such as cardiovascular diseases, cancers, and diabetes. BMI is therefore important for personal health monitoring and medical research. Currently, BMI is measured in person with dedicated devices, so there is an urgent demand for more convenient preventive tools. This work investigates the feasibility of estimating BMI from human visual appearance, including 2-dimensional (2D) and 3-dimensional (3D) body and face data. Motivated by health science studies showing that anthropometric measures such as waist-hip ratio and waist circumference are indicators of obesity, we analyze body weight from frontal-view human body images. A framework is developed for body weight analysis from body images, along with computation methods for five anthropometric features that characterize body weight. Then, we study BMI estimation from 3D data by measuring the correlation between estimated body volume and BMI, and develop an efficient BMI computation method that estimates the body weight and height of normally dressed people in 3D space. We also study BMI estimation from frontal-view face images in depth, via two key aspects: facial representation extraction and BMI estimator learning. First, we investigate the characteristics and performance of different facial representation extraction methods through three designed experiments. Then, we study visual BMI estimation from facial images with a two-stage learning framework: BMI-related facial features are learned in the first stage, and, to address the ambiguity of BMI labels, a label-distribution-based BMI estimator is proposed for the second stage. The experimental results show that this framework improves performance step by step. Finally, to address the challenges posed by BMI data and labels, we integrate feature learning and estimator learning in a single convolutional neural network (CNN), with a proposed label assignment matching scheme that further improves BMI estimation from face images.
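
    The label-distribution stage described above can be illustrated with a short sketch: each scalar BMI label is softened into a discrete Gaussian distribution over BMI bins, a predicted distribution is scored with KL divergence during training, and the final estimate is the expectation of the predicted distribution. The bin range, bin width, and Gaussian width below are illustrative assumptions, not the values used in the work.

        # Sketch: label-distribution learning for BMI estimation.
        import numpy as np

        BMI_BINS = np.arange(15.0, 45.0, 0.5)  # candidate BMI values

        def label_distribution(bmi, sigma=1.0):
            """Soften a ground-truth BMI into a normalized Gaussian distribution over the bins."""
            weights = np.exp(-0.5 * ((BMI_BINS - bmi) / sigma) ** 2)
            return weights / weights.sum()

        def kl_divergence(target, predicted, eps=1e-8):
            """KL(target || predicted), the training loss for the distribution-based estimator."""
            return float(np.sum(target * np.log((target + eps) / (predicted + eps))))

        def expected_bmi(predicted):
            """At test time, the BMI estimate is the expectation of the predicted distribution."""
            return float(np.sum(BMI_BINS * predicted))

        # Illustration: a perfect prediction has near-zero loss and recovers the label.
        target = label_distribution(27.3)
        print(kl_divergence(target, target), expected_bmi(target))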