12 research outputs found

    Comparative analysis of the variability of facial landmarks for forensics using CCTV images

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-53842-1_35. Proceedings of the 6th Pacific-Rim Symposium, PSIVT 2013, Guanajuato, Mexico, October 28-November 1, 2013. This paper reports a study of the variability of facial landmarks in a forensic scenario using images acquired from CCTV cameras. Such images are of very low quality and exhibit a wide range of variability factors, such as differences in pose, expression, and occlusion. In addition, the variability of facial landmarks is affected by the precision with which the landmarks are tagged, which can be done manually or automatically depending on the application (e.g., forensics or automatic face recognition, respectively). The study compares both manual and automatic procedures, as well as three distances between the camera and the subjects. Results show that landmarks located in the outer part of the face (top of the head, ears, and chin) present a higher level of variability than landmarks located in the inner face (eye region and nose). The study also shows that landmark variability increases with the distance between subject and camera, and that the manual and automatic approaches yield similar results for the inner facial landmarks. This work has been partially supported by a contract with Spanish Guardia Civil and projects BBfor2 (FP7-ITN-238803), Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Catedra UAM-Telefonica"
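
    A minimal sketch of how per-landmark variability of the kind reported above can be quantified, assuming repeated annotations of the same face are available as an array; the array shapes and the use of the standard deviation are illustrative choices, not the paper's protocol.

```python
# Minimal sketch (not the paper's exact protocol): quantify per-landmark
# variability as the spread of repeated annotations of the same face.
import numpy as np

# Hypothetical data: 10 repeated annotations of 21 landmarks, (x, y) in pixels.
rng = np.random.default_rng(0)
annotations = rng.normal(loc=100.0, scale=2.0, size=(10, 21, 2))

centroid = annotations.mean(axis=0)                      # (21, 2) mean position
spread = np.linalg.norm(annotations - centroid, axis=2)  # per-annotation error
variability = spread.std(axis=0)                         # (21,) std in pixels

# Landmarks with a larger spread (e.g. chin, ears) are less repeatable than
# inner-face landmarks (eyes, nose), in line with the findings above.
print(variability.round(2))
```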

    Motorcycles detection using Haar-like features and Support Vector Machine on CCTV camera image

    Get PDF
    Traffic monitoring systems allow operators to monitor and analyze each traffic point via CCTV cameras. However, it is difficult to monitor every traffic point all the time. This problem has led to the development of intelligent traffic monitoring systems based on computer vision, one of whose core features is vehicle detection. Vehicle detection remains challenging, especially for motorcycles, which make up the majority of road traffic in Indonesia. In this research, a motorcycle detection method using Haar-like features and a Support Vector Machine (SVM) on CCTV camera images is proposed. A set of preprocessing steps is applied to the input image before Haar-like feature extraction. The features are then classified by a trained SVM model via a sliding-window technique to detect motorcycles. The test results show a 0.0 log-average miss rate and a 0.9 average precision. Given the low miss rate and high precision, the proposed method is a promising solution for detecting motorcycles in CCTV camera images
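
    The pipeline described above lends itself to a short sketch: Haar-like features computed from an integral image, classified by a linear SVM, and scanned over the frame with a sliding window. The window size, stride and feature types below are assumptions, not the paper's settings.

```python
# Hedged sketch of a Haar-feature + linear-SVM sliding-window detector.
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.svm import LinearSVC

WIN = 24                            # assumed window size (pixels)
FEATS = ['type-2-x', 'type-2-y']    # assumed Haar feature types

def haar_vector(patch):
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                             feature_type=FEATS)

def train(positives, negatives):
    """positives / negatives: lists of WIN x WIN grayscale patches."""
    X = np.array([haar_vector(p) for p in positives + negatives])
    y = np.array([1] * len(positives) + [0] * len(negatives))
    return LinearSVC(C=1.0).fit(X, y)

def detect(frame, clf, stride=8):
    """Return top-left corners of windows classified as motorcycle."""
    hits = []
    for r in range(0, frame.shape[0] - WIN + 1, stride):
        for c in range(0, frame.shape[1] - WIN + 1, stride):
            x = haar_vector(frame[r:r + WIN, c:c + WIN])
            if clf.decision_function([x])[0] > 0:
                hits.append((r, c))
    return hits
```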

    Precise correction of lateral chromatic aberration in images

    Get PDF
    This paper addresses the problem of correcting lateral chromatic aberration in images by warping the color planes. We aim at high-precision (largely sub-pixel) realignment of the color channels. This is achieved thanks to two ingredients: high-precision keypoint detection, where the keypoints are disk centers, and a correction model more general than the radial polynomial commonly used in the literature. Our setup is easy to implement, requiring only a pattern of black disks on white paper and a single snapshot. We measure the errors in terms of geometry and of color, and compare our method to three different software programs. Quantitative results on real images show that our method aligns the color channels to an average of 0.05 pixel and reduces the residual color error by a factor of 3 to 6
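
    A minimal sketch of the color-plane warping step, assuming the disk centers have already been detected and matched between channels; a low-degree bivariate polynomial stands in for the paper's more general correction model.

```python
# Warp a misaligned color channel onto the reference (green) channel using a
# polynomial mapping fitted to matched keypoints (disk centers).
import numpy as np
from scipy.ndimage import map_coordinates

def poly_basis(x, y):
    # degree-2 bivariate basis (assumption; the paper's model is more general)
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_warp(src_pts, dst_pts):
    """Map reference-channel coords (dst) to misaligned-channel coords (src)."""
    A = poly_basis(dst_pts[:, 0], dst_pts[:, 1])
    cx, *_ = np.linalg.lstsq(A, src_pts[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, src_pts[:, 1], rcond=None)
    return cx, cy

def warp_channel(channel, cx, cy):
    h, w = channel.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    A = poly_basis(xx.ravel(), yy.ravel())
    xs, ys = A @ cx, A @ cy
    # sample the misaligned channel at the predicted source positions
    return map_coordinates(channel, [ys, xs], order=3).reshape(h, w)

# usage: cx, cy = fit_warp(red_centers, green_centers)
#        red_corrected = warp_channel(red, cx, cy)
```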

    A New A Contrario Approach for the Robust Determination of the Fundamental Matrix

    Get PDF
    The fundamental matrix is a two-view tensor that plays a central role in computer vision geometry. We address its robust estimation given correspondences between image features. We use a non-parametric estimate of the distribution of image features and then follow a probabilistic approach to select the best possible set of inliers among the given feature correspondences. The use of this perception-based a contrario principle allows us to avoid selecting a precision threshold, as in RANSAC, since we provide a decision criterion that integrates all data and method parameters (total number of points, precision threshold, number of inliers given this threshold). Our proposal is analyzed in experiments on simulated and real data; it yields a significant improvement over the ORSA method proposed in 2004, in terms of reprojection error and relative motion estimation, especially in situations with low inlier ratios
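
    A heavily simplified sketch of the a contrario idea: instead of a fixed RANSAC threshold, every candidate inlier count k is scored by a Number of False Alarms (NFA) and the most meaningful k is kept. The background model (uniform points, point-to-epipolar-line distance) and constants below are illustrative assumptions, not the authors' exact derivation.

```python
# Simplified ORSA-style NFA scoring for one candidate fundamental matrix.
import numpy as np
from scipy.special import gammaln

def log_comb(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def best_inliers(epi_dists, img_w, img_h, n_models=1000):
    """epi_dists: sorted point-to-epipolar-line distances for one candidate F."""
    n = len(epi_dists)
    area = img_w * img_h
    diameter = np.hypot(img_w, img_h)
    best = (np.inf, 0)
    for k in range(8, n + 1):                          # beyond the 7-point minimal sample
        d = epi_dists[k - 1]
        alpha = min(1.0, 2.0 * d * diameter / area)    # P(random point this close to a line)
        log_nfa = (np.log(n_models) + log_comb(n, k) + log_comb(k, 7)
                   + (k - 7) * np.log(max(alpha, 1e-300)))
        if log_nfa < best[0]:
            best = (log_nfa, k)
    return best   # (log NFA, inlier count); the model is meaningful if log NFA < 0
```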

    DKiS: Decay weight invertible image steganography with private key

    Full text link
    Image steganography, the practice of concealing information within another image, traditionally faces security challenges when its methods become publicly known or come under attack. To address this, we introduce a novel private-key-based image steganography technique. This approach keeps the hidden information secure, since access requires the corresponding private key regardless of public knowledge of the steganography method. We present experimental evidence demonstrating the effectiveness of our method and showcasing its real-world applicability. Furthermore, we identify a critical challenge in invertible image steganography: the transfer of non-essential, or `garbage', information from the secret pipeline to the host pipeline. To tackle this issue, we introduce a decay weight that controls the information transfer, effectively filtering out irrelevant data and enhancing the performance of image steganography. The code is publicly accessible at https://github.com/yanghangAI/DKiS, and a practical demonstration can be found at http://yanghang.site/hidekey
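
    A toy sketch of the two ideas named above, not the released DKiS architecture (see the linked repository for the real model): a private key seeds a permutation of the secret representation so that extraction without the key fails, and a per-block decay weight limits how much secret information leaks into the host pipeline.

```python
# Toy illustration of key-dependent scrambling and per-block decay weights.
import numpy as np

def key_permute(secret_feat, key, inverse=False):
    """Permute flattened secret features with a permutation derived from the key."""
    rng = np.random.default_rng(key)          # the private key seeds the permutation
    perm = rng.permutation(secret_feat.size)
    flat = secret_feat.ravel()
    if inverse:
        out = np.empty_like(flat)
        out[perm] = flat                       # undo the permutation at extraction time
    else:
        out = flat[perm]
    return out.reshape(secret_feat.shape)

def decay_weights(n_blocks, base=0.8):
    """Per-block decay weight: later blocks pass less secret info to the host."""
    return base ** np.arange(1, n_blocks + 1)

# usage: scrambled = key_permute(secret, key=123456)
#        recovered = key_permute(scrambled, key=123456, inverse=True)
```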

    PRIS: Practical robust invertible network for image steganography

    Full text link
    Image steganography is a technique for hiding secret information inside another image so that the secret is invisible to human eyes and can be recovered when needed. Most existing image steganography methods have low robustness when the container image is affected by distortions such as Gaussian noise or lossy compression. This paper proposes PRIS to improve the robustness of image steganography. It is based on invertible neural networks and places two enhancement modules before and after the extraction process, trained with a 3-step training strategy. Moreover, rounding error, which is usually ignored by existing methods but is unavoidable in practice, is taken into account. A gradient approximation function (GAF) is also proposed to overcome the non-differentiability of the rounding distortion. Experimental results show that PRIS outperforms state-of-the-art robust image steganography methods in both robustness and practicality. Code is available at https://github.com/yanghangAI/PRIS, and a practical demonstration of our model at http://yanghang.site/hide/
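
    The rounding step that quantizes a container image to 8-bit has zero gradient almost everywhere. A common workaround in the spirit of the gradient approximation function mentioned above (the paper's exact GAF may differ) is a straight-through estimator: round in the forward pass, pass the gradient through unchanged in the backward pass.

```python
# Straight-through estimator for the non-differentiable rounding distortion.
import torch

class RoundSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)          # non-differentiable rounding in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output             # identity gradient: treat rounding as a no-op

def quantize(container):
    """Simulate 8-bit storage of a [0, 1] container image while keeping it trainable."""
    return RoundSTE.apply(container * 255.0) / 255.0
```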

    PyF2F: a robust and simplified fluorophore-to-fluorophore distance measurement tool for Protein interactions from Imaging Complexes after Translocation experiments

    Get PDF
    Structural knowledge of protein assemblies in their physiological environment is paramount to understanding cellular functions at the molecular level. Protein interactions from Imaging Complexes after Translocation (PICT) is a live-cell imaging technique for the structural characterization of macromolecular assemblies in living cells. PICT relies on cell engineering and fluorescence microscopy to measure the separation between labelled molecules. Unfortunately, the computational tools required to extract molecular distances involve a variety of sophisticated software programs that challenge reproducibility and limit their use to highly specialized researchers. Here we introduce PyF2F, Python-based software that provides a workflow for measuring molecular distances from PICT data with minimal programming expertise required from the user. We used a published dataset to validate PyF2F's performance
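
    A minimal sketch of the core measurement only, not PyF2F's full workflow (which adds channel registration, spot filtering and distance estimation): pair localized spots from the two fluorophore channels by nearest neighbour and report their separations. The pixel size and matching radius below are hypothetical.

```python
# Pair spots between two fluorescence channels and report their separations.
import numpy as np
from scipy.spatial import cKDTree

def pairwise_separations(coords_ch1, coords_ch2, pixel_size_nm=65.0, max_dist_px=3.0):
    """coords_*: (N, 2) subpixel spot centres (row, col) from each channel."""
    tree = cKDTree(coords_ch2)
    dist, _ = tree.query(coords_ch1, distance_upper_bound=max_dist_px)
    matched = np.isfinite(dist)                 # unmatched spots come back as inf
    return dist[matched] * pixel_size_nm        # separations of matched pairs, in nm

# usage: seps = pairwise_separations(spots_ch1, spots_ch2); print(np.median(seps))
```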

    A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies

    Get PDF
    RGB-D (Red, Green, Blue and Depth) sensors are devices that provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions owing to their commercial growth, spreading from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). These devices have seen good uptake in the research community due to their acceptable accuracy for many applications and their low cost, but in some cases they operate at the limit of their sensitivity, near the minimum feature size that can be perceived. For this reason, calibration processes are critical to increase their accuracy and enable them to meet the requirements of such applications. To the best of our knowledge, there is no comparative study of calibration algorithms that evaluates their results on multiple RGB-D sensors. Specifically, in this paper, the three most widely used calibration methods are compared on three different RGB-D sensors based on structured light and time-of-flight. The comparison is carried out through a set of experiments that evaluate the accuracy of depth measurements. Additionally, an object reconstruction application is used as an example of an application in which the sensor works at the limit of its sensitivity. The reconstruction results are evaluated through visual inspection and quantitative measurements
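
    A hedged sketch of one common way to evaluate the depth accuracy of an RGB-D sensor before and after calibration (the paper's exact protocol may differ): fit a plane to the point cloud of a flat target and report the RMS point-to-plane residual.

```python
# Evaluate depth accuracy as the RMS residual of a least-squares plane fit.
import numpy as np

def plane_rms_error(points):
    """points: (N, 3) XYZ samples of a planar target, in metres."""
    centroid = points.mean(axis=0)
    # the plane normal is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal     # signed point-to-plane distances
    return np.sqrt(np.mean(residuals ** 2))

# usage: print(plane_rms_error(raw_cloud), plane_rms_error(calibrated_cloud))
```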

    Low-resolution face alignment and recognition using mixed-resolution classifiers

    Get PDF
    A very common case for law enforcement is the recognition of suspects from a long distance or in a crowd. This is an important application for low-resolution face recognition (in the authors' case, a face region below 40 × 40 pixels in size). Normally, high-resolution images of the suspects are used as references, which leads to a resolution mismatch between the target and reference images, since the target images are usually taken at a long distance and are of low resolution. Most existing methods, designed to match high-resolution images, cannot handle low-resolution probes well. In this study, the authors propose a novel method especially designed to compare low-resolution images with high-resolution ones, based on the log-likelihood ratio (LLR). In addition, they demonstrate the difference in recognition performance between real low-resolution images and images down-sampled from high-resolution ones. Misalignment is one of the most important issues in low-resolution face recognition. Two approaches - matching-score-based registration and extended training with images at various alignments - are introduced to handle the alignment problem. Experiments on real low-resolution face databases show that their methods outperform the state of the art
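
    A minimal sketch of log-likelihood-ratio scoring in the spirit of the method above, not the authors' exact model: the difference between a probe and a reference feature vector is scored under Gaussian models of "same identity" and "different identity" differences, estimated from labelled training pairs.

```python
# Gaussian log-likelihood-ratio scoring of feature-vector differences.
import numpy as np
from scipy.stats import multivariate_normal

def fit_llr(same_diffs, diff_diffs):
    """same_diffs / diff_diffs: (N, d) feature differences from labelled training pairs."""
    same = multivariate_normal(mean=np.zeros(same_diffs.shape[1]),
                               cov=np.cov(same_diffs, rowvar=False))
    diff = multivariate_normal(mean=np.zeros(diff_diffs.shape[1]),
                               cov=np.cov(diff_diffs, rowvar=False))
    return lambda probe, ref: same.logpdf(probe - ref) - diff.logpdf(probe - ref)

# usage: score = fit_llr(train_same, train_diff)(probe_feat, gallery_feat)
# higher scores favour the "same identity" hypothesis.
```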

    Robust Self-calibration of Focal Lengths from the Fundamental Matrix

    Full text link
    The problem of self-calibration of two cameras from a given fundamental matrix is one of the basic problems in geometric computer vision. Under the assumption of known principal points and square pixels, the well-known Bougnoux formula offers a means to compute the two unknown focal lengths. However, in many practical situations, the formula yields inaccurate results due to commonly occurring singularities. Moreover, the estimates are sensitive to noise in the computed fundamental matrix and to the assumed positions of the principal points. In this paper, we therefore propose an efficient and robust iterative method to estimate the focal lengths along with the principal points of the cameras given a fundamental matrix and priors for the estimated camera parameters. In addition, we study a computationally efficient check of models generated within RANSAC that improves the accuracy of the estimated models while reducing the total computational time. Extensive experiments on real and synthetic data show that our iterative method brings significant improvements in terms of the accuracy of the estimated focal lengths over the Bougnoux formula and other state-of-the-art methods, even when relying on inaccurate priors
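
    A small numerical sketch related to the problem above, under a simplifying shared-focal-length assumption; it is neither the Bougnoux formula nor the authors' iterative method. It exploits the same underlying constraint: a valid essential matrix has two equal nonzero singular values, so the focal length can be found by searching for the f that best satisfies this for E = K2^T F K1.

```python
# Estimate a shared focal length from F via the equal-singular-value constraint.
import numpy as np
from scipy.optimize import minimize_scalar

def focal_from_F(F, pp1, pp2):
    """F: 3x3 fundamental matrix; pp1, pp2: assumed principal points (x, y)."""
    def K(f, pp):
        return np.array([[f, 0.0, pp[0]], [0.0, f, pp[1]], [0.0, 0.0, 1.0]])

    def cost(f):
        E = K(f, pp2).T @ F @ K(f, pp1)
        s = np.linalg.svd(E, compute_uv=False)
        return (s[0] - s[1]) / (s[0] + s[1])     # 0 for a perfect essential matrix

    # assumed search range in pixels; adjust to the camera at hand
    return minimize_scalar(cost, bounds=(100.0, 10000.0), method='bounded').x
```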