
    Unsupervised edge map scoring: a statistical complexity approach

    We propose a new Statistical Complexity Measure (SCM) to assess edge maps without Ground Truth (GT) knowledge. The measure is the product of two indices: an \emph{Equilibrium} index $\mathcal{E}$, obtained by projecting the edge map onto a family of edge patterns, and an \emph{Entropy} index $\mathcal{H}$, defined as a function of the Kolmogorov-Smirnov (KS) statistic. The new measure supports two kinds of performance characterization: (i) the evaluation of a single algorithm (intra-technique process) in order to identify its best parameters, and (ii) the comparison of different algorithms (inter-technique process) in order to rank them according to their quality. Results on images from the South Florida and Berkeley databases show that our approach significantly improves over Pratt's Figure of Merit (PFoM), the reference-based standard for objective edge map evaluation, as it takes more features into account.
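    As a concrete illustration of the two-factor structure, the sketch below scores a binary edge map as the product of an equilibrium term and a KS-based entropy term. The k x k pattern family, the distance-from-uniform definition of the equilibrium term, and the uniform KS reference are assumptions for illustration only; the paper defines its own $\mathcal{E}$ and $\mathcal{H}$.

        import numpy as np
        from scipy.stats import kstest

        def pattern_histogram(edge_map, k=3):
            # Distribution of k x k binary patterns: a stand-in for the paper's
            # projection of the edge map onto a family of edge patterns.
            h, w = edge_map.shape
            codes = []
            for i in range(h - k + 1):
                for j in range(w - k + 1):
                    bits = edge_map[i:i + k, j:j + k].astype(int).ravel()
                    codes.append(int("".join(map(str, bits)), 2))
            counts = np.bincount(codes, minlength=2 ** (k * k)).astype(float)
            return counts / counts.sum()

        def scm(edge_map):
            p = pattern_histogram(edge_map)
            u = np.full(p.size, 1.0 / p.size)
            E = 0.5 * np.abs(p - u).sum()   # equilibrium term: distance from uniform
            ys, xs = np.nonzero(edge_map)   # entropy term: KS statistic of edge positions
            H = kstest(xs / edge_map.shape[1], "uniform").statistic if xs.size else 0.0
            return E * H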

    Computing contrast ratio in medical images using local content information

    Rationale: Image quality assessment in medical applications is often based on quantifying the visibility of a structure of interest, such as a vessel, termed foreground (F), against its surrounding anatomical background (B), i.e., the contrast ratio. A high-quality image is one that makes diagnostically relevant details distinguishable from the background. The computation of the contrast ratio is therefore an important task in automatic medical image quality assessment.

    Methods: We estimate the contrast ratio by applying Weber's law in local image patches. A small image patch can contain a flat area, a textured area or an edge. Regions with edges are characterized by bimodal histograms representing B and F, and the local contrast ratio can be estimated as the ratio between the mean intensity values of the two modes. B and F are separated by computing the mid-value between the modes with the ISODATA algorithm. This process is performed over the entire image with a sliding window, resulting in one contrast ratio per pixel.

    Results: We have tested our measure on two general-purpose databases (TID2013 [1] and CSIQ [2]) to demonstrate that it agrees with human preferences of quality. Since our measure is specifically designed for measuring contrast, only images exhibiting contrast changes are used. The difference between the maxima of the contrast ratios of the reference and processed images is used as a quality predictor, and it is compared against human quality scores with the Pearson correlation coefficient. Our experimental results show that our method accurately predicts changes of perceived quality due to contrast decrements (Pearson correlations higher than 90%). Additionally, the method can detect changes in contrast level in interventional X-ray images acquired with varying dose [3]. For instance, the resulting contrast maps show reduced contrast ratios for vessel edges in X-ray images acquired at lower dose settings, i.e., lower distinguishability from the background, compared to higher-dose acquisitions.

    Conclusions: We propose a measure that computes the contrast ratio by applying Weber's law in local image patches. While the proposed contrast ratio is computationally simple, this approximation of local content has been shown to be useful for measuring quality differences due to contrast decrements. In particular, changes in structures of interest due to a low contrast ratio can be detected with the contrast map, making our method potentially useful for X-ray imaging dose control.

    References:
    [1] Ponomarenko, N. et al., "A New Color Image Database TID2013: Innovations and Results," Proceedings of ACIVS, 402-413 (2013).
    [2] Larson, E. and Chandler, D., "Most Apparent Distortion: Full-Reference Image Quality Assessment and the Role of Strategy," Journal of Electronic Imaging, 19(1), 2010.
    [3] Kumcu, A. et al., "Interventional X-ray Image Quality Measure Based on a Psychovisual Detectability Model," MIPS XVI, Ghent, Belgium, 2015.
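    A minimal sketch of the per-pixel estimation loop described above follows; the window size, the convergence tolerance and the F/B orientation of the ratio are assumptions, and the plain Python loops are written for clarity rather than speed.

        import numpy as np

        def isodata_threshold(values, eps=0.5):
            # ISODATA: iterate the mid-value between the mean intensities
            # of the two classes until it stabilises.
            t = values.mean()
            while True:
                low, high = values[values <= t], values[values > t]
                if low.size == 0 or high.size == 0:
                    return t
                t_new = 0.5 * (low.mean() + high.mean())
                if abs(t_new - t) < eps:
                    return t_new
                t = t_new

        def contrast_ratio_map(img, win=15):
            # Sliding window over the image; in windows with a bimodal histogram
            # the two modes approximate background (B) and foreground (F).
            half = win // 2
            out = np.ones(img.shape, dtype=float)
            padded = np.pad(img.astype(float), half, mode="reflect")
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    patch = padded[i:i + win, j:j + win].ravel()
                    t = isodata_threshold(patch)
                    b, f = patch[patch <= t], patch[patch > t]
                    if b.size and f.size and b.mean() > 0:
                        out[i, j] = f.mean() / b.mean()  # mean-F over mean-B ratio
            return out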

    Distributed Holistic Clustering on Linked Data

    Link discovery is an active field of research supporting data integration in the Web of Data. Due to the huge size and number of available data sources, efficient and effective link discovery is a very challenging task, and common pairwise link discovery approaches do not scale to many sources with very large entity sets. We propose a distributed holistic approach that links many data sources by clustering entities that represent the same real-world object. Our clustering approach provides a compact, fused representation of entities and can identify errors in existing links as well as many new links. We support a distributed execution of the clustering approach to achieve faster execution times and scalability for large real-world data sets. We also provide a novel gold standard for multi-source clustering and evaluate our methods with respect to effectiveness and efficiency on large data sets from the geographic and music domains.
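    The clustering core can be pictured as connected components over the discovered same-as links; the sketch below uses union-find for that step. It omits the paper's distinctive contributions (distributed execution, fused cluster representatives and link-error repair), and the entity identifiers are made up for illustration.

        class UnionFind:
            def __init__(self):
                self.parent = {}
            def find(self, x):
                self.parent.setdefault(x, x)
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]  # path halving
                    x = self.parent[x]
                return x
            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra != rb:
                    self.parent[rb] = ra

        links = [("dbpedia:Berlin", "geonames:2950159"),
                 ("geonames:2950159", "lgd:node-berlin")]  # illustrative links
        uf = UnionFind()
        for a, b in links:
            uf.union(a, b)
        clusters = {}
        for entity in uf.parent:
            clusters.setdefault(uf.find(entity), []).append(entity)
        print(list(clusters.values()))  # one cluster per real-world object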

    Full Reference Objective Quality Assessment for Reconstructed Background Images

    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used, their ability to evaluate the perceived quality of a reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating future metrics developed to assess the perceived quality of reconstructed background images. Comment: associated source code: https://github.com/ashrotre/RBQI; associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email ashrotre@asu.edu for permissions).
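    The probability summation step can be sketched as follows: per-scale maps of the probability that a difference is visible are pooled into the probability that it is visible at any scale and any location. The array shapes and names here are placeholders; the paper's index also folds in the colour and structural comparisons that this sketch omits.

        import numpy as np

        def probability_summation(per_scale_probs):
            # per_scale_probs: list of (H, W) arrays, each the per-pixel probability
            # of detecting a reference-vs-reconstruction difference at one scale.
            stack = np.stack(per_scale_probs)               # (scales, H, W)
            p_any_scale = 1.0 - np.prod(1.0 - stack, 0)     # detected at any scale
            return 1.0 - np.prod(1.0 - p_any_scale)         # detected anywhere in image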

    A 10-Gb/s two-dimensional eye-opening monitor in 0.13-µm standard CMOS

    An eye-opening monitor (EOM) architecture that can capture a two-dimensional (2-D) map of the eye diagram of a high-speed data signal has been developed. Two single-quadrant phase rotators and one digital-to-analog converter (DAC) are used to generate rectangular masks with variable sizes and aspect ratios. Each mask is overlapped with the received eye diagram, and the number of signal transitions inside the mask is recorded as an error count. The set of rectangular masks with the same error count forms an error contour, and together the contours provide a 2-D map of the eye. The authors have implemented a prototype circuit in 0.13-µm standard CMOS technology that operates up to 12.5 Gb/s at a 1.2-V supply. The EOM maps the input eye to a 2-D error diagram with up to 68-dB mask error dynamic range. The left and right halves of the eye are monitored separately to capture horizontally asymmetric eyes. The chip consumes 330 mW and operates reliably with supply voltages as low as 1 V at 10 Gb/s. The authors also present a detailed analysis verifying that the measurements are in good agreement with the expected results.
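    In software terms, the mask-sweeping idea can be sketched as follows: sampled (time, voltage) points stand in for the received signal, points falling inside a centred rectangular mask stand in for the chip's transition counts, and sweeping the mask dimensions yields the 2-D error map whose equal-error contours trace the eye opening. The sampled representation and all names are assumptions; the chip does this with phase rotators, a DAC and comparators in hardware.

        import numpy as np

        def mask_error_map(samples, ui=1.0, n_widths=20, n_heights=20):
            # samples: (N, 2) array of (time, voltage) points from repeated acquisitions
            t = samples[:, 0] % ui - ui / 2            # fold time into one unit interval
            v = samples[:, 1] - samples[:, 1].mean()   # centre voltage on the eye
            errors = np.zeros((n_heights, n_widths), dtype=int)
            for i, hh in enumerate(np.linspace(0.01, np.abs(v).max(), n_heights)):
                for j, hw in enumerate(np.linspace(0.01, ui / 2, n_widths)):
                    inside = (np.abs(t) < hw) & (np.abs(v) < hh)
                    errors[i, j] = int(inside.sum())   # "mask errors" for this mask size
            return errors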

    A New Fast Motion Estimation and Mode Decision algorithm for H.264 Depth Maps encoding in Free Viewpoint TV

    In this paper, we consider a scenario where 3D scenes are modeled through a View+Depth representation, which is used at the rendering side to generate synthetic views for free viewpoint video. Both types of data (view and depth) are encoded using two H.264/AVC encoders. In this scenario we address the reduction of the encoding complexity of the depth data. First, an analysis of the Mode Decision and Motion Estimation processes is conducted for both view and depth sequences in order to capture the correlation between them. Taking advantage of this correlation, we propose a fast mode decision and motion estimation algorithm for depth encoding. Results show that the proposed algorithm reduces the computational burden with a negligible loss in the quality of the rendered synthetic views. Quality measurements have been conducted using the Video Quality Metric.
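    The flavour of such a correlation-driven shortcut can be sketched as below: inherit the co-located view macroblock's decisions to prune the depth encoder's search. The data structure and the refinement window are hypothetical; the actual algorithm operates inside an H.264/AVC encoder's Mode Decision and Motion Estimation loops.

        from dataclasses import dataclass

        @dataclass
        class MacroblockResult:      # hypothetical record of a view MB's encoding
            best_mode: str           # e.g. "SKIP", "16x16", "8x8"
            mv: tuple                # motion vector (dx, dy)

        def depth_mb_shortcut(view_mb, refine_range=2):
            # If the view MB was skipped, depth motion is likely negligible too.
            if view_mb.best_mode == "SKIP":
                return ["SKIP"], view_mb.mv, 0
            # Otherwise test only SKIP plus the view's winning mode, and refine
            # the motion search in a small window around the view's vector
            # instead of running a full-range search.
            return ["SKIP", view_mb.best_mode], view_mb.mv, refine_range

        modes, center, window = depth_mb_shortcut(MacroblockResult("16x16", (3, -1)))
        print(modes, center, window)   # ['SKIP', '16x16'] (3, -1) 2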

    Meshed Up: Learnt Error Correction in 3D Reconstructions

    Dense reconstructions often contain errors that prior work has so far minimised by using high-quality sensors and regularising the output. Nevertheless, errors still persist. This paper proposes a machine learning technique to identify errors in three-dimensional (3D) meshes. Beyond simply identifying errors, our method quantifies both the magnitude and the direction of depth estimate errors when viewing the scene, which enables us to improve the reconstruction accuracy. We train a suitably deep network architecture with two 3D meshes: a high-quality laser reconstruction and a lower-quality stereo image reconstruction. The network predicts the amount of error in the lower-quality reconstruction with respect to the high-quality one, having seen only the former at its input. We evaluate our approach by correcting two-dimensional (2D) inverse-depth images extracted from the 3D model, and show that our method improves the quality of these depth reconstructions by up to 10% in relative RMSE. Comment: accepted for the International Conference on Robotics and Automation (ICRA) 201
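    The correction step implied by the abstract reduces to subtracting a predicted signed error field from the rendered inverse-depth image; the sketch below shows that step and the relative-RMSE evaluation, with the network itself left abstract and all names assumed.

        import numpy as np

        def correct_inverse_depth(inv_depth, predicted_error):
            # predicted_error: per-pixel signed depth-estimate error (magnitude and
            # direction) produced by the trained network for this viewpoint.
            return inv_depth - predicted_error

        def relative_rmse_gain(raw, corrected, ground_truth):
            rmse = lambda a: np.sqrt(np.mean((a - ground_truth) ** 2))
            return 1.0 - rmse(corrected) / rmse(raw)  # ~0.10 for a 10% relative gain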