
    Exact Histogram Specification Optimized for Structural Similarity

    An exact histogram specification (EHS) method modifies its input image to have a specified histogram. Applications of EHS include image (contrast) enhancement (e.g., by histogram equalization) and histogram watermarking. Performing EHS on an image, however, reduces its visual quality. Starting from the output of a generic EHS method, we iteratively maximize the structural similarity index (SSIM) between the original image (before EHS) and the EHS result. Essential in this process is the computationally simple and accurate formula we derive for the SSIM gradient. Because it is based on gradient ascent, the proposed EHS always converges. Experimental results confirm that, while obtaining the histogram exactly as specified, the proposed method invariably outperforms existing methods in terms of the visual quality of the result. The computational complexity of the proposed method is shown to be of the same order as that of existing methods.
    Index terms: histogram modification, histogram equalization, optimization for perceptual visual quality, structural similarity gradient ascent, histogram watermarking, contrast enhancement
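
    The abstract describes starting from a generic EHS output and climbing the SSIM surface while keeping the specified histogram. The sketch below is a simplified, assumption-laden reading of that idea: it uses a single-window (global) SSIM and its analytically derived gradient rather than the authors' local-window formula, breaks ranking ties by pixel index, and re-applies EHS after each ascent step as a projection onto the set of images with the target histogram; the function names and the step size are illustrative only.

```python
import numpy as np

def exact_hist_spec(x, target_values):
    """Assign the sorted target values to the pixels of x in rank order, so the
    output histogram equals the target exactly.  Ties in x are broken by index;
    real EHS methods use local information (e.g. neighbourhood means) instead."""
    flat = x.ravel()
    order = np.argsort(flat, kind="stable")
    out = np.empty(flat.shape, dtype=float)
    out[order] = np.sort(np.asarray(target_values, dtype=float).ravel())
    return out.reshape(x.shape)

def global_ssim_and_grad(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """Single-window (global) SSIM between x and y and its gradient w.r.t. x."""
    n = x.size
    mx, my = x.mean(), y.mean()
    vx = ((x - mx) ** 2).mean()
    vy = ((y - my) ** 2).mean()
    cxy = ((x - mx) * (y - my)).mean()
    A1, A2 = 2 * mx * my + C1, 2 * cxy + C2
    B1, B2 = mx ** 2 + my ** 2 + C1, vx + vy + C2
    S = (A1 * A2) / (B1 * B2)
    dA1, dA2 = 2 * my / n, 2 * (y - my) / n      # per-pixel derivatives w.r.t. x
    dB1, dB2 = 2 * mx / n, 2 * (x - mx) / n
    grad = ((dA1 * A2 + A1 * dA2) * B1 * B2
            - A1 * A2 * (dB1 * B2 + B1 * dB2)) / (B1 * B2) ** 2
    return S, grad

def ssim_optimized_ehs(image, target_values, steps=50, step_size=1e4):
    """Ascend SSIM, then re-impose the exact target histogram after every step
    (a projection-style loop; step_size needs tuning because the gradient
    carries a 1/n factor)."""
    y = image.astype(float)
    x = exact_hist_spec(y, target_values)
    for _ in range(steps):
        _, g = global_ssim_and_grad(x, y)
        x = exact_hist_spec(x + step_size * g, target_values)
    return x

# Toy usage: equalise a random 64x64 image, then refine it toward the original.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
equalised_targets = np.repeat(np.arange(256), img.size // 256)   # flat histogram
out = ssim_optimized_ehs(img, equalised_targets)
```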

    A Convex Model for Edge-Histogram Specification with Applications to Edge-preserving Smoothing

    The goal of edge-histogram specification is to find an image whose edge image has a histogram that matches a given edge-histogram as closely as possible. Mignotte proposed a non-convex model for the problem [M. Mignotte. An energy-based model for the image edge-histogram specification problem. IEEE Transactions on Image Processing, 21(1):379--386, 2012]. In his work, the edge magnitudes of an input image are first modified by histogram specification to match the given edge-histogram. Then, a non-convex model is minimized to find an output image whose edge-histogram matches the modified edge-histogram. The non-convexity of the model hinders the computation and the inclusion of useful constraints such as the dynamic range constraint. In this paper, instead of considering edge magnitudes, we directly consider the image gradients and propose a convex model based on them. Furthermore, we include additional constraints in our model for different applications. The convexity of our model allows us to compute the output image efficiently using either the Alternating Direction Method of Multipliers or the Fast Iterative Shrinkage-Thresholding Algorithm. We consider several applications in edge-preserving smoothing, including image abstraction, edge extraction, detail exaggeration, and document scan-through removal. Numerical results illustrate that our method efficiently produces good results.
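
    The paper's exact convex model is not reproduced here. As a hedged illustration of the general recipe the abstract describes (work with image gradients, keep the model convex, enforce a dynamic-range constraint, and solve with FISTA or ADMM), the sketch below fits the image gradients to a specified gradient field under a box constraint and solves the problem with FISTA; the discretisation, the Lipschitz bound L = 8, and the toy target gradients are standard assumptions of mine, not choices taken from the paper.

```python
import numpy as np

def grad_op(u):
    """Forward-difference image gradients (zero at the last column/row)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div_op(gx, gy):
    """Discrete divergence, the negative adjoint of grad_op."""
    dx = np.zeros_like(gx)
    dy = np.zeros_like(gy)
    dx[:, 0] = gx[:, 0]
    dx[:, 1:-1] = gx[:, 1:-1] - gx[:, :-2]
    dx[:, -1] = -gx[:, -2]
    dy[0, :] = gy[0, :]
    dy[1:-1, :] = gy[1:-1, :] - gy[:-2, :]
    dy[-1, :] = -gy[-2, :]
    return dx + dy

def fit_gradients_fista(gx_target, gy_target, u0, lo=0.0, hi=255.0, iters=300):
    """FISTA for  min_u 0.5*||grad(u) - g||^2  s.t.  lo <= u <= hi.
    The box projection (np.clip) plays the role of the dynamic-range constraint."""
    L = 8.0                                  # Lipschitz bound of the quadratic term in 2-D
    u_prev = u0.astype(float).copy()
    z = u_prev.copy()
    t = 1.0
    for _ in range(iters):
        gx, gy = grad_op(z)
        g = -div_op(gx - gx_target, gy - gy_target)    # gradient of the smooth term
        u = np.clip(z - g / L, lo, hi)                 # proximal (projection) step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = u + ((t - 1.0) / t_next) * (u - u_prev)    # FISTA momentum
        u_prev, t = u, t_next
    return u_prev

# Toy usage: shrink the gradients of a random image by half (a crude smoothing target).
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
gx0, gy0 = grad_op(img)
smoothed = fit_gradients_fista(0.5 * gx0, 0.5 * gy0, u0=img)
```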

    Self-consistent method for density estimation

    The estimation of a density profile from experimental data points is a challenging problem, usually tackled by plotting a histogram. Prior assumptions on the nature of the density, from its smoothness to the specification of its form, allow the design of more accurate estimation procedures, such as maximum likelihood. Our aim is to construct a procedure that makes no explicit assumptions yet still provides an accurate estimate of the density. We introduce the self-consistent estimate: the power spectrum of a candidate density is given, and an estimation procedure is constructed on the assumption, to be released a posteriori, that the candidate is correct. The self-consistent estimate is defined as the candidate density that precisely reproduces itself. Our main result is to derive the exact expression of the self-consistent estimate for any given dataset and to study its properties. Applications of the method require neither priors on the form of the density nor the subjective choice of parameters. A cutoff frequency, akin to a bin size or a kernel bandwidth, emerges naturally from the derivation. We apply the self-consistent estimate to artificial data generated from various distributions and show that it reaches the theoretical limit for the scaling of the squared error with the dataset size.
    Comment: 21 pages, 5 figures
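
    The abstract states that a closed-form self-consistent estimate exists but does not give it. The sketch below is my own Fourier-domain reading of the self-consistency idea under an assumed Wiener-type shrinkage of the empirical characteristic function; it is not a transcription of the paper's formula, and the frequency grid, t_max, and the toy data are illustrative choices.

```python
import numpy as np

def self_consistent_density(samples, x_grid, t_max=50.0, n_t=2001):
    """Fourier-domain sketch of a self-consistent density estimate.

    Assumption: the characteristic function is estimated as w(t) * Delta(t),
    where Delta is the empirical characteristic function and the Wiener-type
    shrinkage w(t) = N|phi|^2 / (1 + (N-1)|phi|^2) is evaluated at the
    estimate phi itself.  Solving phi = w(phi) * Delta gives the closed form
    below; frequencies with a negative discriminant are dropped, which acts
    like the automatic cutoff frequency the abstract mentions."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    t = np.linspace(-t_max, t_max, n_t)
    delta = np.exp(1j * np.outer(t, x)).mean(axis=1)   # empirical characteristic function
    d2 = np.maximum(np.abs(delta) ** 2, 1e-300)
    disc = 1.0 - 4.0 * (n - 1) / (n * n * d2)
    keep = disc >= 0.0
    phi = np.zeros_like(delta)
    phi[keep] = (n * delta[keep] / (2.0 * (n - 1))) * (1.0 + np.sqrt(disc[keep]))
    dt = t[1] - t[0]                                   # inverse transform back to x_grid
    f = (np.exp(-1j * np.outer(x_grid, t)) @ phi).real * dt / (2.0 * np.pi)
    return np.clip(f, 0.0, None)

# Toy usage: recover a bimodal density from 1,000 samples without choosing a bandwidth.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.7, 500), rng.normal(2.0, 1.0, 500)])
grid = np.linspace(-6.0, 6.0, 400)
density = self_consistent_density(data, grid)
```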

    Downward Nominal Wage Rigidity in Europe: An Analysis of European Micro Data from the ECHP 1994-2001

    This paper substantially extends the limited available evidence on the existence and extent of downward nominal wage rigidity in the European Union and the Euro Area. For this purpose we develop an econometric multi-country model based on Kahn's (1997) histogram-location approach and apply it to employee micro data from the European Community Household Panel (ECHP) for twelve of the EU's current member states. Our estimates of the degree of downward nominal wage rigidity at the national as well as the EU-wide level point to marked downward nominal wage rigidity within the European Union.
    Keywords: downward nominal wage rigidity; wage rigidity; wage stickiness; wage formation; nominal wages; European Union; Euro Area; European Monetary Union member states; European Community Household Panel (ECHP); histogram-location approach
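
    This is not Kahn's (1997) estimator; the toy simulation below (with parameters assumed throughout) only illustrates the histogram feature that histogram-location style approaches exploit: when downward rigidity censors notional wage cuts, the observed wage-change histogram loses mass below zero and gains a spike at exactly zero.

```python
import numpy as np

# Toy illustration (not Kahn's estimator): notional wage changes are drawn from a
# normal distribution; downward rigidity censors a fraction of the cuts at zero,
# so the observed histogram shows missing mass below zero and a spike at zero.
rng = np.random.default_rng(0)
notional = rng.normal(loc=0.02, scale=0.04, size=100_000)   # hypothetical wage-change draws
rigidity = 0.6                                              # assumed share of rigid workers
rigid = rng.random(notional.size) < rigidity
observed = np.where(rigid & (notional < 0), 0.0, notional)

bins = np.arange(-0.15, 0.16, 0.01)
hist_notional, _ = np.histogram(notional, bins=bins, density=True)
hist_observed, _ = np.histogram(observed, bins=bins, density=True)
print(f"share of exactly-zero wage changes: {np.mean(observed == 0.0):.3f}")
```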

    Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies

    Privacy definitions provide ways of trading off the privacy of individuals in a statistical database for the utility of downstream analysis of the data. In this paper, we present Blowfish, a class of privacy definitions inspired by the Pufferfish framework that provides a rich interface for this trade-off. In particular, we allow data publishers to extend differential privacy using a policy, which specifies (a) secrets, or information that must be kept secret, and (b) constraints that may be known about the data. While the secret specification allows increased utility by lessening protection for certain individual properties, the constraint specification provides added protection against an adversary who knows correlations in the data (arising from constraints). We formalize policies and present novel algorithms that can handle general specifications of sensitive information and certain count constraints. We show that there are reasonable policies under which our privacy mechanisms for k-means clustering, histograms, and range queries introduce significantly less noise than their differentially private counterparts. We quantify the privacy-utility trade-offs for various policies analytically and empirically on real datasets.
    Comment: Full version of the paper at SIGMOD'14, Snowbird, Utah, USA
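
    Blowfish's mechanisms themselves are not reproduced here. The sketch below only shows the baseline the abstract compares against, a Laplace-noised histogram, with the policy's effect modelled as a hypothetical, smaller sensitivity parameter so the "less noise under a reasonable policy" claim can be read concretely.

```python
import numpy as np

def laplace_histogram(values, bins, epsilon, sensitivity=2.0, rng=None):
    """Release a histogram with Laplace noise of scale sensitivity / epsilon.

    Under standard (bounded) differential privacy, changing one record moves a
    unit of count between two bins, so the L1 sensitivity of the histogram is 2.
    A Blowfish-style policy that declares only some value pairs as secrets could
    justify a smaller effective sensitivity; here that is just a hypothetical
    input parameter, not a value taken from the paper."""
    if rng is None:
        rng = np.random.default_rng()
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(scale=sensitivity / epsilon, size=counts.shape)
    return noisy, edges

# Example: a standard differentially private release vs. a hypothetical policy
# that is assumed to lower the effective sensitivity.
data = np.random.default_rng(0).normal(50.0, 15.0, size=10_000)
dp_hist, _ = laplace_histogram(data, bins=20, epsilon=0.1, sensitivity=2.0)
policy_hist, _ = laplace_histogram(data, bins=20, epsilon=0.1, sensitivity=1.0)
```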

    Energy-Aware Cloud Management through Progressive SLA Specification

    Novel energy-aware cloud management methods dynamically reallocate computation across geographically distributed data centers to leverage regional differences in electricity prices and temperatures. As a result, a managed VM may suffer occasional downtimes. Current cloud providers only offer high-availability VMs, without enough flexibility to apply such energy-aware management. In this paper we show how to analyse past traces of dynamic cloud management actions, driven by electricity prices and temperatures, to estimate VM availability and price values. We propose a novel SLA specification approach that offers VMs with different availability and price values, guaranteed over multiple SLAs, to enable flexible energy-aware cloud management. We determine the optimal number of such SLAs as well as their guaranteed availability and price values. We evaluate our approach in a user SLA-selection simulation using Wikipedia and Grid'5000 workloads. The results show higher customer conversion and 39% average energy savings per VM.
    Comment: 14 pages, conference
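
    The paper's trace analysis and SLA optimisation are not reproduced here. The sketch below is a hedged illustration of the first step the abstract describes, estimating per-VM availability from a trace of management-induced downtimes and then picking a few guaranteed availability levels; the trace format, field names, and the quantile-based tiering rule are all my assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class DowntimeEvent:
    """One management-induced VM downtime (hypothetical trace format)."""
    vm_id: str
    start_h: float      # hours since trace start
    duration_h: float

def vm_availability(events: List[DowntimeEvent], trace_hours: float) -> Dict[str, float]:
    """Availability per VM = 1 - (total downtime / trace length)."""
    down: Dict[str, float] = {}
    for e in events:
        down[e.vm_id] = down.get(e.vm_id, 0.0) + e.duration_h
    return {vm: 1.0 - d / trace_hours for vm, d in down.items()}

def sla_tiers(availabilities: Dict[str, float], n_slas: int = 3) -> np.ndarray:
    """Pick guaranteed availability values as lower quantiles of the observed
    distribution (an illustrative rule; the paper instead optimises the number
    of SLAs and their guaranteed values)."""
    a = np.sort(np.fromiter(availabilities.values(), dtype=float))
    qs = np.linspace(0.0, 1.0, n_slas + 1)[:-1]   # lower edge of each tier
    return np.quantile(a, qs)

# Toy usage on a synthetic 30-day (720 h) trace of management actions.
rng = np.random.default_rng(0)
events = [DowntimeEvent(f"vm{i % 20}", rng.uniform(0, 720), rng.exponential(2.0))
          for i in range(200)]
avail = vm_availability(events, trace_hours=720.0)
print(sla_tiers(avail, n_slas=3))
```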