
    Acquisitions driven by stock overvaluation: are they good deals?

    Theory and recent evidence suggest that overvalued firms can create value for shareholders by exploiting their overvaluation, using their stock as currency to purchase less overvalued firms. We challenge this idea and show that, in practice, overvalued acquirers significantly overpay for their targets. These acquisitions do not, in turn, lead to synergy gains. Moreover, they seem to be concentrated among acquirers with the largest governance problems. CEO compensation, not shareholder value creation, appears to be the main motive behind acquisitions by overvalued acquirers.

    Wavelet Deconvolution in a Periodic Setting Using Cross-Validation

    The wavelet deconvolution method WaveD, based on band-limited wavelets, offers both theoretical and computational advantages over traditional compactly supported wavelets. A translation-invariant version of WaveD with a fast algorithm improves performance further. We introduce a twofold cross-validation method for choosing the threshold parameter and the finest resolution level in WaveD, and we compare the algorithm's performance with fixed-constant tuning and with the default tuning in WaveD.
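
    As a rough sketch of the twofold cross-validation idea, the code below splits the observations into even- and odd-indexed halves, denoises one half at each candidate threshold, and scores it against the other half (assuming adjacent samples carry similar signal values). It uses ordinary compactly supported wavelets via PyWavelets rather than WaveD's band-limited wavelets, it omits the Fourier-domain deconvolution step, and the function names and toy signal are ours.

```python
import numpy as np
import pywt

def denoise(y, thresh, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of y and reconstruct."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(y)]

def cv_threshold(y, grid):
    """Twofold CV: each half of the data predicts the other half."""
    even, odd = y[0::2], y[1::2]
    n = min(len(even), len(odd))
    best_t, best_err = grid[0], np.inf
    for t in grid:
        err = (np.mean((denoise(even[:n], t) - odd[:n]) ** 2) +
               np.mean((denoise(odd[:n], t) - even[:n]) ** 2))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# toy piecewise-constant signal with additive Gaussian noise
rng = np.random.default_rng(0)
x = np.repeat([0.0, 4.0, -1.0, 2.0], 256)
y = x + rng.normal(0.0, 0.5, x.size)
print("CV-selected threshold:", cv_threshold(y, np.linspace(0.1, 3.0, 30)))
```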

    Wavelet Reconstruction of Nonuniformly Sampled Signals

    For the reconstruction of a nonuniformly sampled signal from its noisy observations, we propose a level-dependent l1-penalized wavelet reconstruction method. The LARS/Lasso algorithm is applied to solve the Lasso problem. The regularization parameters are chosen data-adaptively using the AIC, with the degrees of freedom estimated by the number of nonzero elements in the Lasso solution. Simulation results on several commonly used 1-D test signals illustrate that the proposed method possesses good empirical properties.
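
    As a simplified illustration of this recipe, the sketch below fits noisy samples taken at nonuniform locations with a sparse expansion over a dictionary of wavelet-like atoms, using scikit-learn's LassoLarsIC to run the LARS path and select the penalty by AIC, with the degrees of freedom counted as the number of nonzero coefficients. The Mexican-hat dictionary, the toy signal, and the single global penalty (rather than the paper's level-dependent one) are simplifying assumptions of ours.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(1)

# nonuniform sample locations in [0, 1] and noisy observations of a test signal
t = np.sort(rng.uniform(0, 1, 200))
f = np.sin(8 * np.pi * t) * (t > 0.4)          # piecewise test signal
y = f + rng.normal(0.0, 0.2, t.size)

def mexican_hat(t, center, scale):
    """Closed-form wavelet-like atom, evaluable at arbitrary points."""
    u = (t - center) / scale
    return (1 - u**2) * np.exp(-u**2 / 2)

# dictionary at several scales/translations, evaluated at the sample points
atoms = [mexican_hat(t, c, s)
         for s in (0.02, 0.05, 0.1, 0.2)
         for c in np.linspace(0, 1, int(round(2 / s)))]
X = np.column_stack(atoms)

# LARS path + AIC model selection; df = number of nonzero coefficients
model = LassoLarsIC(criterion="aic").fit(X, y)
recon = model.predict(X)
print("atoms kept:", np.count_nonzero(model.coef_), "of", X.shape[1])
```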

    Calibration of the SNO+ experiment

    The main goal of the SNO+ experiment is to perform a low-background, high-isotope-mass search for neutrinoless double-beta decay, employing 780 tonnes of liquid scintillator loaded with tellurium, at 0.5% by mass in its initial phase, for a total mass of 1330 kg of ¹³⁰Te. The SNO+ physics program also includes measurements of geo- and reactor neutrinos, and of supernova and solar neutrinos. Calibrations are an essential component of the SNO+ data-taking and analysis plan. Achieving the physics goals requires extensive and regular calibration, which serves several purposes: measuring detector parameters, validating the simulation model, and constraining systematic uncertainties on the reconstruction and particle-identification algorithms. SNO+ faces stringent radiopurity requirements which, in turn, largely determine the material selection, sealing, and overall design of both the sources and the deployment systems. To avoid frequent access to the inner volume of the detector, several permanent optical calibration systems have been developed and installed outside that volume. At the same time, the calibration-source internal deployment system was redesigned as a fully sealed system, with more stringent material selection, but following the same working principle as the system used in SNO. This poster describes the overall SNO+ calibration strategy, discusses the several new and innovative sources, both optical and radioactive, and covers developments in the source deployment systems.

    Inversion for Non-Smooth Models with Physical Bounds

    Geological processes produce structures at multiple scales. Discontinuities in the subsurface can arise from layering or from tectonic activity such as faulting, folding, and fracturing. Traditional approaches to inverting geophysical data employ smoothness constraints. Such methods produce smooth models, and therefore sharp contrasts in the medium, such as lithological boundaries, are not easily discernible. Methods that can produce non-smooth models help interpret geological discontinuities. In this paper we examine various approaches to obtaining non-smooth models from a finite set of noisy data. Broadly, they fall into two categories: (1) imposing non-smooth regularization in the inverse problem, and (2) solving the inverse problem in a domain that provides multi-scale resolution, such as the wavelet domain. In addition to applying non-smooth constraints, we further constrain the inverse problem to obtain models within prescribed physical bounds. The optimization with non-smooth regularization and physical bounds is solved using an interior point method. We demonstrate the applicability and usefulness of these methods with realistic synthetic examples, and we provide a field example from crosswell radar data.
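
    As a concrete, simplified instance of approach (1), the sketch below inverts a toy 1-D smoothing operator under a smoothed total-variation (l1-on-differences) penalty while enforcing physical bounds on the model. SciPy's bounded L-BFGS-B solver stands in for the interior point method used in the paper; the operator G, the penalty weight, and the bounds are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# blocky "true" model and a Gaussian smoothing forward operator G (toy 1-D problem)
m_true = np.concatenate([np.full(40, 0.5), np.full(30, 2.0), np.full(30, 1.0)])
n = m_true.size
G = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 4.0) ** 2)
G /= G.sum(axis=1, keepdims=True)
d = G @ m_true + rng.normal(0.0, 0.01, n)

lam, eps = 0.05, 1e-6   # penalty weight and |.| smoothing parameter

def objective(m):
    r = G @ m - d
    dm = np.diff(m)
    return 0.5 * r @ r + lam * np.sum(np.sqrt(dm**2 + eps))

def gradient(m):
    dm = np.diff(m)
    w = dm / np.sqrt(dm**2 + eps)   # derivative of the smoothed |diff(m)|
    g_tv = np.zeros(n)
    g_tv[:-1] -= w
    g_tv[1:] += w
    return G.T @ (G @ m - d) + lam * g_tv

# physical bounds 0 <= m <= 3 enforced directly by the bounded solver
res = minimize(objective, x0=np.ones(n), jac=gradient,
               method="L-BFGS-B", bounds=[(0.0, 3.0)] * n)
print("converged:", res.success, "| max model value:", res.x.max().round(3))
```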

    Electric Polarizability of Neutral Hadrons from Lattice QCD

    By simulating a uniform electric field on a lattice and measuring the change in the rest mass, we calculate the electric polarizability of neutral mesons and baryons using the methods of quenched lattice QCD. Specifically, we measure the electric polarizability coefficient from the quadratic response to the electric field for 10 particles: the vector mesons $\rho^0$ and $K^{*0}$; the octet baryons $n$, $\Sigma^0$, $\Lambda_o^0$, $\Lambda_s^0$, and $\Xi^0$; and the decuplet baryons $\Delta^0$, $\Sigma^{*0}$, and $\Xi^{*0}$. Independent calculations using two fermion actions were done for consistency and comparison purposes. One calculation uses Wilson fermions with a lattice spacing of $a = 0.10$ fm. The other uses tadpole-improved Lüscher-Weisz gauge fields and the clover quark action with a lattice spacing of $a = 0.17$ fm. Our results for the neutron electric polarizability are compared to experiment.
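
    The extraction described above comes down to fitting the quadratic response of the rest mass to the applied field. The sketch below shows only that final step, on made-up numbers: regress hypothetical mass shifts against E^2 and read off the polarizability coefficient. The convention delta_m = -(1/2) * alpha * E^2 and every value here are illustrative, not the paper's data or conventions.

```python
import numpy as np

# hypothetical mass shifts measured at several uniform field strengths E
# (lattice units); the quadratic response is delta_m(E) = -0.5 * alpha * E**2
E = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
delta_m = np.array([0.0000, -0.0006, -0.0024, -0.0054, -0.0096])  # made-up values

# fit delta_m against E**2: the slope is -alpha/2
slope = np.polyfit(E**2, delta_m, 1)[0]
alpha = -2.0 * slope
print(f"extracted polarizability coefficient: alpha = {alpha:.3f} (lattice units)")
```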

    Metal-Supported Solid Oxide Fuel Cells


    The Reproducibility of Lists of Differentially Expressed Genes in Microarray Studies

    Reproducibility is a fundamental requirement in scientific experiments and clinical contexts. Recent publications have raised concerns about the reliability of microarray technology because of the apparent lack of agreement between lists of differentially expressed genes (DEGs). In this study we demonstrate that (1) such discordance may stem from ranking and selecting DEGs solely by statistical significance (P) derived from widely used simple t-tests; (2) when fold change (FC) is used as the ranking criterion, the lists become much more reproducible, especially when fewer genes are selected; and (3) the instability of short DEG lists based on P cutoffs is an expected mathematical consequence of the high variability of the t-values. We recommend the use of FC ranking plus a non-stringent P cutoff as a baseline practice in order to generate more reproducible DEG lists: the FC criterion enhances reproducibility, while the P criterion balances sensitivity and specificity.
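
    A minimal sketch of the recommended selection rule on simulated data: compute per-gene t-test P values and log2 fold changes, keep genes passing a lenient P cutoff, and rank the survivors by absolute fold change. All dimensions, cutoffs, and effect sizes below are illustrative.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# toy expression matrix: 1000 genes x (5 control + 5 treated) samples, log2 scale
genes, n_per_group = 1000, 5
ctrl = rng.normal(8.0, 1.0, (genes, n_per_group))
trt = rng.normal(8.0, 1.0, (genes, n_per_group))
trt[:50] += 1.5                                   # first 50 genes truly up-regulated

# per-gene statistics
_, p = ttest_ind(trt, ctrl, axis=1)
log2_fc = trt.mean(axis=1) - ctrl.mean(axis=1)    # difference of log2 means

# FC ranking combined with a non-stringent P cutoff, as recommended above
p_cutoff, n_top = 0.05, 50
candidates = np.where(p < p_cutoff)[0]
deg_list = candidates[np.argsort(-np.abs(log2_fc[candidates]))][:n_top]
print("selected DEGs:", len(deg_list), "| true positives among them:",
      int(np.sum(deg_list < 50)))
```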