Acquisitions driven by stock overvaluation: are they good deals?
Theory and recent evidence suggest that overvalued firms can create value for shareholders if they exploit their overvaluation by using their stock as currency to purchase less overvalued firms. We challenge this idea and show that, in practice, overvalued acquirers significantly overpay for their targets. These acquisitions do not, in turn, lead to synergy gains. Moreover, these acquisitions appear to be concentrated among acquirers with the largest governance problems. CEO compensation, not shareholder value creation, appears to be the main motive behind acquisitions by overvalued acquirers.
Wavelet Deconvolution in a Periodic Setting Using Cross-Validation
The wavelet deconvolution method WaveD using band-limited wavelets offers both theoretical and computational advantages over traditional compactly supported wavelets. A translation-invariant version of WaveD with a fast algorithm improves performance further. We introduce a twofold cross-validation method for choosing the threshold parameter and the finest resolution level in WaveD, and compare the algorithm's performance with fixed-constant tuning and the default tuning in WaveD.
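The twofold idea can be illustrated with a deliberately simplified sketch. The example below is not WaveD: it swaps the wavelet transform for a plain Fourier transform, uses hard thresholding, and splits the data into even- and odd-indexed samples so one half tunes the threshold against the other. All names and cutoff values are illustrative.

```python
import numpy as np

def cv_threshold(y, thresholds):
    """Twofold cross-validation for a denoising threshold (illustrative
    stand-in for the WaveD tuning described above): threshold the Fourier
    coefficients of the even-indexed samples, then score the estimate
    against the held-out odd-indexed samples."""
    even, odd = y[0::2], y[1::2]
    best_t, best_err = thresholds[0], np.inf
    for t in thresholds:
        c = np.fft.rfft(even)
        c[np.abs(c) < t * np.sqrt(len(even))] = 0.0  # hard threshold
        est = np.fft.irfft(c, n=len(even))
        err = np.mean((est - odd) ** 2)              # hold-out error
        if err < best_err:
            best_t, best_err = t, err
    return best_t

rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 4 * np.pi, 256))
y = x + rng.normal(0, 0.3, 256)
t_star = cv_threshold(y, np.linspace(0.1, 3.0, 20))
```

The even/odd split exploits the smoothness of the underlying signal: neighbouring samples are nearly interchangeable, so one half serves as an honest prediction target for the other.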
Wavelet Reconstruction of Nonuniformly Sampled Signals
For the reconstruction of a nonuniformly sampled signal from its noisy observations, we propose a level-dependent l1-penalized wavelet reconstruction method. The LARS/Lasso algorithm is applied to solve the Lasso problem. The data-adaptive choice of the regularization parameters is based on the AIC, with the degrees of freedom estimated by the number of nonzero elements in the Lasso solution. Simulations conducted on some commonly used 1-D test signals illustrate that the proposed method has good empirical properties.
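The AIC rule with df = (number of nonzeros) is easiest to see in the special case of an orthonormal basis, where the Lasso solution reduces to soft-thresholding. The sketch below makes that simplification (and collapses the level-dependent penalty to a single parameter, with no nonuniform sampling modelled), so it is an illustration of the selection rule rather than the paper's method:

```python
import numpy as np

def soft_threshold(x, lam):
    # Lasso solution under an orthonormal design: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def aic_select(coeffs, lams):
    """Pick the penalty minimizing AIC = n*log(RSS/n) + 2*df,
    where df is the number of nonzero Lasso coefficients."""
    n = coeffs.size
    best_lam, best_aic = None, np.inf
    for lam in lams:
        est = soft_threshold(coeffs, lam)
        rss = np.sum((coeffs - est) ** 2)
        df = np.count_nonzero(est)
        aic = n * np.log(rss / n + 1e-12) + 2 * df
        if aic < best_aic:
            best_lam, best_aic = lam, aic
    return best_lam

rng = np.random.default_rng(0)
true = np.zeros(128)
true[:5] = [5.0, -4.0, 3.0, 6.0, -5.0]      # sparse coefficient vector
noisy = true + rng.normal(0, 1, 128)
lam_star = aic_select(noisy, np.linspace(0.1, 3.0, 30))
```

Counting nonzeros as degrees of freedom is what lets a plain information criterion trade off fit against sparsity without a separate noise-variance estimate.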
Calibration of the SNO+ experiment
The main goal of the SNO+ experiment is to perform a low-background, high-isotope-mass search for neutrinoless double-beta decay, employing 780 tonnes of liquid scintillator loaded with tellurium, in its initial phase at 0.5% by mass for a total mass of 1330 kg of ¹³⁰Te. The SNO+ physics program also includes measurements of geo- and reactor neutrinos, and of supernova and solar neutrinos. Calibrations are an essential component of the SNO+ data-taking and analysis plan. The achievement of the physics goals requires calibration that is both extensive and regular. This serves several goals: the measurement of several detector parameters, the validation of the simulation model, and the constraint of systematic uncertainties on the reconstruction and particle identification algorithms. SNO+ faces stringent radiopurity requirements which, in turn, largely determine the materials selection, sealing, and overall design of both the sources and the deployment systems. In fact, to avoid frequent access to the inner volume of the detector, several permanent optical calibration systems have been developed and installed outside that volume. At the same time, the calibration source internal deployment system was redesigned as a fully sealed system, with more stringent material selection, but following the same working principle as the system used in SNO. This poster described the overall SNO+ calibration strategy, discussed the several new and innovative sources, both optical and radioactive, and covered the developments on source deployment systems.
Inversion for Non-Smooth Models with Physical Bounds
Geological processes produce structures at multiple scales. A discontinuity in the subsurface can occur due to layering, or to tectonic activity such as faulting, folding, and fracturing. Traditional approaches to inverting geophysical data employ smoothness constraints. Such methods produce smooth models, and therefore sharp contrasts in the medium, such as lithological boundaries, are not easily discernible. Methods that can produce non-smooth models help in interpreting geological discontinuities. In this paper we examine various approaches to obtaining non-smooth models from a finite set of noisy data. Broadly, they fall into two categories: (1) imposing non-smooth regularization in the inverse problem, and (2) solving the inverse problem in a domain that provides multi-scale resolution, such as the wavelet domain. In addition to applying non-smooth constraints, we further constrain the inverse problem to obtain models within prescribed physical bounds. The optimization with non-smooth regularization and physical bounds is solved using an interior point method. We demonstrate the applicability and usefulness of these methods with realistic synthetic examples and provide a field example from crosswell radar data.
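A minimal sketch of bounded, non-smooth inversion is shown below. Note two substitutions relative to the text: projected subgradient descent stands in for the interior point method, and the forward operator is a made-up cumulative-average kernel rather than a real geophysical operator. The total-variation penalty sum |m[i+1] - m[i]| is the non-smooth regularizer, and the box constraint supplies the physical bounds.

```python
import numpy as np

def invert_tv_bounded(G, d, beta, lo, hi, iters=500, step=1e-2):
    """Minimize ||G m - d||^2 + beta * TV(m) subject to lo <= m <= hi
    by projected subgradient descent (an illustrative stand-in for an
    interior point solver)."""
    m = np.zeros(G.shape[1])
    for _ in range(iters):
        grad = 2.0 * G.T @ (G @ m - d)
        # subgradient of the total variation sum |m[i+1] - m[i]|
        tv_sub = np.zeros_like(m)
        s = np.sign(np.diff(m))
        tv_sub[:-1] -= s
        tv_sub[1:] += s
        # gradient step, then projection onto the physical bounds
        m = np.clip(m - step * (grad + beta * tv_sub), lo, hi)
    return m

# blocky (non-smooth) true model observed through a smoothing kernel
n = 60
true = np.where(np.arange(n) < 30, 0.2, 0.8)
G = np.tril(np.ones((n, n))) / n          # hypothetical forward operator
d = G @ true
m_est = invert_tv_bounded(G, d, beta=1e-3, lo=0.0, hi=1.0)
```

Because the TV penalty charges for total jump size rather than for squared slope, it tolerates a few sharp steps; a smoothness (l2-gradient) penalty would smear the same boundary over many cells.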
Electric Polarizability of Neutral Hadrons from Lattice QCD
By simulating a uniform electric field on a lattice and measuring the change
in the rest mass, we calculate the electric polarizability of neutral mesons
and baryons using the methods of quenched lattice QCD. Specifically, we measure
the electric polarizability coefficient from the quadratic response to the
electric field for 10 particles: the vector mesons and ; the
octet baryons n, , , , and ;
and the decouplet baryons , , and .
Independent calculations using two fermion actions were done for consistency
and comparison purposes. One calculation uses Wilson fermions with a lattice
spacing of fm. The other uses tadpole improved L\"usher-Weiss gauge
fields and clover quark action with a lattice spacing fm. Our results
for neutron electric polarizability are compared to experiment.Comment: 25 pages, 20 figure
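Extracting a polarizability from the quadratic response amounts to fitting the measured mass against field strength and reading off the quadratic coefficient. The sketch below assumes the common convention δm(E) ≈ -½αE²; the field values, rest mass, and polarizability are synthetic, not values from the paper.

```python
import numpy as np

# Hypothetical rest masses measured at several uniform field strengths
# (lattice units), generated from delta_m = -0.5 * alpha * E**2
E = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
alpha_true = 2.0
m0 = 0.94
mass = m0 - 0.5 * alpha_true * E**2

# Fit the quadratic response: mass ≈ a*E^2 + b*E + c
a, b, c = np.polyfit(E, mass, 2)
alpha_fit = -2.0 * a          # recover the polarizability coefficient
```

A linear term in the fit would signal a nonzero electric dipole moment; for the neutral hadrons considered here it should be consistent with zero, so the quadratic coefficient carries the physics.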
Posttraumatic Stress Disorder and Community Collective Efficacy following the 2004 Florida Hurricanes
There is a paucity of research investigating the relationship of community-level characteristics such as collective efficacy and posttraumatic stress following disasters. We examine the association of collective efficacy with probable posttraumatic stress disorder and posttraumatic stress disorder symptom severity in Florida public health workers (n = 2249) exposed to the 2004 hurricane season using a multilevel approach. Anonymous questionnaires were distributed electronically to all Florida Department of Health personnel nine months after the 2004 hurricane season. The collected data were used to assess posttraumatic stress disorder and collective efficacy measured at both the individual and zip code levels. The majority of participants were female (80.42%), and ages ranged from 20 to 78 years (median = 49 years); 73.91% were European American, 13.25% were African American, and 8.65% were Hispanic. Using multilevel analysis, our data indicate that higher community-level and individual-level collective efficacy were associated with a lower likelihood of having posttraumatic stress disorder (OR = 0.93, CI = 0.88–0.98; and OR = 0.94, CI = 0.92–0.97, respectively), even after adjusting for individual sociodemographic variables, community socioeconomic characteristic variables, individual injury/damage, and community storm damage. Higher levels of community-level collective efficacy and individual-level collective efficacy were also associated with significantly lower posttraumatic stress disorder symptom severity (b = −0.22, p < 0.01; and b = −0.17, p < 0.01, respectively), after adjusting for the same covariates. Lower rates of posttraumatic stress disorder are associated with communities with higher collective efficacy. Programs enhancing community collective efficacy may be an important part of prevention practices and possibly lead to a reduction in the rate of posttraumatic stress disorder post-disaster.
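The odds ratios reported here are the exponentiated coefficients of a multilevel logistic model. As a purely numeric illustration (the coefficient value below is hypothetical, chosen to match the quoted OR of 0.93), the conversion from a log-odds coefficient to an odds ratio is:

```python
import math

# Hypothetical log-odds coefficient for a one-unit increase in
# community-level collective efficacy
b = -0.0726
odds_ratio = math.exp(b)   # exponentiating log-odds gives an odds ratio
```

An odds ratio below 1 is what encodes the protective association: each unit of collective efficacy multiplies the odds of probable PTSD by that factor, holding the listed covariates fixed.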
The Reproducibility of Lists of Differentially Expressed Genes in Microarray Studies
Reproducibility is a fundamental requirement in scientific experiments and clinical contexts. Recent publications have raised concerns about the reliability of microarray technology because of the apparent lack of agreement between lists of differentially expressed genes (DEGs). In this study we demonstrate that (1) such discordance may stem from ranking and selecting DEGs solely by statistical significance (P) derived from widely used simple t-tests; (2) when fold change (FC) is used as the ranking criterion, the lists become much more reproducible, especially when fewer genes are selected; and (3) the instability of short DEG lists based on P cutoffs is an expected mathematical consequence of the high variability of the t-values. We recommend the use of FC ranking plus a non-stringent P cutoff as a baseline practice in order to generate more reproducible DEG lists. The FC criterion enhances reproducibility, while the P criterion balances sensitivity and specificity.
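The recommended FC-ranking-plus-P-cutoff rule can be sketched as follows. Everything in the example is illustrative: synthetic log2 expression data, a plain two-sample t-test (via SciPy) rather than any specific microarray pipeline, and arbitrary cutoff values.

```python
import numpy as np
from scipy import stats

def select_degs(expr_a, expr_b, n_top=100, p_cutoff=0.05):
    """Rank genes by absolute log2 fold change, keeping only genes that
    pass a non-stringent t-test P cutoff (the FC-plus-P rule).
    expr_a, expr_b: genes x samples arrays of log2 expression."""
    fc = expr_a.mean(axis=1) - expr_b.mean(axis=1)   # log2 fold change
    _, p = stats.ttest_ind(expr_a, expr_b, axis=1)
    passed = np.where(p < p_cutoff)[0]               # P filters...
    order = passed[np.argsort(-np.abs(fc[passed]))]  # ...FC ranks
    return order[:n_top]

rng = np.random.default_rng(1)
a = rng.normal(0, 1, (500, 6))
b = rng.normal(0, 1, (500, 6))
a[:20] += 2.0                       # 20 genes with a real expression shift
degs = select_degs(a, b, n_top=20)
```

Note the division of labour the abstract argues for: P is used only as a loose filter against pure noise, while FC, which is far more stable across replicated experiments, determines the ordering of the list.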