
    Density-functional study of Cu atoms, monolayers, and coadsorbates on polar ZnO surfaces

    The structure and electronic properties of single Cu atoms, copper monolayers, and thin copper films on the polar oxygen- and zinc-terminated surfaces of ZnO are studied using periodic density-functional calculations. We find that the binding energy of Cu atoms depends sensitively on how charge neutrality of the polar surfaces is achieved. Bonding is very strong if the surfaces are stabilized by an electronic mechanism that leads to partially filled surface bands. As soon as the surface bands are filled (either by partial Cu coverage, by coadsorbates, or by the formation of defects), the binding energy decreases significantly. In this case, values very similar to those found for nonpolar surfaces and for copper on finite ZnO clusters are obtained. Possible implications of these observations for the growth mode of copper on polar ZnO surfaces and their importance in catalysis are discussed. Comment: 6 pages with 2 postscript figures embedded. Uses REVTEX and epsf macros
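
    For orientation, the binding energy discussed above is conventionally obtained from total-energy differences in periodic DFT calculations; the sketch below is the standard definition with our own sign convention, not an expression quoted from the paper.

```latex
% Conventional adsorbate binding (adsorption) energy from DFT total energies.
% Sign convention assumed here: E_b > 0 means adsorption is energetically favorable.
% For a monolayer or film, the result is usually normalized per Cu atom.
E_{\mathrm{b}} \;=\; E_{\mathrm{ZnO\ slab}} \;+\; E_{\mathrm{Cu}} \;-\; E_{\mathrm{Cu/ZnO}}
```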

    Structure and Magnetism of Neutral and Anionic Palladium Clusters

    The properties of neutral and anionic Pd_N clusters were investigated with spin-density-functional calculations. The ground-state structures are three-dimensional for N>3, and they are magnetic, with a spin triplet for the 2<=N<=7 neutral clusters and a spin nonet for N=13. Structural and spin isomers were determined, and an anomalous increase of the magnetic moment with temperature is predicted for a Pd_7 ensemble. Vertical electron detachment and ionization energies were calculated, and the former agree well with measured values for anionic Pd_N clusters. Comment: 5 pages, 3 figures, fig. 2 in color, accepted to Phys. Rev. Lett. (2001)
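
    The two quantities compared with experiment above are the vertical detachment and ionization energies; the block below restates their standard definitions (our notation, not the paper's).

```latex
% Vertical detachment energy of the anion and vertical ionization energy of the
% neutral cluster: total-energy differences taken at a frozen geometry, namely
% the equilibrium geometry R^-_N of the anion and R^0_N of the neutral, respectively.
\mathrm{VDE}(N) \;=\; E_{\mathrm{neutral}}\bigl(R^{-}_{N}\bigr) \;-\; E_{\mathrm{anion}}\bigl(R^{-}_{N}\bigr)
\qquad
\mathrm{vIP}(N) \;=\; E_{\mathrm{cation}}\bigl(R^{0}_{N}\bigr) \;-\; E_{\mathrm{neutral}}\bigl(R^{0}_{N}\bigr)
```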

    Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    We address the issue of finding an optimal detection method for a discontinuous or intermittent gravitational-wave stochastic background. Such a signal might sound something like popcorn popping. We derive an appropriate version of the maximum likelihood detection statistic and compare its performance to that of the standard cross-correlation statistic, both analytically and with Monte Carlo simulations. The maximum likelihood statistic performs better than the cross-correlation statistic when the background is sufficiently non-Gaussian. For both ground- and space-based detectors, this results in a gain factor, ranging roughly from 1 to 3, in the minimum gravitational-wave energy density necessary for detection, depending on the duty cycle of the background. Our analysis is exploratory: we assume that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. Before this detection method can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise. Comment: 25 pages, 12 figures, submitted to Physical Review D, added revisions in response to reviewers' comments
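
    The comparison rests on Monte Carlo simulations of an intermittent background seen by two collocated, aligned detectors with white Gaussian noise. The sketch below reproduces only the cross-correlation side of that setup as a toy Python experiment; the duty cycle, burst amplitude, and false-alarm threshold are arbitrary assumptions, and the paper's maximum likelihood statistic is not implemented.

```python
# Toy Monte Carlo: cross-correlation statistic for a "popcorn" (intermittent)
# background in two collocated, aligned detectors with unit white Gaussian noise.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 4096       # samples per simulated observation
duty_cycle = 0.05      # fraction of samples containing a burst (assumed)
burst_rms = 1.0        # burst amplitude relative to the detector noise (assumed)
n_trials = 2000

def cross_corr_stat(duty):
    """One realization of the cross-correlation statistic."""
    burst_on = rng.random(n_samples) < duty
    common = burst_on * rng.normal(0.0, burst_rms, n_samples)   # shared GW signal
    d1 = common + rng.normal(0.0, 1.0, n_samples)               # detector 1
    d2 = common + rng.normal(0.0, 1.0, n_samples)               # detector 2
    return np.sum(d1 * d2) / np.sqrt(n_samples)  # ~unit variance for noise alone

noise_only = np.array([cross_corr_stat(0.0) for _ in range(n_trials)])
with_signal = np.array([cross_corr_stat(duty_cycle) for _ in range(n_trials)])

threshold = np.quantile(noise_only, 0.99)        # ~1% false-alarm probability
print(f"threshold = {threshold:.2f}, "
      f"detection probability = {np.mean(with_signal > threshold):.2f}")
```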

    Inference with interference between units in an fMRI experiment of motor inhibition

    An experimental unit is an opportunity to randomly apply or withhold a treatment. There is interference between units if the application of the treatment to one unit may also affect other units. In cognitive neuroscience, a common form of experiment presents a sequence of stimuli or requests for cognitive activity at random to each experimental subject and measures biological aspects of brain activity that follow these requests. Each subject is then many experimental units, and interference between units within an experimental subject is likely, in part because the stimuli follow one another quickly and in part because human subjects learn or become experienced or primed or bored as the experiment proceeds. We use a recent fMRI experiment concerned with the inhibition of motor activity to illustrate and further develop recently proposed methodology for inference in the presence of interference. A simulation evaluates the power of competing procedures. Comment: Published in the Journal of the American Statistical Association at http://www.tandfonline.com/doi/full/10.1080/01621459.2012.655954 . The R package cin (Causal Inference for Neuroscience) implementing the proposed method is freely available on CRAN at https://CRAN.R-project.org/package=cin
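
    As a point of contrast with the methodology developed in the paper (which is not reproduced here), the snippet below sketches the simplest Fisher-style randomization test for one subject's trial sequence: under the sharp null of no treatment effect on any trial, all responses are fixed, so re-randomizing the assignment labels gives a valid reference distribution even when trials interfere. The data and effect size are synthetic.

```python
# Generic randomization test of the sharp null of no effect for one subject's
# sequence of trials; synthetic data, not the paper's proposed procedure.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 120
treated = rng.permutation(np.repeat([True, False], n_trials // 2))   # random design
response = rng.normal(0.0, 1.0, n_trials) + 0.4 * treated            # synthetic contrast

def mean_difference(assign, y):
    return y[assign].mean() - y[~assign].mean()

observed = mean_difference(treated, response)
null_draws = np.array([mean_difference(rng.permutation(treated), response)
                       for _ in range(5000)])                        # re-randomize labels
p_value = np.mean(np.abs(null_draws) >= abs(observed))
print(f"observed difference = {observed:.3f}, randomization p-value = {p_value:.4f}")
```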

    Combining support vector machines and segmentation algorithms for efficient anomaly detection: a petroleum industry application

    Proceedings of: International Joint Conference SOCO’14-CISIS’14-ICEUTE’14, Bilbao, Spain, June 25th–27th, 2014. Anomaly detection is the problem of finding patterns in data that do not conform to expected behavior. When patterns are numerically distant from the rest of the sample, such anomalies are referred to as outliers. Anomaly detection has recently attracted the attention of the research community because of its real-world applications, and the petroleum industry is one of the contexts where these problems are present. Correctly detecting such unusual information gives the decision maker the capacity to act on the system in order to avoid, correct, or react to the situations associated with it. Heavy extraction machines for pumping and generation operations, such as turbomachines, are each intensively monitored by hundreds of sensors that send high-frequency measurements for damage prevention. To deal with this, and with the lack of labeled data, we propose combining a fast, high-quality segmentation algorithm with a one-class support vector machine for efficient anomaly detection in turbomachines. We present empirical studies comparing our approach with other methods on benchmark problems and on a real-life application to oil-platform turbomachinery anomaly detection. This work was partially funded by CNPq BJT Project 407851/2012-7 and CNPq PVE Project 314017/2013-
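
    A minimal sketch of the segment-then-score idea follows, with a fixed-length window summary standing in for the paper's segmentation algorithm and scikit-learn's OneClassSVM as the one-class model; the sensor trace, window length, and nu parameter are all invented for illustration.

```python
# Segment a (synthetic) sensor series into windows, summarize each window,
# and flag unusual windows with a one-class SVM trained on "normal" data only.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 60 * np.pi, 6000)) + rng.normal(0, 0.1, 6000)
signal[4200:4260] += 1.5                 # inject an anomalous excursion

def segment_features(x, width=60):
    """Non-overlapping windows summarized by mean, spread, and range."""
    windows = x[: len(x) // width * width].reshape(-1, width)
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            np.ptp(windows, axis=1)])

features = segment_features(signal)
train = features[:60]                    # assume the early record is normal operation
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(train)

flags = model.predict(features)          # +1 = normal, -1 = flagged as anomalous
print("flagged segments:", np.where(flags == -1)[0])
```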

    A global descriptor of spatial pattern interaction in the galaxy distribution

    We present the function J as a morphological descriptor for point patterns formed by the distribution of galaxies in the Universe. This function was recently introduced in the field of spatial statistics and is based on the nearest-neighbor distribution and the void probability function. The J descriptor makes it possible to distinguish clustered (i.e. correlated) from "regular" (i.e. anti-correlated) point distributions. We outline the theoretical foundations of the method, perform tests with a Matérn cluster process as an idealised model of galaxy clustering, and apply the descriptor to galaxies and loose groups in the Perseus-Pisces Survey. A comparison with mock samples extracted from a mixed dark matter simulation shows that the J descriptor can be profitably used to constrain (in this case reject) viable models of cosmic structure formation. Comment: Significantly enhanced version, 14 pages, LaTeX using epsf, aaspp4, 7 eps-figures, accepted for publication in the Astrophysical Journal
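
    For reference, the definition commonly used in spatial statistics, restated here in our notation together with the qualitative reading applied above, is:

```latex
% G(r): nearest-neighbour distance distribution function
% F(r): empty-space (spherical contact) distribution, i.e. one minus the
%       void probability of a ball of radius r
J(r) \;=\; \frac{1 - G(r)}{1 - F(r)},
\qquad
J \equiv 1 \ \text{(Poisson)}, \quad
J < 1 \ \text{(clustered)}, \quad
J > 1 \ \text{(regular)}.
```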

    Crude incidence in two-phase designs in the presence of competing risks.

    Background: In many studies, some information might not be available for the whole cohort; some covariates, or even the outcome, might be ascertained only in selected subsamples. These studies belong to a broad category termed two-phase studies. Common examples include the nested case-control and the case-cohort designs. For two-phase studies, appropriate weighted survival estimates have been derived; however, no estimator of cumulative incidence accounting for competing events has been proposed. This is relevant in the presence of multiple types of events, where estimation of event-type-specific quantities is needed for evaluating outcome. Methods: We develop a nonparametric estimator of the cumulative incidence function of events that accounts for possible competing events. It handles a general sampling design through weights derived from the sampling probabilities. The variance is derived from the influence function of the subdistribution hazard. Results: The proposed method shows good performance in simulations. It is applied to estimate the crude incidence of relapse in childhood acute lymphoblastic leukemia in groups defined by a genotype not available for everyone in a cohort of nearly 2000 patients, where death due to toxicity acted as a competing event. In a second example the aim was to estimate engagement in care in a cohort of HIV patients in a resource-limited setting, where for some patients the outcome itself was missing due to loss to follow-up. A sampling-based approach was used to ascertain the outcome in a subsample of lost patients and to obtain a valid estimate of connection to care. Conclusions: A valid estimator of the cumulative incidence of events accounting for competing risks under a general sampling design from an infinite target population is derived.
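
    The sketch below illustrates the general idea of an inverse-probability-weighted, Aalen-Johansen-type cumulative incidence computation with competing events; it is written in the spirit of, but not identical to, the estimator and variance derivation proposed in the paper, and all data are invented.

```python
# Weighted cumulative incidence with competing risks: weights are inverse
# sampling probabilities (all equal to 1 recovers the usual Aalen-Johansen form).
import numpy as np

def weighted_cuminc(time, event, weight, cause=1):
    """time: observed times; event: 0 = censored, 1, 2, ... = event type;
    weight: inverse probability of being sampled in phase two."""
    order = np.argsort(time)
    time, event, weight = time[order], event[order], weight[order]

    surv, cif, grid, curve = 1.0, 0.0, [], []
    for t in np.unique(time[event > 0]):
        at_risk = weight[time >= t].sum()                   # weighted risk set
        d_any = weight[(time == t) & (event > 0)].sum()     # events of any type
        d_cause = weight[(time == t) & (event == cause)].sum()
        cif += surv * d_cause / at_risk                     # CIF increment uses S(t-)
        surv *= 1.0 - d_any / at_risk                       # update overall survival
        grid.append(t)
        curve.append(cif)
    return np.array(grid), np.array(curve)

# toy usage with made-up data
t = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 7.0, 8.0, 9.0])
e = np.array([1, 0, 2, 1, 0, 2, 1, 0])
w = np.array([1.0, 1.0, 2.0, 1.0, 2.0, 1.0, 1.0, 2.0])     # 1 / sampling probability
print(weighted_cuminc(t, e, w, cause=1))
```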

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made the numerical evaluation of sophisticated statistical models possible, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans do when faced with uncertainty is challenging in several major ways: (1) Finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables. (2) The space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity. Comment: 37 pages
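
    One way to make "optimal statistical estimator" precise in the Wald-style, worst-case sense the paper builds on is the minimax formulation below; the notation is ours and is only a schematic restatement of standard decision theory, not an equation taken from the paper.

```latex
% Theta: candidate estimators/models computable from the available data
% A    : admissible scenarios consistent with the information, assumptions, and beliefs
% L    : loss incurred by estimator theta when the true scenario is mu
\theta^{\ast} \;\in\; \operatorname*{arg\,min}_{\theta \in \Theta}\;
\sup_{\mu \in \mathcal{A}} \; \mathbb{E}_{\mu}\!\left[\, L(\mu,\theta) \,\right]
```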

    The A-dependence of ψ production in π− nucleus collisions at 530 GeV/c

    The E672/E706 Spectrometer, located in the MW beam at Fermilab, was used to collect data on events containing a pair of muons with large effective mass in the final state. The momentum of the incident pions and protons was 530 GeV/c. Nuclear targets included Be, C, Al, Cu, and Pb. We report a preliminary measurement of the A-dependence of the per-nucleus cross section for forward J/ψ production. The apparatus also detected charged particles and γ's produced in association with the muon pair. The expected physics results on the hadroproduction of χ states and beauty particles are discussed. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/87663/2/624_1.pdf
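
    For context, such A-dependence measurements are conventionally summarized by fitting the per-nucleus cross section to a power law in the mass number; the parameterization below is the standard one, quoted here without the value of the exponent obtained by the experiment.

```latex
% sigma_A : per-nucleus cross section for forward J/psi production on a target
%           of mass number A; alpha = 1 would correspond to scaling with the
%           number of nucleons.
\sigma_{A} \;=\; \sigma_{0}\, A^{\alpha}
```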