
    Alternative sampling for variational quantum Monte Carlo

    Expectation values of physical quantities in many-body quantum mechanics may be obtained accurately by evaluating multi-dimensional integrals, and these integrals may be estimated using Monte Carlo methods. A previous publication showed that for the simplest and most commonly applied sampling strategy in continuum Quantum Monte Carlo, the random error in the resulting estimates is not well controlled: at best the Central Limit Theorem is valid only in its weakest form, and at worst it is invalid and is replaced by a Generalised Central Limit Theorem with non-Normal random error. In both cases the random error is not controlled. Here we consider a new 'residual sampling strategy' that reintroduces the Central Limit Theorem in its strongest form and provides full control of the random error in estimates. Estimates of the total energy and of the variance of the local energy within Variational Monte Carlo are considered in detail, and the approach may be generalised to expectation values of other operators and to other variants of the Quantum Monte Carlo method. Comment: 14 pages, 9 figures.
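    The following is a minimal Python sketch of the standard VMC estimator whose error behaviour the abstract discusses: configurations are drawn from |Psi|^2 by a Metropolis walk and the local energy is averaged. The toy system (a 1D harmonic oscillator with a Gaussian trial wavefunction) and all names are illustrative; this is not the residual-sampling strategy proposed in the paper.

        # Standard VMC estimate for a 1D harmonic oscillator with trial
        # wavefunction psi(x) = exp(-a x^2); illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        a = 0.45                      # variational parameter (exact ground state at a = 0.5)

        def log_psi2(x):              # log |psi|^2
            return -2.0 * a * x**2

        def local_energy(x):          # E_L = (H psi)/psi for H = -(1/2) d^2/dx^2 + (1/2) x^2
            return a + x**2 * (0.5 - 2.0 * a**2)

        # Metropolis sampling of |psi|^2
        x, samples = 0.0, []
        for step in range(200_000):
            x_new = x + rng.normal()
            if np.log(rng.random()) < log_psi2(x_new) - log_psi2(x):
                x = x_new
            if step >= 10_000:        # discard burn-in
                samples.append(x)

        E_L = local_energy(np.array(samples))
        print("total energy estimate:       ", E_L.mean())
        print("variance of the local energy:", E_L.var())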

    Docketology, District Courts, and Doctrine

    Empirical legal scholars have traditionally modeled trial court judicial opinion writing by assuming that judges act rationally, seeking to maximize their influence by writing opinions in politically important cases. To test such views, we collected data from a thousand cases in four different jurisdictions. We recorded information about every judicial action over each case’s life, ranging from the demographic characteristics, workload, and experience of the writing judge; to information about the case, including its jurisdictional basis, complexity, attorney characteristics, and motivating legal theory; to information about the individual orders themselves, including the relevant procedural posture and the winning party. Our data reveal opinions to be rare events in the litigation process: only 3% of all orders, and only 17% of orders applying facts to law, are fully reasoned. Using a hierarchical linear model, we conclude that judges do not write opinions to curry favor with the public or with powerful audiences, nor do they write more when they are less experienced, seeking to advance their careers, or handling more interesting case types. Instead, opinion writing is significantly affected by procedure: we predict that judges are three times more likely to write an opinion on a summary judgment motion than on a discovery motion, all else held equal. Judges similarly write more in cases that are later appealed and in commercial cases, while writing less in tort and prisoner cases. Finally, jurisdictional culture is very important. These findings challenge the conventional wisdom and suggest the need for further research on the behavioral aspects of opinion writing.
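    As a rough illustration of how such effect sizes would be read off a fitted model, the Python sketch below uses a plain logistic regression with jurisdiction dummies; this is a simplified stand-in for the hierarchical linear model used in the study, and the data file and column names are hypothetical.

        # Simplified stand-in for the study's hierarchical model: a logistic
        # regression of "was a reasoned opinion written?" on order and case
        # characteristics.  The file and column names are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        orders = pd.read_csv("orders.csv")   # one row per judicial order

        fit = smf.logit(
            "opinion ~ C(motion_type) + C(case_type) + appealed"
            " + judge_experience + C(jurisdiction)",
            data=orders,
        ).fit()

        # Odds ratios by motion type; the study reports roughly a threefold
        # difference between summary judgment and discovery motions.
        print(np.exp(fit.params.filter(like="motion_type")))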

    Invariance Properties of Schoenberg's Tone Row System

    1 online resource (PDF, 24 pages)

    Geometric Path Integrals. A Language for Multiscale Biology and Systems Robustness

    In this paper we suggest that, under suitable conditions, supervised learning can provide the basis for formulating quantitative questions at the microscopic level on the phenotype structure of multicellular organisms. The problem of explaining the robustness of the phenotype structure is rephrased as a real geometrical problem on a fixed domain. We further suggest a generalization of path integrals that reduces the problem of deciding whether a given molecular network can generate specific phenotypes to a numerical property of a robustness function with complex output, for which we give a heuristic justification. Finally, we use our formalism to interpret a pointedly quantitative developmental biology problem on the allowed number of pairs of legs in centipedes.

    Sparsest factor analysis for clustering variables: a matrix decomposition approach

    We propose a new procedure for sparse factor analysis (FA) in which each variable loads only one common factor, so that the loading matrix has a single nonzero element in each row and zeros elsewhere. Such a loading matrix is the sparsest possible for a given number of variables and common factors; for this reason, the proposed method is named sparsest FA (SSFA). It may also be called FA-based variable clustering, since the variables loading the same common factor can be classified into a cluster. In SSFA, all model parts of FA (common factors, their correlations, loadings, unique factors, and unique variances) are treated as fixed unknown parameter matrices, and their least squares function is minimized through a specific data matrix decomposition. A useful feature of the algorithm is that the matrix of common factor scores is re-parameterized using a QR decomposition in order to estimate factor correlations efficiently. A simulation study shows that the proposed procedure can exactly identify the true sparsest models, and real data examples demonstrate the usefulness of the variable clustering performed by SSFA.
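    For concreteness, the Python sketch below builds the kind of loading matrix described above, with a single nonzero element per row; the clustering step (k-means on the rows of the correlation matrix) is an illustrative stand-in and not the SSFA matrix-decomposition algorithm itself.

        # Sparsest-possible loading structure: each variable loads exactly one
        # common factor.  Toy data and k-means clustering are illustrative only.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 9))          # 200 observations of p = 9 variables
        X[:, 3:6] += X[:, [0]]                 # induce correlated blocks
        X[:, 6:9] += X[:, [1]]

        R = np.corrcoef(X, rowvar=False)       # p x p correlation matrix
        m = 3                                  # number of common factors / clusters
        labels = KMeans(n_clusters=m, n_init=10, random_state=0).fit_predict(R)

        B = np.zeros((X.shape[1], m))          # loading matrix: one nonzero per row
        for j, k in enumerate(labels):
            B[j, k] = 1.0                      # placeholder value; SSFA estimates it by least squares

        print("variable-to-factor assignment:", labels)
        print(B)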

    Multi-Target Prediction: A Unifying View on Problems and Methods

    Multi-target prediction (MTP) is concerned with the simultaneous prediction of multiple target variables of diverse types. Owing to its enormous application potential, it has developed into an active and rapidly expanding research field that combines several subfields of machine learning, including multivariate regression, multi-label classification, multi-task learning, dyadic prediction, zero-shot learning, network inference, and matrix completion. In this paper, we present a unifying view of MTP problems and methods. First, we formally discuss commonalities and differences between existing MTP problems; to this end, we introduce a general framework that covers the above subfields as special cases. As a second contribution, we provide a structured overview of MTP methods, accomplished by identifying a number of key properties that distinguish such methods and determine their suitability for different types of problems. Finally, we discuss a few challenges for future research.
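    As a concrete instance of one subfield covered by this framework, the short Python sketch below performs multivariate regression, i.e. the joint prediction of several real-valued targets from shared features; the data and choice of base learner are illustrative only.

        # Multivariate regression as one special case of multi-target prediction.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.ensemble import GradientBoostingRegressor

        X, Y = make_regression(n_samples=500, n_features=10, n_targets=3,
                               noise=0.1, random_state=0)
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

        # One independent regressor per target: the simplest baseline, which the
        # survey's taxonomy distinguishes from methods exploiting target dependencies.
        model = MultiOutputRegressor(GradientBoostingRegressor()).fit(X_tr, Y_tr)
        print("mean R^2 across targets:", model.score(X_te, Y_te))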

    Globular cluster systems in fossil groups: NGC6482, NGC1132 and ESO306-017

    We study the globular cluster (GC) systems in three representative fossil group galaxies: the nearest (NGC6482), the prototype (NGC1132), and the most massive known to date (ESO306-017). This is the first systematic study of GC systems in fossil groups. Using data obtained with the Hubble Space Telescope Advanced Camera for Surveys in the F475W and F850LP filters, we determine the GC color and magnitude distributions, surface number density profiles, and specific frequencies. In all three systems, the GC color distribution is bimodal, the GCs are spatially more extended than the starlight, and the red population is more concentrated than the blue. The specific frequencies seem to scale with the optical luminosities of the central galaxies and span a range similar to that of normal bright elliptical galaxies in rich environments. We also analyze the galaxy surface brightness distributions to look for deviations from the best-fit Sérsic profiles, and we find evidence of recent dynamical interaction in all three fossil group galaxies. Using X-ray data from the literature, we find that the X-ray luminosity and metallicity appear to correlate with the number of GCs and their mean color, respectively. Interestingly, although NGC6482 has the lowest mass and luminosity in our sample, its GC system has the reddest mean color, and the surrounding X-ray gas has the highest metallicity. Comment: 16 pages, 13 figures; accepted for publication in A&A.
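    One step mentioned above, characterising the colour bimodality, can be sketched in a few lines of Python: a one- versus two-component Gaussian mixture fit to the GC colours, compared by BIC. The input file of F475W - F850LP colours is hypothetical, and this is a generic illustration of such a test, not the authors' exact procedure.

        # Two-component Gaussian mixture fit to globular-cluster colours as a
        # simple bimodality check; the input data file is hypothetical.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        colors = np.loadtxt("gc_colors.txt").reshape(-1, 1)   # F475W - F850LP per GC candidate

        gmm1 = GaussianMixture(n_components=1, random_state=0).fit(colors)
        gmm2 = GaussianMixture(n_components=2, random_state=0).fit(colors)

        print("BIC, unimodal:", gmm1.bic(colors))
        print("BIC, bimodal :", gmm2.bic(colors))             # lower BIC favours two components
        print("blue/red peak colours:", np.sort(gmm2.means_.ravel()))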

    Current measures of metabolic heterogeneity within cervical cancer do not predict disease outcome

    Background: A previous study evaluated the intra-tumoral heterogeneity observed in the uptake of F-18 fluorodeoxyglucose (FDG) in pre-treatment positron emission tomography (PET) scans of cancers of the uterine cervix as an indicator of disease outcome. This was done via a novel statistic which ostensibly measured the spatial variations in intra-tumoral metabolic activity. In this work, we argue that the statistic is intrinsically non-spatial, and that the apparent delineation between unsuccessfully and successfully treated patient groups via that statistic is spurious.
    Methods: We first offer a straightforward mathematical demonstration of our argument. Next, we recapitulate an assiduous re-analysis of the originally published data, which were derived from FDG-PET imagery. Finally, we present the results of a principal component analysis of FDG-PET images similar to those previously analyzed.
    Results: We find that the previously published measure of intra-tumoral heterogeneity is intrinsically non-spatial and is in fact only a surrogate for tumor volume. We also find that an optimized linear combination of more canonical heterogeneity quantifiers does not predict disease outcome.
    Conclusions: Current measures of intra-tumoral metabolic heterogeneity are not predictive of disease outcome, contrary to previous claims. The implications of this finding are that clinical categorization of patients based upon these statistics is invalid, and that more sophisticated, perhaps innately geometric, quantifications of metabolic activity are required to predict disease outcome.
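    The kind of check argued for here can be sketched briefly in Python: if a proposed heterogeneity statistic is only a surrogate for tumor volume, it will correlate almost perfectly with volume across patients. Both input files below are hypothetical per-patient summaries derived from segmented FDG-PET images.

        # Rank correlation between a candidate heterogeneity statistic and tumour
        # volume; the per-patient input files are hypothetical.
        import numpy as np
        from scipy.stats import spearmanr

        heterogeneity = np.loadtxt("heterogeneity_stat.txt")   # one value per patient
        volume_cc = np.loadtxt("tumor_volume_cc.txt")          # matching tumour volumes

        rho, p_value = spearmanr(heterogeneity, volume_cc)
        print(f"Spearman rho = {rho:.3f} (p = {p_value:.2g})")
        # A rho near 1 supports the argument that the statistic adds no spatial
        # information beyond volume itself.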

    Piecewise polynomial approximation of probability density functions with application to uncertainty quantification for stochastic PDEs

    The probability density function (PDF) associated with a given set of samples is approximated by a piecewise-linear polynomial constructed with respect to a binning of the sample space. The kernel functions are a compactly supported basis for the space of such polynomials, i.e. finite element hat functions, that are centered at the bin nodes rather than at the samples, as is the case for the standard kernel density estimation approach. This feature naturally provides an approximation that is scalable with respect to the sample size. On the other hand, unlike other strategies that use a finite element approach, the proposed approximation does not require the solution of a linear system. In addition, a simple rule relating the bin size to the sample size eliminates the need for bandwidth selection procedures. The proposed density estimator integrates to one, does not require a constraint to enforce positivity, and is consistent. The proposed approach is validated through numerical examples in which samples are drawn from known PDFs, and it is also used to approximate the (unknown) PDFs of outputs of interest that depend on the solution of a stochastic partial differential equation.
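    A minimal Python sketch of such an estimator is given below: samples are distributed linearly between neighbouring bin nodes (so the estimate is a sum of hat functions) and then normalised to unit integral. The bin-size rule used here (roughly N^(1/3) nodes) is an assumption for illustration and not necessarily the rule proposed in the paper.

        # Piecewise-linear (hat-function) density estimate on a uniform binning.
        import numpy as np

        def hat_density(samples, n_nodes=None):
            samples = np.asarray(samples, dtype=float)
            n = samples.size
            if n_nodes is None:
                n_nodes = max(int(round(n ** (1.0 / 3.0))), 2)   # illustrative bin-size rule
            nodes = np.linspace(samples.min(), samples.max(), n_nodes)
            h = nodes[1] - nodes[0]

            # Split each sample linearly between its two neighbouring nodes,
            # which makes the estimate a sum of compactly supported hat functions.
            idx = np.clip(((samples - nodes[0]) / h).astype(int), 0, n_nodes - 2)
            w = (samples - nodes[idx]) / h
            counts = np.zeros(n_nodes)
            np.add.at(counts, idx, 1.0 - w)
            np.add.at(counts, idx + 1, w)

            # Normalise so the piecewise-linear interpolant integrates to one
            # (trapezoid rule on the uniform grid); positivity needs no constraint.
            integral = h * (counts.sum() - 0.5 * (counts[0] + counts[-1]))
            return nodes, counts / integral

        rng = np.random.default_rng(0)
        nodes, density = hat_density(rng.normal(size=10_000))
        h = nodes[1] - nodes[0]
        print(h * (density.sum() - 0.5 * (density[0] + density[-1])))   # ~ 1.0 by construction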