
    Inverse Ising inference using all the data

    We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while remaining computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy and that the use of l1-regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes. Comment: 5 pages, 2 figures. Accepted version.
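    The pseudo-likelihood idea behind this abstract can be sketched as one l1-regularized logistic regression per spin, predicting each spin from all the others. The function name, the plain gradient-descent solver, and the learning-rate and regularization values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infer_couplings(S, l1=0.01, lr=0.5, steps=300):
    """Pseudo-likelihood inverse Ising sketch: one l1-regularized logistic
    regression per spin. S is an (n_samples, n_spins) array of +/-1 values."""
    M, N = S.shape
    J = np.zeros((N, N))
    for i in range(N):
        X = np.delete(S, i, axis=1)          # all other spins as predictors
        y = (S[:, i] + 1) / 2.0              # map {-1, +1} -> {0, 1}
        w = np.zeros(N - 1)
        b = 0.0
        for _ in range(steps):               # plain (sub)gradient descent
            p = sigmoid(X @ w + b)
            w -= lr * (X.T @ (p - y) / M + l1 * np.sign(w))
            b -= lr * np.mean(p - y)
        # in the Ising model the logistic weight on spin j equals 2*J_ij
        J[i, np.arange(N) != i] = w / 2.0
    return (J + J.T) / 2.0                   # symmetrize the two estimates
```

    For independent (uncoupled) spins the l1 penalty drives the estimated couplings toward zero, which is the behaviour that makes topology recovery from few samples possible.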

    Modelling Noise and Imprecision in Individual Decisions

    When individuals take part in decision experiments, their answers are typically subject to some degree of noise, error, or imprecision. There are different ways of modelling this stochastic element in the data, and the interpretation of the data can be altered radically depending on the assumptions made about the stochastic specification. This paper presents the results of an experiment which gathered data of a kind that has until now been in short supply. These data strongly suggest that the 'usual' (Fechnerian) assumptions about errors are inappropriate for individual decision experiments. Moreover, they provide striking evidence that core preferences display systematic departures from transitivity which cannot be attributed to any 'error' story. Keywords: error, imprecision, preferences, transitivity.
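    For concreteness, one common Fechnerian specification adds Gaussian noise to the utility difference, so the choice probability is a normal CDF of that difference. This is a generic textbook form, not the specific model the paper tests; the function name and parameters are illustrative.

```python
from math import erf, sqrt

def fechner_choice_prob(u_a, u_b, sigma):
    """P(choose a over b) under a Fechnerian additive-noise model:
    the decision maker compares u_a - u_b perturbed by N(0, sigma^2) noise,
    so P(a) = Phi((u_a - u_b) / sigma), the standard normal CDF."""
    z = (u_a - u_b) / (sigma * sqrt(2))
    return 0.5 * (1.0 + erf(z))
```

    Under this specification choice probabilities are symmetric around 0.5 and transitive in expectation, which is exactly why systematic intransitivity in the data cannot be absorbed by the error term.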

    Data Fusion for QRS Complex Detection in Multi-Lead Electrocardiogram Recordings

    Heart diseases are the main cause of death worldwide. The first step in the diagnosis of these diseases is the analysis of the electrocardiographic (ECG) signal. In turn, ECG analysis begins with the detection of the QRS complex, the waveform with the most energy in the cardiac cycle. Numerous methods have been proposed in the literature for QRS complex detection, but few authors have analyzed the possibility of exploiting the information redundancy present in multiple, simultaneously acquired ECG leads to produce accurate QRS detection. In our previous work we presented such an approach, proposing various data fusion techniques to combine the detections made by an algorithm on multiple ECG leads. In this paper we present further studies that show the advantages of this multi-lead detection approach, analyzing how many leads are necessary to observe an improvement in detection performance. A well-known QRS detection algorithm was used to test the fusion techniques on the St. Petersburg Institute of Cardiological Technics database. Results, evaluated using the detection error rate (DER), show improvement with as few as three leads, although the improvement only becomes reliable when seven or more leads are used. The multi-lead detection approach reduces the error from DER = 3.04% to DER = 1.88%. Further work will aim to improve detection performance by implementing additional fusion steps.
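    One simple way to fuse per-lead detections is a majority vote over a time-tolerance window, scored with the DER mentioned above. The abstract does not specify the fusion rules used, so the clustering scheme, the 50 ms tolerance, and the function names below are assumptions for illustration only.

```python
import numpy as np

def fuse_detections(lead_detections, tol=0.05, min_leads=None):
    """Majority-vote fusion sketch. lead_detections is a list of arrays of
    beat times in seconds, one array per lead. A fused beat is kept when at
    least min_leads (default: a strict majority) of the leads report a
    detection within +/- tol seconds."""
    n_leads = len(lead_detections)
    if min_leads is None:
        min_leads = n_leads // 2 + 1
    all_times = np.sort(np.concatenate(lead_detections))
    fused, i = [], 0
    while i < len(all_times):
        # group every detection falling within tol of the current one
        cluster = all_times[(all_times >= all_times[i]) &
                            (all_times <= all_times[i] + tol)]
        support = sum(np.any(np.abs(d - cluster[0]) <= tol)
                      for d in lead_detections)
        if support >= min_leads:
            fused.append(float(np.median(cluster)))
        i += len(cluster)
    return np.array(fused)

def detection_error_rate(detected, reference, tol=0.05):
    """DER = (false positives + false negatives) / reference beats."""
    matched = sum(np.any(np.abs(detected - r) <= tol) for r in reference)
    fn = len(reference) - matched
    fp = len(detected) - matched
    return (fp + fn) / len(reference)
```

    With this rule a spurious detection on a single lead is discarded, which is the redundancy argument the paper exploits.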

    Are visual cortex maps optimized for coverage?

    The elegant regularity of maps of variables such as ocular dominance, orientation, and spatial frequency in primary visual cortex has prompted many people to suggest that their structure could be explained by an optimization principle. Up to now, the standard way to test this hypothesis has been to generate artificial maps by optimizing a hypothesized objective function and then to compare these artificial maps with real maps using a variety of quantitative criteria. If the artificial maps are similar to the real maps, this provides some evidence that the real cortex may be optimizing a function similar to the one hypothesized. Recently, a more direct method has been proposed for testing whether real maps represent local optima of an objective function (Swindale, Shoham, Grinvald, Bonhoeffer, & Hübener, 2000). In this approach, the value of the hypothesized function is calculated for a real map; the real map is then perturbed in certain ways and the function recalculated. If each of these perturbations worsens the function, it is tempting to conclude that the real map is quite likely to represent a local optimum of that function. In this article, we argue that such perturbation results provide only weak evidence in favor of the optimization hypothesis.
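    The perturbation test described above can be stated abstractly: evaluate the objective on the real map, apply each perturbation, and check whether every perturbed value is worse. The helper below is a generic sketch of that logic (names and the higher-is-better convention are assumptions); the article's point is precisely that passing this test is weak evidence, since it depends on which perturbations are tried.

```python
def is_local_optimum(objective, m, perturbations):
    """Return True when every supplied perturbation worsens the objective.
    objective: callable mapping a map to a float (higher = better coverage).
    perturbations: callables each returning a perturbed copy of the map."""
    base = objective(m)
    return all(objective(p(m)) < base for p in perturbations)
```

    A map can pass for one family of perturbations and fail for another, so the outcome says little about global optimality.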

    Clinical and pathologic characteristics of T-cell lymphoma with a leukemic phase in a raccoon dog (Nyctereutes procyonoides)

    A 7.5-year-old raccoon dog (Nyctereutes procyonoides) from the Henry Doorly Zoo (Omaha, Nebraska) was presented to the veterinary hospital for lethargy and weight loss. On physical examination, splenomegaly and hepatomegaly were noted on palpation and were confirmed by radiographic evaluation. Radiography also demonstrated a mass in the cranial mediastinum. A complete blood cell count revealed marked leukocytosis (115,200 cells/microl), with a predominance of lymphoid cells. The animal was euthanized due to a poor prognosis. Necropsy revealed splenomegaly, hepatomegaly, and a large multiloculated mass in the cranial mediastinum. The histologic and immunohistochemical diagnosis was multicentric T-cell lymphoma with a leukemic phase.

    A latent-variable modelling approach to the acoustic-to-articulatory mapping problem. I

    We present a latent variable approach to the acoustic-to-articulatory mapping problem, where different vocal tract configurations can give rise to the same acoustics. In latent variable modelling, the combined acoustic and articulatory data are assumed to have been generated by an underlying low-dimensional process. A parametric probabilistic model is estimated and mappings are derived from the respective conditional distributions. This has the advantage over other methods, such as articulatory codebooks or neural networks, of directly addressing the nonuniqueness problem. We demonstrate our approach with electropalatographic and acoustic data from the ACCOR database
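    To make the conditional-distribution idea concrete: if the stacked acoustic and articulatory vector is generated by a linear-Gaussian latent model (factor analysis), the mapping E[y | x] follows in closed form from Gaussian conditioning on the joint covariance. This linear sketch is only the simplest instance; it yields a single-branch conditional mean, whereas the paper's mixtures are what capture the many-to-one nonuniqueness. Names and shapes below are assumptions.

```python
import numpy as np

def fa_conditional_map(W, Psi, mu, dx):
    """Factor-analysis sketch over a stacked vector v = (acoustic x, articulatory y):
    v = W z + mu + e,  z ~ N(0, I),  e ~ N(0, diag(Psi)).
    Returns a function x -> E[y | x], derived by conditioning the joint
    Gaussian N(mu, W W^T + diag(Psi)) on its first dx coordinates."""
    C = W @ W.T + np.diag(Psi)              # implied joint covariance
    Cxx = C[:dx, :dx]
    Cyx = C[dx:, :dx]
    A = Cyx @ np.linalg.inv(Cxx)            # regression matrix of y on x
    def mapping(x):
        return mu[dx:] + A @ (x - mu[:dx])
    return mapping
```

    Swapping the roles of x and y in the same joint model gives the forward articulatory-to-acoustic mapping, which is the advantage of modelling the joint density once.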

    Practical Identifiability of Finite Mixtures of Multivariate Bernoulli Distributions

    The class of finite mixtures of multivariate Bernoulli distributions is known to be nonidentifiable; that is, different values of the mixture parameters can correspond to exactly the same probability distribution. In principle, this would mean that sample estimates using this model could give rise to different interpretations. We provide empirical support for the claim that estimation of this class of mixtures can still produce meaningful results in practice, lessening the importance of the identifiability problem. We also show that the expectation-maximization algorithm is guaranteed to converge to a proper maximum likelihood estimate, owing to a property of the log-likelihood surface. Experiments with synthetic data sets show that an original generating distribution can be estimated from a sample. Experiments with an electropalatography data set reveal important structure in the data.
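    The standard EM recursion for this model alternates responsibilities (E-step) with weighted Bernoulli maximum-likelihood updates (M-step). The sketch below follows that textbook form; initialization, iteration count, and the clipping used to keep probabilities off the boundary are illustrative choices, not the paper's settings.

```python
import numpy as np

def bernoulli_mixture_em(X, K, iters=50, seed=0):
    """EM sketch for a K-component mixture of multivariate Bernoullis.
    X: (n, d) binary array. Returns mixing weights pi (K,) and
    per-component success probabilities P (K, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)
    P = rng.uniform(0.25, 0.75, size=(K, d))
    for _ in range(iters):
        # E-step: responsibilities, computed in log space for stability
        logp = (X @ np.log(P).T + (1 - X) @ np.log(1 - P).T
                + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates
        Nk = R.sum(axis=0)
        pi = Nk / n
        P = np.clip((R.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)
    return pi, P
```

    Nonidentifiability means a relabeling or reparametrization of (pi, P) can give the same distribution, but, as the abstract argues, the fitted components can still be interpretable in practice.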

    Cost of energy and mutual shadows in a two-axis tracking PV system

    The performance improvement obtained from the use of trackers in a PV system cannot be separated from the increased land requirement caused by mutual shadows between generators. Thus, the optimal choice of distances between trackers is a compromise between productivity and land use that minimizes the cost of the energy produced by the PV system over its lifetime. This paper develops a method for the estimation and optimization of this cost-of-energy function. It is built upon a set of equations modelling the geometry of mutual shadows and a procedure for the optimal choice of the wire cross-section. Several examples illustrate the use of the method for a particular PV system under different conditions of land and equipment costs. The method is implemented using free software, available as supplementary material.

    Experimental Evaluation of Latent Variable Models for Dimensionality Reduction

    We use electropalatographic (EPG) data as a test bed for dimensionality reduction methods based on latent variable modelling, in which an underlying lower-dimensional representation is inferred directly from the data. Several models (and mixtures of them) are investigated, including factor analysis and the generative topographic mapping. Experiments indicate that nonlinear latent variable modelling reveals a low-dimensional structure in the data that is inaccessible to the linear model investigated.