
    Neural Connectivity with Hidden Gaussian Graphical State-Model

    Noninvasive procedures for estimating neural connectivity are under question. Theoretical models hold that the electromagnetic field registered at external sensors is elicited by currents in neural space. Nevertheless, what we observe at the sensor space is a superposition of fields projected from the whole gray matter. This is the reason for a major pitfall of noninvasive electrophysiology methods: distorted reconstruction of neural activity and of its connectivity, known as leakage. It has been proven that current methods produce incorrect connectomes. Related to this incorrect connectivity modelling, they also disregard both Systems Theory and Bayesian Information Theory. We introduce a new formalism that accounts for this, the Hidden Gaussian Graphical State-Model (HIGGS): a neural Gaussian Graphical Model (GGM) hidden by the observation equation of magneto/electroencephalographic (MEEG) signals. HIGGS is equivalent to a frequency-domain Linear State-Space Model (LSSM), but with a sparse connectivity prior. The mathematical contribution here is the theory for high-dimensional, frequency-domain HIGGS solvers. We demonstrate that HIGGS can attenuate the leakage effect in the most critical case: the distortion of EEG signals due to head volume-conduction heterogeneities. Its application to EEG is illustrated with connectivity patterns retrieved from human steady-state visual evoked potentials (SSVEP). We provide, for the first time, confirmatory evidence for noninvasive procedures of neural connectivity: concurrent EEG and electrocorticography (ECoG) recordings in monkeys. Open-source packages are freely available online to reproduce the results presented in this paper and to analyze external MEEG databases.
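    To make the structure concrete, here is a minimal sketch of the generative model the abstract describes, in notation chosen for illustration (the lead field L, the noise term, and the precision symbol are my assumptions, not necessarily the paper's):

```latex
% Illustrative sketch of the HIGGS generative model (notation assumed).
% Observation equation: MEEG sensor data v as a projection, through the
% lead field L, of the hidden source currents j, plus sensor noise.
v(\omega) = L\, j(\omega) + \xi(\omega), \qquad
\xi(\omega) \sim \mathcal{N}_{\mathbb{C}}\big(0, \Sigma_{\xi}\big)

% Hidden state: a Gaussian Graphical Model on the source currents in the
% frequency domain; the sparse support of the precision matrix
% \Theta(\omega) encodes the connectome.
j(\omega) \sim \mathcal{N}_{\mathbb{C}}\big(0, \Theta(\omega)^{-1}\big)
```

    Leakage arises because L superposes all gray-matter sources at each sensor; placing the sparsity prior on \Theta(\omega), rather than on independently reconstructed source time series, is what allows the solver to attenuate that distortion.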

    Economic Insights from Internet Auctions: A Survey

    This paper surveys recent studies of Internet auctions. Four main areas of research are summarized. First, economists have documented strategic bidding in these markets and attempted to understand why sniping, or bidding at the last second, occurs. Second, some researchers have measured distortions from asymmetric information due, for instance, to the winner's curse. Third, we explore research about the role of reputation in online auctions. Finally, we discuss what Internet auctions have to teach us about auction design.
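    As a toy illustration of why last-second bidding can pay, consider an eBay-style proxy auction against a bidder who raises the standing bid one increment at a time; everything below (values, the increment, the behavioral rule) is an illustrative assumption, not a model from the surveyed studies:

```python
# Toy model of sniping in a proxy auction: a sniper's late bid denies an
# incremental bidder the time to respond, so the sale closes near the
# current standing bid instead of near the loser's full valuation.

INCREMENT = 1.0

def snipe(naive_value, sniper_value):
    """Sniper bids in the closing second; naive_value never comes into
    play because the incremental bidder has no time to react."""
    standing_bid = INCREMENT                     # naive bidder's opening bid
    if sniper_value > standing_bid:
        price = min(standing_bid + INCREMENT, sniper_value)
        return "sniper", price
    return "naive", standing_bid

def bid_early(naive_value, early_value):
    """The same bid placed early triggers an incremental bidding war that
    pushes the price toward the loser's valuation."""
    price = INCREMENT
    while price + INCREMENT <= min(naive_value, early_value):
        price += INCREMENT
    winner = "early" if early_value > naive_value else "naive"
    return winner, price

print(snipe(10.0, 12.0))      # ('sniper', 2.0)  -- wins cheaply
print(bid_early(10.0, 12.0))  # ('early', 10.0) -- same values, higher price
```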

    Merchant interconnector projects by generators in the EU: Effects on profitability and allocation of capacity

    When building a cross-border transmission line (a so-called interconnector) as a for-profit (merchant) project, where the regulator has required that capacity allocation be done non-discriminatorily by explicit auction, the identity of the investor can affect the profitability of the interconnector project and, once operational, the resulting allocation of its capacity. Specifically, when the investor is a generator (hereafter the integrated generator) who can also use the interconnector to export its electricity to a distant location, then, once operational, the integrated generator will bid more aggressively in the allocation auctions to increase the auction revenue and thus its profits. As a result, the integrated generator is more likely to win the auction and the capacity is sold for a higher price. This lowers the allocative efficiency of the auction, but it increases the expected ex-ante profitability of the merchant interconnector project. Unaffiliated, independent generators, however, are less likely to win the auction and, in any case, pay a higher price, which dramatically lowers their revenues from exporting electricity over this interconnector.

    Keywords: electricity markets; regulation; cross-border electricity transmissions; vertical integration; asymmetric auctions; bidding behavior
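    A back-of-the-envelope sketch of the mechanism (all numbers are assumptions for illustration): because part of any winning bid flows back to the interconnector owner as auction revenue, an integrated generator's net cost of bidding is discounted by its ownership share, so it can rationally bid above the pure export value of the capacity.

```python
# Stylized one-unit capacity auction for a merchant interconnector.
# A generator's value for the capacity is the price spread it earns by
# exporting; the numbers below are illustrative assumptions.

spread = 20.0  # EUR/MWh export value of the capacity (assumed)

# An independent generator is willing to pay at most the spread.
independent_max_bid = spread

def integrated_max_bid(spread, ownership_share):
    """An integrated generator receives `ownership_share` of the auction
    revenue, so a bid b costs it only b * (1 - share) net; it is willing
    to bid up to spread / (1 - share), unboundedly as share -> 1."""
    if ownership_share >= 1.0:
        return float("inf")  # the bid is a pure transfer to itself
    return spread / (1.0 - ownership_share)

print(independent_max_bid)              # 20.0
print(integrated_max_bid(spread, 0.5))  # 40.0 -- outbids the independent
```

    This is the pattern the abstract describes: the integrated generator wins more often and auction prices rise, which raises the project's ex-ante profitability while squeezing independent exporters.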

    Generation of Strategies for Environmental Deception in Two-Player Normal-Form Games

    Methods of performing and defending against deceptive actions are a popular field of study in game theory; however, the focus is mostly on action deception in turn-based games. This work focuses on developing strategies for performing environmental deception in two-player, strategic-form games. Environmental deception is defined as deception in which one player can change the other's perception of the state of the game by modifying their perception of the game's payoff matrix, similar to the use of camouflage. The main contributions of this research are an expansion of the definition of the stability of a Nash equilibrium to include cells outside the equilibrium, and the creation of four algorithms for developing environmental-deception strategies, including closed-form solutions that construct, from a 3x3 game containing a 2x2 mixed-strategy Nash equilibrium (MSNE), a 3x3 deceptive game whose 2x2 MSNE benefits the deceiver. The value gained by a deceptive algorithm is found to depend on the type of game to which it is applied and on the maximum allowable change to the payoff matrix, emphasizing the importance of carefully selecting an algorithm to match the situation at hand.
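    The following sketch is my own minimal reconstruction of the idea, not one of the paper's four algorithms, and it uses a 2x2 game rather than the paper's 3x3 construction: the deceiver alters the victim's perceived payoff matrix, the victim plays its mixed-strategy Nash component for the perceived game, and the deceiver best-responds knowing the true game.

```python
import numpy as np

def column_msne_mix(row_payoffs):
    """Column player's equilibrium probability q on column 0, chosen to
    make the row player indifferent between rows -- so it depends only on
    the row payoffs the column player *believes*, which is exactly what
    environmental deception manipulates. Assumes an interior 2x2 MSNE."""
    (a, b), (c, d) = row_payoffs
    return (d - b) / ((a - b) - (c - d))  # from a*q + b*(1-q) = c*q + d*(1-q)

true_row = np.array([[3.0, 0.0],
                     [1.0, 2.0]])  # deceiver's true payoffs (assumed)

def deceiver_value(q):
    """Deceiver's true expected payoff when best-responding to mix q."""
    return (true_row @ np.array([q, 1.0 - q])).max()

q_honest = column_msne_mix(true_row)  # victim sees the true game: q = 0.5

# Camouflage-style deception: inflate the perceived payoff of (row 0, col 0)
# so the victim shifts probability mass onto column 1.
perceived_row = np.array([[5.0, 0.0],
                          [1.0, 2.0]])
q_deceived = column_msne_mix(perceived_row)  # q = 1/3

print(q_honest, deceiver_value(q_honest))      # 0.5, 1.5
print(q_deceived, deceiver_value(q_deceived))  # 0.333..., 1.667
```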

    FDTD/K-DWM simulation of 3D room acoustics on general purpose graphics hardware using compute unified device architecture (CUDA)

    The growing demand for reliable prediction of sound fields in rooms has resulted in the adoption of various approaches to physical modeling, including the Finite Difference Time Domain (FDTD) and the Digital Waveguide Mesh (DWM). Whilst considered versatile and attractive methods, they suffer from dispersion errors that increase with frequency and vary with the direction of propagation, imposing a high-frequency limit on calculations. Attempts have been made to reduce such errors by considering different mesh topologies, by spatial interpolation, or by simply oversampling the grid. As the latter approach is computationally expensive, its application to three-dimensional problems has often been avoided. In this paper, we propose an implementation of the FDTD on general-purpose graphics hardware, allowing for high sampling rates whilst maintaining reasonable calculation times. Dispersion errors are consequently reduced and the high-frequency limit is raised. A range of graphics processors is evaluated and compared with traditional CPUs in terms of accuracy, calculation time, and memory requirements.
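    For reference, a minimal sketch of the kind of update such a simulation performs each time step; this is the generic standard-rectilinear FDTD scheme for the 3D wave equation, not necessarily the exact kernel the paper ports to CUDA, and the grid size and spacing are assumptions:

```python
import numpy as np

# Leapfrog FDTD update for acoustic pressure on a 3D grid. The Courant
# number lam = c*dt/dx must satisfy lam <= 1/sqrt(3) for stability, which
# ties the audio sample rate to the grid spacing.

c, dx = 343.0, 0.05                  # speed of sound (m/s), grid spacing (m)
lam = 1.0 / np.sqrt(3.0)             # run at the 3D stability limit
dt = lam * dx / c                    # -> sample rate 1/dt ~ 11.9 kHz here

nx = ny = nz = 64                    # grid dimensions (assumed)
p0 = np.zeros((nx, ny, nz))          # pressure at time n-1
p1 = np.zeros((nx, ny, nz))          # pressure at time n
p1[nx // 2, ny // 2, nz // 2] = 1.0  # impulse excitation

def step(p_prev, p_curr, lam):
    """One leapfrog step; boundary nodes held at zero for simplicity."""
    l2 = lam * lam
    p_next = np.zeros_like(p_curr)
    p_next[1:-1, 1:-1, 1:-1] = (
        (2.0 - 6.0 * l2) * p_curr[1:-1, 1:-1, 1:-1]
        + l2 * (p_curr[2:, 1:-1, 1:-1] + p_curr[:-2, 1:-1, 1:-1]
                + p_curr[1:-1, 2:, 1:-1] + p_curr[1:-1, :-2, 1:-1]
                + p_curr[1:-1, 1:-1, 2:] + p_curr[1:-1, 1:-1, :-2])
        - p_prev[1:-1, 1:-1, 1:-1]
    )
    return p_next

for _ in range(100):                 # advance 100 samples
    p0, p1 = p1, step(p0, p1, lam)
```

    The update touches seven grid points per node per step, and halving dx to push dispersion errors up in frequency multiplies memory by 8 and total work by 16 (since dt halves as well), which is why the oversampling approach only becomes practical on massively parallel hardware.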

    The Functional Method of Comparative Law

    The functional method has become both the mantra and the bête noire of contemporary comparative law. The debate over the functional method is the focal point of almost all discussions about the field of comparative law as a whole: about centers and peripheries of scholarly projects and interests, about mainstream and avant-garde, about ethnocentrism and orientalism, about convergence and pluralism, about technocratic instrumentalism and cultural awareness, and so on. Not surprisingly, this functional method is a chimera, both as theory and as practice of comparative law. In fact, the functional method is a threefold misnomer: there is not one ("the") functional method but many, not all methods so called are functional at all, and some projects claiming adherence to it do not even follow any recognizable method. This paper first places the functional method in a historical and interdisciplinary context, in order to see its connections with, and its peculiarities relative to, the debates about functionalism in other disciplines. Second, it tries to apply the functionalist method to the method itself, in order to determine how functional it is. This makes it necessary to place functionalism within a larger framework -- not within the development of comparative law, but within the rise and fall of functionalism in other disciplines, especially the social sciences. Third, the comparison with functionalism in other disciplines enables us to see what is special about functionalism in comparative law, and why what would rightly be regarded as methodological shortcomings in other disciplines may in fact be fruitful for comparative law. This analysis leads to surprising results. Generally, one assumes that the strength of the functional method lies in its emphasis on similarities and its aspirations toward evaluation and unification of law. Actually, the functional method emphasizes difference, it does not give us criteria for evaluation, and it provides powerful arguments against unification. Further, one generally assumes that the functional method does not account sufficiently for culture and is reductionist. Yet the functional method not only requires us to look at culture but also enables us, better than other methods, to formulate general laws without having to abstract away from specificities. The problem is that the functional method, as generally described, combines a number of different concepts of function: an evolutionary concept, a structural concept, and a concept focusing on equivalence. The relation between these concepts within the method is unclear, and its aspirations are therefore unrealistic. If we reconstruct the method plainly on the basis of functional equivalence, the most robust of the three concepts of function, and emphasize an interpretative rather than a scientific approach, we realize that the functional method can make fewer claims but is at the same time less open to some of the critique voiced against it. In short, the functional method is a strong tool for understanding, comparing, and critiquing different laws, but a weak tool for evaluating and unifying them. It helps us tolerate and critique foreign law; it helps us less in critiquing our own.

    Does money matter in inflation forecasting?

    This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least-squares regression -- techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models, and the resulting forecasts are then compared to those of a naive random-walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
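    To give a flavor of the experimental setup, here is a compact sketch on synthetic data; it substitutes batch kernel ridge regression for the paper's kernel recursive least-squares predictor (the two fit the same model, the recursive variant just updates incrementally), and all data and hyperparameters are assumptions:

```python
import numpy as np

# Rolling one-step-ahead horse race: naive random walk vs. a Gaussian-kernel
# regression on lagged values, on synthetic persistent data.

rng = np.random.default_rng(0)
T = 300
y = np.cumsum(0.1 * rng.standard_normal(T)) + np.sin(np.arange(T) / 10.0)

def gauss_kernel(a, b, gamma=1.0):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * (d ** 2).sum(-1))

lags, ridge = 4, 1e-2
X = np.stack([y[i:i + lags] for i in range(T - lags)])  # lagged inputs
t = y[lags:]                                            # one-step targets

split = 200
rw_err, kr_err = [], []
for i in range(split, len(t)):
    K = gauss_kernel(X[:i], X[:i])
    alpha = np.linalg.solve(K + ridge * np.eye(i), t[:i])  # kernel ridge fit
    pred = gauss_kernel(X[i:i + 1], X[:i]) @ alpha         # forecast t[i]
    kr_err.append(pred[0] - t[i])
    rw_err.append(t[i - 1] - t[i])  # random walk forecasts y_t = y_{t-1}

print("RMSE random walk :", np.sqrt(np.mean(np.square(rw_err))))
print("RMSE kernel model:", np.sqrt(np.mean(np.square(kr_err))))
```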

    Sympiler: Transforming Sparse Matrix Codes by Decoupling Symbolic Analysis

    Sympiler is a domain-specific code generator that optimizes sparse matrix computations by decoupling the symbolic analysis phase from the numerical manipulation stage in sparse codes. The computation patterns in sparse numerical methods are guided by the input sparsity structure and by the sparse algorithm itself. In many real-world simulations, the sparsity pattern changes little or not at all. Sympiler takes advantage of these properties to analyze sparse codes symbolically at compile time and to apply inspector-guided transformations that in turn enable low-level transformations of the sparse code. As a result, the Sympiler-generated code outperforms highly optimized matrix factorization codes from commonly used specialized libraries, obtaining average speedups over Eigen and CHOLMOD of 3.8X and 1.5X, respectively.
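    The core idea can be sketched in a few lines (an illustration of symbolic/numeric decoupling in the spirit of Sympiler, not its generated code): for a sparse triangular solve Lx = b with sparse b, the nonzero pattern of x depends only on the sparsity patterns, so it can be computed once, symbolically, and reused across every numeric solve that shares the pattern.

```python
import numpy as np
from scipy.sparse import csc_matrix

def symbolic_reach(L, b_pattern):
    """Symbolic phase: DFS over the column dependency graph of L from the
    nonzeros of b, returning the topologically ordered nonzeros of x.
    Depends only on sparsity patterns, so it can run once, ahead of time."""
    seen, order = set(), []
    def dfs(j):
        seen.add(j)
        for i in L.indices[L.indptr[j]:L.indptr[j + 1]]:
            if i != j and i not in seen:
                dfs(i)
        order.append(j)
    for j in b_pattern:
        if j not in seen:
            dfs(j)
    return order[::-1]

def numeric_solve(L, b, reach):
    """Numeric phase: forward substitution restricted to the reach set;
    re-run cheaply whenever values change but the pattern does not."""
    x = b.astype(float)
    for j in reach:
        x[j] /= L[j, j]
        for ptr in range(L.indptr[j], L.indptr[j + 1]):
            i = L.indices[ptr]
            if i != j:
                x[i] -= L.data[ptr] * x[j]
    return x

L = csc_matrix(np.array([[2., 0., 0., 0.],
                         [1., 2., 0., 0.],
                         [0., 0., 2., 0.],
                         [0., 1., 1., 2.]]))
b = np.array([4., 0., 0., 0.])
reach = symbolic_reach(L, [0])     # pattern-only work, done once
print(numeric_solve(L, b, reach))  # [ 2.  -1.   0.   0.5]
```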