
    11 x 11 Domineering is Solved: The first player wins

    We have developed a program called MUDoS (Maastricht University Domineering Solver) that solves Domineering positions very efficiently. It solves all previously known positions (up to the 10 x 10 board) much more quickly, measured in the number of investigated nodes. More importantly, it solves the 11 x 11 Domineering board, which until now was far out of reach of previous Domineering solvers. The solution required the investigation of 259,689,994,008 nodes, using almost half a year of computation time on a single simple desktop computer. The results show that under optimal play the first player wins the 11 x 11 Domineering game, irrespective of whether Vertical or Horizontal starts. In addition, several other hitherto unsolved boards were solved. Using the convention that Vertical starts, the 8 x 15, 11 x 9, 12 x 8, 12 x 15, 14 x 8, and 17 x 6 boards are all won by Vertical, whereas the 6 x 17, 8 x 12, 9 x 11, and 11 x 10 boards are all won by Horizontal.
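The abstract does not describe MUDoS's internals, but the underlying task, deciding the winner of a Domineering position under optimal play, can be sketched with a plain memoized game-tree search. The board encoding and move generation below are illustrative assumptions, not the paper's solver:

```python
from functools import lru_cache

def solve(rows, cols):
    """Return True if the first player (Vertical) wins rows x cols
    Domineering under optimal play.  Plain memoized game-tree search;
    an illustrative toy, nothing like the optimized MUDoS solver."""
    full = frozenset((r, c) for r in range(rows) for c in range(cols))

    @lru_cache(maxsize=None)
    def wins(free, vertical_to_move):
        # Legal moves: place a domino on two free, adjacent cells.
        if vertical_to_move:
            moves = [frozenset([(r, c), (r + 1, c)])
                     for (r, c) in free if (r + 1, c) in free]
        else:
            moves = [frozenset([(r, c), (r, c + 1)])
                     for (r, c) in free if (r, c + 1) in free]
        # The player with no legal move loses; otherwise the side to
        # move wins iff some move leaves the opponent in a lost position.
        return any(not wins(free - m, not vertical_to_move) for m in moves)

    return wins(full, True)
```

On tiny boards the results are easy to check by hand: `solve(2, 2)` and `solve(2, 3)` are first-player wins, while on a 1 x 2 board Vertical has no move at all. The 11 x 11 result in the abstract is, of course, far beyond this naive search.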

    Predator-Induced Vertical Behavior of a Ctenophore

    Although many studies have focused on Mnemiopsis leidyi predation, little is known about the role of this ctenophore as prey when it is abundant in native and invaded pelagic systems. We examined the response of the ctenophore M. leidyi to the predatory ctenophore Beroe ovata in an experiment in which the two species could potentially sense each other while being physically separated. On average, M. leidyi responded to the predator’s presence by increasing the variability of its swimming speed and by lowering its vertical distribution. Such behavior may help explain field records of vertical migration, as well as the stratified and near-bottom distributions of M. leidyi.

    Knowledge is at the Edge! How to Search in Distributed Machine Learning Models

    With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also be personalized for a human user, a specific context, or an application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, since it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices, which makes access difficult. In this paper, we tackle the problem of finding specific knowledge by forwarding a search request (query) to the device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario in which we use data from 30,000 people commuting in the city of Trento, Italy. Comment: Published in CoopIS 201
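The paper's exact entropy-based quality metric is not given in the abstract. As a purely hypothetical sketch of the idea, a query could be routed to the device whose local model is least uncertain about it; the device representation, probability outputs, and selection rule below are all assumptions:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete class distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def forward_query(query_ctx, devices):
    """Forward a query to the device most confident about it, i.e. the
    one whose local model yields the lowest predictive entropy for the
    query context.  `devices` maps a device id to a callable returning
    a class-probability list for a given context.  Hypothetical sketch
    of entropy-based routing, not the paper's metric."""
    return min(devices, key=lambda d: entropy(devices[d](query_ctx)))
```

For example, a device predicting `[0.9, 0.1]` (entropy ≈ 0.47 bits) would be preferred over one predicting `[0.5, 0.5]` (entropy = 1 bit).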

    Patterns of dominant flows in the world trade web

    The large-scale organization of the world economies is exhibiting increasing levels of local heterogeneity and global interdependency. Understanding the relation between local and global features calls for analytical tools able to uncover the emerging global organization of the international trade network. Here we analyze the world network of bilateral trade imbalances and characterize its overall flux organization, unraveling local and global high-flux pathways that define the backbone of the trade system. We develop a general procedure capable of progressively filtering out the dominant trade channels in a consistent and quantitative way. This procedure is completely general and can be applied to any weighted network to detect the underlying structure of transport flows. The trade-flux properties of the world trade web determine a ranking of trade partnerships that highlights global interdependencies, providing information not accessible by simple local analysis. The present work provides new quantitative tools for a dynamical approach to the propagation of economic crises.
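The abstract does not spell out the filtering procedure itself. A much simplified stand-in, keeping, for each exporter, its strongest outgoing links until a chosen fraction of that node's total out-flux is covered, conveys the general idea; the function name and coverage rule are assumptions, not the authors' method:

```python
def dominant_channels(flows, coverage=0.8):
    """Filter a weighted, directed network down to its dominant links.
    For every source node, keep its largest outgoing flows until a
    `coverage` fraction of that node's total out-flux is reached.
    `flows` is a dict {(src, dst): weight}.  A simplified stand-in for
    the progressive-filtering procedure described in the abstract."""
    out_total = {}
    by_src = {}
    for (src, dst), w in flows.items():
        out_total[src] = out_total.get(src, 0.0) + w
        by_src.setdefault(src, []).append((w, (src, dst)))

    kept = {}
    for src, edges in by_src.items():
        acc = 0.0
        # Strongest links first; stop once the coverage target is met.
        for w, edge in sorted(edges, reverse=True):
            if acc >= coverage * out_total[src]:
                break
            kept[edge] = w
            acc += w
    return kept
```

With `coverage=1.0` the filter keeps every link, so the parameter interpolates between the full network and its high-flux backbone.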

    Free Will in a Quantum World?

    In this paper, I argue that Conway and Kochen’s Free Will Theorem (1,2), which concludes that quantum mechanics and relativity entail freedom for particles, does not change the situation in favor of a libertarian position as they would like. In fact, the theorem more or less implicitly assumes that people are free, and thus it begs the question. Moreover, it proves neither that if people are free, so are particles, nor that the property people possess when they are said to be free is the same as the one particles possess when they are claimed to be free. I then analyze the Free State Theorem (2), which generalizes the Free Will Theorem without the assumption that people are free, and I show that it does not prove anything about free will, since the notion of freedom for particles is either inconsistent or does not concern our common understanding of freedom. In either case, neither the Free Will Theorem nor the Free State Theorem provides any enlightenment on the constraints physics can place on free will.

    Validation of a model to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD: the Rotterdam Ischemic Heart Disease and Stroke Computer Simulation (RISC) model.

    BACKGROUND: We developed a Monte Carlo Markov model designed to investigate the effects of modifying cardiovascular disease (CVD) risk factors on the burden of CVD. The internal, predictive, and external validity of the model have not yet been established. METHODS: The Rotterdam Ischemic Heart Disease and Stroke Computer Simulation (RISC) model was developed using data covering 5 years of follow-up from the Rotterdam Study. To assess 1) internal and 2) predictive validity, the incidences of coronary heart disease (CHD), stroke, CVD death, and non-CVD death simulated by the model over a 13-year period were compared with those recorded for 3,478 participants in the Rotterdam Study with at least 13 years of follow-up. 3) External validity was verified using 10 years of follow-up data from the European Prospective Investigation of Cancer (EPIC)-Norfolk study of 25,492 participants, for whom CVD and non-CVD mortality were compared. RESULTS: At year 5, the observed incidences (with simulated incidences in brackets) of CHD, stroke, CVD mortality, and non-CVD mortality for the 3,478 Rotterdam Study participants were 5.30% (4.68%), 3.60% (3.23%), 4.70% (4.80%), and 7.50% (7.96%), respectively. At year 13, these percentages were 10.60% (10.91%), 9.90% (9.13%), 14.20% (15.12%), and 24.30% (23.42%). After recalibrating the model for the EPIC-Norfolk population, the 10-year observed (simulated) incidences of CVD and non-CVD mortality were 3.70% (4.95%) and 6.50% (6.29%). All observed incidences fell well within the 95% credibility intervals of the simulated incidences. CONCLUSIONS: We have confirmed the internal, predictive, and external validity of the RISC model. These findings provide a basis for analyzing the effects of modifying cardiovascular disease risk factors on the burden of CVD with the RISC model.
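The basic machinery behind such a model, advancing a cohort's state-occupancy distribution through per-cycle transition probabilities, is easy to sketch. The states and probabilities below are hypothetical placeholders, not the RISC model's calibrated inputs:

```python
def simulate_cohort(p0, transitions, years):
    """Advance a state-occupancy distribution through a discrete-time
    Markov chain, as in a Markov cohort model.  `p0` maps each state
    to its initial occupancy; `transitions` maps state -> {state: prob}.
    All probabilities here are illustrative, not RISC's inputs."""
    dist = dict(p0)
    for _ in range(years):
        nxt = {s: 0.0 for s in dist}
        for state, occupancy in dist.items():
            # Redistribute this state's occupancy along its transitions.
            for target, prob in transitions[state].items():
                nxt[target] += occupancy * prob
        dist = nxt
    return dist
```

For instance, with a 10% annual mortality risk and death absorbing, a fully healthy cohort ends up 81% alive after two cycles (0.9 squared).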

    Conversion of patellofemoral arthroplasty to total knee arthroplasty: A matched case-control study of 13 patients

    Background and purpose: The long-term outcome of patellofemoral arthroplasty is related to progression of femorotibial osteoarthritis, with the need for conversion to total knee arthroplasty. We investigated whether prior patellofemoral arthroplasty compromises the results of total knee arthroplasty.

    Integrated multiple mediation analysis: A robustness–specificity trade-off in causal structure

    Recent methodological developments in causal mediation analysis have addressed several issues regarding multiple mediators. However, these developed methods differ in their definitions of causal parameters, assumptions for identification, and interpretations of causal effects, making it unclear which method ought to be selected when investigating a given causal effect. Thus, in this study, we construct an integrated framework, which unifies all existing methodologies, as a standard for mediation analysis with multiple mediators. To clarify the relationship between existing methods, we propose four strategies for effect decomposition: two-way, partially forward, partially backward, and complete decompositions. This study reveals how the direct and indirect effects of each strategy are explicitly and correctly interpreted as path-specific effects under different causal mediation structures. In the integrated framework, we further verify the utility of the interventional analogues of direct and indirect effects, especially when natural direct and indirect effects cannot be identified or when cross-world exchangeability is invalid. Consequently, this study yields a robustness–specificity trade-off in the choice of strategies. Inverse probability weighting is considered for estimation. The four strategies are further applied in a simulation study for performance evaluation and in an analysis of the Risk Evaluation of Viral Load Elevation and Associated Liver Disease/Cancer data set from Taiwan, investigating the causal effect of hepatitis C virus infection on mortality.

    Simulation study for analysis of binary responses in the presence of extreme case problems

    BACKGROUND: Estimates of variance components for binary responses in the presence of extreme case problems tend to be biased due to an under-identified likelihood. The bias persists even when a normal prior is used for the fixed effects. METHODS: A simulation study was carried out to investigate methods for the analysis of binary responses with extreme case problems. A linear mixed model that included a fixed effect and random effects of sire and residual on the liability scale was used to generate binary data. Five simulation scenarios were conducted based on varying percentages of extreme case problems, with true values of heritability equal to 0.07 and 0.17. Five replicates of each dataset were generated and analyzed with a generalized prior (g-prior) of varying weight. RESULTS: Point estimates of sire variance using a normal prior were severely biased when the percentage of extreme case problems was greater than 30%. Depending on the percentage of extreme case problems, the sire variance was overestimated with a normal prior by 36 to 102% and 25 to 105% for heritabilities of 0.17 and 0.07, respectively. When a g-prior was used, the bias was reduced and even eliminated, depending on the percentage of extreme case problems and the weight assigned to the g-prior. The lowest Pearson correlations between true and estimated fixed effects were obtained when a normal prior was used. When a 15% g-prior was used instead of a normal prior with a heritability of 0.17, Pearson correlations between true and estimated fixed effects increased by 11, 20, 23, 27, and 60% for 5, 10, 20, 30, and 75% extreme case problems, respectively. Conversely, Pearson correlations between true and estimated fixed effects were similar across datasets with varying percentages of extreme case problems when a 5, 10, or 15% g-prior was included. This indicates that a model with a g-prior provides a more adequate estimation of fixed effects. CONCLUSIONS: The results suggest that when analyzing binary data with extreme case problems, bias in the estimation of variance components can be eliminated, or at least significantly reduced, by using a g-prior.

    The International-Trade Network: Gravity Equations and Topological Properties

    This paper begins to explore the determinants of the topological properties of the international-trade network (ITN). We fit bilateral trade flows using a standard gravity equation to build a "residual" ITN in which trade-link weights are purged of the effects of geographical distance, size, border effects, trade agreements, and so on. We then compare the topological properties of the original and residual ITNs. We find that the residual ITN displays, unlike the original one, marked signatures of a complex system, and is characterized by a very different topological architecture. Whereas the original ITN is geographically clustered and organized around a few large-sized hubs, the residual ITN displays many small-sized but trade-oriented countries that, independently of their geographical position, either play the role of local hubs or attract large and rich countries into relatively complex trade-interaction patterns.