
    Consistency checking within local search applied to the frequency assignment with polarization problem

    We present a hybrid approach for the Frequency Assignment Problem with Polarization. This problem, viewed as a Max-CSP, is treated as a sequence of CSP-like decision problems. The proposed approach combines arc-consistency techniques with an efficient Tabu Search heuristic. The resulting algorithm produces high-quality solutions and has proved robust on instances with approximately a thousand variables and nearly ten thousand constraints.
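
    The abstract gives no implementation details, but the combination it describes can be sketched: arc-consistency filtering prunes the domains, and a tabu-style local search then minimizes constraint violations over what remains. Everything below (AC-3, the move selection, the tenure policy) is a minimal illustrative assumption, not the authors' algorithm.

```python
# Illustrative sketch only: AC-3 prunes domains, then a tabu-style local
# search minimizes the number of violated constraints. `constraints` maps
# each directed pair (x, y) to a predicate on their values, stored in both
# directions; `domains` maps each variable to a set of values.
from collections import deque
import random

def ac3(domains, constraints):
    """Remove unsupported values; return False on a domain wipe-out."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False                      # inconsistent network
            queue.extend(k for k in constraints if k[1] == x)
    return True

def violations(assign, constraints):
    """Number of violated (directed) constraints under a full assignment."""
    return sum(not pred(assign[x], assign[y])
               for (x, y), pred in constraints.items())

def tabu_search(variables, domains, constraints, iters=1000, tenure=10):
    """Greedy best-move local search with a simple tabu list."""
    assign = {v: random.choice(sorted(domains[v])) for v in variables}
    tabu, best = {}, dict(assign)
    for it in range(iters):
        moves = [(v, val) for v in variables for val in domains[v]
                 if val != assign[v] and tabu.get((v, val), -1) < it]
        if not moves:
            continue
        v, val = min(moves, key=lambda m: violations({**assign, m[0]: m[1]},
                                                     constraints))
        tabu[(v, assign[v])] = it + tenure        # forbid undoing this move
        assign[v] = val
        if violations(assign, constraints) < violations(best, constraints):
            best = dict(assign)
    return best

# Typical use: if ac3(domains, constraints) succeeds, run tabu_search on
# the pruned domains; otherwise the instance is already inconsistent.
```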

    Philosophy and the practice of Bayesian statistics

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just the philosophy of science but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.
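
    The model checking the authors emphasize is commonly practiced as a posterior predictive check. A minimal sketch of that practice follows; the conjugate normal model and the choice of test statistic are illustrative assumptions, not taken from the paper.

```python
# Minimal posterior predictive check. The conjugate normal model and the
# test statistic are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=50)          # "observed" data

# Posterior for the mean of a normal with known sigma and a flat prior:
# mu | y ~ Normal(mean(y), sigma^2 / n).
sigma, n = 2.0, len(y)
mu_draws = rng.normal(y.mean(), sigma / np.sqrt(n), size=4000)

# Replicate datasets from the posterior predictive distribution and compare
# a test statistic T (here: the sample maximum) against its observed value.
T_obs = y.max()
T_rep = np.array([rng.normal(mu, sigma, size=n).max() for mu in mu_draws])
p_value = (T_rep >= T_obs).mean()                    # posterior predictive p
print(f"posterior predictive p-value for T = max(y): {p_value:.3f}")
# An extreme p (near 0 or 1) flags a misfit that calls for model revision.
```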

    Reasoning from Last Conflict(s) in Constraint Programming

    Constraint programming is a popular paradigm for dealing with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e., repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, often classified into look-ahead (constraint propagation and search heuristics) and look-back (intelligent backtracking and learning) approaches. In this paper, we present an original look-ahead approach that guides backtrack search toward the sources of conflicts and, as a side effect, obtains behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected in priority, for as long as the constraint network cannot be made consistent. This allows us to find, by following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variants of backtracking algorithms and represents an original mechanism for reducing thrashing. Moreover, we show that this approach can be generalized to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
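
    A minimal sketch of the stated principle, grafted onto a plain backtracking solver: after a dead end, the failed variable keeps priority over the ordering heuristic until it can again be assigned consistently. The solver skeleton and names are illustrative; the paper's algorithms are more refined.

```python
# Illustrative sketch, not the paper's algorithm: a basic backtracking
# solver where the variable that last caused a dead end keeps priority
# until it can again be assigned consistently.
def solve(assign, variables, domains, consistent, state):
    if len(assign) == len(variables):
        return dict(assign)
    unassigned = [v for v in variables if v not in assign]
    # Last-conflict reasoning: re-select the conflicting variable first.
    var = state["lc"] if state["lc"] in unassigned else unassigned[0]
    for val in domains[var]:
        assign[var] = val
        if consistent(assign):
            result = solve(assign, variables, domains, consistent, state)
            if result is not None:
                return result
        del assign[var]
    state["lc"] = var          # dead end: var becomes the last conflict
    return None

# Tiny usage example: 4-queens as a binary CSP (one queen per column).
n = 4
variables = list(range(n))
domains = {v: list(range(n)) for v in variables}
def consistent(a):
    items = list(a.items())
    return all(r1 != r2 and abs(c1 - c2) != abs(r1 - r2)
               for i, (c1, r1) in enumerate(items)
               for c2, r2 in items[i + 1:])
print(solve({}, variables, domains, consistent, {"lc": None}))
```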

    Range separation: The divide between local structures and field theories

    This work presents parallel histories of the development of two modern theories of condensed matter: the theory of electron structure in quantum mechanics, and the theory of liquid structure in statistical mechanics. Comparison shows that key revelations in both are not only remarkably similar, but even follow a common thread of controversy that marks progress from antiquity through to the present. This theme appears as a creative tension between two competing philosophies: that of short-range structure (atomistic models) on the one hand, and long-range structure (continuum or density functional models) on the other. The timeline and technical content are designed to build up a set of key relations as guideposts for using density functional theories together with atomistic simulation.
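
    For orientation, the "range separation" of the title is usually formalized as the standard error-function split of the Coulomb kernel, shown here as a reference relation rather than a formula taken from the paper:

```latex
% Standard error-function split of the Coulomb kernel: the short-range
% piece suits local/atomistic treatments, the long-range piece suits
% continuum or field-theoretic (density functional) treatments.
\[
  \frac{1}{r}
    = \underbrace{\frac{\operatorname{erfc}(\mu r)}{r}}_{\text{short range}}
    + \underbrace{\frac{\operatorname{erf}(\mu r)}{r}}_{\text{long range}},
  \qquad \mu > 0 \ \text{sets the crossover length} \sim 1/\mu .
\]
```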

    A survey on OFDM-based elastic core optical networking

    Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Owing to its ability to transmit a high-speed data stream over multiple spectrally overlapped lower-speed subcarriers, OFDM offers high spectral efficiency, robustness against inter-carrier and inter-symbol interference, and adaptability to severe channel conditions. In recent years there have been intensive studies of optical OFDM (O-OFDM) transmission technologies, and O-OFDM is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data-rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey of OFDM-based elastic optical network technologies, including the basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and the related key enabling technologies. The main advantages of OFDM-based elastic core optical networks, as well as the issues still under research, are also discussed.
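
    The core OFDM principle summarized above, one high-rate stream carried on many orthogonal overlapping subcarriers via an inverse FFT, fits in a few lines. The following sketch uses illustrative parameters and an ideal channel; it is not drawn from the survey.

```python
# Minimal OFDM modulator/demodulator sketch illustrating the principle:
# parallel low-rate symbols ride on orthogonal subcarriers via an IFFT,
# and a cyclic prefix guards against inter-symbol interference.
import numpy as np

N_SUB, CP = 64, 16                      # subcarriers, cyclic-prefix length
rng = np.random.default_rng(1)

# Map bits to QPSK symbols, one symbol per subcarrier.
bits = rng.integers(0, 2, size=2 * N_SUB)
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Transmitter: IFFT turns the parallel symbols into one time-domain block.
time_block = np.fft.ifft(symbols) * np.sqrt(N_SUB)
tx = np.concatenate([time_block[-CP:], time_block])   # prepend cyclic prefix

# Receiver: drop the prefix, FFT back to the subcarrier symbols.
rx_symbols = np.fft.fft(tx[CP:]) / np.sqrt(N_SUB)
assert np.allclose(rx_symbols, symbols)   # ideal channel: exact recovery
```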

    Scalable Architecture for Integrated Batch and Streaming Analysis of Big Data

    Thesis (Ph.D.), Indiana University, Computer Sciences, 2015.
    As Big Data processing problems evolve, many modern applications demonstrate special characteristics. Data exists in the form of both large historical datasets and high-speed real-time streams, and many analysis pipelines require integrated parallel batch processing and stream processing. Despite the large size of the whole dataset, most analyses focus on specific subsets selected according to certain criteria. Correspondingly, integrated support for efficient queries and post-query analysis is required. To address the system-level requirements brought by such characteristics, this dissertation proposes a scalable architecture for integrated queries, batch analysis, and streaming analysis of Big Data in the cloud. We verify its effectiveness using a representative application domain, social media data analysis, and tackle related research challenges emerging from each module of the architecture by integrating and extending multiple state-of-the-art Big Data storage and processing systems. In the storage layer, we reveal that existing text indexing techniques do not work well for the unique queries of social data, which put constraints on both textual content and social context. To address this issue, we propose a flexible indexing framework over NoSQL databases to support fully customizable index structures, which can embed the social context information necessary for efficient queries. The batch analysis module demonstrates that analysis workflows consist of multiple algorithms with different computation and communication patterns, which are suitable for different processing frameworks. To achieve efficient workflows, we build an integrated analysis stack based on YARN and make novel use of customized indices in developing sophisticated analysis algorithms. In the streaming analysis module, the high-dimensional representation of social media streams poses special challenges to parallel stream clustering. Due to the sparsity of the high-dimensional data, traditional synchronization methods become expensive and severely impact the scalability of the algorithm. Therefore, we design a novel strategy that broadcasts the incremental changes rather than the whole centroids of the clusters, yielding scalable parallel stream clustering algorithms. Performance tests using real applications show that our solutions for parallel data loading/indexing, queries, analysis tasks, and stream clustering all significantly outperform implementations using current state-of-the-art technologies.
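
    The streaming-analysis idea is concrete enough to sketch: for sparse high-dimensional data, a worker broadcasts only the sparse increment to a centroid instead of the full dense vector. The single-process simulation and names below are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch of the synchronization idea: workers broadcast only the sparse
# incremental change to a cluster centroid, never the full dense vector.
from collections import defaultdict

class Centroid:
    def __init__(self):
        self.weights = defaultdict(float)   # sparse centroid vector
        self.count = 0

    def local_update(self, point):
        """Absorb one sparse point; return only the delta to broadcast."""
        self.count += 1
        delta = {}
        for dim, value in point.items():
            step = (value - self.weights[dim]) / self.count  # running mean
            self.weights[dim] += step
            delta[dim] = step
        return delta                        # size ~ nonzeros of the point

    def apply_delta(self, delta):
        """Replica update from a broadcast delta (no full centroid sent)."""
        self.count += 1
        for dim, step in delta.items():
            self.weights[dim] += step

# One worker updates its local replica and broadcasts the sparse delta;
# other replicas apply it, staying in sync at a fraction of the cost.
leader, replica = Centroid(), Centroid()
delta = leader.local_update({"word:storm": 1.0, "user:42": 0.5})
replica.apply_delta(delta)
assert replica.weights == leader.weights
```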

    Automated protein NMR data analysis and its application to α-synuclein fibrils

    In principle, nuclear magnetic resonance (NMR) spectroscopy provides structural and conformational information with sub-Angstrom precision and the ability to measure dynamics on timescales ranging from femtoseconds to years, all with atomic specificity. However, due to the relatively low sensitivity of NMR, fundamental limits on spectral resolution, and the complexity of the quantum mechanical phenomena NMR exploits, that wealth of information often remains out of reach. The highly varied presentation of molecular information in NMR spectra and the difficulty of numerically simulating non-trivial systems has led the majority of data analysis to be performed by trained experts, and because of its time-intensive nature, that analysis is rarely replicated by a third party or validated in an objective manner. In this dissertation I report my efforts to automate NMR data analysis in an objective and replicable manner and to provide tools for validating the resulting three-dimensional structures by direct comparison to raw spectral data. The first method, COMPASS, attempts to extract as much information as possible from a single 13C-13C two-dimensional spectrum for the determination of protein structure, and it successfully identified the true structure of 15 test proteins. The second method, GPS, predicts the features expected in the data given a set of chemical shift assignments and possibly a three-dimensional structure, and uses the presence or absence of those features in experimental spectra to refine or validate a given structure. I then report the application of these computational methods to the problems of refining an α-synuclein fibril structure with proton-detected NMR data, the analysis and characterization of a pair of interrelated α-synuclein fibril strains with distinct pathological properties, and the general question of fibril polymorphism, a phenomenon that presents a substantial challenge to forming consistent conclusions about fibril properties and interactions across samples and research groups.
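
    The GPS validation idea, predict spectral features from assignments and a candidate structure, then score their presence in the experimental spectrum, can be caricatured in a few lines. All names, tolerances, and the matching rule below are assumptions for illustration, not the dissertation's algorithm.

```python
# Illustrative caricature of structure validation against raw spectra:
# predict cross-peak positions from chemical-shift assignments plus a
# structure's short-range contacts, then score how many predicted peaks
# appear in the experimental peak list. Tolerances and names are made up.
def predicted_peaks(shifts, contacts):
    """Each contact (atom pair) should give a cross peak at the pair's
    chemical shifts (in ppm)."""
    return [(shifts[a], shifts[b]) for a, b in contacts
            if a in shifts and b in shifts]

def score_structure(shifts, contacts, observed, tol=0.2):
    """Fraction of predicted peaks with an observed peak within `tol` ppm."""
    expected = predicted_peaks(shifts, contacts)
    hits = sum(any(abs(px - ox) <= tol and abs(py - oy) <= tol
                   for ox, oy in observed)
               for px, py in expected)
    return hits / len(expected) if expected else 0.0

shifts = {"A30CA": 52.4, "V37CB": 32.7, "G41CA": 45.1}   # made-up values
contacts = [("A30CA", "V37CB"), ("V37CB", "G41CA")]      # from a model
observed = [(52.5, 32.6), (61.0, 19.2)]                  # picked peaks
print(f"fraction observed: {score_structure(shifts, contacts, observed):.2f}")
```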