
    Generalized Totalizer Encoding for Pseudo-Boolean Constraints

    Pseudo-Boolean constraints, also known as 0-1 Integer Linear Constraints, are used to model many real-world problems. A common approach to solving these constraints is to encode them into a SAT formula. The runtime of the SAT solver on such a formula is sensitive to the manner in which the given pseudo-Boolean constraints are encoded. In this paper, we propose the Generalized Totalizer Encoding (GTE), an arc-consistency-preserving extension of the Totalizer encoding to pseudo-Boolean constraints. Unlike some other encodings, the number of auxiliary variables required by GTE does not depend on the magnitudes of the coefficients; instead, it depends on the number of distinct combinations of these coefficients. We show the superiority of GTE with respect to other encodings when large pseudo-Boolean constraints have a small number of distinct coefficients. Our experimental results also show that GTE remains competitive even when the pseudo-Boolean constraints do not have this characteristic.
    Comment: 10 pages, 2 figures, 2 tables. To be published in the 21st International Conference on Principles and Practice of Constraint Programming (CP 2015).
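
    The core idea can be illustrated with a short sketch. The code below is a minimal illustration under our own naming (gte, merge, and the DIMACS-style integer literals are ours, not the paper's): each node of the encoding tree introduces one auxiliary variable per reachable weight sum, which is why the variable count tracks distinct coefficient combinations rather than coefficient magnitudes.

```python
# Minimal sketch of a GTE-style encoding for sum(w_i * x_i) <= bound.
# Not the authors' implementation; real GTE also collapses all sums above
# the bound into a single overflow output, which this sketch omits.
from itertools import count

_fresh = count(1000)  # fresh auxiliary variable ids (assumes input ids are smaller)

def merge(left, right, clauses):
    """Merge two nodes: each node maps a reachable weight sum to a variable
    that must be true whenever the inputs reach at least that sum."""
    sums = set(left) | set(right) | {a + b for a in left for b in right}
    node = {s: next(_fresh) for s in sums}
    for a, va in left.items():
        clauses.append([-va, node[a]])                # left alone reaches a
        for b, vb in right.items():
            clauses.append([-va, -vb, node[a + b]])   # together they reach a + b
    for b, vb in right.items():
        clauses.append([-vb, node[b]])                # right alone reaches b
    return node

def gte(literals, weights, bound):
    """Encode sum(w_i * x_i) <= bound as a list of DIMACS-style clauses."""
    clauses = []
    nodes = [{w: x} for x, w in zip(literals, weights)]   # leaves: x_i reaches w_i
    while len(nodes) > 1:                                 # pairwise tree merge
        nodes = [merge(nodes[i], nodes[i + 1], clauses)
                 if i + 1 < len(nodes) else nodes[i]
                 for i in range(0, len(nodes), 2)]
    clauses += [[-v] for s, v in nodes[0].items() if s > bound]  # forbid overflow
    return clauses
```

    For example, gte([1, 2, 3], [3, 3, 5], 6) forbids the reachable sums 8 and 11; because the two coefficients of 3 coincide, the first merge creates variables for only two sums (3 and 6) rather than three, which is exactly the saving GTE exploits when many coefficients repeat.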

    Exploiting Resolution-based Representations for MaxSAT Solving

    Most recent MaxSAT algorithms rely on a succession of calls to a SAT solver in order to find an optimal solution. In particular, several algorithms take advantage of the ability of SAT solvers to identify unsatisfiable subformulas. Usually, these MaxSAT algorithms perform better when small unsatisfiable subformulas are found early. However, this is not the case in many problem instances, since the whole formula is given to the SAT solver in each call. In this paper, we propose to partition the MaxSAT formula using a resolution-based graph representation. Partitions are then iteratively joined using a proximity measure extracted from the graph representation of the formula. The algorithm ends when only one partition remains and the optimal solution is found. Experimental results show that this new approach further enhances a state-of-the-art MaxSAT solver, allowing it to optimally solve a larger set of industrial problem instances.
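
    As a concrete illustration of the graph representation this partitioning relies on, the self-contained sketch below (our own simplification, not the authors' implementation) links two clauses whenever they can be resolved, i.e. they contain complementary literals; the edge weights can then serve as a proximity measure for deciding which partitions to join first.

```python
# Resolution-based graph over a CNF formula: clauses are vertices, and two
# clauses are connected (with a weight) when they contain complementary
# literals and can therefore be resolved. A toy sketch, not the paper's code.
from collections import defaultdict

def resolution_graph(clauses):
    """clauses: list of sets of non-zero ints (DIMACS-style literals).
    Returns edges[(i, j)] = number of variables on which clauses i, j resolve."""
    occ = defaultdict(list)                  # literal -> clauses containing it
    for idx, clause in enumerate(clauses):
        for lit in clause:
            occ[lit].append(idx)
    edges = defaultdict(int)
    for lit, holders in occ.items():
        if lit > 0:                          # visit each pair (x, -x) once
            for i in holders:
                for j in occ.get(-lit, []):
                    if i != j:               # skip tautological self-pairs
                        edges[(min(i, j), max(i, j))] += 1
    return edges

cnf = [{1, 2}, {-1, 3}, {-2, -3}]
print(dict(resolution_graph(cnf)))           # {(0, 1): 1, (0, 2): 1, (1, 2): 1}
```

    Partitions connected by many heavy edges are "close" and are merged early, so the SAT solver is given small, tightly connected subformulas first, which is where small unsatisfiable cores tend to be found.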

    On Tackling the Limits of Resolution in SAT Solving

    The practical success of Boolean Satisfiability (SAT) solvers stems from the CDCL (Conflict-Driven Clause Learning) approach to SAT solving. However, from a propositional proof complexity perspective, CDCL is no more powerful than the resolution proof system, for which many hard examples exist. This paper proposes a new problem transformation, which enables reducing the decision problem for formulas in conjunctive normal form (CNF) to the problem of solving maximum satisfiability over Horn formulas. Given the new transformation, the paper proves a polynomial bound on the number of MaxSAT resolution steps for pigeonhole formulas. This result is in clear contrast with earlier results on the length of proofs of MaxSAT resolution for pigeonhole formulas. The paper also establishes the same polynomial bound in the case of modern core-guided MaxSAT solvers. Experimental results, obtained on CNF formulas known to be hard for CDCL SAT solvers, show that these can be efficiently solved with modern MaxSAT solvers.
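
    For reference, the pigeonhole formulas mentioned above are the standard CNF family below (a textbook definition, not specific to this paper); resolution proofs of their unsatisfiability are of exponential length, which is what makes them hard for CDCL solvers.

```latex
% PHP^{n+1}_n: n+1 pigeons cannot fit into n holes; x_{i,j} = "pigeon i in hole j".
\begin{align*}
  \bigwedge_{i=1}^{n+1} \Big( \bigvee_{j=1}^{n} x_{i,j} \Big)
  \;\wedge\;
  \bigwedge_{j=1}^{n} \; \bigwedge_{1 \le i < i' \le n+1} \big( \neg x_{i,j} \vee \neg x_{i',j} \big)
\end{align*}
```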

    Discovery of the acetyl cation, CH3CO+, in space and in the laboratory

    Using the Yebes 40m and IRAM 30m radio telescopes, we detected two series of harmonically related lines in space that can be fitted to a symmetric rotor. The lines have been seen towards the cold dense cores TMC-1, L483, L1527, and L1544. High-level ab initio calculations indicate that the best possible candidate is the acetyl cation, CH3CO+, which is the most stable product resulting from the protonation of ketene. We have produced this species in the laboratory and observed its rotational transitions from Ju = 10 up to Ju = 27. Hence, we report the discovery of CH3CO+ in space based on our observations, theoretical calculations, and laboratory experiments. The derived rotational and distortion constants allow us to predict the spectrum of CH3CO+ with high accuracy up to 500 GHz. We derive an abundance ratio N(H2CCO)/N(CH3CO+) = 44. The high abundance of the protonated form of H2CCO is due to the high proton affinity of the neutral species. The other isomer, H2CCOH+, is found to lie 178.9 kJ/mol above CH3CO+. The observed intensity ratio between the K=0 and K=1 lines, 2.2, strongly suggests that the A and E symmetry states have undergone interconversion due to collisions with H and/or H2, or during their formation through the reaction of H3+ with H2CCO.
    Comment: Accepted for publication in A&A Letters.
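
    The prediction of line frequencies from the fitted constants follows the standard symmetric-top expression below (a textbook formula, quoted here for context rather than taken from the paper), where B is the rotational constant and D_J, D_JK are centrifugal distortion constants:

```latex
% Standard symmetric-top transition frequencies (Delta K = 0): fitting B, D_J,
% and D_JK to the observed lines lets one extrapolate the spectrum to higher J.
\begin{equation*}
  \nu_{J+1 \leftarrow J,\,K} \;=\; 2B\,(J+1) \;-\; 4D_J\,(J+1)^3 \;-\; 2D_{JK}\,(J+1)\,K^2
\end{equation*}
```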

    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. The coordinate resolution was measured for all types of chambers and falls in the range 47 to 243 microns. The efficiencies for local charged-track triggering and for hit and segment reconstruction were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    Performance and Operation of the CMS Electromagnetic Calorimeter

    The operation and general performance of the CMS electromagnetic calorimeter using cosmic-ray muons are described. These muons were recorded after the closure of the CMS detector in late 2008. The calorimeter is made of lead tungstate crystals, and the overall status of the 75848 channels corresponding to the barrel and endcap detectors is reported. The stability of crucial operational parameters, such as high voltage, temperature and electronic noise, is summarised, and the performance of the light monitoring system is presented.

    Relationship between haemagglutination-inhibiting antibody titres and clinical protection against influenza: development and application of a Bayesian random-effects model

    Background: Antibodies directed against haemagglutinin, measured by the haemagglutination inhibition (HI) assay, are essential to protective immunity against influenza infection. An HI titre of 1:40 is generally accepted to correspond to a 50% reduction in the risk of contracting influenza in a susceptible population, but limited attempts have been made to further quantify the association between HI titre and protective efficacy. Methods: We present a model, using a meta-analytical approach, that estimates the level of clinical protection against influenza at any HI titre level. Source data were derived from a systematic literature review that identified 15 studies, representing a total of 5899 adult subjects and 1304 influenza cases with interval-censored information on HI titre. The parameters of the relationship between HI titre and clinical protection were estimated using Bayesian inference, accounting for random effects and censoring in the available information. Results: A significant and positive relationship between HI titre and clinical protection against influenza was observed in all tested models. This relationship was found to be similar irrespective of the type of viral strain (A or B) and the vaccination status of the individuals. Conclusion: Although limitations in the data used should not be overlooked, the relationship derived in this analysis provides a means to predict the efficacy of inactivated influenza vaccines when only immunogenicity data are available. This relationship can also be useful for comparing the efficacy of different influenza vaccines based on their immunological profile.
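
    The paper's full Bayesian specification is not reproduced in the abstract; as a rough, hypothetical illustration of the kind of titre-protection curve such a model yields (the logistic form and slope value below are our assumptions, with the curve anchored at the conventional 1:40 ≈ 50% point), consider:

```python
# A rough illustration (not the paper's fitted model): a logistic curve in
# log titre, anchored so that a titre of 1:40 gives 50% protection. The
# slope beta below is an arbitrary assumption for demonstration only.
import math

def protection(titre, alpha=math.log(40), beta=1.5):
    """Estimated clinical protection at a given HI titre (1:titre)."""
    return 1.0 / (1.0 + math.exp(-beta * (math.log(titre) - alpha)))

for t in (10, 20, 40, 80, 160, 320):
    print(f"1:{t:<4} -> {protection(t):.0%} protection")
# 1:40 prints 50% by construction; higher titres approach full protection.
```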

    Brain classification reveals the right cerebellum as the best biomarker of dyslexia

    Background: Developmental dyslexia is a specific cognitive disorder in reading acquisition that has genetic and neurological origins. Despite histological evidence for brain differences in dyslexia, we recently demonstrated that, in a large cohort of subjects, no differences between control and dyslexic readers can be found at the macroscopic level (MRI voxel), because of large variances in local brain volumes. In the present study, we aimed to find the brain areas that most discriminate dyslexic from control normal readers despite the large variance across subjects. After segmenting brain grey matter, normalizing brain size and shape, and modulating the voxels' content, normal readers' brains were used to build a 'typical' brain via bootstrapped confidence intervals. Each dyslexic reader's brain was then classified independently at each voxel as being within or outside the normal range. We used this simple strategy to build a brain map showing regional percentages of differences between groups. The significance of this map was then assessed using a randomization technique.
    Results: The right cerebellar declive and the right lentiform nucleus were the two areas that differed most significantly between groups, with 100% of the dyslexic subjects (N = 38) falling outside the control group's (N = 39) 95% confidence interval boundaries. The clinical relevance of this result was assessed by examining cognitive, brain-based differences among dyslexic subgroups in comparison with normal readers' performances. The strongest difference between dyslexic subgroups was observed between subjects with lower cerebellar declive (LCD) grey matter volumes than controls and subjects with higher cerebellar declive (HCD) grey matter volumes than controls. Dyslexic subjects with LCD volumes performed worse than subjects with HCD volumes in phonological and lexicon-related tasks. Furthermore, cerebellar and lentiform grey matter volumes interacted in dyslexic subjects, so that lower and higher lentiform grey matter volumes compared to controls modulated the phonological and lexical performances differently. The best performances (observed in controls) corresponded to an optimal grey matter volume, and performance dropped for higher or lower volumes.
    Conclusion: These results provide evidence for the existence of various subtypes of dyslexia characterized by different brain phenotypes. In addition, behavioural analyses suggest that these brain phenotypes relate to different deficits of automatization of language-based processes, such as grapheme/phoneme correspondence and/or rapid access to lexicon entries.
    Article available at: http://www.biomedcentral.com/1471-2202/10/6
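
    The classification strategy described above translates naturally into a per-voxel bootstrap. The sketch below is a toy reconstruction under assumed array shapes and helper names (ours, not the authors' code), using a bootstrapped confidence interval of the control mean as the 'typical' range:

```python
# Toy sketch of per-voxel classification against a bootstrapped 'typical'
# brain built from controls. Shapes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(controls, n_boot=1000, alpha=0.05):
    """controls: (n_subjects, n_voxels) grey-matter values.
    Returns per-voxel (low, high) bounds of a bootstrapped CI of the mean."""
    n = controls.shape[0]
    means = np.stack([controls[rng.integers(0, n, n)].mean(axis=0)
                      for _ in range(n_boot)])
    return (np.quantile(means, alpha / 2, axis=0),
            np.quantile(means, 1 - alpha / 2, axis=0))

def atypical_map(patients, low, high):
    """Fraction of patients outside the typical range, per voxel."""
    outside = (patients < low) | (patients > high)
    return outside.mean(axis=0)   # regional percentage of group differences

# Toy data: 39 controls, 38 patients, 5 voxels (real data: ~1e6 voxels).
controls = rng.normal(0.5, 0.1, (39, 5))
patients = rng.normal(0.4, 0.1, (38, 5))
low, high = bootstrap_ci(controls)
print(atypical_map(patients, low, high))
```

    The significance of the resulting map would then be assessed by repeating the procedure with group labels randomly permuted, as the randomization technique in the abstract suggests.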
