
    Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin

    In the Bitcoin system, participants are rewarded for solving cryptographic puzzles. In order to receive more consistent rewards over time, some participants organize mining pools and split the rewards from the pool in proportion to each participant's contribution. However, several attacks threaten the ability to participate in pools. The block withholding (BWH) attack makes the pool reward system unfair by letting malicious participants receive unearned wages while only pretending to contribute work. When two pools launch BWH attacks against each other, they encounter the miner's dilemma: in a Nash equilibrium, the revenue of both pools is diminished. In another attack, called selfish mining, an attacker can unfairly earn extra rewards by deliberately generating forks. In this paper, we propose a novel attack called a fork after withholding (FAW) attack. FAW is not just another attack: the reward for an FAW attacker is always equal to or greater than that for a BWH attacker, and it can be used up to four times more often per pool than a BWH attack. When considering multiple pools - the current state of the Bitcoin network - the extra reward from an FAW attack is about 56% greater than that from a BWH attack. Furthermore, when two pools execute FAW attacks on each other, the miner's dilemma may not hold: under certain circumstances, the larger pool can consistently win. More importantly, an FAW attack, while using intentional forks, does not suffer from the practicality issues that afflict selfish mining. We also discuss partial countermeasures against the FAW attack, but finding a cheap and efficient countermeasure remains an open problem. As a result, we expect to see FAW attacks among mining pools.
    Comment: This paper is an extended version of a paper accepted to ACM CCS 2017
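    As a rough illustration of why FAW dominates BWH, the Monte Carlo sketch below compares an attacker's per-block reward under both attacks in a deliberately simplified single-victim-pool model. The parameters alpha (the victim pool's honest power), beta (total attacker power), tau (the fraction of attacker power infiltrating the pool), and c (the probability that the attacker's fork wins a block race) are illustrative stand-ins, not the paper's exact notation or analysis.

        import random

        def attacker_reward(alpha, beta, tau, c, attack="FAW", rounds=200_000):
            """Expected attacker reward per main-chain block (simplified model)."""
            b_in, b_out = beta * tau, beta * (1 - tau)  # infiltrating / open power
            others = 1.0 - alpha - beta                 # rest of the network
            cut = b_in / (alpha + b_in)                 # attacker's share of pool rewards
            reward, blocks = 0.0, 0
            while blocks < rounds:
                r = random.random()
                if r < b_out:                           # attacker mines openly: full reward
                    reward += 1.0; blocks += 1
                elif r < b_out + alpha:                 # pool's honest miners find a block
                    reward += cut; blocks += 1
                elif r < b_out + alpha + b_in:          # infiltrating miner finds a block
                    if attack == "BWH":
                        continue                        # withheld forever: pure sabotage
                    while True:                         # FAW: withhold and wait
                        r2 = random.random()
                        if r2 < others:                 # external block appears: fork race
                            if random.random() < c:
                                reward += cut           # the pool's withheld block wins
                            blocks += 1; break
                        elif r2 < others + alpha:       # pool finds first: discard withheld
                            reward += cut; blocks += 1; break
                        elif r2 < others + alpha + b_out:
                            reward += 1.0; blocks += 1; break
                        # infiltrating power finds again: keep waiting (simplification)
                else:                                   # an external miner finds the block
                    blocks += 1
            return reward / blocks

        for attack in ("BWH", "FAW"):
            print(attack, round(attacker_reward(0.2, 0.2, 0.5, 0.5, attack=attack), 4))

    Even with c = 0.5, the FAW reward in this toy model is never below the BWH reward: every withheld block that BWH simply discards still has some chance of winning a fork under FAW.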

    A Diagrammatic Approach to Crystalline Color Superconductivity

    We present a derivation of the gap equation for the crystalline color superconducting phase of QCD which begins from a one-loop Schwinger-Dyson equation written using a Nambu-Gorkov propagator modified to describe the spatially varying condensate. Some aspects of previous variational calculations become more straightforward when rederived from this diagrammatic starting point. This derivation also provides a natural base from which to generalize the analysis to include quark masses, nontrivial crystal structures, gluon propagation at asymptotic densities, and nonzero temperature. In this paper, we analyze the effects of nonzero temperature on the crystalline color superconducting phase.
    Comment: 15 pages, 2 eps figures
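    Schematically (this is a transcription for orientation, not the paper's exact equations), the diquark condensate takes the single-plane-wave form

        \langle \psi(x)\, C\gamma_5\, \psi(x) \rangle \;\propto\; \Delta\, e^{2i\,\mathbf{q}\cdot\mathbf{x}},

    and the one-loop Schwinger-Dyson equation closes on the anomalous (off-diagonal) component S_{21} of the modified Nambu-Gorkov propagator,

        \Delta(k) \;\propto\; i g^2 \int \frac{d^4 p}{(2\pi)^4}\; \gamma^\mu\, S_{21}(p)\, \gamma^\nu\, D_{\mu\nu}(k-p),

    where D_{\mu\nu} is the gluon propagator and S_{21} pairs quarks with momenta p+q and -p+q, so the condensate carries total momentum 2q.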

    Opening the Crystalline Color Superconductivity Window

    Cold dense quark matter is in a crystalline color superconducting phase wherever pairing occurs between species of quarks with chemical potentials whose difference \delta\mu lies within an appropriate window. If the interaction between quarks is modeled as point-like, this window is rather narrow. We show that when the interaction between quarks is modeled as single-gluon exchange, the window widens by about a factor of ten at accessible densities and by much larger factors at higher density. This striking enhancement reflects the increasingly (1+1)-dimensional nature of the physics at weaker and weaker coupling. Our results indicate that crystalline color superconductivity is a generic feature of the phase diagram of cold dense quark matter, occurring wherever one finds quark matter which is not in the color-flavor locked phase. If it occurs within the cores of compact stars, a crystalline color superconducting region may provide a new locus for glitch phenomena.
    Comment: 14 pages, 2 figures
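    For orientation, the narrow point-like baseline the abstract refers to is the standard weak-coupling LOFF window (quoted here from the conventional analysis, not from this paper):

        \frac{\Delta_0}{\sqrt{2}} \approx 0.707\,\Delta_0 \;<\; \delta\mu \;<\; \delta\mu_2 \approx 0.754\,\Delta_0, \qquad |\mathbf{q}| \approx 1.2\,\delta\mu_2,

    where \Delta_0 is the BCS gap at \delta\mu = 0. The single-gluon-exchange result leaves the lower edge essentially unchanged while pushing \delta\mu_2 upward, widening the window by the factors quoted above.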

    On the Applicability of Weak-Coupling Results in High Density QCD

    Quark matter at asymptotically high baryon chemical potential is in a color superconducting state characterized by a gap Delta. We demonstrate that although present weak-coupling calculations of Delta are formally correct for mu -> Infinity, the contributions which have to this point been neglected are large enough that present results can only be trusted for mu >> mu_c ~ 10^8 MeV. We make this argument by using the gauge dependence of the present calculation as a diagnostic tool. It is known that the present calculation yields a gauge invariant result for mu -> Infinity; we show, however, that the gauge dependence of this result only begins to decrease for mu > mu_c, and conclude that the result can certainly not be trusted for mu < mu_c. In an appendix, we set up the calculation of the influence of the Meissner effect on the magnitude of the gap. This contribution to Delta is, however, much smaller than the neglected contributions whose absence we detect via the resulting gauge dependence.
    Comment: 21 pages, 3 figures, uses LaTeX2e and ReVTeX, updated figures, made minor text changes
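    For reference, the weak-coupling result under discussion has the standard parametric form (quoted for orientation; the prefactor b is where the neglected, gauge-dependent contributions enter):

        \Delta \;\sim\; b\, \mu\, g^{-5}\, \exp\!\left( -\frac{3\pi^2}{\sqrt{2}\, g(\mu)} \right).

    The 1/g (rather than 1/g^2) in the exponent arises from long-range magnetic gluon exchange; the abstract's point is that corrections to b remain large, and detectably gauge dependent, until \mu \gg \mu_c \sim 10^8 MeV.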

    Air quality evaluation of London Paddington train station

    Enclosed railway stations hosting diesel trains are at risk of reduced air quality as a result of exhaust emissions that may endanger passengers and workers. Air quality measurements were conducted inside London Paddington Station, a semi-enclosed railway station where 70% of trains are powered by diesel engines. Particulate matter (PM2.5) mass was measured at five station locations. PM size, PM number, oxides of nitrogen (NOx), and sulfur dioxide (SO2) were measured at two station locations. Paddington Station's hourly mean PM2.5 mass concentrations averaged 16 μg/m3 [min 2, max 68]. Paddington Station's hourly mean NO2 concentrations averaged 73 ppb [min 49, max 120], and SO2 concentrations averaged 25 ppb [min 15, max 37]. While UK train stations are not required to comply with air quality standards, there were five instances where the hourly mean NO2 concentrations exceeded the EU hourly mean limits (106 ppb) for outdoor air quality. PM2.5, SO2, and NO2 concentrations were compared against Marylebone, a busy London roadside site 1.5 km from the station. The comparisons indicated that the air inside the train station was more polluted than the air at the nearby roadside. PM2.5 for at least one measurement location within Paddington Station was shown to be statistically higher (P-value < 0.05) than Marylebone on 3 out of 4 days. Measured NO2 within Paddington Station was statistically higher than Marylebone on 4 out of 5 days. Measured SO2 within Paddington Station was statistically higher than Marylebone on all 3 days.
    We thank the Engineering and Physical Sciences Research Council (EP/F034350/1) for funding the Energy Efficient Cities Initiative and the Schiff Foundation for doctoral studentship funding.
    This is the final version of the article. It first appeared from IOP via http://dx.doi.org/10.1088/1748-9326/10/9/09401
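    The day-by-day comparisons described above amount to testing whether hourly means at a station location exceed those at the roadside site. A minimal sketch of one such test follows, with made-up numbers and the assumption of a one-sided Welch t-test at the 0.05 level (the paper's exact test and grouping may differ):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        station_pm25 = rng.normal(16, 8, size=24)    # hypothetical hourly ug/m3, one day
        roadside_pm25 = rng.normal(12, 6, size=24)   # hypothetical Marylebone data

        # one-sided Welch t-test: is the station mean higher than the roadside mean?
        t, p = stats.ttest_ind(station_pm25, roadside_pm25,
                               equal_var=False, alternative="greater")
        print(f"t = {t:.2f}, p = {p:.3f}",
              "-> statistically higher" if p < 0.05 else "-> no evidence")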

    Stratosphere‐troposphere coupling and annular mode variability in chemistry‐climate models

    The internal variability and coupling between the stratosphere and troposphere in CCMVal‐2 chemistry‐climate models are evaluated through analysis of the annular mode patterns of variability. Computation of the annular modes in long data sets with secular trends requires refinement of the standard definition of the annular mode, and a more robust procedure that allows for slowly varying trends is established and verified. The spatial and temporal structure of the models' annular modes is then compared with that of reanalyses. As a whole, the models capture the key features of observed intraseasonal variability, including the sharp vertical gradients in structure between stratosphere and troposphere, the asymmetries in the seasonal cycle between the Northern and Southern hemispheres, and the coupling between the polar stratospheric vortices and tropospheric midlatitude jets. It is also found that the annular mode variability changes little in time throughout simulations of the 21st century. There are, however, both common biases and significant differences in performance in the models. In the troposphere, the annular mode in models is generally too persistent, particularly in the Southern Hemisphere summer, a bias similar to that found in CMIP3 coupled climate models. In the stratosphere, the periods of peak variance and coupling with the troposphere are delayed by about a month in both hemispheres. The relationship between increased variability of the stratosphere and increased persistence in the troposphere suggests that some tropospheric biases may be related to stratospheric biases and that a well‐simulated stratosphere can improve simulation of tropospheric intraseasonal variability.
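    A minimal sketch of the kind of calculation involved: the annular mode is the leading EOF of zonal-mean anomalies, computed here after subtracting a running mean so that a slowly varying trend does not leak into the mode. This is a crude stand-in for the more robust procedure the abstract describes; the array shapes, the 90-day window, and the synthetic data are illustrative assumptions.

        import numpy as np

        def annular_mode(zg, window=90):
            """zg: (time, lat) zonal-mean field; returns (spatial pattern, index)."""
            kernel = np.ones(window) / window
            trend = np.apply_along_axis(
                lambda x: np.convolve(x, kernel, mode="same"), 0, zg)
            anom = zg - trend                      # remove slowly varying background
            anom -= anom.mean(axis=0)              # remove the time mean
            u, s, vt = np.linalg.svd(anom, full_matrices=False)
            pattern = vt[0]                        # leading EOF (latitude structure)
            index = u[:, 0] * s[0]                 # principal-component time series
            return pattern, index / index.std()

        # synthetic demo: 40 years of daily data on a coarse latitude grid
        # (a real calculation would also area-weight by sqrt(cos(latitude)))
        zg = np.cumsum(np.random.randn(14600, 1), 0) * 0.01 + np.random.randn(14600, 32)
        pattern, index = annular_mode(zg)
        print(pattern.shape, index.shape)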

    Results of matching valve and root repair to aortic valve and root pathology

    Objective: For patients with aortic root pathology and aortic valve regurgitation, aortic valve replacement is problematic because no durable bioprosthesis exists, and mechanical valves require lifetime anticoagulation. This study sought to assess outcomes of combined aortic valve and root repair, including comparison with matched bioprosthesis aortic valve replacement.
    Methods: From November 1990 to January 2005, 366 patients underwent modified David reimplantation (n = 72), root remodeling (n = 72), or valve repair with sinotubular junction tailoring (n = 222). Active follow-up was 99% complete, with a mean of 5.6 ± 4.0 years (maximum 17 years); follow-up for vital status averaged 8.5 ± 3.6 years (maximum 19 years). Propensity-adjusted models were developed for fair comparison of outcomes.
    Results: Thirty-day and 5-, 10-, and 15-year survivals were 98%, 86%, 74%, and 58%, respectively, similar to those of the US matched population and better than those after bioprosthesis aortic valve replacement. Propensity-score-adjusted survival was similar across procedures (P > .3). Freedom from reoperation at 30 days and 5 and 10 years was 99%, 92%, and 89%, respectively, and was similar across procedures (P > .3) after propensity-score adjustment. Patients with tricuspid aortic valves were more likely to be free of reoperation than those with bicuspid valves at 10 years (93% vs 77%, P = .002), equivalent to bioprosthesis aortic valve replacement and superior after 12 years. Bioprostheses increasingly deteriorated after 7 years, and hazard functions for reoperation crossed at 7 years.
    Conclusions: Valve preservation (rather than replacement) and matching root procedures have excellent early and long-term results, with increasing survival benefit at 7 years and fewer reoperations by 12 years. We recommend this procedure for experienced surgical teams.
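    The propensity adjustment works roughly as follows: model each patient's probability of receiving repair rather than replacement from baseline covariates, then adjust the survival comparison for that score. Below is a minimal sketch with hypothetical columns and simulated data; the study's actual covariates and modeling are more extensive.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 500
        df = pd.DataFrame({
            "age": rng.normal(60, 12, n),
            "bicuspid": rng.integers(0, 2, n),
            "repair": rng.integers(0, 2, n),    # 1 = valve repair, 0 = bioprosthesis
            "years": rng.exponential(8, n),     # follow-up time
            "died": rng.integers(0, 2, n),
        })

        # propensity to receive repair, estimated from baseline covariates
        ps = LogisticRegression().fit(df[["age", "bicuspid"]], df["repair"])
        df["propensity"] = ps.predict_proba(df[["age", "bicuspid"]])[:, 1]

        # survival model for death, adjusted for the propensity score
        cph = CoxPHFitter().fit(df[["years", "died", "repair", "propensity"]],
                                duration_col="years", event_col="died")
        cph.print_summary()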

    The Medical Segmentation Decathlon

    International challenges have become the de facto standard for comparative assessment of image analysis algorithms given a specific task. Segmentation is so far the most widely investigated medical image processing task, but the various segmentation challenges have typically been organized in isolation, such that algorithm development was driven by the need to tackle a single specific clinical problem. We hypothesized that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. To investigate the hypothesis, we organized the Medical Segmentation Decathlon (MSD) - a biomedical image analysis challenge, in which algorithms compete in a multitude of both tasks and modalities. The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data and small objects. The MSD challenge confirmed that algorithms with consistently good performance on a set of tasks preserved their good average performance on a different set of previously unseen tasks. Moreover, by monitoring the MSD winner for two years, we found that this algorithm continued generalizing well to a wide range of other clinical problems, further confirming our hypothesis. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to non-AI experts.
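    Segmentation challenges such as the MSD score entries primarily with overlap metrics, with the Dice coefficient as the usual headline number (the MSD also used a surface-distance metric, omitted here). A minimal version for binary masks:

        import numpy as np

        def dice(pred: np.ndarray, truth: np.ndarray) -> float:
            """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            denom = pred.sum() + truth.sum()
            if denom == 0:
                return 1.0        # both masks empty: treat as perfect agreement
            return 2.0 * np.logical_and(pred, truth).sum() / denom

        # mean Dice across tasks is the kind of surrogate for generalizability
        # that conclusion (2) above refers to
        pred = np.zeros((8, 8), bool);  pred[2:6, 2:6] = True
        truth = np.zeros((8, 8), bool); truth[3:7, 3:7] = True
        print(f"Dice = {dice(pred, truth):.3f}")   # 0.562 for this toy pair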