
    StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks

    This paper presents a systematic solution to the persistent problem of buffer overflow attacks. Buffer overflow attacks gained notoriety in 1988 as part of the Morris Worm incident on the Internet. While it is fairly simple to fix individual buffer overflow vulnerabilities, buffer overflow attacks continue to this day. Hundreds of attacks have been discovered, and while most of the obvious vulnerabilities have now been patched, more sophisticated buffer overflow attacks continue to emerge. We describe StackGuard: a simple compiler technique that virtually eliminates buffer overflow vulnerabilities with only modest performance penalties. Privileged programs recompiled with the StackGuard compiler extension no longer yield control to the attacker, but rather enter a fail-safe state. These programs require no source code changes at all, and are binary-compatible with existing operating systems and libraries. We describe the compiler technique (a simple patch to gcc), as well as a set of variations on the technique that trade off penetration resistance against performance. We present experimental results on both the penetration resistance and the performance impact of the technique.
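StackGuard's core mechanism, placing a random "canary" word between a function's local buffers and its return address and verifying it in the function epilogue, can be sketched conceptually. The Python simulation below is purely illustrative (the frame layout, `CANARY`, and `call_with_canary` are inventions for this sketch); the real implementation lives in gcc-emitted prologue and epilogue code.

```python
import os

CANARY = os.urandom(4)  # random canary word chosen once at program start

def call_with_canary(buf_size, data):
    """Simulate a stack frame laid out as: buffer | canary | return address."""
    frame = bytearray(buf_size) + bytearray(CANARY) + bytearray(b"RETADDR!")
    # An unchecked copy, like strcpy(), writes past the buffer if data is too long.
    frame[:len(data)] = data
    # Function epilogue: verify the canary before trusting the return address.
    if bytes(frame[buf_size:buf_size + 4]) != CANARY:
        raise RuntimeError("stack smashing detected")  # the fail-safe state
    return bytes(frame[buf_size + 4:])  # the (intact) return address

# A write that fits leaves the canary intact...
assert call_with_canary(8, b"hello") == b"RETADDR!"
# ...while an overflow clobbers the canary and is caught before the return.
try:
    call_with_canary(8, b"A" * 16)
except RuntimeError as e:
    print(e)  # stack smashing detected
```

Because the attacker cannot overwrite the return address without also overwriting the intervening canary, a mismatch at return time reveals the attack before control is transferred.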

    Hospitalized Infection as a Trigger for Acute Ischemic Stroke

    Acute triggers for ischemic stroke, which may include infection, are understudied, as is whether background cardiovascular disease (CVD) risk modifies such triggering. We hypothesized that infection increases acute stroke risk, especially among those with low CVD risk.

    Joint estimation of crown of thorns (Acanthaster planci) densities on the Great Barrier Reef

    Crown-of-thorns starfish (CoTS; Acanthaster spp.) are an outbreaking pest on many Indo-Pacific coral reefs, causing substantial ecological and economic damage. Despite ongoing CoTS research, there remain critical gaps in observing CoTS populations and accurately estimating their numbers, greatly limiting understanding of the causes and sources of CoTS outbreaks. Here we address two of these gaps by (1) estimating the detectability of adult CoTS on typical underwater visual count (UVC) surveys using covariates and (2) inter-calibrating multiple data sources to estimate CoTS densities within the Cairns sector of the Great Barrier Reef (GBR). We find that, on average, CoTS detectability is high at 0.82 [0.77, 0.87] (median highest posterior density (HPD) and [95% uncertainty intervals]), with CoTS disc width having the greatest influence on detection. Integrating this information with coincident surveys from alternative sampling programs, we estimate that CoTS densities in the Cairns sector of the GBR averaged 44 [41, 48] adults per hectare in 2014.
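The relationship between raw survey counts, detectability, and true density can be illustrated with a back-of-the-envelope correction. This sketch is not the paper's hierarchical model; the survey count and area below are invented, while the 0.82 median detectability and the roughly 44 adults per hectare figure come from the abstract.

```python
def corrected_density(count, area_ha, detectability):
    """Estimate true density (individuals/ha) from a count made with imperfect detection."""
    observed_density = count / area_ha
    return observed_density / detectability

# With the reported median detectability of 0.82, observing 36 adults on a
# 1 ha survey implies a true density of roughly 36 / 0.82 ≈ 44 per hectare.
print(round(corrected_density(36, 1.0, 0.82)))  # 44
```

Ignoring detectability would bias density estimates low by the same factor, which is why the paper estimates detection probability before inter-calibrating the survey programs.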

    The human ACC2 CT-domain C-terminus is required for full functionality and has a novel twist

    Inhibition of acetyl-CoA carboxylase (ACC) may prevent lipid-induced insulin resistance and type 2 diabetes, making the enzyme an attractive pharmaceutical target. Although the enzyme is highly conserved amongst animals, only the yeast enzyme structure is available for rational drug design. The use of biophysical assays has permitted the identification of a specific C-terminal truncation of the 826-residue human ACC2 carboxyl transferase (CT) domain that is both functionally competent to bind inhibitors and crystallizes in their presence. This C-terminal truncation led to the determination of the human ACC2 CT domain–CP-640186 complex crystal structure, which revealed distinctions from the yeast-enzyme complex. The human ACC2 CT-domain C-terminus comprises three intertwined α-helices that extend outwards from the enzyme on the opposite side to the ligand-binding site. Differences in the observed inhibitor conformation between the yeast and human structures are caused by differing residues in the binding pocket.

    How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers

    Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a "total approach to rehabilitation", combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies. RERC work also now often views users as integrated into an interdependent society through technologies that people both with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through optimally timed learning, exercise, and plasticity (rather than as static). We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.

    Measurement of the production of a W boson in association with a charm quark in pp collisions at √s = 7 TeV with the ATLAS detector

    The production of a W boson in association with a single charm quark is studied using 4.6 fb⁻¹ of pp collision data at √s = 7 TeV collected with the ATLAS detector at the Large Hadron Collider. In events in which a W boson decays to an electron or muon, the charm quark is tagged either by its semileptonic decay to a muon or by the presence of a charmed meson. The integrated and differential cross sections as a function of the pseudorapidity of the lepton from the W-boson decay are measured. Results are compared to the predictions of next-to-leading-order QCD calculations obtained from various parton distribution function parameterisations. The ratio of the strange-to-down sea-quark distributions is determined to be 0.96 (+0.26, −0.30) at Q² = 1.9 GeV², which supports the hypothesis of an SU(3)-symmetric composition of the light-quark sea. Additionally, the cross-section ratio σ(W⁺ + c̄)/σ(W⁻ + c) is compared to the predictions obtained using parton distribution function parameterisations with different assumptions about the s–s̄ quark asymmetry.

    Measures of Galaxy Environment - I. What is "Environment"?

    The influence of a galaxy's environment on its evolution has been studied and compared extensively in the literature, although differing techniques are often used to define environment. Most methods fall into two broad groups: those that use nearest neighbours to probe the underlying density field and those that use fixed apertures. The differences between the two inhibit a clean comparison between analyses and leave open the possibility that, even with the same data, different properties are actually being measured. In this work we apply twenty published environment definitions to a common mock galaxy catalogue constrained to look like the local Universe. We find that nearest-neighbour-based measures best probe the internal densities of high-mass haloes, while at low masses the inter-halo separation dominates and acts to smooth out local density variations. The resulting correlation also shows that nearest-neighbour galaxy environment is largely independent of dark matter halo mass. Conversely, aperture-based methods that probe super-halo scales accurately identify high-density regions corresponding to high-mass haloes. Both methods show that galaxies in dense environments tend to be redder, with the exception of the largest apertures, although these are the strongest at recovering the background dark matter environment. We also warn against using photometric redshifts to define environment in all but the densest regions. When considering environment there are two regimes: the 'local environment' internal to a halo, best measured with nearest neighbours, and the 'large-scale environment' external to a halo, best measured with apertures. This leads to the conclusion that there is no universal environment measure and that the most suitable method depends on the scale being probed.
    Comment: 14 pages, 9 figures, 1 table; published in MNRAS
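The two families of measure compared in the paper can be sketched with toy 2-D positions. Everything below, including the helper names, is an illustrative invention rather than any of the twenty published definitions.

```python
import math

def knn_density(positions, i, k=3):
    """Density estimate from the distance to the k-th nearest neighbour (2-D)."""
    x0, y0 = positions[i]
    dists = sorted(math.hypot(x - x0, y - y0)
                   for j, (x, y) in enumerate(positions) if j != i)
    r_k = dists[k - 1]
    return k / (math.pi * r_k ** 2)  # k neighbours inside a circle of radius r_k

def aperture_density(positions, i, radius=1.0):
    """Density from counting companions within a fixed circular aperture."""
    x0, y0 = positions[i]
    n = sum(1 for j, (x, y) in enumerate(positions)
            if j != i and math.hypot(x - x0, y - y0) <= radius)
    return n / (math.pi * radius ** 2)
```

The contrast captures the paper's dichotomy: the nearest-neighbour estimator adapts its smoothing scale to the local galaxy separation (so it tracks intra-halo densities), whereas the aperture fixes its scale in advance (so it probes super-halo, large-scale structure).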

    Parton distributions for the LHC run II

    We present NNPDF3.0, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test. NNPDF3.0 uses a global dataset including HERA-II deep-inelastic inclusive cross-sections, the combined HERA charm data, jet production from ATLAS and CMS, vector boson rapidity and transverse momentum distributions from ATLAS, CMS and LHCb, W+c data from CMS and top quark pair production total cross sections from ATLAS and CMS. Results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections. To validate our methodology, we show that PDFs determined from pseudo-data generated from a known underlying law correctly reproduce the statistical distributions expected on the basis of the assumed experimental uncertainties. This closure test ensures that our methodological uncertainties are negligible in comparison to the generic theoretical and experimental uncertainties of PDF determination. This enables us to determine with confidence PDFs at different perturbative orders and using a variety of experimental datasets ranging from HERA-only up to a global set including the latest LHC results, all using precisely the same validated methodology. We explore some of the phenomenological implications of our results for the upcoming 13 TeV run of the LHC, in particular for Higgs production cross-sections.
    Comment: 151 pages, 69 figures; more typos corrected; published version
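The closure-test logic, fitting pseudo-data generated from a known underlying law and checking that the recovered values fluctuate as the quoted uncertainties predict, can be illustrated with a deliberately simple stand-in for a PDF fit: a constant "law" fitted by a sample mean. All numbers and names below are invented for the sketch.

```python
import random
import statistics

def closure_test(n_replicas=200, n_points=50, sigma=0.5, seed=0):
    """Fit pseudo-data from a known law and return the width of the pull distribution.

    For a faithful methodology, the pulls (fit minus truth, in units of the
    expected statistical error) should have a standard deviation close to 1.
    """
    rng = random.Random(seed)
    true_value = 3.0                        # the known underlying "law"
    expected_err = sigma / n_points ** 0.5  # error on the mean of n_points draws
    pulls = []
    for _ in range(n_replicas):
        pseudo_data = [rng.gauss(true_value, sigma) for _ in range(n_points)]
        fit = statistics.fmean(pseudo_data)  # the "fitted" value for this replica
        pulls.append((fit - true_value) / expected_err)
    return statistics.pstdev(pulls)

# A pull width near 1 indicates the quoted uncertainties can be trusted.
print(closure_test())
```

A pull width much larger than 1 would signal underestimated uncertainties; much smaller than 1, overestimated ones. The paper applies the same check to its full fitting machinery.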

    Developing Clinical and Research Priorities for Pain and Psychological Features in People With Patellofemoral Pain: An International Consensus Process With Health Care Professionals

    OBJECTIVE: To decide clinical and research priorities on pain features and psychological factors in persons with patellofemoral pain. DESIGN: Consensus development process. METHODS: We undertook a 3-stage process consisting of (1) updating 2 systematic reviews on quantitative sensory testing of pain features and psychological factors in patellofemoral pain, (2) an online survey of health care professionals and persons with patellofemoral pain, and (3) a consensus meeting with expert health care professionals. Participants responded that they agreed, disagreed, or were unsure that a pain feature or psychological factor was important in clinical practice or as a research priority. Greater than 70% participant agreement was required for an item to be considered important in clinical practice or a research priority. RESULTS: Thirty-five health care professionals completed the survey, 20 of whom attended the consensus meeting. Thirty persons with patellofemoral pain also completed the survey. The review identified 5 pain features and 9 psychological factors; none reached 70% agreement in the patient survey, so all were considered at the meeting. After the meeting, pain catastrophizing, fear-avoidance beliefs, and pain self-efficacy were the only factors considered clinically important. All but the thermal pain tests and 3 psychological factors were considered research priorities. CONCLUSION: Pain catastrophizing, pain self-efficacy, and fear-avoidance beliefs were factors considered important in treatment planning, clinical examination, and prognostication. Quantitative sensory tests for pain were not regarded as clinically important but were deemed to be research priorities, as were most psychological factors.
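The abstract's greater-than-70% decision rule reduces to a simple tally. The sketch below uses invented vote counts; the function name and threshold parameter are ours, not the study's.

```python
def is_priority(responses, threshold=0.70):
    """True if the share of 'agree' responses strictly exceeds the threshold."""
    agree = sum(1 for r in responses if r == "agree")
    return agree / len(responses) > threshold

# 25 of 35 respondents agreeing is about 71%, just over the 70% bar.
votes = ["agree"] * 25 + ["disagree"] * 6 + ["unsure"] * 4
print(is_priority(votes))  # True
```

Note that "disagree" and "unsure" both count against the item under this rule, since only explicit agreement contributes to the numerator.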