
    Validation of a laboratory method for evaluating dynamic properties of reconstructed equine racetrack surfaces.

    Background: Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of the dynamic properties of a surface and of the factors that affect surface behavior. Objective: To develop a method for reconstructing race surfaces in the laboratory and to validate the method by comparison with racetrack measurements of dynamic surface properties. Methods: Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. Results: Most differences in dynamic surface properties between settings (racetrack vs. laboratory) were small relative to differences between surface material types (dirt vs. synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than on the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. Conclusions: Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties than testing equipment designed to simulate hoof impact (TTD). Potential relevance: Dynamic impact properties of race surfaces can be evaluated in a laboratory setting, allowing for further study of factors affecting surface behavior under controlled conditions.
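
A minimal sketch of the analysis step named above, relating Clegg Hammer readings and other surface variables to a TTD response via stepwise multiple linear regression. The column names (ttd_peak_force, clegg_decel, moisture, temperature), the input file, and the AIC-based forward-selection rule are illustrative assumptions, not the authors' actual protocol.

```python
# Forward stepwise linear regression on hypothetical surface-testing data.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, response, candidates):
    """Greedily add the predictor that most improves AIC; stop when none helps."""
    selected, best_aic = [], float("inf")
    while True:
        trials = [
            (sm.OLS(df[response], sm.add_constant(df[selected + [var]])).fit().aic, var)
            for var in candidates if var not in selected
        ]
        if not trials:
            break
        aic, var = min(trials)
        if aic >= best_aic:          # no further improvement in AIC
            break
        best_aic, selected = aic, selected + [var]
    return selected

# Hypothetical usage:
# df = pd.read_csv("track_tests.csv")
# print(forward_stepwise(df, "ttd_peak_force", ["clegg_decel", "moisture", "temperature"]))
```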

    Static Application-Level Race Detection in STM Haskell using Contracts

    Writing concurrent programs is a hard task, even when using high-level synchronization primitives such as transactional memories together with a functional language with well-controlled side effects such as Haskell, because the interference that processes generate for one another can occur at different levels and in very subtle ways. The problem occurs when a thread leaves or exposes the shared data in a state that is inconsistent with respect to the application logic or the real meaning of the data. In this paper, we propose to associate contracts with transactions, and we define a program transformation that makes it possible to extend static contract checking to STM Haskell. As a result, we are able to check statically that each transaction of an STM Haskell program handles the shared data in such a way that a given consistency property, expressed in the form of a user-defined boolean function, is preserved. This ensures that such harmful interference will not occur during the execution of the concurrent program. Comment: In Proceedings PLACES 2013, arXiv:1312.2218.
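
The paper's checking is static and targets STM Haskell; purely as an illustration of the underlying idea of a contract expressed as a user-defined boolean consistency function over shared data, here is a small runtime analogue in Python. All names (Account, total_preserved, transfer) are hypothetical and do not come from the paper.

```python
# Runtime analogue of attaching a boolean consistency contract to a transaction.
import copy
import threading
from typing import Callable

class Account:
    def __init__(self) -> None:
        self.checking = 100
        self.savings = 100

def total_preserved(before: Account, after: Account) -> bool:
    # Application-level invariant: updates must conserve the total balance.
    return before.checking + before.savings == after.checking + after.savings

_lock = threading.Lock()

def transaction(contract: Callable[[Account, Account], bool]):
    """Run the decorated update atomically, then assert the contract holds."""
    def wrap(update: Callable[[Account], None]):
        def run(acct: Account) -> None:
            with _lock:
                snapshot = copy.deepcopy(acct)
                update(acct)
                assert contract(snapshot, acct), "consistency contract violated"
        return run
    return wrap

@transaction(total_preserved)
def transfer(acct: Account) -> None:
    acct.checking -= 25
    acct.savings += 25

transfer(Account())  # passes; an update that loses money would trip the assert
```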

    Preventing Atomicity Violations with Contracts

    Software developers are expected to protect concurrent accesses to shared regions of memory with some mutual exclusion primitive that ensures atomicity for a sequence of program statements. This approach prevents data races but may fail to provide all the necessary correctness properties. The composition of correlated atomic operations without further synchronization may cause atomicity violations. Atomicity violations may be avoided by grouping the correlated atomic regions in a single larger atomic scope. Concurrent programs are particularly prone to atomicity violations when they use services provided by third-party packages or modules, since the programmer may fail to identify which services are correlated. In this paper we propose to use contracts for concurrency, where the developer of a module writes a set of contract terms that specify which methods are correlated and must be executed in the same atomic scope. These contracts are then used to verify the correctness of the main program with respect to its usage of the module(s). If a contract is well defined and complete, and the main program respects it, then the program is safe from atomicity violations with respect to that module. We also propose a static-analysis-based methodology for verifying contracts for concurrency, which we applied to some real-world software packages. The bug we found in Tomcat 6.0 was immediately acknowledged and corrected by its development team.
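
The failure mode described above, and the fix of widening the atomic scope, can be illustrated with a toy example; the sketch below is hypothetical and does not reproduce the paper's contract language or its static analysis.

```python
# Check-then-act on a thread-safe store: each call is atomic, but the
# composition is not, so another thread can insert the key between the
# contains() check and the put(); a wider atomic scope removes the race.
import threading

class KeyValueStore:
    def __init__(self) -> None:
        self._data = {}
        self._lock = threading.Lock()

    def contains(self, key):              # atomic in isolation
        with self._lock:
            return key in self._data

    def put(self, key, value):            # atomic in isolation
        with self._lock:
            self._data[key] = value

store = KeyValueStore()
outer = threading.Lock()                  # the "single larger atomic scope"

def put_if_absent_racy(key, value):
    if not store.contains(key):           # correlated atomic operations composed
        store.put(key, value)             # without further synchronization

def put_if_absent_safe(key, value):
    with outer:                           # group the correlated calls; every
        if not store.contains(key):       # correlated use of the store must go
            store.put(key, value)         # through this same scope
```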

    Insulin-like growth factors and related proteins in plasma and cerebrospinal fluids of HIV-positive individuals.

    Background: Clinically significant dysregulation of the insulin-like growth factor (IGF) family proteins occurs in HIV-infected individuals, but the details, including whether deficiencies in IGFs contribute to CNS dysfunction, are unknown. Methods: We measured the levels of IGF1, IGF2, IGFBP1, IGFBP2, and IGF2 receptor (IGF2R) in matching plasma and cerebrospinal fluid (CSF) samples of 107 HIV+ individuals from the CNS HIV Antiretroviral Therapy Effects Research (CHARTER) cohort and analyzed their associations with demographic and disease characteristics, as well as with the levels of several soluble inflammatory mediators (TNF-α, IL-6, IL-10, IL-17, IP-10, MCP-1, and progranulin). We also determined whether IGF1 or IGF2 deficiency is associated with HIV-associated neurocognitive disorder (HAND) and whether the levels of soluble IGF2R (an IGF scavenging receptor, which we have also found to be a cofactor for HIV infection in vitro) correlate with HIV viral load (VL). Results: There was a positive correlation between the levels of IGF-binding proteins (IGFBPs) and those of inflammatory mediators: between plasma IGFBP1 and IL-17 (β coefficient 0.28, P = 0.009), plasma IGFBP2 and IL-6 (β coefficient 0.209, P = 0.021), CSF IGFBP1 and TNF-α (β coefficient 0.394, P < 0.001), and CSF IGFBP2 and TNF-α (β coefficient 0.14, P < 0.001). As IGFBPs limit IGF availability, these results suggest that inflammation is a significant factor modulating IGF protein expression and availability in the setting of HIV infection. However, there was no significant association between HAND and reduced levels of plasma IGF1, IGF2, or CSF IGF1, which may reflect the limited power of our study. Interestingly, plasma IGF1 was significantly reduced in subjects on non-nucleoside reverse transcriptase inhibitor-based antiretroviral therapy (ART) compared to protease inhibitor-based therapy (174.1 ± 59.8 vs. 202.8 ± 47.3 ng/ml, P = 0.008), suggesting a scenario in which ART regimen-related toxicity can contribute to HAND. Plasma IGF2R levels were positively correlated with plasma VL (β coefficient 0.37, P = 0.021) and inversely correlated with current CD4+ T cell counts (β coefficient -0.04, P = 0.021), supporting our previous in vitro findings. Conclusions: Together, these results strongly implicate (1) an inverse relationship between inflammation and IGF growth factor availability and the contribution of IGF deficiencies to HAND, and (2) the role of IGF2R in HIV infection and as a surrogate biomarker for HIV VL.
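
As a concrete illustration of the kind of association analysis behind the β coefficients and P values reported above, a linear model relating one binding protein to one inflammatory mediator, adjusted for basic covariates, could be fitted as follows; the file and column names (charter_biomarkers.csv, csf_igfbp1, csf_tnfa, age, current_cd4) are assumptions, not the study's actual variables.

```python
# Hypothetical association analysis: CSF IGFBP1 vs. CSF TNF-alpha, adjusted.
import pandas as pd
import statsmodels.formula.api as smf

charter = pd.read_csv("charter_biomarkers.csv")              # hypothetical file
fit = smf.ols("csf_igfbp1 ~ csf_tnfa + age + current_cd4", data=charter).fit()
print(fit.params["csf_tnfa"], fit.pvalues["csf_tnfa"])       # beta coefficient and P value
```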

    Delays in Leniency Application: Is There Really a Race to the Enforcer's Door?

    This paper studies cartels’ strategic behavior in delaying leniency applications, a take-up decision that has been ignored in the previous literature. Using European Commission decisions issued over a 16-year span, we show, contrary to common beliefs and the existing literature, that conspirators often apply for leniency long after a cartel collapses. We estimate hazard and probit models to study the determinants of leniency-application delays. Statistical tests find that delays are symmetrically affected by antitrust policies and macroeconomic fluctuations. Our results shed light on the design of enforcement programs against cartels and other forms of conspiracy.
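
A brief sketch of the two model families named in the abstract, fitted to a hypothetical table of cartel cases; the file and column names (delay_months, applied_leniency, fine_reduction, gdp_growth) are assumptions rather than the authors' variables.

```python
# Hazard and probit models for leniency-application delays (illustrative only).
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

cases = pd.read_csv("cartel_cases.csv")                      # hypothetical file

# Hazard model: time from cartel collapse until a leniency application is filed.
cph = CoxPHFitter()
cph.fit(cases[["delay_months", "applied_leniency", "fine_reduction", "gdp_growth"]],
        duration_col="delay_months", event_col="applied_leniency")
cph.print_summary()

# Probit model: whether a conspirator applies for leniency at all.
X = sm.add_constant(cases[["fine_reduction", "gdp_growth"]])
print(sm.Probit(cases["applied_leniency"], X).fit().summary())
```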

    Do changes in health reveal the possibility of undiagnosed pancreatic cancer? Development of a risk-prediction model based on healthcare claims data.

    Background and objective: Early detection methods for pancreatic cancer are lacking. We aimed to develop a prediction model for pancreatic cancer based on changes in health captured by healthcare claims data. Methods: We conducted a case-control study on 29,646 Medicare-enrolled patients aged 68 years and above with pancreatic ductal adenocarcinoma (PDAC) reported to the Surveillance, Epidemiology, and End Results (SEER) tumor registries program in 2004-2011 and 88,938 age- and sex-matched controls. We developed a prediction model using multivariable logistic regression on Medicare claims for 16 risk factors and pre-diagnostic symptoms of PDAC present within 15 months prior to PDAC diagnosis. Claims within 3 months of PDAC diagnosis were excluded in sensitivity analyses. We evaluated the discriminatory power of the model with the area under the receiver operating characteristic curve (AUC) and performed cross-validation by bootstrapping. Results: The prediction model on all cases and controls reached an AUC of 0.68. Excluding the final 3 months of claims lowered the AUC to 0.58. Among new-onset diabetes patients, the prediction model reached an AUC of 0.73, which decreased to 0.63 when claims from the final 3 months were excluded. Performance measures of the prediction models were confirmed by internal validation using the bootstrap method. Conclusion: Models based on healthcare claims for clinical risk factors, symptoms, and signs of pancreatic cancer are limited in their ability to distinguish those who go on to a diagnosis of pancreatic cancer from those who do not, especially when claims that immediately precede the diagnosis of PDAC are excluded.
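
A minimal sketch of the modelling pipeline described above: multivariable logistic regression on claims-derived indicators, AUC as the discrimination measure, and a bootstrap for internal validation. The feature names and input file are hypothetical, not the study's 16 actual risk factors and symptoms.

```python
# Claims-based prediction model with AUC and bootstrap validation (illustrative).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

claims = pd.read_csv("claims_features.csv")                  # hypothetical file
features = ["new_onset_diabetes", "abdominal_pain", "weight_loss", "jaundice"]
X, y = claims[features].to_numpy(), claims["pdac_case"].to_numpy()

model = LogisticRegression(max_iter=1000).fit(X, y)
print("apparent AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

# Bootstrap internal validation: refit on resamples and score on the full
# sample to gauge how optimistic the apparent AUC is.
rng = np.random.default_rng(0)
aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap mean AUC:", float(np.mean(aucs)))
```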

    Urinary Phthalate Metabolites and Biomarkers of Oxidative Stress in a Mexican-American Cohort: Variability in Early and Late Pregnancy.

    People are exposed to phthalates through their wide use as plasticizers and in personal care products. Many phthalates are endocrine disruptors and have been associated with adverse health outcomes. However, knowledge gaps exist in understanding the molecular mechanisms associated with the effects of exposure in early and late pregnancy. In this study, we examined the relationship of eleven urinary phthalate metabolites with isoprostane, an established marker of oxidative stress, among pregnant Mexican-American women from an agricultural cohort. Isoprostane levels were on average 20% higher at 26 weeks than at 13 weeks of pregnancy. Urinary phthalate metabolite concentrations suggested relatively consistent phthalate exposures over pregnancy. The relationship between phthalate metabolite concentrations and isoprostane levels was significant for the sum of di-2-ethylhexyl phthalate (DEHP) metabolites and the sum of high-molecular-weight metabolites, with the exception of monobenzyl phthalate, which was not associated with oxidative stress at either time point. In contrast, low-molecular-weight metabolite concentrations were not associated with isoprostane at 13 weeks, but this relationship became stronger later in pregnancy (p-value = 0.009 for the sum of low-molecular-weight metabolites). Our findings suggest that prenatal exposure to phthalates may influence oxidative stress, which is consistent with their relationship with obesity and other adverse health outcomes.

    Training methods for facial image comparison: a literature review

    This literature review was commissioned to explore the psychological literature relating to facial image comparison, with a particular emphasis on whether individuals can be trained to improve performance on this task. Surprisingly few studies have addressed this question directly. As a consequence, this review has been extended to cover training of face recognition and training of different kinds of perceptual comparisons where we are of the opinion that the methodologies or findings of such studies are informative.

    The majority of studies of face processing have examined face recognition, which relies heavily on memory. This may be memory for a face that was learned recently (e.g. minutes or hours previously) or for a face learned longer ago, perhaps after many exposures (e.g. friends, family members, celebrities). Successful face recognition, irrespective of the type of face, relies on the ability to retrieve the to-be-recognised face from long-term memory. This memory is then compared to the physically present image to reach a recognition decision. In contrast, in a face matching task two physical representations of a face (live, photographs, movies) are compared, and so long-term memory is not involved. Because the comparison is between two present stimuli rather than between a present stimulus and a memory, one might expect that face matching, even if not an easy task, would be easier to do and easier to learn than face recognition. In support of this, there is evidence that judgment tasks in which a presented stimulus must be judged against a remembered standard are generally more cognitively demanding than judgments that require comparing two presented stimuli (Davies & Parasuraman, 1982; Parasuraman & Davies, 1977; Warm & Dember, 1998).

    Is there enough overlap between face recognition and face matching that it is useful to look at the recognition literature? No study has directly compared face recognition and face matching, so we turn to research in which people decided whether two non-face stimuli were the same or different. In these studies, accuracy of comparison is not always better when the comparator is present than when it is remembered. Further, all perceptual factors that were found to affect comparisons of simultaneously presented objects also affected comparisons of successively presented objects in qualitatively the same way. Those studies involved judgments about colour (Newhall, Burnham & Clark, 1957; Romero, Hita & Del Barco, 1986) and shape (Larsen, McIlhagga & Bundesen, 1999; Lawson, Bülthoff & Dumbell, 2003; Quinlan, 1995). Although one must be cautious in generalising from studies of object processing to studies of face processing (see, e.g., the section comparing face processing to object processing), these kinds of studies provide no evidence of qualitative differences in the perceptual aspects of how recognition and matching are done. As a result, this review will include studies of face recognition skill as well as face matching skill.

    The distinction between face recognition involving memory and face matching not involving memory is clouded in many recognition studies, which require observers to decide which of many presented faces matches a remembered face (e.g., eyewitness studies). And of course there are other forensic face-matching tasks that will require comparison to both presented and remembered comparators (e.g., deciding whether any person in a video showing a crowd is the target person). For this reason, too, we choose to include studies of face recognition as well as face matching in our review.