36 research outputs found

    Notes and Comments: Evolving Consumer Safeguards — Increased Producer and Seller Responsibility in the Absence of Strict Liability

    The author examines the roots of the product liability controversy, including several early cases that developed the law, and reviews recent cases suggesting important modifications in this field. Emphasis is placed on changes in the duty to warn, res ipsa loquitur, and contract formation, as exemplified in Moran v. Faberge, Inc. and Giant Food, Inc. v. Washington Coca-Cola Bottling Co.

    Implicit Essentialism: Genetic Concepts Are Implicitly Associated with Fate Concepts

    Genetic essentialism is the tendency for people to think in more essentialist ways upon encountering genetic concepts. The current studies assessed whether genetic essentialist biases would also be evident at the automatic level. In two studies, using different versions of the Implicit Association Test [1], we found that participants were faster to categorize stimuli when genes and fate were linked than when the two concepts were kept separate and opposing. In addition to the wealth of past findings of genetic essentialism with explicit and deliberative measures, these biases also appear to be evident with implicit measures.
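
    A minimal sketch of the scoring idea behind the measure the abstract describes: in an IAT, faster responses in the block where "genes" and "fate" share a response key than in the block where they are opposed yield a positive D score. This is an illustration only, not the authors' analysis code; the latencies are invented, and real IAT scoring also handles error trials and latency trimming.

    ```python
    # Simplified IAT D-score sketch (illustrative; latencies are invented).
    from statistics import mean, stdev

    def iat_d_score(congruent_ms, incongruent_ms):
        """Block latency difference scaled by the pooled SD of all trials."""
        pooled_sd = stdev(congruent_ms + incongruent_ms)
        return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

    congruent = [612, 587, 634, 598, 605]    # genes/fate share a response key
    incongruent = [701, 688, 725, 694, 710]  # genes/fate on opposing keys

    print(f"D = {iat_d_score(congruent, incongruent):.2f} "
          "(positive = implicit genes-fate association)")
    ```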

    [Comment] Redefine statistical significance

    The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on “statistically significant” findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (e.g., multiple testing, P-hacking, publication bias, and underpowered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: statistical standards of evidence for claiming discoveries in many fields of science are simply too low. Associating “statistically significant” findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural, and reporting problems. For fields where the threshold for defining statistical significance is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called “significant” but do not meet the new threshold should instead be called “suggestive.” While statisticians have long known the relative weakness of using P ≈ 0.05 as a threshold for discovery, and the proposal to lower it to 0.005 is not new (1, 2), a critical mass of researchers now endorse this change. We restrict our recommendation to claims of discovery of new effects; we do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (e.g., genomics and high-energy physics research; see Potential Objections below), and we restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P-values. However, changing the P-value threshold is simple and might quickly achieve broad acceptance.
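
    One way to see the abstract's central quantitative claim is a back-of-the-envelope calculation of how many "significant" results would be false positives under each threshold. The prior probability that a tested effect is real (0.10) and the study power (0.80) below are hypothetical assumptions chosen for illustration, not values taken from the comment.

    ```python
    # Back-of-the-envelope sketch: share of false positives among "significant"
    # results at a given P-value threshold. The prior (0.10) and power (0.80)
    # are hypothetical assumptions, not values from the comment.

    def false_positive_share(alpha, power=0.80, prior_true=0.10):
        """Expected fraction of 'significant' findings that are false positives."""
        true_pos = prior_true * power          # real effects correctly detected
        false_pos = (1 - prior_true) * alpha   # null effects crossing the threshold
        return false_pos / (true_pos + false_pos)

    for alpha in (0.05, 0.005):
        print(f"P < {alpha}: {false_positive_share(alpha):.0%} of significant "
              "findings would be false positives")
    ```

    Under these assumed numbers, roughly a third of P < 0.05 discoveries would be false positives, falling to about 5% at P < 0.005, which matches the qualitative point the authors make.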

    Multicenter evaluation of the clinical utility of laparoscopy-assisted ERCP in patients with Roux-en-Y gastric bypass

    Background and Aims: The obesity epidemic has led to increased use of Roux-en-Y gastric bypass (RYGB). These patients have an increased incidence of pancreaticobiliary diseases, yet standard ERCP is not possible due to the surgically altered gastroduodenal anatomy. Laparoscopy-assisted ERCP (LA-ERCP) has been proposed as an option, but supporting data are derived from small single-center case series. We therefore conducted a large multicenter study to evaluate the feasibility, safety, and outcomes of LA-ERCP. Methods: This is a retrospective cohort study of adult patients with RYGB who underwent LA-ERCP in 34 centers. Data on demographics, indications, procedure success, and adverse events were collected. Procedure success was defined when all of the following were achieved: reaching the papilla, cannulating the desired duct, and providing endoscopic therapy as clinically indicated. Results: A total of 579 patients (median age 51, 84% women) were included. The indication for LA-ERCP was biliary in 89%, pancreatic in 8%, and both in 3%. Procedure success was achieved in 98%. Median total procedure time was 152 minutes (IQR 109-210), with a median ERCP time of 40 minutes (IQR 28-56). Median hospital stay was 2 days (IQR 1-3). Adverse events occurred in 18% (laparoscopy-related 10%, ERCP-related 7%, both 1%); the clear majority (92%) were classified as mild/moderate, 8% were severe, and 1 death occurred. Conclusion: Our large multicenter study indicates that LA-ERCP in patients with RYGB is feasible, with a high procedure success rate comparable with that of standard ERCP in patients with normal anatomy. The ERCP-related adverse event rate is comparable with that of conventional ERCP, but the overall adverse event rate was higher due to the added laparoscopy-related events.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, such that newly developed literature search techniques can be compared, improved, and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields, or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency, and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading annotation data and for the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
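
    As a rough pointer to what the baseline methods named above do, the following sketch ranks a few candidate documents against a seed document by TF-IDF cosine similarity. It is not the consortium's implementation: the toy titles are invented, and it assumes scikit-learn is available.

    ```python
    # Illustrative TF-IDF similarity ranking (one of the three baseline families
    # named in the abstract); not the RELISH implementation. Documents are toys.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    seed = "laparoscopy assisted ERCP outcomes after Roux-en-Y gastric bypass"
    candidates = [
        "endoscopic therapy in surgically altered gastroduodenal anatomy",
        "implicit association tests of genetic essentialism",
        "multicenter outcomes of laparoscopy assisted ERCP after gastric bypass",
    ]

    # Fit the vectorizer on seed + candidates so all share one vocabulary.
    vectors = TfidfVectorizer().fit_transform([seed] + candidates)
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

    # Rank candidates by similarity to the seed, best match first.
    for score, title in sorted(zip(scores, candidates), reverse=True):
        print(f"{score:.3f}  {title}")
    ```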