109 research outputs found

    Improved approximation guarantees for weighted matching in the semi-streaming model

    We study the maximum weight matching problem in the semi-streaming model and improve on the currently best one-pass algorithm, due to Zelke (Proc. of STACS 2008, pages 669-680), by devising a deterministic approach whose performance guarantee is 4.91 + ε. In addition, we study preemptive online algorithms, a sub-class of one-pass algorithms in which only a feasible matching may be maintained in memory at any point in time. All results known prior to Zelke's belong to this sub-class. We provide a lower bound of 4.967 on the competitive ratio of any such deterministic algorithm, and hence show that future improvements will have to store in memory a set of edges that is not necessarily a feasible matching.
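The preemptive sub-class described above can be made concrete with a short sketch. This is the classic one-pass eviction rule (a new edge replaces its conflicting matched edges only if it outweighs them by a factor of 1 + γ), shown purely to illustrate the memory restriction; it is not the paper's 4.91-approximation algorithm, and all names here are illustrative.

```python
def preemptive_matching(edge_stream, gamma=1.0):
    """One-pass preemptive matching: only a feasible matching is kept.

    A new edge evicts its conflicting matched edges only if it outweighs
    their total weight by a factor of (1 + gamma); evicted edges are
    discarded forever, which is exactly the restriction of the preemptive
    sub-class discussed in the abstract.
    """
    matched = {}  # vertex -> (u, v, w): the matched edge covering it
    for u, v, w in edge_stream:
        conflicts = {matched[x] for x in (u, v) if x in matched}
        if w > (1 + gamma) * sum(cw for _, _, cw in conflicts):
            for cu, cv, _ in conflicts:   # preempt conflicting edges
                del matched[cu], matched[cv]
            matched[u] = matched[v] = (u, v, w)
    return set(matched.values())          # each edge stored under both endpoints
```

For example, on the stream (1,2,1.0), (2,3,5.0), (3,4,2.0) the second edge evicts the first, and the third is rejected because it does not outweigh the matched edge (2,3) by the required factor.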

    MIxS-BE: a MIxS extension defining a minimum information standard for sequence data from the built environment

    © The Author(s), 2013. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in ISME Journal 8 (2014): 1-3, doi:10.1038/ismej.2013.176. The article addresses the need for metadata standards for microbe sampling in the built environment. We would like to thank the Alfred P. Sloan Foundation (grant FP047325-01-PR) for support for this project.

    A structured overview of simultaneous component based data integration

    Background: Data integration is currently one of the main challenges in the biomedical sciences. Often different pieces of information are gathered on the same set of entities (e.g., tissues, culture samples, biomolecules), with the different pieces stemming, for example, from different measurement techniques. This implies that more and more data appear that consist of two or more data arrays that have a shared mode. An integrative analysis of such coupled data should be based on a simultaneous analysis of all data arrays. In this respect, the family of simultaneous component methods (e.g., SUM-PCA, unrestricted PCovR, MFA, STATIS, and SCA-P) is a natural choice. Yet, different simultaneous component methods may lead to quite different results.
    Results: We offer a structured overview of simultaneous component methods that frames them in a principal components setting such that both the common core of the methods and the specific elements with regard to which they differ are highlighted. An overview of principles is given that may guide the data analyst in choosing an appropriate simultaneous component method. Several theoretical and practical issues are illustrated with an empirical example on metabolomics data for Escherichia coli as obtained with different analytical chemical measurement methods.
    Conclusion: Of the aspects in which the simultaneous component methods differ, pre-processing and weighting are consequential. In particular, the type of weighting of the different matrices is essential for simultaneous component analysis. These types are shown to be linked to different specifications of the idea of a fair integration of the different coupled arrays.
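The common core of these methods can be sketched in a few lines: each data block (sharing its row mode) is centred, optionally weighted, concatenated column-wise, and decomposed with a single SVD, yielding one set of component scores for all blocks. The Frobenius-norm weighting below is one of the weighting choices discussed in such overviews (MFA, for instance, weights by the first singular value instead); the function name and defaults are illustrative, not from the paper.

```python
import numpy as np

def simultaneous_components(blocks, n_components=2, weight="frobenius"):
    """Minimal simultaneous-component sketch (a SUM-PCA-like analysis).

    All blocks share their row mode (the same entities).  Each block is
    column-centred, optionally scaled to unit total variance, and the
    column-wise concatenation is decomposed by one truncated SVD.
    """
    prepared = []
    for X in blocks:
        Xc = X - X.mean(axis=0)              # centre each variable
        if weight == "frobenius":            # give every block equal total weight
            Xc = Xc / np.linalg.norm(Xc)
        prepared.append(Xc)
    concat = np.hstack(prepared)             # shared row mode -> stack columns
    U, s, Vt = np.linalg.svd(concat, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # one score matrix for all blocks
    loadings = Vt[:n_components].T                    # loadings, stacked over blocks
    return scores, loadings
```

Because the scores come from a single SVD of the coupled arrays, they are shared across blocks, which is precisely what distinguishes a simultaneous analysis from separate per-block PCAs.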

    Contributions of high- and low-quality patches to a metapopulation with stochastic disturbance

    © The Author(s), 2010. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Theoretical Ecology 5 (2012): 167-179, doi:10.1007/s12080-010-0106-9. Studies of time-invariant matrix metapopulation models indicate that metapopulation growth rate is usually more sensitive to the vital rates of individuals in high-quality (i.e., good) patches than in low-quality (i.e., bad) patches. This suggests that, given a choice, management efforts should focus on good rather than bad patches. Here, we examine the sensitivity of metapopulation growth rate for a two-patch matrix metapopulation model with and without stochastic disturbance and find cases where managers can more efficiently increase metapopulation growth rate by focusing efforts on the bad patch. In our model, net reproductive rate differs between the two patches so that, in the absence of dispersal, one patch is high quality and the other low quality. Disturbance, when present, reduces net reproductive rate with equal frequency and intensity in both patches. The stochastic disturbance model gives qualitatively similar results to the deterministic model. In most cases, metapopulation growth rate was more elastic to changes in net reproductive rate of individuals in the good patch than in the bad patch. However, when the majority of individuals are located in the bad patch, metapopulation growth rate can be most elastic to net reproductive rate in the bad patch. We expand the model to include two stages and parameterize the patches using data for the softshell clam, Mya arenaria. With a two-stage demographic model, the elasticities of metapopulation growth rate to parameters in the bad patch increase, while elasticities to the same parameters in the good patch decrease. Metapopulation growth rate is most elastic to adult survival in the population of the good patch for all scenarios we examine.
    If the majority of the metapopulation is located in the bad patch, the elasticity to parameters of that population increases but does not surpass the elasticity to parameters in the good patch. This model can be expanded to include additional patches, multiple stages, stochastic dispersal, and complex demography. Financial support was provided by the Woods Hole Oceanographic Institution Academic Programs Office; National Science Foundation grants OCE-0326734, OCE-0215905, OCE-0349177, DEB-0235692, DEB-0816514, DMS-0532378, OCE-1031256, and ATM-0428122; and by the National Oceanic and Atmospheric Administration National Sea Grant College Program Office, Department of Commerce, under Grant No. NA86RG0075 (Woods Hole Oceanographic Institution Sea Grant Project No. R/0-32) and Grant No. NA16RG2273 (Woods Hole Oceanographic Institution Sea Grant Project No. R/0-35).
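The elasticity analysis described above can be sketched numerically. For a projection matrix A, the elasticity of the metapopulation growth rate (the dominant eigenvalue λ) to entry a_ij is e_ij = (a_ij/λ)(v_i w_j)/(v·w), where w and v are the dominant right and left eigenvectors. The two-patch matrix below uses made-up parameter values for illustration, not the Mya arenaria parameterisation from the paper.

```python
import numpy as np

def growth_rate_and_elasticity(A):
    """Dominant eigenvalue (growth rate) of projection matrix A, and the
    elasticity of that rate to every entry:
        e_ij = (a_ij / lam) * (v_i * w_j) / (v . w)
    where w is the right dominant eigenvector (stable structure) and v the
    left one (reproductive value).  Elasticities of lam sum to 1."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals[k].real
    w = np.abs(vecs[:, k].real)
    vals_l, vecs_l = np.linalg.eig(A.T)
    v = np.abs(vecs_l[:, np.argmax(vals_l.real)].real)
    sens = np.outer(v, w) / (v @ w)       # sensitivities d lam / d a_ij
    return lam, (A / lam) * sens

# Hypothetical two-patch model: good-patch net reproductive rate 1.5,
# bad-patch rate 0.8, dispersal fraction d = 0.1 (illustrative values only).
d, r_good, r_bad = 0.1, 1.5, 0.8
A = np.array([[r_good * (1 - d), r_bad * d],
              [r_good * d,       r_bad * (1 - d)]])
lam, E = growth_rate_and_elasticity(A)
```

With most individuals in the good patch, the elasticity to the good-patch entry dominates, matching the paper's typical case; shifting the population toward the bad patch moves elasticity in the bad patch's direction.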

    Evaluating Electronic Referrals for Specialty Care at a Public Hospital

    Poor communication between referring clinicians and specialists may lead to inefficient use of specialist services. San Francisco General Hospital implemented an electronic referral system (eReferral) that facilitates iterative pre-visit communication between referring and specialty clinicians to improve the referral process. The purpose of the study was to determine the impact of eReferral (compared with paper-based referrals) on specialty referrals. The study was based on a visit-based questionnaire, completed by specialty clinicians, appended to new patient charts at randomly selected specialist clinic sessions before and after the implementation of eReferral. The questionnaire focused on self-reported difficulty in identifying the referral question, referral appropriateness, and the need for and avoidability of follow-up visits. We collected 505 questionnaires from specialty clinicians. It was difficult to identify the reason for referral in 19.8% of medical and 38.0% of surgical visits using paper-based methods vs. 11.0% and 9.5% of those using eReferral (p-values 0.03 and <0.001). Referrals were deemed not completely appropriate in 6.4% of medical and 9.8% of surgical visits using paper methods vs. 2.6% and 2.1% using eReferral (p-values 0.21 and 0.03). Follow-up was requested for 82.4% and 76.2% of medical and surgical patients with paper-based referrals vs. 90.1% and 58.1% with eReferral (p-values 0.06 and 0.01). Follow-up was considered avoidable for 32.4% and 44.7% of medical and surgical follow-ups with paper-based methods vs. 27.5% and 13.5% with eReferral (p-values 0.41 and <0.001). Use of technology to promote standardized referral processes and iterative communication between referring clinicians and specialists has the potential to improve communication between primary care providers and specialists and to increase the effectiveness of specialty referrals.

    Not Perfect, but Better: Primary Care Providers’ Experiences with Electronic Referrals in a Safety Net Health System

    Background: Electronic referrals can improve access to subspecialty care in safety net settings. In January 2007, San Francisco General Hospital (SFGH) launched an electronic referral portal that incorporated subspecialist triage, iterative communication with referring providers, and existing electronic health record data to improve access to subspecialty care.
    Objective: We surveyed primary care providers (PCPs) to assess the impact of electronic referrals on workflow and clinical care.
    Design: We administered an 18-item, web-based questionnaire to all 368 PCPs who had the option of referring to SFGH.
    Measurements: We asked participants to rate time spent submitting a referral, guidance of workup, wait times, and change in overall clinical care compared to prior referral methods using 5-point Likert scales. We used multivariate logistic regression to identify variables associated with perceived improvement in overall clinical care.
    Results: Two hundred ninety-eight PCPs (81.0%) from 24 clinics participated. Over half (55.4%) worked at hospital-based clinics, 27.9% at county-funded community clinics, and 17.1% at non-county-funded community clinics. Most (71.9%) reported that electronic referrals had improved overall clinical care. Providers from non-county-funded clinics (AOR 0.40, 95% CI 0.14-0.79) and those who spent 6 or more minutes submitting an electronic referral (AOR 0.33, 95% CI 0.18-0.61) were significantly less likely than other participants to report that electronic referrals had improved clinical care.
    Conclusions: PCPs felt electronic referrals improved health-care access and quality; those who reported a negative impact on workflow were less likely to agree. While electronic referrals hold promise as a tool to improve clinical care, their impact on workflow should be considered.
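The adjusted odds ratios (AORs) reported above come from multivariate logistic regression: each fitted coefficient, exponentiated, gives the odds ratio for its predictor with the others held fixed. The toy fitting routine and synthetic data below are my own illustration of that relationship, not the study's model or data.

```python
import numpy as np

def adjusted_odds_ratios(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson and return the
    exponentiated coefficients, i.e. the adjusted odds ratio for each
    predictor (intercept excluded).  A minimal sketch, not a substitute
    for a statistics package."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))         # predicted probabilities
        W = p * (1 - p)                         # IRLS weights
        H = X.T @ (X * W[:, None])              # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))
    return np.exp(beta[1:])                     # AOR per predictor

# Synthetic example: one predictor with true log-odds coefficient 1.0,
# so the recovered AOR should be near exp(1) ~ 2.72.
rng = np.random.default_rng(1)
x = rng.normal(size=(2000, 1))
prob = 1 / (1 + np.exp(-(0.5 + 1.0 * x[:, 0])))
y = (rng.random(2000) < prob).astype(float)
aor = adjusted_odds_ratios(x, y)
```

An AOR below 1, such as the 0.33 for providers spending 6 or more minutes per referral, means the predictor is associated with lower odds of the outcome after adjustment.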

    Beyond R0: demographic models for variability of lifetime reproductive output

    © The Author(s), 2011. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in PLoS One 6 (2011): e20809, doi:10.1371/journal.pone.0020809. The net reproductive rate R0 measures the expected lifetime reproductive output of an individual, and plays an important role in demography, ecology, evolution, and epidemiology. Well-established methods exist to calculate it from age- or stage-classified demographic data. As an expectation, R0 provides no information on variability; empirical measurements of lifetime reproduction universally show high levels of variability, and often positive skewness among individuals. This is often interpreted as evidence of heterogeneity, and thus of an opportunity for natural selection. However, variability provides evidence of heterogeneity only if it exceeds the level of variability to be expected in a cohort of identical individuals all experiencing the same vital rates. Such comparisons require a way to calculate the statistics of lifetime reproduction from demographic data. Here, a new approach is presented, using the theory of Markov chains with rewards, which yields all the moments of the distribution of lifetime reproduction. The approach applies to age- or stage-classified models, to constant, periodic, or stochastic environments, and to any kind of reproductive schedule. As examples, I analyze data from six empirical studies of a variety of animal and plant taxa (nematodes, polychaetes, humans, and several species of perennial plants). Supported by National Science Foundation Grant DEB-0816514 and by a Research Award from the Alexander von Humboldt Foundation.
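The first moment of this distribution is the familiar quantity: for a stage-classified model with transient transition matrix U and per-step fertilities f, the fundamental matrix N = (I - U)^-1 gives expected time spent in each stage, and f N gives expected lifetime reproduction by starting stage. The Markov-chains-with-rewards framework extends this to all higher moments; only the mean is sketched below, on a made-up two-stage example.

```python
import numpy as np

def mean_lifetime_reproduction(U, f):
    """Expected lifetime reproductive output by starting stage.

    N = (I - U)^{-1} is the fundamental matrix of the absorbing Markov
    chain (expected visits to each transient stage); f @ N sums fertility
    over those visits.  This is only the first moment; the rewards
    framework in the paper also delivers variance and skewness."""
    N = np.linalg.inv(np.eye(U.shape[0]) - U)
    return f @ N

# Hypothetical juvenile/adult model (illustrative rates, not from the
# paper's six empirical studies): juveniles stay juvenile w.p. 0.2 and
# mature w.p. 0.3 per step; adults survive w.p. 0.8 and produce 2
# offspring per step.
U = np.array([[0.2, 0.0],
              [0.3, 0.8]])
f = np.array([0.0, 2.0])
R = mean_lifetime_reproduction(U, f)   # R[0] is R0 for a newborn juvenile
```

As a sanity check, an adult expects 2/(1 - 0.8) = 10 offspring, and a juvenile matures with probability 0.3/0.8 = 0.375, giving R0 = 0.375 × 10 = 3.75.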

    Diagnostic accuracy of a clinical diagnosis of idiopathic pulmonary fibrosis: An international case-cohort study

    We conducted an international study of idiopathic pulmonary fibrosis (IPF) diagnosis among a large group of physicians and compared their diagnostic performance to a panel of IPF experts. A total of 1141 respiratory physicians and 34 IPF experts participated. Participants evaluated 60 cases of interstitial lung disease (ILD) without interdisciplinary consultation. Diagnostic agreement was measured using the weighted kappa coefficient (κw). Prognostic discrimination between IPF and other ILDs was used to validate diagnostic accuracy for first-choice diagnoses of IPF, compared using the C-index. A total of 404 physicians completed the study. Agreement for IPF diagnosis was higher among expert physicians (κw = 0.65, IQR 0.53-0.72) than among non-experts. The prognostic accuracy of university hospital physicians with more than 20 years of experience (C-index = 0.72, IQR 0.0-0.73, p = 0.229) and of non-university hospital physicians with more than 20 years of experience attending weekly multidisciplinary team (MDT) meetings (C-index = 0.72, IQR 0.70-0.72, p = 0.052) did not differ significantly from that of the expert panel (C-index = 0.74, IQR 0.72-0.75). Experienced respiratory physicians at university-based institutions diagnose IPF with prognostic accuracy similar to that of IPF experts. Regular MDT meeting attendance improves the prognostic accuracy of experienced non-university practitioners to levels achieved by IPF experts.
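The agreement statistic used above, the weighted kappa κw, can be computed from a confusion matrix of two raters' category assignments; disagreement is penalised in proportion to category distance. The sketch below uses linear weights (quadratic weighting is the other common choice); it illustrates the statistic generally, not the study's exact multi-rater procedure.

```python
import numpy as np

def weighted_kappa(conf, weights="linear"):
    """Cohen's weighted kappa from a k x k confusion matrix.

    conf[i, j] counts cases rated category i by rater A and j by rater B.
    Kappa is 1 minus the ratio of observed to chance-expected weighted
    disagreement: 1 for perfect agreement, ~0 for chance-level agreement.
    """
    conf = np.asarray(conf, dtype=float)
    k = conf.shape[0]
    conf = conf / conf.sum()                       # joint proportions
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1)                    # linear disagreement weights
    if weights == "quadratic":
        w = w ** 2
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))  # chance agreement
    return 1 - (w * conf).sum() / (w * expected).sum()
```

A perfectly diagonal confusion matrix gives κw = 1; for the 2x2 matrix [[2, 1], [1, 2]] the statistic works out to 1/3, reflecting modest agreement beyond chance.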

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.