Fast Augmenting Paths by Random Sampling from Residual Graphs
Consider an n-vertex, m-edge, undirected graph with integral capacities and max-flow value v. We give a new Õ(m + nv)-time maximum flow algorithm. After assigning certain special sampling probabilities to edges in Õ(m) time, our algorithm is very simple: repeatedly find an augmenting path in a random sample of edges from the residual graph. Breaking from past work, we demonstrate that we can benefit from random sampling in directed (residual) graphs. We also slightly improve an algorithm for approximating flows of arbitrary value, finding a flow of value (1 − ε) times the maximum in Õ(m√n/ε) time.
National Science Foundation (U.S.)
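The core loop described above can be sketched as follows. The uniform sampling probability and the fall-back search of the full residual graph are illustrative assumptions for this sketch, not the paper's edge-specific sampling probabilities:

```python
import random
from collections import deque

def bfs_augmenting_path(residual, s, t, keep_prob):
    """Search for an s-t augmenting path using only a random sample
    of residual edges (each edge kept with probability keep_prob)."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v, cap in residual[u].items():
            if cap > 0 and v not in parent and random.random() < keep_prob:
                parent[v] = u
                queue.append(v)
    if t not in parent:
        return None
    path, v = [], t
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    return path[::-1]

def max_flow_sampled(capacity, s, t, keep_prob=0.5):
    """Augmenting-path max flow that first tries a sampled residual
    graph, falling back to the full residual graph so the sketch
    always terminates with the exact max flow (integral capacities)."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(residual):           # ensure reverse edges exist
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        path = (bfs_augmenting_path(residual, s, t, keep_prob)
                or bfs_augmenting_path(residual, s, t, 1.0))
        if path is None:
            return flow
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:              # push flow along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

The sampled search does most of the work cheaply; the full-graph fallback is only a correctness guarantee for this sketch.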
Experimental study of minimum cut algorithms
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. [123]-126). By Matthew S. Levine. M.S.
Improving Regulatory Effectiveness through Better Targeting: Evidence from OSHA
We study how a regulator can best allocate its limited inspection resources. We direct our analysis to a US Occupational Safety and Health Administration (OSHA) inspection program that targeted dangerous establishments and allocated some inspections via random assignment. We find that inspections reduced serious injuries by an average of 9% over the following five years. We use new machine learning methods to estimate the effects of counterfactual targeting rules OSHA could have deployed. OSHA could have averted over twice as many injuries if its inspections had targeted the establishments where we predict inspections would avert the most injuries. The agency could have averted nearly as many additional injuries by targeting the establishments predicted to have the most injuries. Both of these targeting regimes would have generated over $1 billion in social value over the decade we examine. Our results demonstrate the promise, and limitations, of using machine learning to improve resource allocation. JEL Classifications: I18; L51; J38; J
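The counterfactual targeting rule evaluated above (allocate the fixed inspection budget to the establishments with the largest predicted effect) can be sketched; the function name and inputs are hypothetical, and the predictions would come from a separately trained model:

```python
def allocate_inspections(predicted_averted, budget):
    """Greedy counterfactual targeting: inspect the `budget`
    establishments with the largest predicted number of averted
    injuries. `predicted_averted` maps establishment id -> model
    prediction of injuries an inspection would avert there."""
    ranked = sorted(predicted_averted, key=predicted_averted.get, reverse=True)
    return ranked[:budget]
```

The same sketch covers the second regime in the abstract by passing predicted injury counts instead of predicted treatment effects.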
A method for statistically comparing spatial distribution maps
Background: Ecological niche modeling is a method for estimating species distributions from ecological parameters. Thus far, empirically determining significant differences between independently generated distribution maps for a single species (maps created through equivalent processes, but with different ecological input parameters) has been challenging.
Results: We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas differ measurably between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant for each model but varying the ecological input data. To assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to examine the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease.
Conclusion: In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison tests give users the ability to empirically distinguish the significance of each of the diverse environmental parameters included in the modeling process. This method will be particularly useful in situations where the outcomes (maps) appear similar upon visual inspection (as are generated by other modeling programs such as MAXENT), as it allows an investigator to explore subtle differences among ecological parameters and to demonstrate the individual importance of these factors within an overall model.
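The pixel-to-pixel comparison can be sketched with a mean pixel difference and a Welch-style two-sample t statistic over corresponding pixel scores; this is a minimal illustration, not the study's exact statistical pipeline:

```python
from math import sqrt
from statistics import mean, stdev

def mean_pixel_difference(map_a, map_b):
    """Mean of per-pixel score differences between two equally
    sized prediction maps (given as flat lists of pixel scores)."""
    assert len(map_a) == len(map_b)
    return mean(a - b for a, b in zip(map_a, map_b))

def pixelwise_t_statistic(map_a, map_b):
    """Two-sample (Welch) t statistic comparing the pixel-score
    distributions of two prediction maps."""
    na, nb = len(map_a), len(map_b)
    va, vb = stdev(map_a) ** 2, stdev(map_b) ** 2
    return (mean(map_a) - mean(map_b)) / sqrt(va / na + vb / nb)
```

A large |t| suggests the two maps differ more than chance alone would produce, which is the question the null hypothesis above poses.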
Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions
This paper seeks to establish a framework for directing a society of simple,
specialized, self-interested agents to solve what traditionally are posed as
monolithic single-agent sequential decision problems. What makes it challenging
to use a decentralized approach to collectively optimize a central objective is
the difficulty in characterizing the equilibrium strategy profile of
non-cooperative games. To overcome this challenge, we design a mechanism for
defining the learning environment of each agent for which we know that the
optimal solution for the global objective coincides with a Nash equilibrium
strategy profile of the agents optimizing their own local objectives. The
society functions as an economy of agents that learn the credit assignment
process itself by buying and selling to each other the right to operate on the
environment state. We derive a class of decentralized reinforcement learning
algorithms that are broadly applicable not only to standard reinforcement
learning but also to selecting options in semi-MDPs and dynamically composing
computation graphs. Lastly, we demonstrate the potential advantages of a
society's inherent modular structure for more efficient transfer learning.
Comment: 18 pages, 13 figures, accepted to the International Conference on Machine Learning (ICML) 202
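One round of the kind of market transaction described above can be sketched, assuming a Vickrey-style auction in which the winner pays the second-highest bid to the agent that produced the current state; the mechanism in the paper may differ in detail:

```python
def auction_step(bids, prev_winner, credits):
    """One market round: agents bid for the right to operate on the
    current environment state. The highest bidder wins and pays the
    second-highest bid (Vickrey pricing) to the agent whose earlier
    action produced this state, propagating credit backwards.
    `bids` maps agent id -> bid; `credits` accumulates payments."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    payment = bids[runner_up]
    credits[winner] = credits.get(winner, 0.0) - payment
    if prev_winner is not None:
        credits[prev_winner] = credits.get(prev_winner, 0.0) + payment
    return winner, payment
```

Each agent's learning problem then reduces to bidding its estimate of the state's value, which is how the society learns the credit assignment process itself.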
Elevated serum interleukin-6 has prognostic significance for postoperative outcome in patients undergoing cardiothoracic surgery
Does wage rank affect employees' well-being?
How do workers make wage comparisons? Both an experimental study and an analysis of 16,000 British employees are reported. Satisfaction and well-being levels are shown to depend on more than simple relative pay: they depend on the ordinal rank of an individual's wage within a comparison group. "Rank" itself thus seems to matter to human beings. Moreover, consistent with psychological theory, quits in a workplace are correlated with the skewness of the pay distribution.
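The rank concept above can be sketched as a one-line computation; normalizing to (0, 1] so that 1.0 is the highest-paid position is an illustrative choice, not the paper's specification:

```python
def wage_rank(wage, comparison_group):
    """Normalized ordinal rank of `wage` within its comparison group:
    the fraction of group members earning at most this wage, so the
    top earner scores 1.0 regardless of group size."""
    below_or_equal = sum(1 for w in comparison_group if w <= wage)
    return below_or_equal / len(comparison_group)
```

Two workers with identical relative pay (wage divided by group mean) can hold different ranks, which is the distinction the study exploits.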
Real-Time Electronic Health Record Mortality Prediction During the COVID-19 Pandemic: A Prospective Cohort Study
Background: The SARS-CoV-2 virus has infected millions of people, overwhelming critical care resources in some regions. Many plans for rationing critical care resources during crises are based on the Sequential Organ Failure Assessment (SOFA) score. The COVID-19 pandemic created an emergent need to develop and validate a novel electronic health record (EHR)-computable tool to predict mortality.
Research Questions: To rapidly develop, validate, and implement a novel real-time mortality score for the COVID-19 pandemic that improves upon SOFA.
Study Design and Methods: We conducted a prospective cohort study of a regional health system with 12 hospitals in Colorado between March 2020 and July 2020. All patients >14 years old hospitalized during the study period without a do not resuscitate order were included. Patients were stratified by the diagnosis of COVID-19. From this cohort, we developed and validated a model using stacked generalization to predict mortality using data widely available in the EHR by combining five previously validated scores and additional novel variables reported to be associated with COVID-19-specific mortality. We compared the area under the receiver operator curve (AUROC) for the new model to the SOFA score and the Charlson Comorbidity Index.
Results: We prospectively analyzed 27,296 encounters, of which 1,358 (5.0%) were positive for SARS-CoV-2, 4,494 (16.5%) included intensive care unit (ICU)-level care, 1,480 (5.4%) included invasive mechanical ventilation, and 717 (2.6%) ended in death. The Charlson Comorbidity Index and SOFA scores predicted overall mortality with an AUROC of 0.72 and 0.90, respectively. Our novel score predicted overall mortality with AUROC 0.94. In the subset of patients with COVID-19, we predicted mortality with AUROC 0.90, whereas SOFA had AUROC of 0.85.
Interpretation: We developed and validated an accurate in-hospital mortality prediction score in a live EHR for automatic and continuous calculation using a novel model that improved upon SOFA.
Study Question: Can we improve upon the SOFA score for real-time mortality prediction during the COVID-19 pandemic by leveraging electronic health record (EHR) data?
Results: We rapidly developed and implemented a novel yet SOFA-anchored mortality model across 12 hospitals and conducted a prospective cohort study of 27,296 adult hospitalizations, 1,358 (5.0%) of which were positive for SARS-CoV-2. The Charlson Comorbidity Index and SOFA scores predicted all-cause mortality with AUROCs of 0.72 and 0.90, respectively. Our novel score predicted mortality with AUROC 0.94.
Interpretation: A novel EHR-based mortality score can be rapidly implemented to better predict patient outcomes during an evolving pandemic.
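The AUROC figures used to compare the scores above can be computed directly from the rank-sum (Mann-Whitney) identity; a minimal sketch:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum identity: the
    probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case, counting
    ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this scale, 0.5 is chance and 1.0 is perfect discrimination, so the gap between 0.90 (SOFA) and 0.94 (the novel score) represents a meaningful reduction in ranking errors.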