99 research outputs found

    Divergent platforms

    Models of electoral competition between two opportunistic, office-motivated parties typically predict that both parties become indistinguishable in equilibrium. I show that this strong connection between the office motivation of parties and their equilibrium choice of identical platforms depends on two possibly false assumptions: (1) issue spaces are one-dimensional, and (2) parties are unitary actors whose preferences can be represented by expected utilities. I provide an example of a two-party model in which parties offer substantially different equilibrium platforms even though no exogenous differences between parties are assumed. In this example, some voters' preferences over the two-dimensional issue space exhibit non-convexities, and parties evaluate their actions with respect to a set of beliefs about the electorate.

    Markov clustering versus affinity propagation for the partitioning of protein interaction graphs

    Background: Genome scale data on protein interactions are generally represented as large networks, or graphs, in which hundreds or thousands of proteins are linked to one another. Since proteins tend to function in groups, or complexes, an important goal has been to reliably identify protein complexes from these graphs. This task is commonly executed using clustering procedures, which aim at detecting densely connected regions within the interaction graphs. A wealth of clustering algorithms exists, some of which have been applied to this problem. One of the most successful clustering procedures in this context has been the Markov Cluster algorithm (MCL), which was recently shown to outperform a number of other procedures, some of which were specifically designed for partitioning protein interaction graphs. A promising novel clustering procedure termed Affinity Propagation (AP) was recently shown to be particularly effective, and much faster than other methods, for a variety of problems, but has not yet been applied to partition protein interaction graphs.
    Results: In this work we compare the performance of the Affinity Propagation (AP) and Markov Clustering (MCL) procedures. To this end we derive an unweighted network of protein-protein interactions from a set of 408 protein complexes from S. cerevisiae hand curated in-house, and evaluate the performance of the two clustering algorithms in recalling the annotated complexes. In doing so, the parameter space of each algorithm is sampled in order to select optimal values for these parameters, and the robustness of the algorithms is assessed by quantifying the level of complex recall as interactions are randomly added to or removed from the network to simulate noise. To evaluate performance on a weighted protein interaction graph, we also apply the two algorithms to the consolidated protein interaction network of S. cerevisiae, derived from genome scale purification experiments, and to versions of this network in which varying proportions of the links have been randomly shuffled.
    Conclusion: Our analysis shows that the MCL procedure is significantly more tolerant to noise and behaves more robustly than the AP algorithm. The advantage of MCL over AP is dramatic for unweighted protein interaction graphs, as AP displays severe convergence problems on the majority of the unweighted graph versions we tested, whereas MCL continues to identify meaningful clusters, albeit fewer of them, as the level of noise in the graph increases. MCL thus remains the method of choice for identifying protein complexes from binary interaction networks.
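The Markov Cluster idea evaluated above can be illustrated compactly. The following standalone sketch (not the authors' implementation) alternates expansion (matrix squaring) and inflation (elementwise powering) on a column-stochastic matrix until it stabilizes; the toy six-node graph, the inflation value of 2.0, and the tolerance thresholds are illustrative assumptions.

```python
def mcl(adj, inflation=2.0, iterations=50):
    """Run a minimal MCL on an adjacency matrix (list of lists), self-loops added."""
    n = len(adj)
    # Add self-loops, then normalize columns to get a column-stochastic matrix.
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]

    def normalize(mat):
        for j in range(n):
            s = sum(mat[i][j] for i in range(n))
            for i in range(n):
                mat[i][j] /= s
        return mat

    m = normalize(m)
    for _ in range(iterations):
        # Expansion: matrix squaring spreads flow along longer paths.
        exp = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
               for i in range(n)]
        # Inflation: elementwise powering strengthens intra-cluster flow.
        infl = normalize([[exp[i][j] ** inflation for j in range(n)]
                          for i in range(n)])
        converged = all(abs(infl[i][j] - m[i][j]) < 1e-8
                        for i in range(n) for j in range(n))
        m = infl
        if converged:
            break
    # Rows that retain mass are cluster "attractors"; their nonzero columns
    # are the cluster members.
    clusters = set()
    for i in range(n):
        members = frozenset(j for j in range(n) if m[i][j] > 1e-6)
        if members:
            clusters.add(members)
    return clusters

# Two triangles joined by a single bridging edge: MCL separates them.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
clusters = mcl(adj)
```

On this toy graph the flow simulation cuts the bridge, recovering the two triangles as separate clusters, which is the behavior the inflation parameter controls.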

    Capacity management of nursing staff as a vehicle for organizational improvement

    Background: Capacity management systems create insight into required resources such as staff and equipment. For inpatient hospital care, capacity management requires information on bed and nursing staff capacity, on a daily as well as an annual basis. This paper presents a comprehensive capacity model that gives insight into required nursing staff capacity and opportunities to improve capacity utilization at the ward level.
    Methods: A capacity model was developed to calculate required nursing staff capacity. The model used historical bed utilization, nurse-patient ratios, and parameters concerning contract hours to calculate the beds and nursing staff needed per shift and the number of nurses needed on an annual basis for a ward. The model was applied to three capacity management problems on three separate groups of hospital wards, entailing operational, tactical, and strategic management issues: optimizing working processes on pediatric wards, predicting the consequences of reducing length of stay for the nursing staff required on a cardiology ward, and calculating the nursing staff consequences of merging two internal medicine wards.
    Results: It was possible to build a model, based on easily available data, that calculates the nursing staff capacity needed daily and annually and that accommodates organizational improvements. Organizational improvement processes were initiated in three different groups of wards. For two pediatric wards, the most important improvement was found to be streamlining working processes so that the agreed nurse-patient ratios could be attained. In the second case, on a cardiology ward, what-if analyses with the model showed that workload could be substantially lowered by reducing length of stay. The third case demonstrated the savings in capacity that could be achieved by merging two small internal medicine wards.
    Conclusion: A comprehensive capacity model was developed and successfully applied to support capacity decisions at the operational, tactical, and strategic levels. It proved a useful tool for supporting discussions between wards and hospital management by giving objective, quantitative insight into staff and bed requirements. Moreover, the model was used to initiate organizational improvements, which resulted in more efficient capacity utilization.
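The core arithmetic of such a capacity model can be sketched briefly. All figures below (expected occupancy, nurse-patient ratios per shift, shift length, and net annual contract hours per full-time equivalent) are invented for illustration and are not values from the study.

```python
import math

def nurses_per_shift(expected_occupied_beds, patients_per_nurse):
    """Nurses needed on one shift under an agreed nurse-patient ratio."""
    return math.ceil(expected_occupied_beds / patients_per_nurse)

def annual_fte(nurses_by_shift, shift_hours, net_annual_hours):
    """Annual full-time equivalents implied by a fixed daily staffing pattern."""
    daily_hours = sum(nurses_by_shift) * shift_hours
    return daily_hours * 365 / net_annual_hours

# Hypothetical 24-bed ward with looser ratios on evening and night shifts.
day     = nurses_per_shift(24, 4)   # 1 nurse per 4 patients
evening = nurses_per_shift(24, 6)   # 1 per 6
night   = nurses_per_shift(24, 8)   # 1 per 8
# Assume 8-hour shifts and 1540 productive hours per FTE per year.
fte = annual_fte([day, evening, night], 8, 1540)
```

A what-if analysis of the kind described, such as reducing length of stay, would simply lower the expected occupied beds and rerun the same calculation.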

    Epigenetics and male reproduction: the consequences of paternal lifestyle on fertility, embryo development, and children's lifetime health


    Canagliflozin and Renal Outcomes in Type 2 Diabetes and Nephropathy

    Background: Type 2 diabetes mellitus is the leading cause of kidney failure worldwide, but few effective long-term treatments are available. In cardiovascular trials of inhibitors of sodium–glucose cotransporter 2 (SGLT2), exploratory results have suggested that such drugs may improve renal outcomes in patients with type 2 diabetes.
    Methods: In this double-blind, randomized trial, we assigned patients with type 2 diabetes and albuminuric chronic kidney disease to receive canagliflozin, an oral SGLT2 inhibitor, at a dose of 100 mg daily, or placebo. All the patients had an estimated glomerular filtration rate (GFR) of 30 to <90 ml per minute per 1.73 m² of body-surface area and albuminuria (ratio of albumin [mg] to creatinine [g], >300 to 5000) and were treated with renin–angiotensin system blockade. The primary outcome was a composite of end-stage kidney disease (dialysis, transplantation, or a sustained estimated GFR of <15 ml per minute per 1.73 m²), a doubling of the serum creatinine level, or death from renal or cardiovascular causes. Prespecified secondary outcomes were tested hierarchically.
    Results: The trial was stopped early after a planned interim analysis on the recommendation of the data and safety monitoring committee. At that time, 4401 patients had undergone randomization, with a median follow-up of 2.62 years. The relative risk of the primary outcome was 30% lower in the canagliflozin group than in the placebo group, with event rates of 43.2 and 61.2 per 1000 patient-years, respectively (hazard ratio, 0.70; 95% confidence interval [CI], 0.59 to 0.82; P=0.00001). The relative risk of the renal-specific composite of end-stage kidney disease, a doubling of the creatinine level, or death from renal causes was lower by 34% (hazard ratio, 0.66; 95% CI, 0.53 to 0.81; P<0.001), and the relative risk of end-stage kidney disease was lower by 32% (hazard ratio, 0.68; 95% CI, 0.54 to 0.86; P=0.002). The canagliflozin group also had a lower risk of cardiovascular death, myocardial infarction, or stroke (hazard ratio, 0.80; 95% CI, 0.67 to 0.95; P=0.01) and of hospitalization for heart failure (hazard ratio, 0.61; 95% CI, 0.47 to 0.80; P<0.001). There were no significant differences in rates of amputation or fracture.
    Conclusions: In patients with type 2 diabetes and kidney disease, the risk of kidney failure and cardiovascular events was lower in the canagliflozin group than in the placebo group at a median follow-up of 2.62 years.
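As a quick consistency check on the figures quoted above, the crude event-rate ratio for the primary outcome can be computed from the reported rates; it agrees closely with the reported hazard ratio of 0.70 (exact agreement is not expected, since hazard ratios account for censoring and follow-up time).

```python
# Event rates for the primary outcome, per 1000 patient-years, as reported.
canagliflozin_rate = 43.2
placebo_rate = 61.2

# Crude rate ratio, to compare against the reported hazard ratio of 0.70.
rate_ratio = canagliflozin_rate / placebo_rate  # about 0.71
```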

    Constrained nurse staffing analysis

    Nurse work force management has been described as a multi-phase, sequential planning and control process. Previous research has addressed this process by focusing on the development of phase-specific problem-solving methodologies. For practicing managers, it may be more beneficial to evaluate the various policy options that management may pursue in addressing the nurse work force issue in light of the nationwide shortage of qualified nurses. This research evaluates management policy at the staffing phase, since these decisions have the broadest impact on nurse work force utilization. The impact of nurse staffing policy options on annual nursing labor costs is evaluated for a large public hospital in the State of Florida. A linear programming staffing model served as the research vehicle for the study, and response surface methodology was used to investigate the relationship between labor costs and the policy options. Service level, nurse labor availability, nurse staffing mix, and flex-staff assignment had the most significant effects on annual nursing labor costs. The implications of these findings for work force management and suggestions for future research are presented.
    Keywords: health service; LP; policy analysis; operations management
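The study's linear programming model is not reproduced here, but the flavor of the staffing-mix trade-off it evaluates can be sketched with a small exhaustive search over skill mixes. The wages, required daily care hours, and minimum-RN policy constraint below are hypothetical stand-ins for the policy options discussed.

```python
def cheapest_mix(required_hours, rn_wage, lpn_wage, min_rn_fraction,
                 max_staff=40, shift_hours=8):
    """Enumerate (RN, LPN) staffing mixes that cover the required daily care
    hours and satisfy a minimum registered-nurse fraction; return the
    cheapest as (daily_cost, rn_count, lpn_count)."""
    best = None
    for rn in range(max_staff + 1):
        for lpn in range(max_staff + 1):
            hours = shift_hours * (rn + lpn)
            if hours < required_hours:          # service-level constraint
                continue
            if rn < min_rn_fraction * (rn + lpn):  # skill-mix policy constraint
                continue
            cost = shift_hours * (rn * rn_wage + lpn * lpn_wage)
            if best is None or cost < best[0]:
                best = (cost, rn, lpn)
    return best

# Hypothetical ward: 160 care hours/day, RNs at $40/h, LPNs at $28/h,
# policy requiring at least half of the staff to be RNs.
cost, rn, lpn = cheapest_mix(required_hours=160, rn_wage=40, lpn_wage=28,
                             min_rn_fraction=0.5)
```

An LP solver generalizes this idea to many shifts, wards, and flex-staff variables at once; the enumeration above is only meant to make the constraint structure concrete.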

    An interactive, optimization-based decision support system for scheduling part-time, computer lab attendants

    The labor tour scheduling problem has attracted much recent research, focusing on the development and evaluation of optimal and heuristic methods to minimize labor costs while satisfying demand for labor. Researchers typically assume that a sufficient labor pool is available. However, service organizations such as fast-food restaurants, grocery stores, and video rental stores, as well as not-for-profit organizations using volunteer workers, typically use a large number of part-time employees with limited availabilities for work. This study presents an interactive decision support system that addresses the conflicting objectives of efficient labor scheduling and accommodating employee needs. The system uses a linear programming model to provide sets of optimal shifts from which employees can construct acceptable weekly schedules; the manager may override the schedule if necessary. The decision support system is used to schedule student computer lab attendants at a major university in an efficient and equitable manner.
    Keywords: decision support systems; LP; operations management; part-time personnel scheduling
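The matching problem at the heart of such a system can be illustrated without an LP solver. The greedy sketch below is only a stand-in for the paper's optimization model: it assigns part-time workers with limited availabilities to demanded shifts while spreading the load equitably. All names, shifts, and availabilities are invented for the example.

```python
def greedy_schedule(demand, availability, max_shifts_per_person=3):
    """demand: {shift: staff needed}; availability: {person: set of shifts}.
    Fill each shift with the least-loaded available workers."""
    assigned = {shift: [] for shift in demand}
    load = {person: 0 for person in availability}
    for shift, needed in demand.items():
        # Prefer the least-loaded available workers (equity objective).
        candidates = sorted((p for p in availability if shift in availability[p]),
                            key=lambda p: load[p])
        for person in candidates:
            if len(assigned[shift]) == needed:
                break
            if load[person] < max_shifts_per_person:
                assigned[shift].append(person)
                load[person] += 1
    return assigned

demand = {"Mon-AM": 2, "Mon-PM": 1, "Tue-AM": 2}
availability = {
    "ana":  {"Mon-AM", "Tue-AM"},
    "ben":  {"Mon-AM", "Mon-PM"},
    "chen": {"Mon-PM", "Tue-AM"},
}
schedule = greedy_schedule(demand, availability)
```

An interactive system layers a manager-override step on top of whatever the optimizer proposes, which is the workflow the abstract describes.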

    Augmented neural networks and problem-structure based heuristics for the bin-packing problem

    In this paper, we apply the Augmented Neural Networks (AugNN) approach to the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions quickly. This is the first time the approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem-structure-based heuristics can be applied. We empirically demonstrate the effectiveness of the AugNN and decomposition approaches on many benchmark problems from the literature. Of the 1210 benchmark problems tested, 917 were solved to optimality; the average gap between the obtained solution and the upper bound across all problems was under 0.66%, and computation time averaged below 33 seconds per problem. We also discuss the computational complexity of our approach.
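A priority-rule heuristic of the kind an AugNN-style method starts from can be shown concretely. First-fit decreasing, sketched below with invented data, sorts items by size and places each into the first open bin that fits; the specific rule and weights used by the authors are not given here, so this is illustrative only.

```python
import math

def first_fit_decreasing(items, capacity):
    """Pack items into bins of the given capacity, first-fit after sorting
    items in decreasing order; returns the list of bins' contents."""
    bins = []  # each bin is a list [remaining_capacity, contents]
    for item in sorted(items, reverse=True):
        for b in bins:
            if b[0] >= item:        # first open bin with room
                b[1].append(item)
                b[0] -= item
                break
        else:                       # no bin fits: open a new one
            bins.append([capacity - item, [item]])
    return [b[1] for b in bins]

items = [4, 8, 1, 4, 2, 1]
packed = first_fit_decreasing(items, capacity=10)
lower_bound = math.ceil(sum(items) / 10)  # trivial LP lower bound on bin count
```

Comparing the heuristic's bin count against such a lower bound is one way to measure the optimality gaps reported above; an iterative metaheuristic then perturbs the rule's priorities to close that gap.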