
    Numerical Simulation for Heat Transfer in Liquid Cooling System of Electronic Components

    In this study, we address the task of optimizing the distributor of a liquid cooling system for electronic components by means of numerical simulation of heat transfer in the investigated object. Solving this task allowed us to find the optimal geometric parameters of the thermal spreader.
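    As a minimal illustration of the kind of numerical heat-transfer computation involved (not the authors' solver), the sketch below relaxes a steady-state 2D conduction problem with Jacobi iteration; the grid size, boundary temperatures, and convergence tolerance are all assumptions made for the example.

```python
# Steady-state 2D heat conduction on a uniform grid, solved by Jacobi
# iteration. Generic sketch only; grid size, boundary temperatures, and
# tolerance are illustrative assumptions, not values from the paper.
import numpy as np

def solve_steady_heat(nx=50, ny=50, t_hot=100.0, t_cold=20.0,
                      tol=1e-4, max_iter=20_000):
    T = np.full((ny, nx), t_cold)
    T[0, :] = t_hot          # heated edge (e.g., component side)
    T[-1, :] = t_cold        # cooled edge (e.g., coolant side)
    for _ in range(max_iter):
        T_new = T.copy()
        # Averaging the four neighbors approximates Laplace's equation
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                    T[1:-1, :-2] + T[1:-1, 2:])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

temperature = solve_steady_heat()
print(f"Peak temperature: {temperature.max():.1f} °C")
```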

    Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    Background: Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods: Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results: Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions: Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
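    A hedged sketch of the additive scoring idea described above: each (cause, symptom) pair gets a tariff measuring how distinctive that symptom's endorsement rate is for that cause, a death's score for a cause is the sum of tariffs over its endorsed symptoms, and the highest-scoring cause is predicted. The toy routine below omits refinements of the published method (such as tariff significance testing and top-item truncation), and the variable names are illustrative.

```python
# Simplified Tariff-style scoring. X is a binary (n_deaths, n_symptoms)
# endorsement matrix; causes holds the gold-standard cause label of each
# training death. Tariffs are median/IQR-normalized endorsement rates.
import numpy as np

def train_tariffs(X, causes):
    labels = np.unique(causes)
    # Endorsement rate of each symptom within each cause
    rates = np.vstack([X[causes == c].mean(axis=0) for c in labels])
    med = np.median(rates, axis=0)
    q75, q25 = np.percentile(rates, [75, 25], axis=0)
    iqr = q75 - q25
    # Guard against zero IQR so the division stays defined
    tariffs = (rates - med) / np.where(iqr > 0, iqr, 1.0)
    return labels, tariffs

def predict(X, labels, tariffs):
    # For binary X, the matrix product sums the tariffs of exactly the
    # endorsed symptoms for each cause
    scores = X @ tariffs.T
    return labels[scores.argmax(axis=1)]
```

    CSMFs then follow directly from the predicted labels: the fraction of deaths assigned to each cause is that cause's estimated mortality fraction.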

    Probabilistic Analysis of Facility Location on Random Shortest Path Metrics

    The facility location problem is an NP-hard optimization problem. Therefore, approximation algorithms are often used to solve large instances. Such algorithms often perform much better than worst-case analysis suggests. Therefore, probabilistic analysis is a widely used tool to analyze such algorithms. Most research on probabilistic analysis of NP-hard optimization problems involving metric spaces, such as the facility location problem, has focused on Euclidean instances; instances with independent (random) edge lengths, which are non-metric, have also been studied. We would like to extend this knowledge to other, more general metrics. We investigate the facility location problem using random shortest path metrics. We analyze some probabilistic properties of a simple greedy heuristic that yields a solution to the facility location problem: opening the $\kappa$ cheapest facilities (with $\kappa$ depending only on the facility opening costs). If the facility opening costs are such that $\kappa$ is not too large, then we show that this heuristic is asymptotically optimal. On the other hand, for large values of $\kappa$, the analysis becomes more difficult, and we provide a closed-form expression as an upper bound for the expected approximation ratio. In the special case where all facility opening costs are equal, this closed-form expression reduces to $O(\sqrt[4]{\ln(n)})$, or $O(1)$, or even $1+o(1)$ if the opening costs are sufficiently small.

    Comment: A preliminary version accepted to CiE 201
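    A concrete reading of the heuristic, as a short sketch: it assumes the instance is given as an explicit opening-cost vector and a client-facility distance matrix, and that $\kappa$ is supplied by the caller rather than derived from the opening costs as in the paper.

```python
# Greedy heuristic: open the kappa cheapest facilities, then connect
# every client to its nearest open facility. Returns the total cost
# (opening costs plus connection costs).
import numpy as np

def greedy_facility_location(opening_costs, dist, kappa):
    """opening_costs: (n,) array; dist: (n_clients, n) metric distances."""
    open_idx = np.argsort(opening_costs)[:kappa]   # kappa cheapest facilities
    connection = dist[:, open_idx].min(axis=1)     # nearest open facility per client
    return opening_costs[open_idx].sum() + connection.sum()
```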

    Direct estimation of cause-specific mortality fractions from verbal autopsies: multisite validation study using clinical diagnostic gold standards

    Background: Verbal autopsy (VA) is used to estimate the causes of death in areas with incomplete vital registration systems. The King and Lu method (KL) for direct estimation of cause-specific mortality fractions (CSMFs) from VA studies is an analysis technique that estimates CSMFs in a population without predicting individual-level cause of death as an intermediate step. In previous studies, KL has shown promise as an alternative to physician-certified verbal autopsy (PCVA). However, it has previously been impossible to validate KL with a large dataset of VAs for which the underlying cause of death is known to meet rigorous clinical diagnostic criteria. Methods: We applied the KL method to adult, child, and neonatal VA datasets from the Population Health Metrics Research Consortium gold standard verbal autopsy validation study, a multisite sample of 12,542 VAs where gold standard cause of death was established using strict clinical diagnostic criteria. To emulate real-world populations with varying CSMFs, we evaluated the KL estimations for 500 different test datasets of varying cause distribution. We assessed the quality of these estimates in terms of CSMF accuracy as well as linear regression and compared this with the results of PCVA. Results: KL performance is similar to PCVA in terms of CSMF accuracy, attaining values of 0.669, 0.698, and 0.795 for adult, child, and neonatal age groups, respectively, when health care experience (HCE) items were included. We found that the length of the cause list has a dramatic effect on KL estimation quality, with CSMF accuracy decreasing substantially as the length of the cause list increases. We found that KL is not reliant on HCE the way PCVA is, and without HCE, KL outperforms PCVA for all age groups. Conclusions: Like all computer methods for VA analysis, KL is faster and cheaper than PCVA. Since it is a direct estimation technique, though, it does not produce individual-level predictions. KL estimates are of similar quality to PCVA and slightly better in most cases. Compared to other recently developed methods, however, KL would only be the preferred technique when the cause list is short and individual-level predictions are not needed.
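    A rough sketch of the direct-estimation idea (not the exact KL estimator, which works with subsampled symptom subsets and bootstrapping): treat the population's symptom-pattern distribution as a mixture of cause-conditional pattern distributions and recover the mixing weights, i.e. the CSMFs, by constrained least squares. The CSMF accuracy metric used in the evaluation is also shown; it is the standard metric from the VA validation literature.

```python
# P(S) = P(S|D) @ pi: the observed distribution over symptom patterns is
# a mixture of cause-conditional pattern distributions, with the CSMF
# vector pi as mixing weights. Solve for pi without any individual-level
# cause assignment.
import numpy as np
from scipy.optimize import nnls

def estimate_csmf(P_S_given_D, P_S):
    """P_S_given_D: (n_patterns, n_causes); P_S: (n_patterns,)."""
    pi, _ = nnls(P_S_given_D, P_S)   # nonnegative least squares
    return pi / pi.sum()             # normalize to a valid CSMF vector

def csmf_accuracy(true, pred):
    # 1 minus total absolute CSMF error, scaled by its maximum possible
    # value given the true distribution (Murray et al.)
    return 1 - np.abs(true - pred).sum() / (2 * (1 - true.min()))
```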

    Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach

    The subset sum problem (SSP) can be briefly stated as: given a target integer $E$ and a set $A$ containing $n$ positive integers $a_j$, find a subset of $A$ summing to $E$. The density $d$ of an SSP instance is defined as the ratio of $n$ to $m$, where $m$ is the logarithm of the largest integer in $A$. Based on the structural and statistical properties of subset sums, we present an improved enumeration scheme for SSP and implement it as a complete and exact algorithm (EnumPlus). The algorithm always equivalently reduces an instance to a low-density one, and then solves it by enumeration. Through this approach, we show that it is possible to design a single algorithm that can efficiently solve instances of arbitrary density in a uniform way. Furthermore, our algorithm has a considerable performance advantage over previous algorithms. First, it extends the density range in which SSP can be solved in expected polynomial time: it solves SSP in expected $O(n\log{n})$ time when the density $d \geq c\cdot \sqrt{n}/\log{n}$, while the previously best density range is $d \geq c\cdot n/(\log{n})^{2}$. In addition, the overall expected time and space requirements in the average case are proven to be $O(n^5\log n)$ and $O(n^5)$, respectively. Second, in the worst case, it slightly improves the previously best time complexity of exact algorithms for SSP: the worst-case time complexity of our algorithm is proved to be $O((n-6)2^{n/2}+n)$, while the previously best result is $O(n2^{n/2})$.

    Comment: 11 pages, 1 figure
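    For orientation, here is the textbook meet-in-the-middle enumeration whose $O(n2^{n/2})$ worst case is the baseline the paper improves on; it is not the EnumPlus algorithm itself, only a reference point for the complexity claims.

```python
# Classic meet-in-the-middle for subset sum: enumerate all subset sums
# of each half, sort one side, and binary-search for complements.
from bisect import bisect_left

def subset_sum(a, target):
    half = len(a) // 2
    left, right = a[:half], a[half:]

    def all_sums(items):
        sums = {0}
        for x in items:
            sums |= {s + x for s in sums}
        return sums

    right_sums = sorted(all_sums(right))
    for s in all_sums(left):
        # Binary search for a complementary sum in the right half
        i = bisect_left(right_sums, target - s)
        if i < len(right_sums) and right_sums[i] == target - s:
            return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5
```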

    Global estimates on the number of people blind or visually impaired by cataract: a meta-analysis from 2000 to 2020

    BACKGROUND: To estimate global and regional trends from 2000 to 2020 in the number of persons visually impaired by cataract and their proportion of the total number of vision-impaired individuals. METHODS: A systematic review and meta-analysis of published population studies and gray literature from 2000 to 2020 was carried out to estimate global and regional trends. We developed prevalence estimates based on modeled distance visual impairment and blindness due to cataract, producing location-, year-, age-, and sex-specific estimates of moderate to severe vision impairment (MSVI; presenting visual acuity <6/18 and ≥3/60) and blindness (presenting visual acuity <3/60). Estimates are age-standardized using the GBD standard population. RESULTS: In 2020, of an overall (all ages) 43.3 million people who were blind and 295 million with MSVI, 17.0 million (39.6%) were blind and 83.5 million (28.3%) had MSVI due to cataract (of the cataract blind, 60% were female; of those with MSVI, 59% were female). From 1990 to 2020, the count of persons blind (MSVI) due to cataract increased by 29.7% (93.1%), whereas the age-standardized global prevalence of cataract-related blindness decreased by 27.5% and that of MSVI increased by 7.2%. The contribution of cataract to the age-standardized prevalence of blindness exceeded the global figure only in South Asia (62.9%) and Southeast Asia and Oceania (47.9%). CONCLUSIONS: The number of people blind and with MSVI due to cataract has risen over the past 30 years, despite a decrease in the age-standardized prevalence of cataract. This indicates that cataract treatment programs have been beneficial, but population growth and aging have outpaced their impact. Growing numbers of cataract blind indicate that more, better-directed resources are needed to increase global capacity for cataract surgery.
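    A toy illustration of the direct age standardization underlying these trend comparisons (the weights below are invented stand-ins, not the actual GBD standard population): age-specific prevalences are averaged with fixed standard-population weights, so that estimates from years or regions with different age structures remain comparable.

```python
# Direct age standardization: weight crude age-specific prevalences by a
# fixed standard population. All numbers below are illustrative only.
def age_standardized_prevalence(age_specific_prev, standard_weights):
    """Both inputs are lists over the same age groups; weights sum to 1."""
    return sum(p * w for p, w in zip(age_specific_prev, standard_weights))

prev_2020 = [0.001, 0.01, 0.08]   # young, middle, old age groups
weights   = [0.50, 0.35, 0.15]    # standard population shares (made up)
print(f"{age_standardized_prevalence(prev_2020, weights):.4f}")
```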