
    A proposed methodology for deriving tsunami fragility functions for buildings using optimum intensity measures

    Tsunami fragility curves are statistical models that form a key component of tsunami risk models, as they provide a probabilistic link between a tsunami intensity measure (TIM) and building damage. Existing studies apply different TIMs (e.g. depth, velocity, force) with conflicting recommendations as to which to use. This paper presents a rigorous methodology, using advanced statistical methods, for selecting the optimal TIM for fragility function derivation for any given dataset. The methodology is demonstrated on a unique, detailed, disaggregated damage dataset from the 2011 Great East Japan earthquake and tsunami (GEJE; 67,125 buildings in total), identifying the optimum TIM for describing the observed damage at the case study locations. The paper first presents the proposed methodology, which is broken into three steps: (1) exploratory analysis, (2) statistical model selection and trend analysis, and (3) comparison and selection of TIMs. The case study dataset is then presented and the methodology applied to it. In Step 1, exploratory analysis of the case study dataset suggests that fragility curves should be constructed for the sub-categories of engineered (RC and steel) and non-engineered (wood and masonry) construction materials. It is shown that excluding buildings of unknown construction material (common practice in existing studies) may bias the results; hence, these buildings are classified as engineered or non-engineered through multiple imputation (MI) techniques. In Step 2, a sensitivity analysis of several statistical methods for fragility curve derivation is conducted in order to select multiple statistical models with which to conduct the further exploratory analysis and the TIM comparison (so that the conclusions drawn are not model-specific). Data aggregation and ordinary least squares parameter estimation (both used in existing studies) are rejected, as they are quantitatively shown to reduce fragility curve accuracy and increase uncertainty. Partially ordered probit models and generalised additive models (GAMs) are selected for the TIM comparison of Step 3. In Step 3, fragility curves are constructed for a number of TIMs obtained from numerical simulation of the tsunami inundation of the 2011 GEJE. These fragility curves are compared using K-fold cross-validation (KFCV), and it is found that, for the case study dataset, a force-based measure that accounts for different flow regimes (indicated by Froude number) proves the most efficient TIM. It is recommended that the methodology proposed in this paper be applied when deriving future fragility functions based on optimum TIMs. With the introduction of several concepts novel to the field of fragility assessment (MI, GAMs, and KFCV for model optimisation and comparison), this study has significant implications for the future generation of empirical and analytical fragility functions.
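    The KFCV comparison at the heart of Step 3 is easy to illustrate. Below is a minimal Python sketch, not the authors' code: it fits a logistic fragility curve on the log of each candidate TIM and ranks the TIMs by mean held-out log-loss. The synthetic data, the binary damage/no-damage simplification and the kfcv_score helper are all illustrative assumptions.

        # Rank candidate TIMs by K-fold cross-validated predictive loss of a
        # logistic fragility curve fitted on ln(TIM). Hypothetical data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import log_loss
        from sklearn.model_selection import KFold

        def kfcv_score(tim, y, k=10, seed=0):
            """Mean held-out log-loss of P(damage | ln TIM) over k folds."""
            x = np.log(tim).reshape(-1, 1)  # fragility curves use log intensity
            losses = []
            for train, test in KFold(k, shuffle=True, random_state=seed).split(x):
                model = LogisticRegression().fit(x[train], y[train])
                proba = model.predict_proba(x[test])[:, 1]
                losses.append(log_loss(y[test], proba, labels=[0, 1]))
            return np.mean(losses)

        # Synthetic building dataset: y = 1 if the damage state is exceeded.
        rng = np.random.default_rng(0)
        depth = rng.lognormal(0.5, 0.8, 2000)                # inundation depth (m)
        force = 0.5 * depth * rng.lognormal(0.2, 0.5, 2000)  # crude force proxy
        y = (rng.random(2000) < 1 / (1 + np.exp(-(np.log(force) - 0.3)))).astype(int)

        for name, tim in [("depth", depth), ("force", force)]:
            print(f"{name}: mean held-out log-loss = {kfcv_score(tim, y):.4f}")

    Lower held-out loss marks the more efficient TIM; with ordinal damage states, the same loop applies with an ordered (or partially ordered) probit model in place of the logistic regression.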

    Generalized Totalizer Encoding for Pseudo-Boolean Constraints

    Pseudo-Boolean constraints, also known as 0-1 Integer Linear Constraints, are used to model many real-world problems. A common approach to solving these constraints is to encode them into a SAT formula. The runtime of the SAT solver on such a formula is sensitive to the manner in which the given pseudo-Boolean constraints are encoded. In this paper, we propose the Generalized Totalizer Encoding (GTE), an arc-consistency-preserving extension of the Totalizer encoding to pseudo-Boolean constraints. Unlike some other encodings, the number of auxiliary variables required by GTE does not depend on the magnitudes of the coefficients; instead, it depends on the number of distinct combinations of these coefficients. We show the superiority of GTE with respect to other encodings when large pseudo-Boolean constraints have a small number of distinct coefficients. Our experimental results also show that GTE remains competitive even when the pseudo-Boolean constraints do not have this characteristic.
    Comment: 10 pages, 2 figures, 2 tables. To be published in the 21st International Conference on Principles and Practice of Constraint Programming, 2015.
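    To make the variable-count claim concrete, the Python sketch below reconstructs the sum bookkeeping behind a Totalizer-style merge tree (an assumed simplification, not the authors' implementation): each internal node of a balanced tree over the weighted inputs carries one auxiliary variable per distinct attainable sum of its subtree, so repeated coefficients keep the node sets small no matter how large the weights are.

        # Count GTE-style auxiliary variables: one per distinct attainable sum
        # at each internal node of a balanced merge tree (clauses omitted).
        def gte_sums(weights):
            """Return (distinct sums at the root, total auxiliary variables)."""
            layer = [{w} for w in weights]  # leaves: one input literal per weight
            aux = 0
            while len(layer) > 1:
                nxt = []
                for i in range(0, len(layer) - 1, 2):
                    left, right = layer[i], layer[i + 1]
                    # Parent sums: left-only sums, right-only sums, and every
                    # pairwise combination of a left sum with a right sum.
                    node = left | right | {a + b for a in left for b in right}
                    aux += len(node)        # one "sum >= s" variable per sum
                    nxt.append(node)
                if len(layer) % 2:
                    nxt.append(layer[-1])   # odd node is carried up unmerged
                layer = nxt
            return layer[0], aux

        # Equal coefficients: few distinct sums despite magnitude 1000.
        print(gte_sums([1000, 1000, 1000, 1000]))  # ({1000, 2000, 3000, 4000}, 8)
        # All-distinct coefficients: the sum sets, and hence the encoding, grow.
        print(gte_sums([3, 5, 7, 11]))

    The full encoding would additionally generate the clauses relating child and parent variables, and can prune sums beyond the constraint bound; the sketch keeps only the part that explains the size argument.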

    Approximation Strategies for Incomplete MaxSAT

    Incomplete MaxSAT solving aims to quickly find a solution that attempts to minimize the sum of the weights of the unsatisfied soft clauses without providing any optimality guarantees. In this paper, we propose two approximation strategies for improving incomplete MaxSAT solving. In one of the strategies, we cluster the weights and approximate them with a representative weight. In another strategy, we break up the problem of minimizing the sum of weights of unsatisfiable clauses into multiple minimization subproblems. Experimental results show that approximation strategies can be used to find better solutions than the best incomplete solvers in the MaxSAT Evaluation 2017.
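    The weight-clustering strategy can be pictured with a small sketch. The 1-D k-means below and the choice of the (rounded) cluster mean as the representative weight are illustrative assumptions rather than the paper's exact scheme.

        # Approximate soft-clause weights by clustering them and substituting
        # each weight with its cluster's rounded mean, so the instance has far
        # fewer distinct weights for the solver to reason about.
        import numpy as np

        def cluster_weights(weights, k=3, iters=50, seed=0):
            """Simple 1-D k-means; returns the representative weights."""
            w = np.asarray(weights, dtype=float)
            uniq = np.unique(w)
            rng = np.random.default_rng(seed)
            centers = rng.choice(uniq, size=min(k, len(uniq)), replace=False)
            for _ in range(iters):
                assign = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
                centers = np.array([w[assign == j].mean() if np.any(assign == j)
                                    else centers[j] for j in range(len(centers))])
            return np.rint(centers[assign]).astype(int)

        weights = [1, 2, 2, 3, 90, 100, 110, 5000]
        print(cluster_weights(weights))  # e.g. [2 2 2 2 100 100 100 5000]

    Solving the approximated instance yields a solution whose true cost under the original weights can then be reported; the intuition is that collapsing many nearby weights into one representative simplifies the solver's search.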

    Molecular Characterization of Rotavirus Gastroenteritis Strains, Iraqi Kurdistan

    Of 260 children with acute diarrhea in Erbil, Iraqi Kurdistan, 96 (37%) were infected with rotavirus. Reverse transcription–polymerase chain reaction identified G1, G4, G2, G9, P[8], P[6], and P[4] as the most common genotypes. Eight G/P combinations were found, but P[8]G1 and P[4]G2 accounted for >50% of the strains.

    On Tackling the Limits of Resolution in SAT Solving

    The practical success of Boolean Satisfiability (SAT) solvers stems from the CDCL (Conflict-Driven Clause Learning) approach to SAT solving. However, from a propositional proof complexity perspective, CDCL is no more powerful than the resolution proof system, for which many hard examples exist. This paper proposes a new problem transformation, which enables reducing the decision problem for formulas in conjunctive normal form (CNF) to the problem of solving maximum satisfiability over Horn formulas. Given the new transformation, the paper proves a polynomial bound on the number of MaxSAT resolution steps for pigeonhole formulas. This result is in clear contrast with earlier results on the length of proofs of MaxSAT resolution for pigeonhole formulas. The paper also establishes the same polynomial bound in the case of modern core-guided MaxSAT solvers. Experimental results, obtained on CNF formulas known to be hard for CDCL SAT solvers, show that these can be efficiently solved with modern MaxSAT solvers.
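    The shape of such a transformation can be sketched with a dual-rail style encoding; the Python reconstruction below is an assumption based on the abstract, not necessarily the paper's exact reduction. Each variable x gets indicators p_x ("x is true") and n_x ("x is false"); every hard clause contains only negative literals and is therefore Horn, and the original CNF is satisfiable exactly when an optimal MaxSAT solution falsifies only n of the 2n unit soft clauses.

        # Reduce SAT for a CNF (DIMACS-style integer literals) to Horn MaxSAT.
        def cnf_to_horn_maxsat(clauses, num_vars):
            """Return (hard, soft) clause lists over 2*num_vars variables."""
            p = lambda v: v              # p_v: variable v assigned true
            n = lambda v: num_vars + v   # n_v: variable v assigned false
            # Hard: a variable cannot be both true and false.
            hard = [[-p(v), -n(v)] for v in range(1, num_vars + 1)]
            for cl in clauses:
                # Literal v -> "v not false" (-n_v); literal -v -> "v not true" (-p_v).
                hard.append([-n(l) if l > 0 else -p(-l) for l in cl])
            # Soft: prefer every indicator true; each variable must falsify at
            # least one, so the optimum is num_vars iff the CNF is satisfiable.
            soft = [[p(v)] for v in range(1, num_vars + 1)] + \
                   [[n(v)] for v in range(1, num_vars + 1)]
            return hard, soft

        # Example: (x1 v -x2) & (x2 v x3)
        hard, soft = cnf_to_horn_maxsat([[1, -2], [2, 3]], num_vars=3)
        assert all(lit < 0 for cl in hard for lit in cl)  # all-negative => Horn
        print(hard)  # [[-1, -4], [-2, -5], [-3, -6], [-4, -2], [-5, -6]]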

    ERYTHROPOIETIN FOR THE TREATMENT OF SUBARACHNOID HEMORRHAGE: A FEASIBLE INGREDIENT FOR A SUCCESSFUL MEDICAL RECIPE

    Subarachnoid hemorrhage (SAH) following aneurysm bleeding accounts for 6% to 8% of all cerebrovascular accidents. Although an aneurysm can be effectively managed by surgery or endovascular therapy, delayed cerebral ischemia is diagnosed in a high percentage of patients, resulting in significant morbidity and mortality. Cerebral vasospasm occurs in more than half of all patients after aneurysm rupture and is recognized as the leading cause of delayed cerebral ischemia after SAH. Hemodynamic strategies and endovascular procedures may be considered for the treatment of cerebral vasospasm. In recent years, the mechanisms contributing to the development of vasospasm, abnormal reactivity of cerebral arteries and cerebral ischemia following SAH have been intensively investigated. A number of pathological processes have been identified in the pathogenesis of vasospasm, including endothelial injury, smooth muscle cell contraction from spasmogenic substances produced by the subarachnoid blood clots, changes in vascular responsiveness and the inflammatory response of the vascular endothelium. To date, the current therapeutic interventions remain ineffective, being limited to the manipulation of systemic blood pressure, variation of blood volume and viscosity, and control of arterial carbon dioxide tension. In this scenario, the hormone erythropoietin (EPO) has been found to exert a neuroprotective action during experimental SAH when its recombinant form (rHuEPO) is systemically administered. However, recent translation of experimental data into clinical trials has suggested an unclear role for recombinant human EPO in the setting of SAH. In this context, the aim of the current review is to present the evidence on the potential role of EPO in cerebrovascular dysfunction following aneurysmal subarachnoid hemorrhage.

    Pharmacological treatment of delayed cerebral ischemia and vasospasm in subarachnoid hemorrhage

    Subarachnoid hemorrhage after the rupture of a cerebral aneurysm is the cause of 6% to 8% of all cerebrovascular accidents, affecting 10 per 100,000 people each year. Despite effective treatment of the aneurysm, delayed cerebral ischemia (DCI) is observed in 30% of patients, with a peak on the tenth day, resulting in significant morbidity and mortality. Cerebral vasospasm occurs in more than half of all patients and is recognized as the main cause of delayed cerebral ischemia after subarachnoid hemorrhage. Its treatment comprises hemodynamic management and endovascular procedures. To date, the only drug shown to be efficacious against both the incidence of vasospasm and poor outcome is nimodipine. Given its modest effects, new pharmacological treatments are being developed to prevent and treat DCI. We review the different drugs currently being tested.

    Erythropoietin: a multimodal neuroprotective agent

    The tissue-protective functions of the hematopoietic growth factor erythropoietin (EPO) are independent of its action on erythropoiesis. EPO and its receptors (EPOR) are expressed in multiple brain cells during brain development and are upregulated in the adult brain after injury. Peripherally administered EPO crosses the blood-brain barrier, activates anti-apoptotic, anti-oxidant and anti-inflammatory signaling in neurons, glial cells and cerebrovascular endothelial cells, and stimulates angiogenesis and neurogenesis. These mechanisms underlie its potent tissue-protective effects in experimental models of stroke, cerebral hemorrhage, traumatic brain injury, and neuroinflammatory and neurodegenerative diseases. The preclinical data supporting the use of EPO in brain disease have already been translated into first clinical pilot studies, with encouraging results for the use of EPO as a neuroprotective agent.