
    ParaDIME: Parallel Distributed Infrastructure for Minimization of Energy for data centers

    The dramatic environmental and economic impact of the ever-increasing power and energy consumption of modern computing devices in data centers is now a critical challenge. On the one hand, designers use technology scaling as one method of confronting the phenomenon called dark silicon (only segments of a chip can function concurrently due to power restrictions). On the other hand, designers use extreme-scale systems such as teradevices to meet the performance needs of their applications, which in turn increases the power consumption of the platform. To overcome these challenges, we need novel computing paradigms that address energy efficiency. One promising solution is to incorporate parallel distributed methodologies at different abstraction levels. The FP7 project ParaDIME pursues this objective, providing distributed software-hardware techniques at different abstraction levels to attack the power-wall problem. In particular, the ParaDIME framework will utilize: circuit and architecture operation below safe voltage limits for drastic energy savings, specialized energy-aware computing accelerators, heterogeneous computing, an energy-aware runtime, approximate computing, and power-aware message passing. The major outcome of the project will be a novel processor architecture for a heterogeneous distributed system that exploits future device characteristics, together with a runtime and programming model, for drastic energy savings in data centers. Wherever possible, ParaDIME will adopt multidisciplinary techniques, such as hardware support for message passing, runtime energy optimization using new hardware energy performance counters, accelerators for error recovery from sub-safe-voltage operation, and approximate computing through annotated code.
Furthermore, we will establish and investigate the theoretical limits of energy savings at the device, circuit, architecture, runtime and programming-model levels of the computing stack, and quantify the actual energy savings achieved by the ParaDIME approach for the complete computing stack in a real environment.
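One of the techniques listed above, approximate computing through annotated code, can be illustrated with a minimal sketch. This is not ParaDIME's actual API: the names (`approximable`, `ENERGY_SAVER`, `cheap_exp`) are illustrative assumptions showing how an annotation could let an energy-aware runtime substitute a cheaper, lower-precision variant of a function.

```python
import math

# Hypothetical flag that an energy-aware runtime would set when the
# platform needs to trade accuracy for energy. Illustrative only.
ENERGY_SAVER = True

def approximable(cheap_variant):
    """Annotation marking a function as safe to replace with a cheaper
    approximation when the runtime requests energy savings."""
    def decorate(exact_fn):
        def wrapper(*args):
            if ENERGY_SAVER:
                return cheap_variant(*args)  # low-precision, low-energy path
            return exact_fn(*args)           # exact path
        return wrapper
    return decorate

def cheap_exp(x):
    # Second-order Taylor expansion of exp(x) around 0: far less work,
    # with bounded error for small |x|.
    return 1.0 + x + 0.5 * x * x

@approximable(cheap_exp)
def my_exp(x):
    return math.exp(x)
```

With `ENERGY_SAVER` set, `my_exp(0.1)` returns the Taylor approximation (about 1.105 rather than the exact 1.10517), so the call site never changes while the runtime decides which variant executes.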

    Goodbye Hartmann trial: a prospective, international, multicenter, observational study on the current use of a surgical procedure developed a century ago

    Background: The literature suggests colonic resection and primary anastomosis (RPA) instead of Hartmann's procedure (HP) for the treatment of left-sided colonic emergencies. We aim to evaluate the surgical options used globally to treat patients with acute left-sided colonic emergencies and the factors leading to the choice of treatment, comparing HP and RPA. Methods: This is a prospective, international, multicenter, observational study registered on ClinicalTrials.gov. A total of 1215 patients with left-sided colonic emergencies who required surgery were included from 204 centers during the period March 1, 2020, to May 31, 2020, with a 1-year follow-up. Results: 564 patients (43.1%) were female. The mean age was 65.9 ± 15.6 years. HP was performed in 697 (57.3%) patients and RPA in 384 (31.6%) cases. Complicated acute diverticulitis was the most common cause of left-sided colonic emergencies (40.2%), followed by colorectal malignancy (36.6%). Severe complications (Clavien-Dindo ≥ 3b) were more frequent in the HP group (P < 0.001). 30-day mortality was higher in HP patients (13.7%), especially in cases of bowel perforation and diffuse peritonitis. 1-year follow-up showed no difference in ostomy reversal rate between HP and RPA (P = 0.127). A backward likelihood logistic regression model showed that RPA was preferred in younger patients with a low ASA score (≤ 3), in cases of large bowel obstruction, in the absence of colonic ischemia, with a longer time from admission to surgery, when operating during daytime working hours, and by surgeons who had performed more than 50 colorectal resections. Conclusions: 100 years after the first Hartmann's procedure, HP remains the most common treatment for left-sided colorectal emergencies. The choice of treatment depends on patient characteristics, the time of surgery and the experience of the surgeon. RPA should be considered the gold standard for surgery, with HP being an exception.

    ZEBRA: A data-centric, hybrid-policy hardware transactional memory design

    Hardware Transactional Memory (HTM) systems in prior research have either fixed policies of conflict resolution and data versioning for the entire system or allowed a degree of flexibility at the level of transactions. Unfortunately, this results in susceptibility to pathologies, lower average performance over diverse workload characteristics, or high design complexity. In this work we explore a new dimension along which flexibility in policy can be introduced. Recognizing that contention is more a property of data than of an atomic code block, we develop an HTM system that allows selection of versioning and conflict-resolution policies at the granularity of cache lines. We discover that this neat match in granularity with that of the cache coherence protocol results in a design that is very simple and yet able to closely track or exceed the performance of the best-performing policy for a given workload. It also brings together the benefits of parallel commits (inherent in traditional eager HTMs) and good optimistic concurrency without deadlock-avoidance mechanisms (inherent in lazy HTMs), with little increase in complexity.
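The core idea of selecting policy per cache line rather than per transaction can be sketched in software. This is an illustrative model, not the ZEBRA hardware: the class name, threshold, and 64-byte line size are assumptions, and the sketch only shows how a per-line conflict counter could steer contended lines toward eager (pessimistic) handling while uncontended lines stay lazy (optimistic).

```python
from collections import defaultdict

LINE_SIZE = 64  # bytes per cache line; a common choice, assumed here

class PolicyTable:
    """Maps a cache-line address to a 'lazy' or 'eager' HTM policy,
    based on conflicts observed on that line (illustrative sketch)."""

    def __init__(self):
        self.conflicts = defaultdict(int)  # per-line conflict counter

    def line_of(self, addr):
        # All byte addresses within the same 64-byte block share a policy.
        return addr // LINE_SIZE

    def record_conflict(self, addr):
        self.conflicts[self.line_of(addr)] += 1

    def policy(self, addr, threshold=1):
        # Contended lines switch to eager conflict resolution; the rest
        # keep lazy versioning and its optimistic concurrency.
        if self.conflicts[self.line_of(addr)] >= threshold:
            return "eager"
        return "lazy"
```

Because the policy granularity matches the coherence protocol's, a hardware realization can piggyback this decision on existing per-line coherence state rather than adding per-transaction machinery.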

    Eager meets lazy: The impact of write-buffering on hardware transactional memory

    Hardware transactional memory (HTM) systems have been studied extensively along the dimensions of speculative versioning and contention-management policies. The relative performance of several design policies has been discussed at length in prior work within the framework of scalable chip-multiprocessing systems. Yet the impact of simple structural optimizations like write-buffering has not been investigated, and the performance deviations due to the presence or absence of these optimizations remain unclear. This lack of insight into the effective use and impact of these interfacial structures between the processor core and the coherent memory hierarchy forms the crux of the problem we study in this paper. Through detailed modeling of various write-buffering configurations, we show that they play a major role in determining the overall performance of a practical HTM system. Our study of both eager and lazy conflict-resolution mechanisms in a scalable parallel architecture notes a remarkable convergence of the performance of these two diametrically opposite design points when write buffers are introduced and used well to support the common case. Mitigation of redundant actions, fewer invalidations on abort, latency hiding and prefetch effects all contribute towards reducing execution times for transactions. Shorter transaction durations also imply a lower contention probability, amplifying gains even further. The insights contained in this paper, concerning the interplay between buffering mechanisms, system policies and workload characteristics, clearly distinguish performance gains attributable to write-buffering from those attributable to HTM policy. We believe this information will facilitate sound design decisions when incorporating HTMs into parallel architectures.
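Two of the effects named above, mitigation of redundant actions and cheap aborts, can be made concrete with a minimal software model of a transactional write buffer. This is an assumption-laden sketch, not the paper's simulated hardware: it shows repeated stores to one address coalescing into a single commit-time write-back, store-to-load forwarding of speculative values, and abort discarding speculative state without touching memory.

```python
class WriteBuffer:
    """Illustrative transactional write buffer (software model)."""

    def __init__(self, memory):
        self.memory = memory    # backing store: dict of addr -> value
        self.pending = {}       # speculative stores, coalesced by address
        self.stores_issued = 0  # stores the transaction executed

    def store(self, addr, value):
        self.stores_issued += 1
        # Later stores to the same address overwrite earlier ones, so the
        # redundant intermediate writes never reach the memory hierarchy.
        self.pending[addr] = value

    def load(self, addr):
        # Store-to-load forwarding: reads see the speculative value first.
        if addr in self.pending:
            return self.pending[addr]
        return self.memory.get(addr)

    def commit(self):
        # Only the coalesced final values are written back (<= stores_issued).
        writes = len(self.pending)
        self.memory.update(self.pending)
        self.pending.clear()
        return writes

    def abort(self):
        # Speculative state is simply dropped; memory was never modified,
        # so no invalidations of dirtied lines are needed.
        self.pending.clear()
```

In this model a transaction that stores three times to two addresses commits with only two write-backs, a small instance of the redundancy savings the abstract attributes to write-buffering.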

    Comprehensive analysis and insights gained from long-term experience of the Spanish DILI Registry

    Other support: Fondo Europeo de Desarrollo Regional (FEDER); Agencia Española del Medicamento; Consejería de Salud de Andalucía. Background & Aims: Prospective drug-induced liver injury (DILI) registries are important sources of information on idiosyncratic DILI. We aimed to present a comprehensive analysis of 843 patients with DILI enrolled in the Spanish DILI Registry over a 20-year period. Methods: Cases were identified, diagnosed and followed prospectively. Clinical features, drug information and outcome data were collected. Results: A total of 843 patients, with a mean age of 54 years (48% female), were enrolled up to 2018. Hepatocellular injury was associated with younger age (adjusted odds ratio [aOR] per year 0.983; 95% CI 0.974-0.991) and lower platelet count (aOR per unit 0.996; 95% CI 0.994-0.998). Anti-infectives were the most common causative drug class (40%). Liver-related mortality was more frequent in patients with hepatocellular damage aged ≥65 years (p = 0.0083) and in patients with underlying liver disease (p = 0.0221). Independent predictors of liver-related death/transplantation included nR-based hepatocellular injury, female sex, and higher aspartate aminotransferase (AST) and bilirubin values at onset. nR-based hepatocellular injury was not associated with 6-month overall mortality, for which comorbidity burden played a more important role. The prognostic capacity of Hy's law varied between causative agents. Empirical therapy (corticosteroids, ursodeoxycholic acid and MARS) was prescribed to 20% of patients. Drug-induced autoimmune hepatitis patients (26 cases) were mainly female (62%) with hepatocellular damage (92%), and more frequently received immunosuppressive therapy (58%). Conclusions: AST elevation at onset is a strong predictor of poor outcome and should be routinely assessed in DILI evaluation. Mortality is higher in older patients with hepatocellular damage and in patients with underlying hepatic conditions.
The Spanish DILI Registry is a valuable tool for identifying causative drugs, clinical signatures and prognostic risk factors in DILI, and can aid physicians in DILI characterisation and management. Lay summary: Clinical information on drug-induced liver injury (DILI) collected from patients enrolled in the Spanish DILI Registry can guide physicians in the decision-making process. We have found that older patients with hepatocellular-type liver injury and patients with additional liver conditions are at a higher risk of mortality. The type of liver injury, patient sex and the analytical values of aspartate aminotransferase and total bilirubin can also help predict clinical outcomes.