    Power and memory optimization techniques in embedded systems design

    Embedded systems operate under tight constraints on power consumption and memory (which impacts size), in addition to other constraints such as weight and cost. This dissertation addresses two key factors in embedded system design, namely minimization of power consumption and of memory requirements. The first part of the dissertation considers the problem of optimizing power consumption (peak power as well as average power) in high-level synthesis (HLS). The second part deals with memory usage optimization, mainly targeting a restricted class of computations expressed as loops accessing large data arrays that arises in scientific computing, such as the coupled cluster and configuration interaction methods in quantum chemistry. First, a mixed-integer linear programming (MILP) formulation is presented for the scheduling problem in HLS using multiple supply voltages in order to optimize peak power as well as average power and energy consumption. For large designs the MILP formulation may not be suitable; therefore, a two-phase iterative linear programming formulation and a power-resource-saving heuristic are presented to solve this problem. In addition, a new heuristic that adapts the well-known force-directed scheduling heuristic is presented for the same problem. Next, this work considers the problem of module selection simultaneously with scheduling for minimizing peak and average power consumption. Then, the problem of power consumption (peak and average) in synchronous sequential designs is addressed. A solution integrating basic retiming and multiple-voltage scheduling (MVS) is proposed and evaluated: a two-stage algorithm consisting of power-oriented retiming followed by an MVS technique for peak and/or average power optimization. Memory optimization is addressed next. Dynamic memory usage optimization during the evaluation of a special class of interdependent large data arrays is considered. Finally, this dissertation develops a novel integer linear programming (ILP) formulation for static memory optimization using the well-known loop fusion technique, encoding the legality rules for fusing a special class of loops as logical constraints over binary decision variables together with a highly effective approximation of memory usage.
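
    A minimal illustrative sketch of the kind of multiple-voltage scheduling MILP described above, written with the open-source PuLP modeller, is given below. The operations, precedence edges, voltage levels, and power numbers are assumptions for illustration only, not the dissertation's benchmarks or its exact formulation (which also handles resources, latency, and voltage-dependent delays).

        # Hedged sketch: schedule single-cycle operations onto control steps and supply
        # voltages so that the peak power drawn in any step is minimized.
        from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

        ops = ["a", "b", "c"]                 # hypothetical dataflow-graph operations
        deps = [("a", "c"), ("b", "c")]       # operation c consumes the results of a and b
        steps = range(4)                      # available control steps
        power = {"5.0V": 40.0, "3.3V": 18.0}  # supply voltage -> assumed power cost per op

        prob = LpProblem("mvs_peak_power", LpMinimize)
        x = LpVariable.dicts("x", (ops, steps, power), cat=LpBinary)  # op o at step s, voltage v
        peak = LpVariable("peak_power", lowBound=0)

        prob += peak  # objective: minimize the peak power over all control steps

        for o in ops:  # every operation is scheduled exactly once, at exactly one voltage
            prob += lpSum(x[o][s][v] for s in steps for v in power) == 1

        for (u, w) in deps:  # precedence: w starts strictly after u finishes
            prob += lpSum(s * x[w][s][v] for s in steps for v in power) >= \
                    lpSum(s * x[u][s][v] for s in steps for v in power) + 1

        for s in steps:  # the power drawn in any single step is bounded by the peak variable
            prob += lpSum(power[v] * x[o][s][v] for o in ops for v in power) <= peak

        prob.solve()
        print("minimized peak power:", value(peak))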

    Automatic Segmentation of Optic Disc in Eye Fundus Images: A Survey

    Optic disc detection and segmentation are key elements of automatic retinal disease screening systems. The aim of this survey paper is to review, categorize and compare optic disc detection algorithms and methodologies, giving a description of each, highlighting their key points and reporting their performance measures. The survey first overviews the anatomy of the eye fundus, showing its main structural components along with their properties and functions. It then reviews image enhancement techniques and categorizes the image segmentation methodologies for the optic disc, which include property-based methods, methods based on convergence of blood vessels, and model-based methods. The performance of segmentation algorithms is evaluated on a number of publicly available databases of retinal images using evaluation metrics that include accuracy and true positive rate (i.e. sensitivity). The survey ends by describing the different abnormalities that occur within the optic disc region.
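
    As a concrete illustration of the evaluation metrics mentioned above, the short sketch below computes accuracy and true positive rate (sensitivity) from a predicted and a ground-truth binary optic disc mask. It is a generic NumPy sketch, not code from any of the surveyed methods.

        import numpy as np

        def segmentation_metrics(pred, truth):
            """pred, truth: boolean arrays of equal shape (True = optic disc pixel)."""
            tp = np.logical_and(pred, truth).sum()    # disc pixels correctly labelled
            tn = np.logical_and(~pred, ~truth).sum()  # background correctly labelled
            fp = np.logical_and(pred, ~truth).sum()   # background labelled as disc
            fn = np.logical_and(~pred, truth).sum()   # disc pixels missed
            accuracy = (tp + tn) / (tp + tn + fp + fn)
            sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
            return accuracy, sensitivity

        # Toy 2x2 "image": one disc pixel detected, one missed -> accuracy 0.75, sensitivity 0.5
        pred = np.array([[True, False], [False, False]])
        truth = np.array([[True, True], [False, False]])
        print(segmentation_metrics(pred, truth))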

    Improvements and Limitations of Humanized Mouse Models for HIV Research: NIH/NIAID “Meet the Experts” 2015 Workshop Summary

    The number of humanized mouse models for human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) and other infectious diseases has expanded rapidly over the past 8 years. Highly immunodeficient mouse strains, such as NOD/SCID/gamma chain-null (NSG, NOG), support better engraftment of human hematopoietic cells. Another improvement is the derivation of highly immunodeficient mice transgenic for human leukocyte antigens (HLAs) and cytokines, which support the development of HLA-restricted human T cells and heightened human myeloid cell engraftment. Humanized mice are also used to study the HIV reservoir using new imaging techniques. Despite these advances, these models still show limitations in HIV immune responses and deficits in lymphoid structures, in addition to xenogeneic graft-versus-host responses. To understand and disseminate the improvements and limitations of humanized mouse models to the scientific community, the NIH sponsored and convened a meeting on April 15, 2015, to discuss the state of knowledge concerning these questions and best practices for selecting a humanized mouse model for a particular scientific investigation. This report summarizes the findings of the NIH meeting.

    Twelve-month observational study of children with cancer in 41 countries during the COVID-19 pandemic

    Introduction: Childhood cancer is a leading cause of death. It is unclear whether the COVID-19 pandemic has impacted childhood cancer mortality. In this study, we aimed to establish all-cause mortality rates for childhood cancers during the COVID-19 pandemic and to determine the factors associated with mortality. Methods: Prospective cohort study in 109 institutions in 41 countries. Inclusion criteria: children <18 years who were newly diagnosed with, or undergoing active treatment for, acute lymphoblastic leukaemia, non-Hodgkin lymphoma, Hodgkin lymphoma, retinoblastoma, Wilms tumour, glioma, osteosarcoma, Ewing sarcoma, rhabdomyosarcoma, medulloblastoma and neuroblastoma. Of 2327 cases, 2118 patients were included in the study. The primary outcome measure was all-cause mortality at 30 days, 90 days and 12 months. Results: All-cause mortality was 3.4% (n=71/2084) at 30-day follow-up, 5.7% (n=113/1969) at 90-day follow-up and 13.0% (n=206/1581) at 12-month follow-up. The median time from diagnosis to multidisciplinary team (MDT) plan was longest in low-income countries (7 days, IQR 3-11). Multivariable analysis revealed several factors associated with 12-month mortality, including low-income (OR 6.99 (95% CI 2.49 to 19.68); p<0.001), lower-middle-income (OR 3.32 (95% CI 1.96 to 5.61); p<0.001) and upper-middle-income (OR 3.49 (95% CI 2.02 to 6.03); p<0.001) country status, and chemotherapy (OR 0.55 (95% CI 0.36 to 0.86); p=0.008) and immunotherapy (OR 0.27 (95% CI 0.08 to 0.91); p=0.035) within 30 days from the MDT plan. Multivariable analysis also revealed that laboratory-confirmed SARS-CoV-2 infection (OR 5.33 (95% CI 1.19 to 23.84); p=0.029) was associated with 30-day mortality. Conclusions: Children with cancer are more likely to die within 30 days if infected with SARS-CoV-2. However, timely treatment reduced the odds of death. This report provides crucial information to balance the benefits of providing anticancer therapy against the risks of SARS-CoV-2 infection in children with cancer.
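
    The adjusted odds ratios quoted above come from multivariable logistic regression. As a generic, hedged illustration (using synthetic data and made-up variable names, not the study's actual dataset or model), the sketch below shows how such odds ratios and their 95% confidence intervals are typically obtained by exponentiating fitted logistic regression coefficients with statsmodels.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "died_12m":   rng.binomial(1, 0.13, n),  # outcome: death within 12 months (synthetic)
            "low_income": rng.binomial(1, 0.20, n),  # illustrative country income-group indicator
            "chemo_30d":  rng.binomial(1, 0.60, n),  # illustrative treatment-within-30-days indicator
        })

        X = sm.add_constant(df[["low_income", "chemo_30d"]])  # covariates plus intercept
        fit = sm.Logit(df["died_12m"], X).fit(disp=0)         # multivariable logistic regression

        odds_ratios = np.exp(fit.params)   # exponentiated coefficients are adjusted odds ratios
        conf_int = np.exp(fit.conf_int())  # 95% confidence intervals on the odds-ratio scale
        print(pd.concat([odds_ratios, conf_int], axis=1))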

    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures. Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain, measured on a numerical analogue scale from 0 to 100%, and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge. Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468), compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not. Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely.

    Mortality of emergency abdominal surgery in high-, middle- and low-income countries

    Background: Surgical mortality data are collected routinely in high-income countries, yet virtually no low- or middle-income countries have outcome surveillance in place. The aim was prospectively to collect worldwide mortality data following emergency abdominal surgery, comparing findings across countries with a low, middle or high Human Development Index (HDI). Methods: This was a prospective, multicentre, cohort study. Self-selected hospitals performing emergency surgery submitted prespecified data for consecutive patients from at least one 2-week interval during July to December 2014. Postoperative mortality was analysed by hierarchical multivariable logistic regression. Results: Data were obtained for 10 745 patients from 357 centres in 58 countries; 6538 were from high-, 2889 from middle- and 1318 from low-HDI settings. The overall mortality rate was 1.6 per cent at 24 h (high 1.1 per cent, middle 1.9 per cent, low 3.4 per cent; P < 0.001), increasing to 5.4 per cent by 30 days (high 4.5 per cent, middle 6.0 per cent, low 8.6 per cent; P < 0.001). Of the 578 patients who died, 404 (69.9 per cent) did so between 24 h and 30 days following surgery (high 74.2 per cent, middle 68.8 per cent, low 60.5 per cent). After adjustment, 30-day mortality remained higher in middle-income (odds ratio (OR) 2.78, 95 per cent c.i. 1.84 to 4.20) and low-income (OR 2.97, 1.84 to 4.81) countries. Surgical safety checklist use was less frequent in low- and middle-income countries, but when used was associated with reduced mortality at 30 days. Conclusion: Mortality is three times higher in low- compared with high-HDI countries even when adjusted for prognostic factors. Patient safety factors may have an important role. Registration number: NCT02179112 (http://www.clinicaltrials.gov).

    by

    To my LORD WHO created me and created everything. ACKNOWLEDGEMENTS: I would like to express my gratitude to my advisor Dr. Ramanujam for his support, encouragement, and technical advice during the course of this work. His valuable hints when I was stuck will always be appreciated. I would also like to thank Dr. R. Vaidyanathan, Dr. D. Carver, Dr. J. Tyler, Dr. J. Trahan, Dr. J. Hong, and Dr. A. Lisan for serving on my committee. Finally, I would like to express my gratitude to my parents, my eldest brother Eng. El Badry, and my wife for their marvelous support, which eventually made this work possible.

    Model-Based Hardware-Software Codesign of ECT Digital Processing Unit

    The image reconstruction algorithm and its controller constitute the main modules of an electrical capacitance tomography (ECT) system. In order to achieve a trade-off between the attainable performance and the flexibility of the image reconstruction and control design of the ECT system, a hardware-software codesign of a digital processing unit (DPU) targeting an FPGA system-on-chip (SoC) is presented. The design and implementation of the software and hardware components of the ECT-DPU, and their integration and verification based on the model-based design (MBD) paradigm, are proposed. The inner product of large vectors constitutes the core of most ECT image reconstruction algorithms. A fully parallel implementation of large vector multiplication on an FPGA consumes a huge number of resources and incurs a long combinational path delay. The proposed MBD of the ECT-DPU tackles this problem by crafting a parametric segmented parallel inner-product architecture that works as the shared hardware core unit for the parallel matrix multiplication in the image reconstruction and control of the ECT system. This allows the parameterized core unit to be configured at system level to tackle large matrices, with the segment length working as a design degree of freedom: it allows the trade-off between performance and resource usage and determines the level of computation parallelism. Using MBD with the proposed segmented architecture, the system design can be flexibly tailored to the designer's specifications to fulfil the required performance while meeting the resource constraints. In the linear back-projection image reconstruction algorithm, the segmentation scheme exhibited high resource savings of 43% and 71% for small frame-rate degradations of 3% and 14%, respectively.
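
    A minimal software sketch of the segmented inner-product idea described above is shown below, assuming NumPy: the dot product of large vectors is accumulated over fixed-length segments, with the segment length acting as the design knob that trades hardware resources for frame rate. It is an illustrative model of the behaviour, not the paper's FPGA implementation.

        import numpy as np

        def segmented_inner_product(a, b, segment_len):
            """Accumulate the dot product a.b over segments of length segment_len.
            In the FPGA core, each segment corresponds to one reuse of the shared
            multiplier/adder array; fewer, longer segments mean more parallel hardware."""
            acc = 0.0
            for start in range(0, a.size, segment_len):
                end = start + segment_len
                acc += float(np.dot(a[start:end], b[start:end]))  # one pass of the shared core
            return acc

        # Example: a 1024-element product computed with a 128-wide core needs 8 passes.
        a = np.random.rand(1024)
        b = np.random.rand(1024)
        print(np.isclose(segmented_inner_product(a, b, 128), np.dot(a, b)))  # True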