
    Popular Ensemble Methods: An Empirical Study

    An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets, using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier, especially when using neural networks. Analysis indicates that the performance of the Boosting methods depends on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
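
    As a rough illustration of the two methods compared here, the sketch below trains Bagging and Boosting ensembles of decision trees and compares them with a single tree. It uses scikit-learn and one toy data set purely for illustration; the study itself uses its own implementations over 23 data sets, so treat this as a sketch of the kind of comparison described, not the paper's setup.

        # Hedged sketch: bagging vs. boosting of decision trees via scikit-learn.
        # Illustrative only; the paper evaluates its own implementations on 23 data sets.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)

        models = {
            "single tree": DecisionTreeClassifier(random_state=0),
            "bagging (25 trees)": BaggingClassifier(
                DecisionTreeClassifier(), n_estimators=25, random_state=0),
            "boosting (25 rounds)": AdaBoostClassifier(n_estimators=25, random_state=0),
        }

        for name, model in models.items():
            acc = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name}: mean accuracy {acc:.3f}")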

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    Error Resilient Video Coding Using Bitstream Syntax And Iterative Microscopy Image Segmentation

    There has been a dramatic increase in the amount of video traffic over the Internet in the past several years. For applications like real-time video streaming and video conferencing, retransmission of lost packets is often not permitted. Popular video coding standards such as H.26x and VPx exploit spatial-temporal correlations for compression, which typically leaves compressed bitstreams vulnerable to errors. We propose several adaptive spatial-temporal error concealment approaches for subsampling-based multiple description video coding. These adaptive methods are based on motion and mode information extracted from the H.26x video bitstreams. We also present an error resilience method using data duplication in VPx video bitstreams. A recent challenge in image processing is the analysis of biomedical images acquired using optical microscopy. Due to the size and complexity of the images, automated segmentation methods are required to obtain quantitative, objective, and reproducible measurements of biological entities. In this thesis, we present two techniques for microscopy image analysis. Our first method, “Jelly Filling”, is intended to provide 3D segmentation of biological images that contain incompleteness in dye labeling. Intuitively, this method is based on filling disjoint regions of an image with jelly-like fluids to iteratively refine segments that represent separable biological entities. Our second method selectively uses a shape-based function optimization approach and a 2D marked point process simulation to quantify nuclei by their locations and sizes. Experimental results show that our proposed methods are effective in addressing the aforementioned challenges.
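
    The subsampling-based multiple description idea can be sketched crudely as follows; this is a generic illustration, not the adaptive concealment methods of the thesis, which additionally use motion and mode information extracted from the bitstream. A frame is split into two descriptions by alternating rows, and if one description is lost, its rows are concealed by interpolating from the surviving description.

        # Crude sketch of subsampling-based multiple description coding with
        # interpolation-based concealment. Illustrative only; the thesis's adaptive
        # methods use motion/mode information from the bitstream, not modelled here.
        import numpy as np

        frame = np.random.randint(0, 256, size=(8, 8)).astype(np.float64)

        desc_even = frame[0::2, :]   # description 1: even rows
        desc_odd = frame[1::2, :]    # description 2: odd rows (assume this one is lost)

        recon = np.empty_like(frame)
        recon[0::2, :] = desc_even
        for i in range(1, frame.shape[0], 2):
            upper = desc_even[i // 2, :]
            lower = desc_even[i // 2 + 1, :] if i + 1 < frame.shape[0] else upper
            recon[i, :] = (upper + lower) / 2.0   # conceal a lost row by interpolation

        print("concealment MSE:", np.mean((frame - recon) ** 2))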

    A Study on the Hierarchical Control Structure of the Islanded Microgrid

    The microgrid is essential in promoting the power system’s resilience through its ability to host small-scale DG units. Furthermore, the microgrid can isolate itself during main-grid faults and supply its own demand. However, islanded operation of the microgrid is challenging due to difficulties in frequency and voltage control. In islanded mode, grid-forming units collaborate to control the frequency and voltage. A hierarchical control structure employing the droop control technique provides these control objectives in three consecutive levels: primary, secondary, and tertiary. However, challenges associated with DG units in the vicinity of distribution networks limit the effectiveness of the islanded mode of operation. In MV and LV distribution networks, the X/R ratio is low; hence, the frequency and voltage are related to the active and reactive power through the line parameters. Therefore, frequency and voltage must be tuned for changes in active or reactive power. Furthermore, line parameter mismatch causes the voltage measured at each bus to differ, due to the different voltage drops along the lines. Hence, a trade-off forms between voltage regulation and reactive power sharing, which causes either circulating currents under voltage mismatch or overloading under reactive power mismatch. Finally, economic dispatch is usually implemented in tertiary control, which operates on a time scale of minutes to hours; an estimation algorithm is therefore required to forecast load and renewable generation, and the resulting prediction errors can affect the stability and optimality of the control. This dissertation aims to improve power system resilience by enhancing the operation of the islanded microgrid and addressing the issues above. Firstly, a linear relationship described by the line parameters is used in droop control at the primary level to accurately control the frequency and voltage based on measured active and reactive power. Secondly, an optimization-based consensus secondary control is presented to manage the trade-off between voltage regulation and reactive power sharing in an inductive grid with high line parameter mismatch. Thirdly, an economic-dispatch-based controller is implemented at the secondary level to avoid prediction errors by relying on measured active and reactive power rather than estimates of load and renewable generation. The developed methods effectively resolve the frequency and voltage control issues in MATLAB/SIMULINK simulations.
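
    For context, the conventional primary-level droop relationships that the dissertation builds on (and then reshapes with line parameters for low-X/R networks) can be written as f = f0 - m*P and V = V0 - n*Q. A minimal sketch follows; the gains and setpoints are illustrative assumptions, not values from the dissertation.

        # Conventional P-f / Q-V droop control at the primary level. Gains and
        # setpoints are illustrative only; the dissertation reshapes these
        # relationships using line parameters for low-X/R distribution networks.
        F_NOM = 50.0   # nominal frequency (Hz)
        V_NOM = 1.0    # nominal voltage (p.u.)
        M_P = 0.01     # frequency droop gain (Hz per p.u. active power)
        N_Q = 0.05     # voltage droop gain (p.u. per p.u. reactive power)

        def droop_setpoints(p_measured, q_measured):
            """Return (frequency, voltage) references for a grid-forming unit."""
            freq = F_NOM - M_P * p_measured
            volt = V_NOM - N_Q * q_measured
            return freq, volt

        # Unit loaded at 0.8 p.u. active and 0.3 p.u. reactive power:
        print(droop_setpoints(0.8, 0.3))   # -> (49.992, 0.985)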

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
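
    The priority-escalation idea can be illustrated loosely as follows: when a reconfigurable region is diagnosed as faulty, healthy regions ("slack") are reassigned so that higher-priority functions keep running and lower-priority work is demoted. Every name and data structure below is a hypothetical stand-in; the actual FaDReS/PURE schemes operate on partial reconfiguration regions of a Virtex-4 FPGA driven by a runtime health metric.

        # Loose conceptual sketch of priority escalation of slack. Hypothetical
        # names and structures; not the FaDReS/PURE FPGA implementation.
        def assign_regions(functions, regions):
            """functions: list of (name, priority); regions: dict name -> healthy flag.
            Highest-priority functions are placed on healthy regions first."""
            healthy = [r for r, ok in regions.items() if ok]
            mapping = {}
            for name, _prio in sorted(functions, key=lambda f: f[1], reverse=True):
                if healthy:
                    mapping[name] = healthy.pop(0)
            return mapping

        functions = [("DCT", 3), ("ME", 2), ("FIR", 1)]
        regions = {"PRR0": True, "PRR1": True, "PRR2": True}
        print(assign_regions(functions, regions))   # all three functions placed

        regions["PRR0"] = False                     # a region is diagnosed faulty
        print(assign_regions(functions, regions))   # lowest-priority FIR is demoted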

    Enhancing hospital planning capacity and resilience in crisis scenarios using interpretive structural modeling (ISM)

    Hospitals are critical support infrastructures. When confronted with natural disasters, infectious diseases, and other crises that severely affect the supply of and demand for local medical services, and may even jeopardize the hospital itself, a hospital needs first to secure its essential emergency functions and, secondly, to recover from the impact as quickly as possible. Hospital resilience has numerous influencing elements and evaluation criteria, but the boundaries of their internal influence relationships and hierarchical structures remain ambiguous. Therefore, this study explores the determinants of, and practical pathways for, strengthening hospital resilience from an internal management perspective, applying Group Decision Making and Interpretive Structural Modeling (ISM) to pool the knowledge and experience of experts in related fields and to identify critical variables. Based on the information collected and analyzed, a hierarchical model of hospital resilience was established. The results and practical applicability of the model were then validated by external experts, providing new knowledge for the development of hospital resilience management.
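
    The ISM procedure referenced above follows standard steps: expert judgements are aggregated into a binary influence matrix, the matrix is made transitively closed to give the reachability matrix, and the factors are partitioned into hierarchy levels. A minimal sketch of that partitioning step follows, with a hypothetical four-factor matrix standing in for the study's expert data.

        # Minimal sketch of standard ISM steps: transitive closure of an influence
        # matrix, then level partitioning via reachability/antecedent sets.
        # The 4-factor matrix is hypothetical, not data from the study.
        import numpy as np

        A = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1],
                      [0, 0, 0, 1]], dtype=bool)

        R = A.copy()                      # Warshall's algorithm -> reachability matrix
        n = len(R)
        for k in range(n):
            for i in range(n):
                if R[i, k]:
                    R[i] |= R[k]

        # Level partitioning: a factor belongs to the current (top) level when its
        # reachability set equals the intersection of its reachability and
        # antecedent sets, taken over the factors that remain.
        remaining, levels = set(range(n)), []
        while remaining:
            level = []
            for i in remaining:
                reach = {j for j in remaining if R[i, j]}
                ante = {j for j in remaining if R[j, i]}
                if reach == reach & ante:
                    level.append(i)
            levels.append(level)
            remaining -= set(level)

        print(levels)   # [[3], [2], [1], [0]] for this chain-like example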