105 research outputs found

    Adaptive Regularization for Class-Incremental Learning

    Full text link
    Class-Incremental Learning updates a deep classifier with new categories while maintaining accuracy on previously observed classes. Regularizing the neural network weights is a common way to prevent forgetting previously learned classes while learning novel ones. However, existing regularizers apply a constant magnitude throughout the learning sessions, which may not reflect the varying difficulty of the tasks encountered during incremental learning. This study investigates the necessity of adaptive regularization in Class-Incremental Learning, which dynamically adjusts the regularization strength according to the complexity of the task at hand. We propose a Bayesian Optimization-based approach to automatically determine the optimal regularization magnitude for each learning task. Our experiments on two datasets with two regularizers demonstrate the importance of adaptive regularization for accurate and less forgetful visual incremental learning.
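The per-task tuning idea can be sketched as a simple search loop: for each incremental session, candidate penalty strengths are scored on held-out data and the best one is kept. Everything below is illustrative — the toy loss, the random search (standing in for the paper's Bayesian Optimization), and the `task_difficulty` parameter are assumptions, not the authors' implementation.

```python
import random

def validation_loss(lmbda, task_difficulty):
    # Toy stand-in for held-out loss after training with a weight penalty
    # of strength lmbda; real code would train and evaluate the classifier.
    optimum = 0.1 * task_difficulty      # harder tasks want stronger penalties
    return (lmbda - optimum) ** 2

def tune_regularization(task_difficulty, trials=200, seed=0):
    """Pick the penalty strength that minimizes validation loss for one task.
    Random search stands in here for the paper's Bayesian Optimization."""
    rng = random.Random(seed)
    return min((rng.uniform(0.0, 1.0) for _ in range(trials)),
               key=lambda l: validation_loss(l, task_difficulty))

# Each incremental session gets its own strength instead of a constant one.
strengths = {t: tune_regularization(t) for t in (1, 2, 3)}
```

With the toy loss above, the tuned strength grows with task difficulty — the behaviour a constant-magnitude regularizer cannot capture.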

    Reservoir management through characterization of smart fields using capacitance-resistance models

    Get PDF
    Use of smart well technologies to improve recovery has attracted significant attention in the oil industry over the last decade. The Capacitance-Resistance Model (CRM) is a robust data-driven technique for reservoir surveillance. Reservoir sweep is a crucial part of efficient recovery, especially where significant investment has been made in smart wells equipped with remotely controllable inflow control valves (ICVs). However, as this is a relatively new concept, effective use of the technology has been a challenge. The objective of this study is to demonstrate the efficient use of ICVs in intelligent fields through the integrated use of capacitance-resistance modeling and smart wells with ICVs. A standard, realistic SPE reservoir simulation model of a waterflooding process is used, in which the smart well ICVs are controlled with conditional statements (procedures) in a commercial full-physics numerical reservoir simulator. The simulation data is used to build the CRM model and obtain inter-well connectivities at the zonal level, beyond well-level connectivity alone, since smart wells provide control over and information on the amount of injection into each layer or zone. Thus, after iteratively analyzing the CRM model to detect inter-well connectivities at the zone/layer level, the optimum injection is found not only at the well level but also at the perforation/zone level. The workflow is outlined along with the resulting improvements. Smart well technology has been challenged by its associated costs; it is therefore important to demonstrate its benefits with applications in more diverse cases and different workflows. It has been observed that robust reservoir characterization in an intelligent field can provide insight into the physics of the reservoir, including smart wells with ICVs. The results are presented against the base case to illustrate the incremental value of ICVs along with key performance indicators. Most importantly, it is shown that smart well use without a robust reservoir management strategy does not always lead to successful results. In reservoir management, it is important not only to capture well-level details but also to see the big picture at the field level, improving reservoir performance beyond individual well performance by taking into account interference between wells. This method takes reservoir surveillance to the next level, where reservoir characterization is improved using smart field technologies and capacitance-resistance modeling as a robust, cost-effective data-driven method.
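The core of capacitance-resistance modeling is a material-balance recursion per producer, after which connectivities can be recovered from rate data by least squares. The sketch below uses a single producer with two injection zones and synthetic data — the shapes, rates, and time constant are illustrative assumptions, not values from the study.

```python
import numpy as np

# Single-producer CRM recursion: q[k] = e*q[k-1] + (1-e) * sum_j f[j]*inj[j,k],
# where e = exp(-dt/tau) and f[j] is the connectivity of injection zone j.
rng = np.random.default_rng(0)
dt, tau = 1.0, 5.0
e = np.exp(-dt / tau)
f_true = np.array([0.6, 0.3])              # zonal connectivities (assumed)
inj = rng.uniform(50, 150, size=(2, 200))  # injection rate per zone, per step

q = np.zeros(200)
for k in range(1, 200):
    q[k] = e * q[k - 1] + (1 - e) * f_true @ inj[:, k]

# Recover connectivities from observed rates by linear least squares:
# (q[k] - e*q[k-1]) / (1 - e) = f @ inj[:, k]
y = (q[1:] - e * q[:-1]) / (1 - e)
A = inj[:, 1:].T
f_fit, *_ = np.linalg.lstsq(A, y, rcond=None)
```

On noise-free synthetic data the fit recovers the zonal connectivities exactly; in a field application the fitted `f` values reveal which injector/zone supports which producer.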

    Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates

    Full text link
    Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible. To this end, regularization, replay, architecture, and parameter isolation approaches have been introduced in the literature. Parameter isolation uses a sparse network, which makes it possible to allocate distinct parts of the neural network to different tasks while also sharing parameters between similar tasks. Dynamic Sparse Training (DST) is a prominent way to find these sparse networks and isolate them for each task. This paper is the first empirical study investigating the effect of different DST components under the CL paradigm, filling a critical research gap and shedding light on the optimal configuration of DST for CL, if one exists. We perform a comprehensive study investigating various DST components to find the best topology per task on the well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup, since our primary focus is evaluating the performance of various DST criteria rather than the process of mask selection. We found that, at a low sparsity level, Erdős–Rényi Kernel (ERK) initialization utilizes the backbone more efficiently and allows increments of tasks to be learned effectively. At a high sparsity level, however, uniform initialization demonstrates more reliable and robust performance. In terms of growth strategy, performance depends on the defined initialization strategy and the extent of sparsity. Finally, adaptivity within DST components is a promising direction for better continual learners.
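The ERK initialization mentioned above allocates a global sparsity budget non-uniformly: layers whose weight tensors have a high (fan-in + fan-out + kernel size) to parameter-count ratio are kept denser. A minimal sketch of that allocation, omitting the clamp that caps already-dense layers at 1.0 (an assumption for brevity, not the paper's exact code):

```python
def erk_densities(layers, global_density):
    """Per-layer densities under Erdos-Renyi-Kernel scaling.
    `layers` is a list of conv shapes (n_in, n_out, kh, kw); the returned
    densities are proportional to (n_in + n_out + kh + kw) / param_count,
    rescaled so the total kept-parameter count matches the global budget."""
    params = [ni * no * kh * kw for ni, no, kh, kw in layers]
    scores = [(ni + no + kh + kw) / p
              for (ni, no, kh, kw), p in zip(layers, params)]
    # Scale factor so the weighted densities hit the global budget exactly.
    eps = (global_density * sum(params)
           / sum(s * p for s, p in zip(scores, params)))
    return [eps * s for s in scores]

densities = erk_densities([(3, 16, 3, 3), (16, 32, 3, 3)], global_density=0.1)
```

Small early layers end up denser than large later ones, which is the mechanism behind ERK "utilizing the backbone more efficiently" at low sparsity.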

    Measurement and clinical implications of choroidal thickness in patients with inflammatory bowel disease

    Full text link
    Purpose: Ocular inflammation is a frequent extraintestinal manifestation of inflammatory bowel disease (IBD) and may parallel disease activity. In this study, we evaluated the utility of choroidal thickness measurement in assessing IBD activity. Methods: A total of 62 eyes of 31 patients with IBD [Crohn's disease (CD), n=10 and ulcerative colitis (UC), n=21] and 104 eyes of 52 healthy blood donors were included in this study. Choroidal thickness was measured using enhanced depth imaging optical coherence tomography. The Crohn's disease activity index (CDAI) and the modified Truelove Witts score were used to assess disease activity in CD and UC, respectively. Results: No significant differences in mean subfoveal, nasal 3000 μm, or temporal 3000 μm choroidal thickness measurements (P>0.05 for all) were observed between IBD patients and healthy controls. Age, smoking, CD site of involvement (ileal and ileocolonic involvement), CDAI, CD activity, and UC endoscopic activity index were all found to be significantly correlated with choroidal thickness by univariate analysis (P<0.05). Smoking (P<0.05) and the CD site of involvement (P<0.01) were the only independent parameters associated with increased choroidal thickness at all measurement locations. Conclusions: Choroidal thickness is not a useful marker of disease activity in patients with IBD but may be an indicator of ileal involvement in patients with CD.

    Newly discovered mutations in the GALNT3 gene causing autosomal recessive hyperostosis-hyperphosphatemia syndrome

    Get PDF
    Background and purpose: Periosteal new bone formation and cortical hyperostosis often suggest an initial diagnosis of bone malignancy or osteomyelitis. In the present study, we investigated the cause of persistent bone hyperostosis in the offspring of two consanguineous parents.

    Energy-Efficient Computing through Approximate Arithmetic

    No full text

    Prediction of MHC class I binding peptides with a new feature encoding technique

    No full text
    The recognition of specific peptides, bound to major histocompatibility complex (MHC) class I molecules, is of particular importance to the robust identification of T-cell epitopes and thus the successful design of protein-based vaccines. Here, we present a new amino acid feature encoding technique, termed OEDICHO, to predict MHC class I/peptide complexes. In the proposed method, we combine orthonormal encoding (OE) with a binary representation of the 10 best physicochemical properties of amino acids selected from the Amino Acid Index Database (AAindex). We also compare our method to current feature encoding techniques. Tests were carried out on comparatively large Human Leukocyte Antigen (HLA)-A and HLA-B allele peptide binding datasets. Empirical results show that our amino acid encoding scheme leads to better classification performance on a standalone classifier.
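The combined encoding can be sketched as a one-hot (orthonormal) vector per residue, concatenated with binary property bits. The hydrophobicity set below is a made-up stand-in for the paper's 10 selected AAindex properties, and `encode_residue`/`encode_peptide` are hypothetical helpers for illustration, not the OEDICHO implementation.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Illustrative hydrophobic residue set (a stand-in, not a real AAindex entry).
HYDROPHOBIC = set("AILMFVWY")

def encode_residue(aa):
    """One-hot (orthonormal) vector plus one binary property bit.
    The paper concatenates bits from 10 selected AAindex properties;
    a single made-up property is used here to keep the sketch short."""
    onehot = [1 if aa == a else 0 for a in AMINO_ACIDS]
    return onehot + [1 if aa in HYDROPHOBIC else 0]

def encode_peptide(pep):
    # Concatenate per-residue vectors into one flat feature vector.
    return [x for aa in pep for x in encode_residue(aa)]

vec = encode_peptide("SIINFEKL")  # 8 residues -> 8 * 21 = 168 features
```

Each of the 10 real properties would add one more bit per residue, giving the 20 + 10 = 30 features per position that the orthonormal-plus-physicochemical scheme implies.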