
    Basic linear algebra subprograms for FORTRAN usage

A package of 38 low-level subprograms for many of the basic operations of numerical linear algebra is presented. The package is intended for use with FORTRAN. The operations covered are dot products, elementary vector operations, Givens transformations, vector copy and swap, vector norms, vector scaling, and determining the index of the component of largest magnitude. The subprograms and a test driver are available in portable FORTRAN. Versions of the subprograms are also provided in assembly language for the IBM 360/67, the CDC 6600 and CDC 7600, and the Univac 1108.
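The operations listed in the abstract map directly onto the now-familiar Level-1 BLAS routine names. As an illustrative sketch (using NumPy equivalents rather than the original FORTRAN subprograms), the package's core operations look like this:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 2.0, -2.0])

dot = x @ y                    # DDOT:  dot product
y_new = 2.0 * x + y            # DAXPY: elementary vector operation y := a*x + y
nrm = np.sqrt(x @ x)           # DNRM2: Euclidean vector norm
x_scaled = 0.5 * x             # DSCAL: vector scaling x := a*x
x_copy = x.copy()              # DCOPY: vector copy
imax = np.argmax(np.abs(x))    # IDAMAX: index of the component of largest magnitude

# DROTG/DROT: construct a Givens rotation that zeroes b in the pair (a, b)
a, b = 3.0, 4.0
r = np.hypot(a, b)
c, s = a / r, b / r            # the rotation [[c, s], [-s, c]] maps (a, b) -> (r, 0)
```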

    Optimization of the Asymptotic Property of Mutual Learning Involving an Integration Mechanism of Ensemble Learning

We propose an optimization method for mutual learning that converges to the identical state of optimal ensemble learning within the framework of on-line learning, and we analyze its asymptotic properties using methods of statistical mechanics. The proposed model consists of two learning steps: two students first independently learn from a teacher, and then the students learn from each other through mutual learning. In mutual learning, the generalization error is improved even though the teacher takes no part in it. However, when the initial overlaps (direction cosines) between teacher and students differ, the student with the larger initial overlap tends to end up with a larger generalization error than before the mutual learning. To overcome this problem, the proposed method optimizes the step sizes of the two students so as to minimize the asymptotic generalization error. Consequently, the optimized mutual learning converges to a generalization error identical to that of optimal ensemble learning. In addition, we show the relationship between the optimal step size of mutual learning and the integration mechanism of ensemble learning.
Comment: 13 pages, 3 figures, submitted to the Journal of the Physical Society of Japan
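The two-step protocol can be made concrete with a toy simulation. The following is a minimal sketch, not the paper's statistical-mechanics analysis: the step sizes eta1 and eta2 are fixed, hypothetical values, whereas the paper derives their optimal schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                              # input dimension
B = rng.standard_normal(N); B /= np.linalg.norm(B)    # teacher direction
J1 = rng.standard_normal(N); J1 /= np.linalg.norm(J1)
J2 = rng.standard_normal(N); J2 /= np.linalg.norm(J2)

def overlap(J):
    """Direction cosine between teacher and student."""
    return (B @ J) / np.linalg.norm(J)

# Step 1: both students independently learn from the teacher (linear rule).
eta = 0.1
for _ in range(5 * N):
    x = rng.standard_normal(N) / np.sqrt(N)
    J1 += eta * (B @ x - J1 @ x) * x
    J2 += eta * (B @ x - J2 @ x) * x

# Step 2: mutual learning -- the teacher no longer participates; each
# student moves toward the other's output.
eta1, eta2 = 0.05, 0.05
for _ in range(5 * N):
    x = rng.standard_normal(N) / np.sqrt(N)
    s1, s2 = J1 @ x, J2 @ x                           # evaluate before updating
    J1 += eta1 * (s2 - s1) * x
    J2 += eta2 * (s1 - s2) * x

print(overlap(J1), overlap(J2))   # overlaps with the teacher, who was unseen in step 2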

    Ensemble learning of linear perceptron; Online learning theory

Within the framework of on-line learning, we study the generalization error of an ensemble learning machine that learns from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons with homogeneous or inhomogeneous initial weight vectors is calculated exactly in the thermodynamic limit of a large number of input elements and shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error using an infinite number of linear student perceptrons is only half that of a single linear perceptron, and the error for a finite number K of perceptrons converges to the infinite-ensemble value as O(1/K). For learning with inhomogeneous initial weight vectors, it is advantageous to use a weighted average over the outputs of the linear perceptrons, and we show the conditions under which the optimal weights remain constant during the learning process. The optimal weights depend only on the correlations of the initial weight vectors.
Comment: 14 pages, 3 figures, submitted to Physical Review
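The benefit of averaging over K students can be seen in a toy simulation. A rough sketch under simplifying assumptions (equal weights, independently trained students, training stopped early so residual errors remain); the exact halving and the O(1/K) rate are the paper's analytic thermodynamic-limit results, not something this snippet proves:

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, eta = 500, 2000, 0.1
B = rng.standard_normal(N); B /= np.linalg.norm(B)

def train_student():
    """One linear student trained on-line from the teacher."""
    J = rng.standard_normal(N); J /= np.linalg.norm(J)
    for _ in range(steps):
        x = rng.standard_normal(N) / np.sqrt(N)
        J += eta * (B @ x - J @ x) * x
    return J

def gen_error(students, n_test=2000):
    """MSE of the equal-weight ensemble output against the teacher."""
    X = rng.standard_normal((n_test, N)) / np.sqrt(N)
    ens = np.mean([X @ J for J in students], axis=0)
    return np.mean((X @ B - ens) ** 2)

for K in (1, 2, 4, 8):
    print(K, gen_error([train_student() for _ in range(K)]))
```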

    Statistical Mechanics of Nonlinear On-line Learning for Ensemble Teachers

We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model behaves qualitatively differently from the linear model. Moreover, it is clarified that Hebbian learning and perceptron learning behave qualitatively differently from each other. In Hebbian learning, the solutions can be obtained analytically; in this case the generalization error decreases monotonically, its steady value is independent of the learning rate, and the larger the number of teachers and the more variety the ensemble teachers have, the smaller the generalization error becomes. In perceptron learning, the solutions must be obtained numerically; in this case the dynamical behavior of the generalization error is non-monotonic, and the smaller the learning rate, the larger the number of teachers, and the more variety the ensemble teachers have, the smaller the minimum value of the generalization error becomes.
Comment: 13 pages, 9 figures
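The two learning rules compared here differ in one line of the update. A minimal sketch with a single sign-output teacher standing in for the ensemble teachers of the paper (the arccos(R)/pi formula is the standard generalization error of a sign perceptron with teacher-student overlap R):

```python
import numpy as np

rng = np.random.default_rng(2)
N, eta = 500, 0.1
B = rng.standard_normal(N)
J_hebb = np.zeros(N)
J_perc = np.zeros(N)

for _ in range(20 * N):
    x = rng.standard_normal(N) / np.sqrt(N)
    t = np.sign(B @ x)                     # teacher's binary label
    # Hebbian rule: always move toward the teacher's label.
    J_hebb += eta * t * x
    # Perceptron rule: update only when student and teacher disagree.
    if np.sign(J_perc @ x) != t:
        J_perc += eta * t * x

for J in (J_hebb, J_perc):
    R = (B @ J) / (np.linalg.norm(B) * np.linalg.norm(J))
    print(np.arccos(R) / np.pi)            # generalization error
```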

    Laminar flow of two miscible fluids in a simple network

When a fluid composed of multiple phases or constituents flows through a network, non-linear phenomena such as multiple stable equilibrium states and spontaneous oscillations can occur. Such behavior has been observed or predicted in a number of networks, including the flow of blood through the microcirculation, the flow of picoliter droplets through microfluidic devices, the flow of magma through lava tubes, and two-phase flow in refrigeration systems. While the existence of non-linear phenomena in a network with many interconnections containing fluids with complex rheology may seem unsurprising, this paper demonstrates that even simple networks carrying Newtonian fluids in laminar flow can exhibit multiple equilibria. The paper describes a theoretical and experimental investigation of the laminar flow of two miscible Newtonian fluids of different density and viscosity through a simple network. The fluids stratify due to gravity and remain as nearly distinct phases, with some mixing occurring only by diffusion. This fluid system has the advantage that it is easily controlled and modeled, yet contains the key ingredients for network non-linearities. Experiments and 3D simulations are first used to explore how the phases distribute at a single T-junction. Once the phase separation at a single junction is known, a network model is developed which predicts multiple equilibria in the simplest of networks. The existence of multiple stable equilibria is confirmed experimentally, and a criterion for their existence is developed. The network results are generic and could be applied to, or found in, other physical systems.
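How multiple equilibria can arise in even a simple network is easy to illustrate schematically. The following is a toy self-consistency model, not the paper's network equations: suppose the fraction q of flow entering one branch feeds back on itself through phase separation at the junction, so that equilibrium requires q = g(q). When g is steep enough, three equilibria coexist:

```python
import numpy as np
from scipy.optimize import brentq

def g(q, k=10.0):
    """Toy junction law: the flow fraction entering branch 1 feeds back on
    itself (e.g., because the denser fluid follows the faster branch)."""
    return 1.0 / (1.0 + np.exp(-k * (q - 0.5)))

h = lambda q: g(q) - q       # equilibria are roots of h

# With k = 10 the curve q = g(q) has three intersections: two stable
# outer equilibria and one unstable middle one at q = 0.5.
roots = [brentq(h, 0.001, 0.3), 0.5, brentq(h, 0.7, 0.999)]
print(roots)
```

For a shallow junction law (small k) only the single symmetric equilibrium q = 0.5 survives, which is the qualitative content of an existence criterion of the kind the paper develops.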

    Statistical Mechanics of Time Domain Ensemble Learning

Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time domain ensemble learning. We analyze the generalization performance of time domain ensemble learning in the framework of on-line learning using a statistical mechanical method. We treat a model in which both the teacher and the student are linear perceptrons with noise. Time domain ensemble learning proves to be twice as effective as conventional space domain ensemble learning.
Comment: 10 pages, 10 figures
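The idea of "students in the time domain" can be sketched as follows: snapshots of a single noisy student taken at different times play the role that independently initialized students play in space domain ensembles. A rough illustration only; whether and how much the averaging helps depends on the rates and noise, and the factor-of-two result is the paper's thermodynamic-limit calculation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, eta, sigma = 500, 0.3, 0.5
B = rng.standard_normal(N); B /= np.linalg.norm(B)
J = np.zeros(N)

snapshots = []
for step in range(1, 30 * N + 1):
    x = rng.standard_normal(N) / np.sqrt(N)
    t = B @ x + sigma * rng.standard_normal()    # noisy teacher output
    J += eta * (t - J @ x) * x                   # noisy student update
    if step > 12 * N and step % (3 * N) == 0:
        snapshots.append(J.copy())               # "students" at different times

def gen_error(w, n_test=5000):
    X = rng.standard_normal((n_test, N)) / np.sqrt(N)
    return np.mean((X @ B - X @ w) ** 2)

print(gen_error(J))                              # final student alone
print(gen_error(np.mean(snapshots, axis=0)))     # time domain ensemble
```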

    Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time-domain ensemble learning. We analyze, compare, and discuss the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Analyzing in the framework of online learning using a statistical mechanical method, we show that the two models behave qualitatively differently. In the linear model, the dynamical behavior of the generalization error is monotonic, and we show analytically that time-domain ensemble learning is twice as effective as conventional ensemble learning. In the nonlinear model, the generalization error exhibits nonmonotonic dynamical behavior when the learning rate is small; we show numerically that the generalization performance can be improved remarkably by exploiting this phenomenon together with the divergence of the students in the time domain.
Comment: 11 pages, 7 figures

    Explainable artificial intelligence for human-machine interaction in brain tumor localization

Primary malignancies in adult brains are fatal across the globe. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy scores in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rational interpretation that is an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process. This study evaluates the performance of selected deep-learning algorithms in localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor brains based on non-relevant features. The results suggest that explainable AI approaches can provide intuition about model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool for improving human–machine interactions and assisting in the selection of optimal training methods.
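One common way to visualize what a trained classifier attends to, as the abstract describes, is a gradient-based saliency map. A minimal sketch using plain input-gradient saliency on a hypothetical stand-in model; the study's actual architectures and explainability methods are not reproduced here:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier for single-channel MR slices.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # placeholder MR slice
score = model(image)[0, 1]          # logit of the "tumor" class
score.backward()

# Saliency: pixels whose perturbation most changes the tumor score. If the
# hot region falls outside the lesion, the model is classifying on
# non-relevant features, which is what the study's comparison probes.
saliency = image.grad.abs().squeeze()
print(saliency.shape)               # (64, 64) heat map to overlay on the slice
```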

    SHAPE selection (SHAPES) enrich for RNA structure signal in SHAPE sequencing-based probing data

Selective 2′ Hydroxyl Acylation analyzed by Primer Extension (SHAPE) is an accurate method for probing RNA secondary structure. In existing SHAPE methods, the SHAPE probing signal is normalized to a no-reagent control to correct for the background caused by premature termination of the reverse transcriptase. Here, we introduce a SHAPE Selection (SHAPES) reagent, N-propanone isatoic anhydride (NPIA), which retains the ability of SHAPE reagents to accurately probe RNA structure but also allows covalent coupling between the SHAPES reagent and a biotin molecule. We demonstrate that SHAPES-based selection of cDNA–RNA hybrids on streptavidin beads effectively removes the large majority of the background signal present in SHAPE probing data, and that sequencing-based SHAPES data contain the same amount of RNA structure information as regular sequencing-based SHAPE data obtained through normalization to a no-reagent control. Moreover, the selection efficiently enriches for probed RNAs, suggesting that the SHAPES strategy will be useful for high-background, low-signal applications such as in vivo RNA structure probing.
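The conventional background correction that SHAPES is designed to make unnecessary is simple arithmetic. A schematic sketch with made-up counts, illustrating the no-reagent normalization the abstract refers to:

```python
import numpy as np

# Per-nucleotide reverse-transcription stop counts (hypothetical numbers).
stops_treated = np.array([120, 15, 80, 10, 200])
stops_control = np.array([30, 12, 20, 9, 40])    # no-reagent control
cov_treated = 10_000                             # read coverage, treated library
cov_control = 10_000                             # read coverage, control library

# Conventional SHAPE: reactivity = treated stop rate minus background rate.
rate_t = stops_treated / cov_treated
rate_c = stops_control / cov_control
reactivity = np.clip(rate_t - rate_c, 0.0, None)

# SHAPES instead removes the background physically: only biotinylated
# (reagent-modified) cDNA-RNA hybrids are pulled down on streptavidin
# beads, so no control library or subtraction step is needed.
print(reactivity)
```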