
    Characterization of Large Scale Functional Brain Networks During Ketamine-Medetomidine Anesthetic Induction

    Several experiments show that specialized brain regions interact functionally, revealing that the brain processes and integrates information in a specific and structured manner. Networks can be used to model brain functional activity, providing a way to characterize and quantify this structured form of organization. Reports indicate that different physiological states, or even diseases that affect the central nervous system, may be associated with alterations in those networks, which may be reflected in graphs of different architectures. However, the relation of their structure to different states or conditions of the organism is not well understood. Thus, experiments that involve estimating the functional neural networks of subjects exposed to different controlled conditions are of great relevance. Within this context, this research sought to model large-scale functional brain networks during an anesthetic induction process. The experiment was based on intracranial recordings of the neural activity of an Old World macaque of the species Macaca fuscata, recorded during a Ketamine-Medetomidine anesthetic induction. Networks were serially estimated in time intervals of five seconds. Changes were observed in various network properties within about one and a half minutes of the administration of the anesthetics, revealing a transition in the network architecture. During general anesthesia, a reduction in functional connectivity and network integration capability was verified at both local and global levels. It was also observed that the brain shifted to a highly specific and dynamic state.
The results bring empirical evidence relating the induced state of anesthesia to properties of functional networks, and thus contribute to the elucidation of new aspects of the neural correlates of consciousness.
Comment: 28 pages, 9 figures, 7 tables; English errors were corrected; Figures 1, 3, 4, 5, 6, 8 and 9 were replaced by identical figures of higher resolution; three references were added in the introduction section
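The pipeline the abstract describes, serially estimating a network per time window and tracking its graph properties, can be sketched as follows. This is a minimal illustration assuming a generic correlation-plus-threshold estimator; the channel count, window length, surrogate data, and the 0.3 threshold are invented, not parameters from the study.

```python
import numpy as np

# Hypothetical sketch: estimate a functional network for one time window by
# correlating channel activity, then thresholding the correlation matrix
# into a binary graph whose properties can be tracked window by window.
rng = np.random.default_rng(0)
n_channels, n_samples = 8, 5000           # e.g. one 5-second window at 1 kHz
common = rng.standard_normal(n_samples)   # shared drive -> correlated channels
signals = 0.6 * common + 0.8 * rng.standard_normal((n_channels, n_samples))

corr = np.corrcoef(signals)               # pairwise functional connectivity
adjacency = (np.abs(corr) > 0.3) & ~np.eye(n_channels, dtype=bool)

# Simple graph properties of the kind compared across anesthetic states:
degree = adjacency.sum(axis=1)                       # per-node connectivity
density = adjacency.sum() / (n_channels * (n_channels - 1))
```

Repeating this over successive five-second windows yields a time series of network properties in which a transition, such as the one reported after anesthetic administration, could be observed as a change in density or degree.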

    Reliable and structural deep neural networks

    Deep neural networks have recently dominated a wide range of computer vision research. However, recent studies have shown that deep neural networks are sensitive to adversarial perturbations. These limitations raise reliability concerns in real-world problems and demonstrate that the networks' computational behavior differs from that of humans. In this dissertation, we focus on investigating the characteristics of deep neural networks. The first part of this dissertation proposes an effective defense method against adversarial examples. We introduce an ensemble generative network with feedback loops, which uses feature-level denoising modules to improve the defense capability against adversarial examples. We then discuss the vulnerability of deep neural networks. We explore a consistency- and sensitivity-guided attack method in a low-dimensional space, which can effectively generate adversarial examples, even in a black-box manner. Our proposed approach shows that the adversarial examples are transferable across different networks and universal in deep networks. The last part of this dissertation focuses on rethinking the structure and behavior of deep neural networks. Rather than further enhancing defense methods against attacks, we take a step toward developing a new neural network structure that provides a dynamic link between the feature-map representation and its graph-based structural representation. In addition, we introduce a new feature-interaction method based on the vision transformer. The new structure can learn to dynamically select the most discriminative features and helps deep networks improve their generalization ability.
Includes bibliographical references
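To make the notion of an adversarial perturbation concrete, here is a minimal fast-gradient-sign sketch on a hand-built logistic unit standing in for a "network". This illustrates only the generic attack idea mentioned above, not the dissertation's ensemble defense or its guided attack; the weights, input, and epsilon are invented values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])     # assumed fixed model weights
x = np.array([0.2, -0.4, 0.1])     # a clean input
y = 1.0                            # its true label

# Gradient of the cross-entropy loss with respect to the input x:
grad_x = (sigmoid(w @ x) - y) * w

# Fast-gradient-sign step: nudge x in the direction that increases the loss.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

clean_score = sigmoid(w @ x)       # confidence on the clean input
adv_score = sigmoid(w @ x_adv)     # confidence after the perturbation
```

Even this tiny example shows the sensitivity the abstract refers to: a bounded, sign-only perturbation measurably lowers the model's confidence on the true class.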

    Rule Extraction and Insertion to Improve the Performance of a Dynamic Cell Structure Neural Network

    Artificial Neural Networks are extremely useful machine learning tools. They are used for many purposes, such as prediction, classification, and pattern recognition. Although neural networks have been used for decades, they are still often not completely understood or trusted, especially in safety- and mission-critical situations. Typically, neural networks are trained on data sets that are representative of what needs to be learned. Sometimes training sets are constructed specifically to train the neural network in a certain way, in order to embed appropriate knowledge. The purpose of this research is to determine whether there is another method that can be used to embed specific knowledge in a neural network before training, and whether doing so improves the network's performance. This research develops and tests a new method of embedding pre-knowledge into the Dynamic Cell Structure (DCS) neural network. The DCS is a type of self-organizing map neural network that has been used for many purposes, including classification. In the research presented here, pre-knowledge is embedded by first converting the knowledge to a set of IF/THEN rules that can be easily understood and validated by a human expert. Once the rules are constructed and validated, they are converted to an initial neural network structure, allowing pre-knowledge to be embedded before training. This conversion and embedding process is called Rule Insertion. To determine whether this process improves performance, the neural network was trained with and without pre-knowledge embedded. After training, the neural network structure was converted back to rules (Rule Extraction), and the accuracies of the neural network and of the extracted rules were computed, along with the agreement between the network and the extracted rules.
The findings of this research show that using Rule Insertion to embed pre-knowledge into a DCS neural network can increase the accuracy of the neural network. An expert can create the rules to be embedded and can also examine and validate the extracted rules, giving more confidence in what the neural network has learned during training. The extracted rules are also a refinement of the inserted rules, meaning the neural network was able to improve upon the expert knowledge based on the data presented.
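The Rule Insertion idea, converting validated IF/THEN rules into an initial network structure before training, can be sketched abstractly. This is a hypothetical illustration: each rule becomes a labeled prototype node of a self-organizing classifier, so the untrained network already reproduces the rules. The rule premises, features, and thresholds are invented and do not reproduce the DCS-specific construction.

```python
import numpy as np

# "IF temperature is high AND pressure is low THEN class 1", expressed as
# (prototype centre in normalized feature space, class label):
rules = [
    (np.array([0.9, 0.1]), 1),   # high temperature, low pressure -> class 1
    (np.array([0.1, 0.9]), 0),   # low temperature, high pressure -> class 0
]

prototypes = np.stack([centre for centre, _ in rules])  # initial node positions
labels = np.array([label for _, label in rules])

def classify(x):
    """Nearest-prototype decision, i.e. the network before any training."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    return labels[np.argmin(distances)]

pred = classify(np.array([0.8, 0.2]))   # input matching the first rule's premise
```

Training would then adjust and add nodes from this starting point, and Rule Extraction would read the final prototypes back out as refined rules.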

    Processing of Synthetic Aperture Radar Images by the Boundary Contour System and Feature Contour System

    An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to two large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. Finally, a diffusive filling-in operation within the segmented regions produces coherent visible structures. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083)
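The early center-surround stage can be illustrated with a difference-of-means kernel: the center excites, the surround inhibits, so uniform regions are suppressed while local contrast (edges) stands out. This is a toy sketch of the general on-center/off-surround idea, not the model's shunting dynamics; the 3x3 kernel and test image are made-up values.

```python
import numpy as np

# Zero-mean centre-surround kernel: flat regions map to zero response.
centre = np.array([[0., 0., 0.],
                   [0., 1., 0.],
                   [0., 0., 0.]])
surround = np.full((3, 3), 1.0 / 9.0)
kernel = centre - surround

image = np.zeros((8, 8))
image[:, 4:] = 1.0                    # a vertical luminance edge

def filter2d(img, k):
    """Plain 'valid' 2-D filtering with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

response = filter2d(image, kernel)
# Uniform regions are suppressed; activity concentrates at the edge.
```

In the full model this local-contrast signal feeds the oriented-contrast filters and, ultimately, the filling-in stage.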

    An optimization method for dynamics of structures with repetitive component patterns

    The occurrence of dynamic problems during the operation of machinery may have devastating effects on a product. Therefore, design optimization of these products becomes essential in order to meet safety criteria. In this research, a hybrid design optimization method is proposed in which attention is focused on structures having repeating patterns in their geometries. In the proposed method, the analysis is decomposed, but the optimization problem itself is treated as a whole. The model of an entire structure is obtained without modeling all the repetitive components, using the merits of the Component Mode Synthesis method. Backpropagation Neural Networks are used for surrogate modeling. The optimization is performed using two techniques: Genetic Algorithms (GAs) and Sequential Quadratic Programming (SQP). GAs are utilized to increase the chance of finding the location of the global optimum; since this optimum may not be exact, SQP is employed afterwards to improve the solution. A theoretical test problem is used to demonstrate the method.
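The two-stage global-then-local strategy can be sketched on a toy multimodal objective. This is only an illustration of the division of labor: a crude random global search stands in for the GA, and plain gradient descent stands in for SQP; the objective function, bounds, and step size are all invented.

```python
import numpy as np

def f(x):
    return np.sin(3 * x) + 0.1 * x**2      # several local minima

def grad_f(x):
    return 3 * np.cos(3 * x) + 0.2 * x

rng = np.random.default_rng(1)

# Stage 1: coarse global search over the design range (stand-in for the GA).
candidates = rng.uniform(-4, 4, size=200)
x = candidates[np.argmin(f(candidates))]

# Stage 2: local refinement from the best candidate (stand-in for SQP).
for _ in range(100):
    x -= 0.05 * grad_f(x)

# x should now sit at a stationary point in the best basin found.
```

The point of the hybrid scheme is visible here: the global stage only needs to land in the right basin, and the local stage drives the solution to a stationary point of the objective.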

    Design Optimization Utilizing Dynamic Substructuring and Artificial Intelligence Techniques

    In mechanical and structural systems, resonance may cause large strains and stresses which can lead to the failure of the system. Since it is often not possible to change the frequency content of the external load excitation, the phenomenon can only be avoided by updating the design of the structure. In this paper, a design optimization strategy based on the integration of the Component Mode Synthesis (CMS) method with numerical optimization techniques is presented. For reasons of numerical efficiency, a Finite Element (FE) model is represented by a surrogate model which is a function of the design parameters. The surrogate model is obtained in four steps: first, the reduced FE models of the components are derived using the CMS method; then the components are assembled to obtain the entire structural response; afterwards, the dynamic behavior is determined for a number of design parameter settings; finally, the surrogate model representing the dynamic behavior is obtained. In this research, the surrogate model is determined using Backpropagation Neural Networks and is then optimized using Genetic Algorithms and the Sequential Quadratic Programming method. The application of the introduced techniques is demonstrated on a simple test problem.
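The surrogate-modelling steps (sample the expensive model at a few design points, fit a cheap model, let the optimizer query the cheap model) can be sketched minimally. Here an analytic natural frequency of a spring-mass system stands in for the assembled FE/CMS model, and a polynomial fit stands in for the backpropagation network; both substitutions, and all numeric values, are illustrative assumptions.

```python
import numpy as np

def fe_response(stiffness):
    """Stand-in for the reduced FE model: first natural frequency (rad/s)."""
    mass = 2.0
    return np.sqrt(stiffness / mass)

# Step 3: evaluate the "FE model" at a handful of design parameter settings.
k_samples = np.linspace(100.0, 400.0, 8)
w_samples = fe_response(k_samples)

# Step 4: fit the surrogate (here, a cubic polynomial in the stiffness).
surrogate = np.poly1d(np.polyfit(k_samples, w_samples, deg=3))

# The optimizer then queries the cheap surrogate instead of the FE model.
k_query = 250.0
error = abs(surrogate(k_query) - fe_response(k_query))
```

Because each surrogate evaluation is trivially cheap, the GA and SQP stages can afford the many function evaluations they need.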

    Infinite Dynamic Bayesian Networks

    We present the infinite dynamic Bayesian network model (iDBN), a nonparametric, factored state-space model that generalizes dynamic Bayesian networks (DBNs). The iDBN can infer every aspect of a DBN: the number of hidden factors, the number of values each factor can take, and (arbitrarily complex) connections and conditionals between factors and observations. In this way, the iDBN generalizes other nonparametric state-space models, which until now have generally focused on binary hidden nodes and more restricted connection structures. We show how this new prior allows us to find interesting structure in benchmark tests and on two real-world datasets involving weather data and neural information flow networks.
    Massachusetts Institute of Technology (Hugh Hampton Young Memorial Fund Fellowship); United States Air Force Office of Scientific Research (AFOSR FA9550-07-1-0075)
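To make concrete the object the iDBN places a prior over, here is a fixed-structure DBN sampler: a factored hidden state, a per-factor transition conditional, and an observation that depends on both factors. The two binary factors and all probabilities are invented toy values; the iDBN itself would infer the number of factors and their conditionals rather than fix them as done here.

```python
import numpy as np

rng = np.random.default_rng(42)

# P(factor_t = 1 | that factor's value at t-1), one row per binary factor.
p_trans = np.array([[0.1, 0.8],    # factor 0
                    [0.3, 0.6]])   # factor 1

def step(state):
    """Sample both factors at time t given the factored state at t-1."""
    return np.array([rng.random() < p_trans[i, state[i]] for i in range(2)],
                    dtype=int)

def observe(state):
    """Binary observation depending on both hidden factors (noisy AND)."""
    p = 0.9 if state.all() else 0.1
    return int(rng.random() < p)

state = np.array([0, 0])
states, obs = [], []
for _ in range(50):
    state = step(state)
    states.append(state.copy())
    obs.append(observe(state))
```

Inference in the iDBN runs in the opposite direction: given only a sequence like `obs`, it recovers how many factors exist and how they connect.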

    Neural units with higher-order synaptic operations with applications to edge detection and control systems

    Biological sense organs hold enormous potential. Artificial neural structures have emulated the potential of the central nervous system; however, most researchers have used a linear combination as the synaptic operation. In this thesis, this neural structure is referred to as the neural unit with linear synaptic operation (LSO). The objective of the research reported in this thesis is to develop novel neural units with higher-order synaptic operations (HOSO) and to explore their potential applications. Neural units with quadratic synaptic operation (QSO) and cubic synaptic operation (CSO) are developed and reported in this thesis, and a comparative analysis is made of neural units with LSO, QSO, and CSO. It is to be noted that neural units with lower-order synaptic operations are subsets of neural units with higher-order synaptic operations. It is found that for more complex problems, neural units with higher-order synaptic operations are considerably more efficient than neural units with lower-order synaptic operations. Motivated by the dynamic nature of biological neural systems, a dynamic neural structure is proposed and implemented using the neural unit with CSO. The dynamic structure makes the system response relatively insensitive to external disturbances and to internal variations in system parameters. With the success of these dynamic structures, researchers are inclined to replace the recurrent (feedback) neural networks (NNs) in their present systems with neural units with CSO. Applications of these novel dynamic neural structures are gaining ground in image processing for machine vision and in motion control. One machine-vision task drawn from biology is edge detection, a significant component in the fields of computer vision, remote sensing, and image analysis.
Neural units with HOSO replicate some of the biological attributes relevant to edge detection. Furthermore, developments in robotics are gaining momentum in neural control applications with the introduction of mobile robots that use neural units with HOSO; a CCD camera provides vision, and several photo-sensors are attached to the machine. In summary, it was demonstrated that neural units with HOSO provide advanced control capability for a mobile robot with neuro-vision and neuro-control systems.
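The contrast between LSO and QSO units can be sketched directly: a linear synaptic operation aggregates a weighted sum of inputs, while a quadratic synaptic operation aggregates weighted products of input pairs, which also subsumes the linear terms when the input is augmented with a constant, mirroring the subset relation noted above. The weight values below are invented; only the form of the operations follows the text.

```python
import numpy as np

def lso_unit(x, w, b=0.0):
    """Linear synaptic operation: weighted sum of inputs."""
    return np.tanh(w @ x + b)

def qso_unit(x, W, b=0.0):
    """Quadratic synaptic operation: weighted sum over products x_i * x_j."""
    xa = np.concatenate(([1.0], x))   # augmenting with 1 recovers LSO terms
    return np.tanh(xa @ W @ xa + b)

# XOR-like pattern: not separable by a single LSO unit, but a QSO unit
# with one cross-term weight handles it.
W = np.zeros((3, 3))
W[1, 2] = -2.0              # cross term x1 * x2
W[0, 1] = W[0, 2] = 1.0     # linear terms, via the constant augmented input

outputs = [qso_unit(np.array(p, dtype=float), W) for p in
           [(0, 0), (0, 1), (1, 0), (1, 1)]]
linear_out = lso_unit(np.array([1.0, 1.0]), np.array([0.5, -0.5]))
```

Here the single QSO unit outputs high values only for the mixed patterns (0, 1) and (1, 0), a mapping no single LSO unit can realize, which is the efficiency gain the thesis attributes to higher-order synaptic operations.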