
    A Machine Learning Framework to Model Extreme Events for Nonlinear Marine Dynamics

    Extreme events such as large motions and excess loadings of marine systems can result in damage to the device or loss of life. Since the system is exposed to a random ocean environment, these extreme events need to be understood from a statistical perspective in order to design a safe system. However, the analysis of extreme events is challenging because most marine systems operate in the nonlinear region, especially when extreme events occur, and observations of extreme events are too rare to support a proper design. Conducting high-fidelity simulations or experimental tests to observe such events is cost-prohibitive. In the current research, a novel framework is proposed to randomly generate test environments that lead to a large response of the system. With the generated environment, large responses that would otherwise take a very long time to achieve can be observed within a much shorter time window. The time-domain context around the extreme event provides the user with rich insights towards improving the design. The proposed framework consists of two modules, named the Threshold Exceedance Generator (TEG) and the Design Response Estimator (DRE). The framework is data-driven, and its application requires minimal knowledge about the system from the user. The DRE module identifies a nonlinear marine system from collected data; the TEG module generates ocean environments that lead to large system responses based on the system identification by the DRE module. Machine learning methods, especially neural networks, are used heavily in the proposed framework. In the thesis, the extreme-event generation problem in the marine field is described and addressed from a machine-learning perspective. To validate the framework, marine examples including linear wave propagation, nonlinear wave propagation, nonlinear ship roll, tank sloshing, and a floating object in waves are explored.
Examples from such a wide range show that the framework can be used for linear or nonlinear systems and Gaussian or non-Gaussian environments. The cost and the amount of data needed to apply the method are estimated and measured. The comparison between the results from the framework and Monte Carlo simulation fully demonstrates the accuracy and feasibility of the data-driven approach.
PhD, Naval Architecture & Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/163268/1/wenzhe_1.pd

    Wavelet Network: Online Sequential Extreme Learning Machine for Nonlinear Dynamic Systems Identification

    A single hidden layer feedforward neural network (SLFN) trained with the online sequential extreme learning machine (OSELM) algorithm has been introduced and applied successfully in many regression problems. However, using an SLFN with OSELM as a black box for nonlinear system identification may produce models of the identified plant with inconsistent responses from a control perspective. The cause can be traced to the random initialization of the SLFN hidden node parameters in the OSELM algorithm. In this paper, a single hidden layer feedforward wavelet network (WN) is introduced with OSELM for nonlinear system identification, aiming at better generalization performance by reducing the effect of the random initialization procedure.
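    The random initialization these abstracts refer to is the defining trait of a basic ELM: the hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form. A minimal sketch (the toy target y = sin(x), network size, and seed are illustrative assumptions, not from the paper):

    ```python
    import numpy as np

    def elm_fit(X, y, n_hidden=30, seed=0):
        """Basic ELM: random (untrained) hidden layer, least-squares output weights."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, fixed forever
        b = rng.normal(size=n_hidden)                # random hidden biases, fixed forever
        H = np.tanh(X @ W + b)                       # hidden-layer activation matrix
        beta = np.linalg.pinv(H) @ y                 # output weights via Moore-Penrose pseudoinverse
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Usage: identify a toy nonlinear map from samples
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X).ravel()
    W, b, beta = elm_fit(X, y)
    err = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
    ```

    Because `W` and `b` depend only on the seed, two identically trained ELMs can behave quite differently, which is exactly the inconsistency from a control perspective that the wavelet-network variant above seeks to reduce.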

    Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

    As a novel kind of feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems, and their simple structure and fast convergence can be shown clearly. In this paper, we are interested in adaptive control of nonlinear dynamic plants using OS-ELM (online sequential extreme learning machine) neural networks. Based on data scope division, the sensitivity of the ELM training process to the initial training data is also resolved: according to the output range of the controlled plant, the data corresponding to this range are used to initialize the ELM. Furthermore, to overcome the drawback of conventional adaptive control when the OS-ELM neural network is applied to a system with jumping parameters, the topological structure of the neural network is adjusted dynamically by a multiple model switching strategy, and MMAC (multiple model adaptive control) is used to improve the control performance. Simulation results are included to complement the theoretical results.
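    The "online sequential" part of OS-ELM is a recursive least-squares update of the output weights: an initial batch fixes the hidden layer and seeds the covariance matrix, and each arriving data chunk then updates the weights without retraining from scratch. A minimal sketch (the toy plant y = sin(x), chunk sizes, network size, and ridge term are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_hidden, n_in = 25, 1
    W = rng.normal(size=(n_in, n_hidden))  # random hidden weights, never updated
    b = rng.normal(size=n_hidden)

    def hidden(X):
        return np.tanh(X @ W + b)

    # --- initialization phase on a first batch of data ---
    X0 = rng.uniform(-3, 3, size=(50, 1))
    y0 = np.sin(X0).ravel()
    H0 = hidden(X0)
    P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # small ridge term for stability
    beta = P @ H0.T @ y0

    # --- sequential phase: recursive least-squares update per incoming chunk ---
    for _ in range(40):
        Xk = rng.uniform(-3, 3, size=(10, 1))
        yk = np.sin(Xk).ravel()
        Hk = hidden(Xk)
        K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)  # gain matrix
        P = P - K @ Hk @ P                    # covariance update
        beta = beta + K @ (yk - Hk @ beta)    # weight correction from the new chunk

    Xt = np.linspace(-3, 3, 100).reshape(-1, 1)
    mse = np.mean((hidden(Xt) @ beta - np.sin(Xt).ravel()) ** 2)
    ```

    The dependence of the result on the initialization batch `X0` is the sensitivity that the data-scope-division idea above addresses: initializing with data drawn from the plant's actual output range seeds `P` and `beta` in the operating region that matters.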

    RECURSIVE LEARNING ALGORITHMS ON RBF NETWORKS FOR NONLINEAR SYSTEM IDENTIFICATION

    Science and technology tend to develop by learning from nature, and artificial intelligence likewise imitates the biological neural network in what is popularly termed an Artificial Neural Network (ANN). An ANN is an interconnection of neurons with several adjustable parameters, which are tuned using a set of learning examples so that the network represents the desired function of the actual system. Radial Basis Function (RBF) networks, a feedforward artificial neural network architecture, have recently received much attention due to their good generalization ability. The RBF network is popular among scientists and engineers and is used in a wide range of signal and control applications, including system identification and estimation. The learning approach, the process that updates the parameters of the RBF network, is a central issue in the neural computing research community: the learning method determines the network's capability for system identification, which is one of the key issues discussed in this thesis. This thesis proposes derivative-free learning methods, using finite differences, for fixed-size RBF networks, in comparison with gradient-based learning, for the application of system identification. The thesis also investigates the influence of the initialization of the RBF weight parameters on the overall learning performance, comparing random initialization with advanced unsupervised learning such as clustering techniques. By taking advantage of the localized Gaussian basis functions of the RBF network, a decomposed version of the learning method using a finite-difference (derivative-free) gradient estimate is proposed in order to reduce the memory required to compute the weight updates.
The training algorithms proposed in this thesis are derived for fixed-size RBF networks and are compared with the Extreme Learning Machine (ELM), since the ELM technique simply assigns the centers and widths of the hidden neurons at random and updates only the output weights. The proposed methods are tested on well-known nonlinear benchmark problems and also evaluated on systems with irregular sample times, known as lost packets. The finite-difference-based gradient estimate proposed in this thesis provides a viable solution only for identifying a system with an irregular sample time.
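    The derivative-free idea above can be illustrated with a toy fixed-size Gaussian RBF network whose centers, widths, and output weights are all adapted by gradient descent on a central finite-difference estimate of the loss gradient, so no analytic derivatives are needed. A minimal sketch (the toy target y = sin(x), network size, step sizes, and iteration count are illustrative assumptions, not from the thesis):

    ```python
    import numpy as np

    def rbf_forward(X, centers, widths, w):
        """Gaussian RBF network output for 1-D input X of shape (N, 1)."""
        Phi = np.exp(-((X - centers) ** 2) / (2 * widths ** 2))  # (N, n_rbf) basis matrix
        return Phi @ w

    def loss(params, X, y, n_rbf):
        c, s, w = params[:n_rbf], params[n_rbf:2 * n_rbf], params[2 * n_rbf:]
        return np.mean((rbf_forward(X, c, s, w) - y) ** 2)

    def fd_grad(params, X, y, n_rbf, h=1e-5):
        """Derivative-free gradient estimate via central finite differences."""
        g = np.zeros_like(params)
        for i in range(len(params)):
            e = np.zeros_like(params)
            e[i] = h
            g[i] = (loss(params + e, X, y, n_rbf) - loss(params - e, X, y, n_rbf)) / (2 * h)
        return g

    rng = np.random.default_rng(0)
    n_rbf = 8
    X = np.linspace(-3, 3, 120).reshape(-1, 1)
    y = np.sin(X).ravel()
    params = np.concatenate([np.linspace(-3, 3, n_rbf),       # centers spread over the input range
                             np.full(n_rbf, 1.0),             # widths
                             rng.normal(scale=0.1, size=n_rbf)])  # output weights

    for _ in range(800):
        params -= 0.2 * fd_grad(params, X, y, n_rbf)

    final = loss(params, X, y, n_rbf)
    ```

    Each update costs two loss evaluations per parameter rather than a backpropagation pass; the decomposed variant the abstract mentions exploits the locality of the Gaussian basis to cut this cost and the memory of the weight updates further.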

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbates the ones associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.