8 research outputs found

    A new adaptive multiple modelling approach for non-linear and non-stationary systems

    This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data arriving online, the performance of every candidate sub-model is monitored over the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. The M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution and hence maximal computational efficiency. In addition, at each time step the model prediction is taken from either the resulting multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
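    As a concrete illustration, the sketch below shows the two computational kernels the abstract describes: the RLS update applied to a selected sub-model, and the closed-form sum-to-one combination of the M selected predictions. This is a minimal sketch assuming NumPy; the forgetting factor, ridge term, and function names are illustrative choices, not the paper's settings.

```python
import numpy as np

def rls_update(theta, P, x, d, lam=0.99):
    """One recursive least squares step for a linear sub-model d ~ x . theta."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - x @ theta                  # a priori prediction error
    theta = theta + k * e
    P = (P - np.outer(k, Px)) / lam    # covariance update with forgetting
    return theta, P

def combination_weights(errors, ridge=1e-8):
    """Closed-form sum-to-one combination weights.

    errors holds the recent (window x M) prediction errors of the M
    selected sub-models.  Under the sum-to-one constraint the combined
    error is the weighted sum of the sub-model errors, so minimising
    its mean square over the window gives w = R^{-1} 1 / (1^T R^{-1} 1),
    where R is the error correlation matrix (ridge added for stability).
    """
    M = errors.shape[1]
    R = errors.T @ errors / len(errors) + ridge * np.eye(M)
    w = np.linalg.solve(R, np.ones(M))
    return w / w.sum()
```

    At each step one would rank the K sub-models by windowed mean-square error, apply rls_update only to the M best, and output either the weighted combination or the single best sub-model's prediction, whichever has the lower recent error.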

    Distributed Clustering and Learning Over Networks

    Distributed processing over networks relies on in-network processing and cooperation among neighboring agents. Cooperation is beneficial when agents share a common objective. In many applications, however, agents may belong to different clusters that pursue different objectives, and indiscriminate cooperation then leads to undesired results. In this work, we propose an adaptive clustering and learning scheme that allows agents to learn which neighbors they should cooperate with and which they should ignore. In doing so, the resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks. We carry out a detailed mean-square analysis and assess the Type I and Type II error probabilities, i.e., false alarm and mis-detection, of the clustering mechanism. Among other results, we establish that these probabilities decay exponentially with the step-sizes, so that the probability of correct clustering can be made arbitrarily close to one.
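    The sketch below illustrates the general shape of such a scheme with an adapt-then-combine diffusion LMS step in which each agent screens its neighbors before combining. It is a hedged approximation: the paper's clustering mechanism is a hypothesis test whose error probabilities it analyses, whereas here that test is replaced by a smoothed distance comparison, and all names, thresholds, and the smoothing factor are illustrative.

```python
import numpy as np

def clustered_atc_step(W, X, D, A, trust, mu=0.01, tau=0.5):
    """One adapt-then-combine diffusion LMS step with neighbor screening.

    W     : (N, M) current estimates, one row per agent
    X, D  : per-agent regressors (N, M) and desired responses (N,)
    A     : (N, N) 0/1 adjacency matrix of the network
    trust : (N, N) running score of how compatible each neighbor looks
    """
    # adaptation: each agent takes an LMS step on its own data
    errs = D - np.sum(X * W, axis=1)
    Psi = W + mu * errs[:, None] * X
    W_new = np.empty_like(W)
    for k in range(W.shape[0]):
        for l in np.flatnonzero(A[k]):
            # clustering statistic: persistently distant intermediate
            # estimates suggest the neighbor pursues a different objective
            close = np.linalg.norm(Psi[k] - Psi[l]) < tau
            trust[k, l] = 0.95 * trust[k, l] + 0.05 * close
        # combination: average only over oneself and the neighbors
        # currently believed to belong to the same cluster
        peers = [k] + [l for l in np.flatnonzero(A[k]) if trust[k, l] > 0.5]
        W_new[k] = Psi[peers].mean(axis=0)
    return W_new, trust
```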

    Robust Distributed Clustering Algorithm Over Multitask Networks

    We propose a new adaptive clustering algorithm that is robust across various multitask environments. Positional relationships among the optimal vectors and a reference signal are determined using the mean-square-deviation relation derived from a one-step least-mean-square (LMS) update. Clustering is performed by combining these determinations over several iterations. On this geometrical basis, and unlike conventional clustering algorithms that use a simple thresholding method, the proposed algorithm can cluster accurately in various multitask environments. Simulation results show that the proposed algorithm achieves higher estimation accuracy than the conventional algorithms and is insensitive to parameter selection.
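    A rough sketch of the idea as the abstract describes it: each determination compares deviations before and after a one-step LMS update, and the final clustering decision combines several such determinations rather than thresholding a single one. Everything here, including the majority vote, is an illustrative stand-in for the paper's actual mean-square-deviation relation.

```python
import numpy as np

def same_task_vote(w_k, w_l, x, d, mu):
    """One clustering determination from a one-step LMS update.

    If agent k's LMS step moves its estimate toward neighbor l's,
    this iteration votes that the two agents share an optimum.
    """
    w_next = w_k + mu * (d - x @ w_k) * x       # one-step LMS update
    return (np.linalg.norm(w_next - w_l) <      # did the deviation shrink?
            np.linalg.norm(w_k - w_l))

def cluster_decision(votes):
    """Combine determinations from several iterations: declare 'same
    cluster' only on a majority, which is less brittle than acting on
    a single noisy comparison."""
    votes = list(votes)
    return sum(votes) > len(votes) / 2
```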

    A Fault-Tolerant Regularizer for RBF Networks


    Exploring the transition from traditional data analysis to machine- and deep-learning methods

    Data analysis methods based on machine- and deep-learning approaches are steadily replacing traditional methods. Models based on deep learning (DL) are applicable to many problems and often have better prediction performance than traditional methods. One major difference between traditional methods and machine learning (ML) approaches is the black-box aspect often associated with ML and DL models. The use of ML and DL models offers many opportunities but also challenges. This thesis explores some of these opportunities and challenges of DL modelling, with a focus on applications in spectroscopy. DL models are based on artificial neural networks (ANNs) and are known to find complex relations in data automatically. In Paper I, this property is exploited by designing DL models that learn spectroscopic preprocessing based on classical preprocessing techniques. It is shown that DL-based preprocessing has some merits with regard to prediction performance, but considerable extra effort is required to train and tune these DL models. The flexibility of ANN architecture design is studied further in Paper II, where a DL model for multiblock data analysis is proposed that can also quantify the importance of each data block. A drawback of DL models is their lack of interpretability. To address this, a different modelling approach is taken in Paper III, where the focus is on using DL models in a way that retains as much interpretability as possible. The paper presents the concept of non-linear error modelling, in which the DL model is used to model the residuals of a linear model instead of the raw input data. The concept essentially shrinks the black box, since the majority of the data modelling is done by a linear, interpretable model. The final topic explored in this thesis is a more traditional modelling approach inspired by DL techniques. Data sometimes contain intrinsic subgroups that may be modelled more accurately separately than with a global model. Paper IV presents a modelling framework based on locally weighted models and fuzzy partitioning that automatically finds relevant clusters and combines the predictions of each local model. Compared to a DL model, the locally weighted modelling framework is more transparent. It is also shown how the framework can utilise DL techniques to scale to problems with huge amounts of data.
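    Of the ideas above, the non-linear error modelling of Paper III lends itself to a compact illustration: fit an interpretable linear model first, then let a neural network model only its residuals. The sketch below uses scikit-learn stand-ins; the thesis works with deep networks on spectroscopic data, so the estimator classes and layer sizes here are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def fit_error_model(X, y):
    """Non-linear error modelling: the linear model carries the bulk of
    the fit (and stays interpretable); the network only corrects its
    residuals, shrinking the black-box part of the pipeline."""
    linear = LinearRegression().fit(X, y)
    residuals = y - linear.predict(X)
    nonlinear = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(X, residuals)
    return linear, nonlinear

def predict_error_model(linear, nonlinear, X):
    # total prediction = interpretable linear part + learned correction
    return linear.predict(X) + nonlinear.predict(X)
```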

    Modern Telemetry

    Telemetry draws on knowledge from several disciplines, including electronics, measurement, control, and communication, as well as their combination. These principles therefore need to be studied and understood before telemetry is applied to a given problem. The time spent, however, is usually repaid in the form of the data or knowledge a telemetry system can provide. Telemetry is used in many areas, from military through biomedical to clinical medical applications. Modern wireless sensors connected remotely to a central system with artificial intelligence offer many new, sometimes unexpected, ways of learning about the behaviour of remote objects. This book presents up-to-date approaches to solving telemetry problems through new sensor concepts, new wireless transfer and communication techniques, and data collection and processing techniques, along with several real use-case scenarios describing model examples. Most chapters deal with real telemetry issues and can be used as cookbooks for your own telemetry-related problems.

    A new method for the robust design of regression models under limited raw-data quality
