
    Automatic Reconstruction of Fault Networks from Seismicity Catalogs: 3D Optimal Anisotropic Dynamic Clustering

    We propose a new pattern recognition method that reconstructs the 3D structure of the active part of a fault network from the spatial locations of earthquakes. The method generalizes the so-called dynamic clustering method, which partitions a set of data points into clusters using a global minimization criterion over the spatial inertia of those clusters. The new method improves on it by taking into account the full spatial inertia tensor of each cluster, in order to partition the dataset into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size and orientation. The main tunable parameter is the accuracy of the earthquake localizations, which fixes the resolution, i.e. the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to a real catalog consisting of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in Southern California, the reconstructed plane segments agree fully with faults already known from geological maps, or with blind faults that appear clearly in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multi-scale study of the inner structure of fault zones.
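
    The following is a minimal sketch, not the authors' code, of the core idea of anisotropic dynamic clustering: events are assigned to the plane segment they are closest to, and each segment's plane is refit from the eigenvectors of its cluster's inertia (covariance) tensor. The cluster count k stands in for the number of segments implied by the chosen resolution; names and details are illustrative assumptions.

import numpy as np

def fit_plane(points):
    """Least-squares plane of a 3D point set: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    inertia = np.cov((points - centroid).T)          # 3x3 spatial inertia tensor
    eigvals, eigvecs = np.linalg.eigh(inertia)
    return centroid, eigvecs[:, 0]                   # smallest-variance direction

def plane_clustering(points, k, n_iter=50, seed=0):
    """k-means-like alternation between plane fitting and nearest-plane assignment."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(points))
    for _ in range(n_iter):
        planes = []
        for j in range(k):
            members = points[labels == j]
            if len(members) < 3:                     # re-seed a degenerate cluster
                members = points[rng.choice(len(points), 3, replace=False)]
            planes.append(fit_plane(members))
        # unsigned distance of every event to every cluster's plane
        dists = np.abs(np.stack([(points - c) @ n for c, n in planes], axis=1))
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):       # converged
            break
        labels = new_labels
    return labels, planes

    In practice one would compare the residual variance of each plane fit against the localization error to decide how many segments the catalog actually requires, in the spirit of the resolution parameter described above.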

    From Data Topology to a Modular Classifier

    This article describes an approach to designing a distributed and modular neural classifier. The approach introduces a new hierarchical clustering that makes it possible to determine reliable regions in the representation space by exploiting supervised information. A multilayer perceptron is then associated with each of the detected clusters and is charged with recognizing elements of its cluster while rejecting all others. The resulting global classifier comprises a set of cooperating neural networks and is completed by a K-nearest-neighbor classifier that handles elements rejected by all the neural networks. Experimental results for the handwritten digit recognition problem and a comparison with non-modular neural and statistical classifiers are given.
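
    As a rough illustration of this modular architecture (assumed interfaces built on scikit-learn, not the paper's implementation), the sketch below clusters the training data, trains one small MLP per cluster, routes each test input to the expert of its nearest cluster centre, treats low-confidence outputs as rejections, and passes rejected inputs to a K-nearest-neighbor fallback. A plain agglomerative clustering stands in for the paper's supervised hierarchical clustering.

import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

class ModularClassifier:
    def __init__(self, n_clusters=5, accept_threshold=0.7):
        self.n_clusters = n_clusters
        self.threshold = accept_threshold

    def fit(self, X, y):
        # unsupervised stand-in for the paper's supervised hierarchical clustering
        regions = AgglomerativeClustering(n_clusters=self.n_clusters).fit_predict(X)
        self.experts = []
        for r in range(self.n_clusters):
            mask = regions == r
            expert = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            expert.fit(X[mask], y[mask])             # expert learns its own region
            self.experts.append((expert, X[mask].mean(axis=0)))
        self.fallback = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # route to the expert of the nearest region centre
            expert, _ = min(self.experts,
                            key=lambda e: np.linalg.norm(x - e[1]))
            proba = expert.predict_proba([x])[0]
            if proba.max() >= self.threshold:
                preds.append(expert.classes_[proba.argmax()])
            else:                                    # rejected -> K-NN fallback
                preds.append(self.fallback.predict([x])[0])
        return np.array(preds)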

    Optimal Clustering Framework for Hyperspectral Band Selection

    Band selection, which chooses a set of representative bands from a hyperspectral image (HSI), is an effective way to reduce redundant information without compromising the original content. Various unsupervised band selection methods have recently been proposed, but most rely on approximation algorithms that can only obtain suboptimal solutions for a specific objective function. This paper focuses on clustering-based band selection and proposes a new framework to resolve this dilemma, with the following contributions: 1) an optimal clustering framework (OCF), which obtains the optimal clustering result for a particular form of objective function under a reasonable constraint; 2) a rank-on-clusters strategy (RCS), which provides an effective criterion for selecting bands from an existing clustering structure; 3) an automatic method to determine the number of required bands, which better evaluates the distinctive information carried by a given number of bands. In the experiments, the proposed algorithm is compared to several state-of-the-art competitors; the results show that it is robust and significantly outperforms the other methods on various data sets.
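
    One way to make the "optimal clustering under a constraint" idea concrete is sketched below: it assumes the constraint is contiguity of clusters along the spectral axis, so a dynamic programme can find the globally optimal partition of the bands for a within-segment sum-of-squares objective, after which one representative band is picked per segment. The objective, band scoring and ranking used in the paper may differ; the function names are illustrative.

import numpy as np

def segment_cost(prefix, prefix_sq, i, j):
    """Sum of squared deviations of band features i..j-1 around their mean."""
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - s * s / n

def optimal_contiguous_clustering(band_feats, k):
    """band_feats: 1D array of per-band scalar features (e.g. mean reflectance)."""
    B = len(band_feats)
    prefix = np.concatenate(([0.0], np.cumsum(band_feats)))
    prefix_sq = np.concatenate(([0.0], np.cumsum(np.asarray(band_feats) ** 2)))
    dp = np.full((k + 1, B + 1), np.inf)
    back = np.zeros((k + 1, B + 1), dtype=int)
    dp[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, B + 1):
            for i in range(c - 1, j):
                cost = dp[c - 1, i] + segment_cost(prefix, prefix_sq, i, j)
                if cost < dp[c, j]:
                    dp[c, j], back[c, j] = cost, i
    # backtrack the globally optimal segment boundaries
    bounds, j = [], B
    for c in range(k, 0, -1):
        i = back[c, j]
        bounds.append((i, j))
        j = i
    return list(reversed(bounds))

def select_bands(band_feats, segments):
    """Pick the band closest to each segment's mean as its representative."""
    band_feats = np.asarray(band_feats)
    reps = []
    for i, j in segments:
        seg = band_feats[i:j]
        reps.append(i + int(np.argmin(np.abs(seg - seg.mean()))))
    return reps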

    Modeling and Optimal Design of Machining-Induced Residual Stresses in Aluminium Alloys Using a Fast Hierarchical Multiobjective Optimization Algorithm

    The residual stresses induced during shaping and machining play an important role in determining the integrity and durability of metal components. An important issue in producing safety-critical components is finding the machining parameters that create compressive surface stresses or minimise tensile surface stresses. In this paper, a systematic data-driven fuzzy modelling methodology is proposed which allows transparent fuzzy models to be constructed with both the accuracy and interpretability attributes of fuzzy systems in mind. The new method employs a hierarchical optimisation structure to improve modelling efficiency, in which two learning mechanisms cooperate: NSGA-II improves the model structure while gradient descent optimises the numerical parameters. This hybrid approach is then successfully applied to the prediction of machining-induced residual stresses in aerospace aluminium alloys. Based on the resulting reliable prediction models, NSGA-II is further applied to the multi-objective optimal design of aluminium alloys in a 'reverse-engineering' fashion. It is shown that machining regimes that simultaneously minimise residual stress and machining cost can be successfully located.
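
    A small sketch of the second, 'reverse-engineering' stage follows, using the pymoo library's NSGA-II to trade off two objectives over machining parameters. The stress() and cost() functions and the variable bounds below are arbitrary placeholders standing in for the paper's fuzzy prediction models; the hierarchical modelling stage (NSGA-II over model structure plus gradient descent over parameters) is not reproduced here.

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

def stress(speed, feed):
    # placeholder surrogate for the residual-stress prediction model (signed, MPa)
    return -150.0 + 0.6 * speed + 400.0 * feed

def cost(speed, feed):
    # placeholder surrogate for machining cost: slower cutting costs more
    return 1000.0 / (speed * feed)

class MachiningDesign(ElementwiseProblem):
    def __init__(self):
        # decision variables: cutting speed (m/min) and feed rate (mm/rev)
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([50.0, 0.05]), xu=np.array([300.0, 0.40]))

    def _evaluate(self, x, out, *args, **kwargs):
        speed, feed = x
        out["F"] = [stress(speed, feed), cost(speed, feed)]

res = minimize(MachiningDesign(), NSGA2(pop_size=40), ("n_gen", 60),
               seed=1, verbose=False)
# res.X holds Pareto-optimal parameter sets, res.F the corresponding objective values
print(res.F[:5])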

    Topic-based mixture language modelling

    This paper describes an approach for constructing a mixture of language models based on simple statistical notions of semantics, using probabilistic models developed for information retrieval. The approach encapsulates corpus-derived semantic information and is able to model varying styles of text. Using this information, the corpus texts are clustered in an unsupervised manner and a mixture of topic-specific language models is created automatically. The principal contribution of this work is to characterise the document space resulting from information retrieval techniques and to demonstrate the approach to mixture language modelling. A comparison is made between manual and automatic clustering in order to elucidate how the global content information is expressed in the space. We also compare alternative term-weighting schemes and the effect of dimension reduction by singular value decomposition (latent semantic analysis), in terms of both association with the manual clustering and language modelling accuracy. Test-set perplexity results on the British National Corpus indicate that the approach can improve the potential of statistical language modelling. Using an adaptive procedure, the conventional model can be tuned to track text data with only a slight increase in computational cost.
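
    A hedged sketch of the overall pipeline follows, assuming a scikit-learn stack rather than the paper's system: documents are projected with TF-IDF and latent semantic analysis, clustered into topics, a smoothed unigram model is estimated per topic, and mixture weights for new text come from its similarity to the cluster centroids. A real topic-mixture language model would use n-gram components and EM-trained weights; names and parameters here are illustrative.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def build_topic_mixture(docs, n_topics=4, svd_dim=50):
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(docs)
    lsa = TruncatedSVD(n_components=min(svd_dim, X.shape[1] - 1)).fit(X)
    Z = lsa.transform(X)                              # documents in LSA space
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(Z)
    C = CountVectorizer(vocabulary=tfidf.vocabulary_).fit_transform(docs).toarray()
    # add-one smoothed unigram language model per topic cluster
    topic_lms = np.stack([
        (C[labels == t].sum(axis=0) + 1.0) / (C[labels == t].sum() + C.shape[1])
        for t in range(n_topics)])
    return tfidf, lsa, Z, labels, topic_lms

def mixture_perplexity(text, tfidf, lsa, Z, labels, topic_lms):
    z = lsa.transform(tfidf.transform([text]))
    # mixture weights from similarity to each topic's centroid in LSA space
    centroids = np.stack([Z[labels == t].mean(axis=0) for t in range(len(topic_lms))])
    w = np.maximum(cosine_similarity(z, centroids)[0], 1e-6)
    w = w / w.sum()
    word_probs = w @ topic_lms                        # mixture unigram distribution
    analyzer = tfidf.build_analyzer()
    ids = [tfidf.vocabulary_[t] for t in analyzer(text) if t in tfidf.vocabulary_]
    if not ids:
        return float("inf")
    return float(np.exp(-np.log(word_probs[ids]).mean()))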