
    Fuzzy Jets

    Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared and collinear safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets, such as their size. We show that the fuzzy jet size adds information beyond conventional jet tagging variables. Furthermore, we study the impact of pileup and show that, with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities.
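
    As a rough illustration of the idea, the sketch below fits a pT-weighted Gaussian mixture in the (rapidity, phi) plane with a few EM iterations and reads off a learned width per jet. The seeding at the hardest particles, the isotropic Gaussians, and the neglect of phi periodicity are simplifying assumptions, not the published fuzzy jets algorithm.

        import numpy as np

        def fuzzy_cluster(particles, k=2, n_iter=50, sigma0=0.5):
            # particles: rows of (pt, rapidity, phi); memberships are pT-weighted
            pt, pos = particles[:, 0], particles[:, 1:3]
            mu = pos[np.argsort(pt)[-k:]]               # seed means at the hardest particles
            sigma = np.full(k, sigma0)                  # one isotropic width ("size") per jet
            for _ in range(n_iter):                     # EM iterations
                d2 = ((pos[:, None, :] - mu[None]) ** 2).sum(-1)     # (n, k) squared distances
                resp = np.exp(-0.5 * d2 / sigma ** 2) / sigma ** 2   # unnormalised responsibilities
                resp /= resp.sum(axis=1, keepdims=True)              # fuzzy memberships
                w = resp * pt[:, None]                               # pT weighting
                mu = (w[..., None] * pos[:, None, :]).sum(0) / w.sum(0)[:, None]
                sigma = np.sqrt((w * d2).sum(0) / (2.0 * w.sum(0)))  # learned per-jet width
            return mu, sigma, resp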

    Modeling Financial Time Series with Artificial Neural Networks

    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regression models have been developed in the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis.
    Funding: CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
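
    As a purely illustrative contrast (not taken from the review), the sketch below fits the two model families it compares to lagged values of a toy series: a linear autoregressive baseline via least squares, and a small one-hidden-layer tanh network trained with plain gradient descent. The series, the lag count p, and the network size are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(600)
        series = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)   # toy noisy signal

        p = 5                                                      # number of lags
        X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
        y = series[p:]

        # Linear AR(p) baseline via least squares
        A = np.column_stack([X, np.ones(len(X))])
        ar_coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        ar_mse = np.mean((A @ ar_coef - y) ** 2)

        # One-hidden-layer tanh network trained with plain gradient descent
        W1 = 0.1 * rng.normal(size=(p, 8)); b1 = np.zeros(8)
        W2 = 0.1 * rng.normal(size=8); b2 = 0.0
        for _ in range(5000):
            h = np.tanh(X @ W1 + b1)
            err = h @ W2 + b2 - y                                  # prediction error
            gW2 = h.T @ err / len(y); gb2 = err.mean()
            gh = np.outer(err, W2) * (1.0 - h ** 2)                # backprop through tanh
            W1 -= 0.05 * X.T @ gh / len(y); b1 -= 0.05 * gh.mean(0)
            W2 -= 0.05 * gW2; b2 -= 0.05 * gb2
        nn_mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)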

    Combining global and local information for the segmentation of MR images of the brain

    Magnetic resonance imaging can provide high resolution volumetric images of the brain with exceptional soft tissue contrast. These factors allow the complex structure of the brain to be clearly visualised. This has led to the development of quantitative methods to analyse neuroanatomical structures. In turn, this has promoted the use of computational methods to automate and improve these techniques. This thesis investigates methods to accurately segment MRI images of the brain. The use of global and local image information is considered, where global information includes image intensity distributions, means and variances, and local information is based on the relationship between spatially neighbouring voxels. Methods are explored that aim to improve the classification and segmentation of MR images of the brain by combining these elements. Some common artefacts exist in MR brain images that can be seriously detrimental to image analysis methods. Methods to correct for these artefacts are assessed by exploring their effect, first with some well established classification methods and then with methods that combine global information with local information in the form of a Markov random field model. Another characteristic of MR images is the partial volume effect that occurs where signals from different tissues become mixed over the finite volume of a voxel. This effect is demonstrated and quantified using a simulation. Analysis methods that address these issues are tested on simulated and real MR images. They are also applied to study the structure of the temporal lobes in a group of patients with temporal lobe epilepsy. The results emphasise the benefits and limitations of applying these methods to a problem of this nature. The work in this thesis demonstrates the advantages of using global and local information together in the segmentation of MR brain images and proposes a generalised framework that allows this information to be combined in a flexible way.
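
    A minimal 2-D sketch of the general idea is given below: a global intensity term (one Gaussian per tissue class) is combined with a local Markov random field penalty, and labels are updated by iterated conditional modes. The class means and variances, the 4-neighbourhood, and the smoothness weight beta are illustrative assumptions rather than the thesis's full framework.

        import numpy as np

        def icm_segment(img, means, variances, beta=1.0, n_iter=5):
            # img: 2-D intensity image; means, variances: per-class Gaussian parameters
            k = len(means)
            # Global term: negative Gaussian log-likelihood of each pixel under each class
            ll = 0.5 * ((img[..., None] - means) ** 2 / variances + np.log(variances))
            labels = ll.argmin(-1)                      # start from intensity-only labels
            for _ in range(n_iter):
                # Local term: count disagreeing 4-neighbours for every candidate class
                # (np.roll wraps at the image edges, which is acceptable for this sketch)
                disagree = np.zeros_like(ll)
                for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                    nb = np.roll(labels, shift, axis=axis)
                    disagree += (nb[..., None] != np.arange(k))
                labels = (ll + beta * disagree).argmin(-1)
            return labels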

    Fuzzy inference enhanced information recovery from digital PIV using cross-correlation combined with particle tracking

    Particle Image Velocimetry provides a means of measuring the instantaneous 2-component velocity field across a planar region of a seeded flowfield. In this work, only two-camera, single-exposure images are considered, where both cameras have the same view of the illumination plane. Two competing techniques which yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple-image data: cross-correlation and particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. The correlation technique requires identification of the correlation peak on the correlation plane corresponding to the average displacement of particles across the subregion. Noise on the images and particle dropout contribute to spurious peaks on the correlation plane, leading to misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak on the correlation plane, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus velocity. The advantage of this technique is the improved spatial resolution which is available from the particle tracking operation. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two-stage approach offers a velocimetric technique capable of measuring particle velocities with high spatial resolution over a broad range of seeding densities.
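
    The sketch below illustrates the two ingredients in a highly simplified form: FFT-based cross-correlation of an interrogation window pair to obtain candidate displacement peaks, followed by a fuzzy-style score (peak height combined with consistency with neighbouring vectors) to choose among them. The membership shapes, the weighting, and the crude peak extraction are invented for illustration and do not reproduce the paper's rule base or its particle-tracking stage.

        import numpy as np

        def candidate_peaks(win_a, win_b, n_cand=3):
            # Circular FFT cross-correlation of two interrogation windows
            corr = np.fft.ifft2(np.fft.fft2(win_a) * np.conj(np.fft.fft2(win_b))).real
            corr = np.fft.fftshift(corr)
            # Crude candidate extraction: take the tallest pixels
            # (a full implementation would take distinct local maxima instead)
            flat = np.argsort(corr.ravel())[::-1][:n_cand]
            peaks = np.column_stack(np.unravel_index(flat, corr.shape))
            centre = np.array(corr.shape) // 2
            return peaks - centre, corr.ravel()[flat]    # candidate displacements, heights

        def pick_displacement(disp, height, neighbour_mean, tol=3.0):
            # Fuzzy-style memberships: "tall peak" and "consistent with neighbours"
            tall = height / height.max()
            consistent = np.exp(-np.linalg.norm(disp - neighbour_mean, axis=1) ** 2 / tol ** 2)
            return disp[np.argmax(tall * consistent)]    # best-scoring candidate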

    A fast algorithm to initialize cluster centroids in fuzzy clustering applications

    The goal of partitioning clustering analysis is to divide a dataset into a predetermined number of homogeneous clusters. The quality of the final clusters from a prototype-based partitioning algorithm is highly affected by the initially chosen centroids. In this paper, we propose InoFrep, a novel data-dependent initialization algorithm for improving computational efficiency and robustness in prototype-based hard and fuzzy clustering. InoFrep is a single-pass algorithm that uses the frequency polygon of the feature with the highest number of peaks in a dataset. Using the Fuzzy C-means (FCM) clustering algorithm, we empirically compare the performance of InoFrep on one synthetic and six real datasets with that of two common initialization methods: random sampling of data points and K-means++. Our results show that the InoFrep algorithm significantly reduces the number of iterations and the computing time required by the FCM algorithm. Additionally, it can be applied to large multidimensional datasets because its initialization time is short and, since it works with only the single feature that has the highest number of peaks, it is independent of dimensionality.
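
    A rough sketch of the initialization idea as described above: histogram every feature, keep the one whose frequency polygon has the most local peaks, and seed the centroids at points near its tallest peak bins. The bin count, the peak-ranking rule, and the way full centroids are completed are assumptions, not the published InoFrep procedure; the result can be passed as initial centroids to any FCM or K-means implementation.

        import numpy as np

        def frequency_polygon_init(X, k, bins=20):
            # Choose the feature whose frequency polygon has the most local peaks
            best = None
            for j in range(X.shape[1]):
                counts, edges = np.histogram(X[:, j], bins=bins)
                is_peak = (counts[1:-1] > counts[:-2]) & (counts[1:-1] > counts[2:])
                peaks = np.flatnonzero(is_peak) + 1
                if best is None or len(peaks) > len(best[2]):
                    best = (j, counts, peaks, edges)
            j, counts, peaks, edges = best
            mids = 0.5 * (edges[:-1] + edges[1:])
            pset = set(peaks.tolist())
            # Rank peak bins first (tallest first), then the remaining bins by height
            ranked = sorted(range(bins), key=lambda b: (b not in pset, -counts[b]))
            centroids = np.empty((k, X.shape[1]))
            width = edges[1] - edges[0]
            for i, b in enumerate(ranked[:k]):
                near = np.abs(X[:, j] - mids[b]) <= width    # points near this bin
                centroids[i] = X[near].mean(0) if near.any() else X.mean(0)
            return centroids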

    Hand (Motor) Movement Imagery Classification of EEG Using Takagi-Sugeno-Kang Fuzzy-Inference Neural Network

    Approximately 20 million people in the United States suffer from irreversible nerve damage and would benefit from a neuroprosthetic device modulated by a Brain-Computer Interface (BCI). These devices restore independence by replacing peripheral nervous system functions such as peripheral control. Although there are currently devices under investigation, contemporary methods fail to offer adaptability and proper signal recognition for output devices. Human anatomical differences prevent a fixed model system from providing consistent classification performance across subjects. Furthermore, notoriously noisy signals such as electroencephalography (EEG) require complex measures for signal detection. Therefore, there remains a tremendous need to explore and improve new algorithms. This report investigates a signal-processing model that is better suited for BCI applications because it incorporates machine learning and fuzzy logic. Whereas traditional machine learning techniques utilize precise functions to map the input into the feature space, fuzzy-neuro systems apply imprecise membership functions to account for uncertainty and can be updated via supervised learning. Thus, this method is better equipped to tolerate uncertainty and improve performance over time. Moreover, the variation of this algorithm used in this study has a higher convergence speed. The proposed two-stage signal-processing model consists of feature extraction and feature translation, with an emphasis on the latter. The feature extraction phase includes Blind Source Separation (BSS) and the Discrete Wavelet Transform (DWT), and the feature translation stage includes the Takagi-Sugeno-Kang Fuzzy-Neural Network (TSKFNN). The proposed model achieves an average classification accuracy of 79.4% across 40 subjects, which is higher than the 75% typically reported in the literature, making it a superior model.
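
    For orientation, a minimal sketch of first-order Takagi-Sugeno-Kang inference is given below: Gaussian memberships per input and per rule, product firing strengths, and a normalised weighted sum of linear consequents. The rule count and parameters are placeholders, and both the supervised parameter updates and the BSS/DWT feature-extraction stage are omitted.

        import numpy as np

        def tsk_predict(x, centres, widths, consequents):
            # x: (d,) feature vector; centres, widths: (r, d); consequents: (r, d + 1)
            mem = np.exp(-0.5 * ((x - centres) / widths) ** 2)     # per-rule, per-input memberships
            firing = mem.prod(axis=1)                              # rule firing strengths
            firing = firing / (firing.sum() + 1e-12)               # normalise
            rule_out = consequents[:, :-1] @ x + consequents[:, -1]   # linear consequents
            return float(firing @ rule_out)                        # defuzzified output

    In the BCI setting described above, x would be the wavelet-derived feature vector for one EEG epoch and the output would be thresholded into a movement class; the learning rule that updates the centres, widths, and consequents is not shown.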

    Digital signal processing for the analysis of fetal breathing movements


    Multispectral segmentation of whole-brain MRI

    Magnetic Resonance Imaging (MRI) is a widely used medical technology for the diagnosis and detection of various tissue abnormalities, for tumor detection, and for the evaluation of residual or recurrent tumors. This thesis work exploits MRI information acquired on brain tumor structure and physiological properties and uses a novel image segmentation technique to better delineate tissue differences. MR image segmentation is important in distinguishing the boundaries of different tissues in the brain. A segmentation software tool was developed that combines the different types of clinical MR images and presents them as a single colored image. This technique is based on the fuzzy c-means (FCM) clustering algorithm. The MR data sets are used to form five-dimensional feature vectors. These vectors are segmented by FCM into six tissue classes for normal brains and nine tissue classes for human brains with tumors. The segmented images are then compared with segmentation performed using Statistical Parametric Mapping (SPM2), software that is commonly used for brain tissue segmentation. The results from segmenting the whole-volume MRI using FCM show better distinction between tumor tissues than SPM2.
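
    A compact sketch of the core step, fuzzy c-means applied to stacked multispectral feature vectors (one vector per voxel built from the different MR contrasts), is shown below. The number of classes, the fuzzifier m, and the fixed iteration count are generic FCM choices, not the thesis's exact configuration, and the example channel names (t1, t2, flair) are hypothetical.

        import numpy as np

        def fcm(X, c=6, m=2.0, n_iter=100, seed=0):
            rng = np.random.default_rng(seed)
            centres = X[rng.choice(len(X), c, replace=False)]      # random initial prototypes
            for _ in range(n_iter):
                d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-9   # (n, c)
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)                  # fuzzy memberships
                w = u ** m
                centres = (w.T @ X) / w.sum(axis=0)[:, None]       # weighted centroid update
            return u.argmax(axis=1), centres                       # hard labels, prototypes

        # Usage idea: stack each MR channel into one feature vector per voxel, then segment
        # labels, _ = fcm(np.column_stack([t1.ravel(), t2.ravel(), flair.ravel()]))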

    Unsupervised tracking of time-evolving data streams and an application to short-term urban traffic flow forecasting

    I am indebted to many people for the help and support I received during my Ph.D. study and research at DIBRIS, University of Genoa. First and foremost, I would like to express my sincere thanks to my supervisors, Prof. Dr. Masulli and Prof. Dr. Rovetta, for their invaluable guidance, frequent meetings and discussions, and their encouragement and support along my research path. I thank all the members of DIBRIS for their support and kindness during my four-year Ph.D. I would also like to acknowledge the contribution of the projects Piattaforma per la mobilità Urbana con Gestione delle INformazioni da sorgenti eterogenee (PLUG-IN) and COST Action IC1406 High Performance Modelling and Simulation for Big Data Applications (cHiPSet). Last and most importantly, I wish to thank my family: my wife Shaimaa, who stays with me through the joys and pains; my daughter and son, who give me happiness every day; and my parents, for their constant love and encouragement.

    Towards a more realistic sink particle algorithm for the RAMSES code

    We present a new sink particle algorithm developed for the Adaptive Mesh Refinement code RAMSES. Our main addition is the use of a clump finder to identify density peaks and their associated regions (the peak patches). This allows us to unambiguously define a discrete set of dense molecular cores as potential sites for sink particle formation. Furthermore, we develop a new scheme to decide whether the gas in which a sink could potentially form is indeed gravitationally bound and rapidly collapsing. This is achieved using a general integral form of the virial theorem, where we use the curvature in the gravitational potential to correctly account for the background potential. We detail all the necessary steps to follow the evolution of sink particles in turbulent molecular cloud simulations, such as sink production, trajectory integration, sink merging, and the gas accretion rate onto an existing sink. We compare our new recipe for sink formation to other popular implementations. Statistical properties such as the sink mass function, the average sink mass, and the sink multiplicity function are used to evaluate the impact that our new scheme has on accurately predicting fundamental quantities such as the stellar initial mass function or the stellar multiplicity function.
    Comment: submitted to MNRAS, 24 pages, 19 figures, 5 tables
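
    In the same spirit as the boundedness test described above, though much cruder, the toy check below compares twice the kinetic plus thermal energy of a candidate clump with a GM^2/R estimate of its gravitational energy. The paper's actual criterion uses an integral form of the virial theorem with the background potential accounted for, which this sketch does not attempt.

        import numpy as np

        G = 6.674e-8                                    # gravitational constant in cgs units

        def clump_is_collapsing(mass, pos, vel, cs):
            # mass, cs: (n,) cell masses and sound speeds; pos, vel: (n, 3) in clump cells
            m_tot = mass.sum()
            com = (mass[:, None] * pos).sum(0) / m_tot                   # centre of mass
            vcom = (mass[:, None] * vel).sum(0) / m_tot                  # bulk velocity
            e_kin = 0.5 * (mass * ((vel - vcom) ** 2).sum(1)).sum()      # internal kinetic energy
            e_th = 1.5 * (mass * cs ** 2).sum()                          # rough thermal energy
            radius = np.sqrt((mass * ((pos - com) ** 2).sum(1)).sum() / m_tot)
            e_grav = G * m_tot ** 2 / max(radius, 1e-30)                 # crude |E_grav| estimate
            return 2.0 * (e_kin + e_th) < e_grav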