2,869 research outputs found

    MINING MULTI-GRANULAR MULTIVARIATE MEDICAL MEASUREMENTS

    This thesis is motivated by the need to predict the mortality of patients in the Intensive Care Unit. The heart of this problem revolves around accurately classifying multivariate, multi-granular time series patient data. The approach taken in this thesis uses Z-score normalization to make variables comparable, Singular Value Decomposition to reduce the number of features, and a Support Vector Machine to classify patient tuples. This approach outperforms other classification models such as k-Nearest Neighbors and demonstrates that the SVM is a viable model for this project. The hope is that future work can build on this research and one day make an impact in the medical community.
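
    A minimal sketch of the pipeline this abstract describes (Z-score normalization, then SVD for feature reduction, then an SVM), using scikit-learn. The data shapes and hyperparameters below are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: Z-score normalization -> SVD feature reduction -> SVM.
# Shapes and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))      # placeholder patient feature tuples
y = rng.integers(0, 2, size=500)    # placeholder mortality labels (0/1)

pipeline = Pipeline([
    ("zscore", StandardScaler()),            # Z-score: zero mean, unit variance
    ("svd", TruncatedSVD(n_components=10)),  # reduce the number of features
    ("svm", SVC(kernel="rbf")),              # classify patient tuples
])

print("mean CV accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```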

    Performance Evaluation of Vanilla, Residual, and Dense 2D U-Net Architectures for Skull Stripping of Augmented 3D T1-weighted MRI Head Scans

    Skull stripping is a requisite preliminary step in most diagnostic neuroimaging applications. Manual skull stripping methods define the gold standard for the domain but are time-consuming and difficult to integrate into processing pipelines with a large number of data samples. Automated methods are an active area of research for head MRI segmentation, especially deep learning methods such as U-Net architecture implementations. This study compares Vanilla, Residual, and Dense 2D U-Net architectures for skull stripping. The Dense 2D U-Net architecture outperforms its Vanilla and Residual counterparts, achieving an accuracy of 99.75% on a test dataset. It is observed that dense interconnections in a U-Net encourage feature reuse across layers of the architecture and allow for shallower models with the strengths of a deeper network.
    Comment: Research article submitted to the 2nd International Conference on Biomedical Engineering Science and Technology: Roadway from Laboratory to Market, at the National Institute of Technology Raipur, Chhattisgarh, India.
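
    A minimal sketch of the dense-interconnection idea credited here: within a dense block, each convolution receives the concatenation of all earlier feature maps, encouraging feature reuse. This PyTorch block is a generic illustration under assumed channel counts and depth, not the paper's architecture.

```python
# Generic dense block: each layer sees the concatenation of all earlier
# feature maps, so features are reused instead of recomputed.
# Channel counts and depth are illustrative assumptions.
import torch
import torch.nn as nn

class DenseBlock2D(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate  # next layer sees all earlier feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

block = DenseBlock2D(in_channels=8)
print(block(torch.randn(1, 8, 64, 64)).shape)  # torch.Size([1, 56, 64, 64])
```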

    A Deep Learning Approach for Dynamic Balance Sheet Stress Testing

    In the aftermath of the financial crisis, supervisory authorities have considerably improved their approaches to financial stress testing. However, they have received significant criticism from market participants due to the methodological assumptions and simplifications employed, which are considered not to accurately reflect real conditions. First and foremost, current stress testing methodologies attempt to simulate the risks underlying a financial institution's balance sheet using several satellite models, making their integration a challenging task with significant estimation errors. Second, they still do not employ advanced statistical techniques, such as machine learning, which better capture the nonlinear nature of adverse shocks. Finally, the static balance sheet assumption that is often employed implies that a bank's management passively monitors the realization of the adverse scenario but does nothing to mitigate its impact. To address the above criticism, this study introduces a novel deep learning approach for dynamic balance sheet stress testing. Experimental results give strong evidence that deep learning applied to large financial/supervisory datasets creates a state-of-the-art paradigm capable of simulating real-world scenarios more efficiently.
    Comment: Preprint submitted to the Journal of Forecasting.
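
    As a purely speculative illustration of what a "dynamic balance sheet" model could look like (the abstract does not specify the architecture), a recurrent network might map a macro stress scenario plus the bank's recent balance-sheet history to next-period balance-sheet items, so the sheet evolves with the scenario rather than being held static. All names and shapes below are assumptions.

```python
# Speculative sketch: an LSTM maps (macro scenario, balance-sheet history)
# to next-period balance-sheet items. Architecture and shapes are assumed;
# the paper's actual model is not described in the abstract.
import torch
import torch.nn as nn

class DynamicBalanceSheetModel(nn.Module):
    def __init__(self, n_macro: int = 5, n_items: int = 8, hidden: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(n_macro + n_items, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_items)  # next-period items per step

    def forward(self, scenario, history):
        # scenario: (batch, T, n_macro); history: (batch, T, n_items)
        out, _ = self.rnn(torch.cat([scenario, history], dim=-1))
        return self.head(out)

model = DynamicBalanceSheetModel()
scenario = torch.randn(4, 12, 5)   # e.g., 12 quarters of macro shocks
history = torch.randn(4, 12, 8)    # matching balance-sheet history
print(model(scenario, history).shape)  # torch.Size([4, 12, 8])
```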

    Machine Learning Techniques as Applied to Discrete and Combinatorial Structures

    Machine learning techniques have been used on a wide array of input types: images, sound waves, text, and so forth. By articulating these input types to the almighty machine, all sorts of amazing problems have been solved for practical purposes. Nevertheless, some input types do not lend themselves nicely to the standard set of machine learning tools we have. Moreover, some provably difficult problems are abysmally hard to solve within a reasonable time frame. This thesis addresses several of these difficult problems, framing them so that we can attempt to marry the allegedly powerful utility of existing machine learning techniques to the practical solvability of said problems.

    GBG++: A Fast and Stable Granular Ball Generation Method for Classification

    Granular ball computing (GBC), as an efficient, robust, and scalable learning method, has become a popular research topic in granular computing. GBC includes two stages: granular ball generation (GBG) and multi-granularity learning based on the granular ball (GB). However, the stability and efficiency of existing GBG methods need to be further improved due to their strong dependence on k-means or k-division. In addition, GB-based classifiers only unilaterally consider the GB's geometric characteristics to construct classification rules, while the GB's quality is ignored. Therefore, in this paper, based on the attention mechanism, a fast and stable GBG (GBG++) method is proposed first. Specifically, the proposed GBG++ method only needs to calculate the distances from the data-driven center to the undivided samples when splitting each GB, instead of randomly selecting the center and calculating the distances between it and all samples. Moreover, an outlier detection method is introduced to identify local outliers. Consequently, the GBG++ method can significantly improve effectiveness, robustness, and efficiency while being absolutely stable. Second, considering the influence of the sample size within the GB on the GB's quality, an improved GB-based k-nearest neighbors algorithm (GBkNN++) is presented based on the GBG++ method, which can reduce misclassification at the class boundary. Finally, experimental results indicate that the proposed method outperforms several existing GB-based classifiers and classical machine learning classifiers on 24 public benchmark datasets.
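
    A rough sketch of the granular ball generation idea as the abstract outlines it: each ball keeps a data-driven center (here, simply the mean of its samples), measures distances only to its own undivided samples, and splits until sufficiently pure. The purity threshold and the median-distance split rule below are assumptions; GBG++'s exact procedure, including its outlier detection, differs in detail.

```python
# Illustrative granular ball generation: split balls around a data-driven
# center until each ball is pure enough. Thresholds and the split rule
# are assumptions, not the GBG++ algorithm itself.
import numpy as np

def ball_purity(labels: np.ndarray) -> float:
    """Fraction of samples carrying the ball's majority label."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / labels.size

def generate_balls(X, y, purity_threshold=0.95, min_size=2):
    """Recursively split (X, y) into granular balls (center, radius, label)."""
    center = X.mean(axis=0)                        # data-driven center
    dists = np.linalg.norm(X - center, axis=1)     # distances to own samples only
    near = dists <= np.median(dists)
    done = (X.shape[0] <= min_size
            or ball_purity(y) >= purity_threshold
            or near.all() or not near.any())       # degenerate split: stop
    if done:
        return [(center, dists.mean(), np.bincount(y).argmax())]
    return (generate_balls(X[near], y[near], purity_threshold, min_size)
            + generate_balls(X[~near], y[~near], purity_threshold, min_size))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(len(generate_balls(X, y)), "granular balls generated")
```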

    Machine learning and computer vision in solar physics

    In recent decades, the difficult task of understanding and predicting violent solar eruptions and their terrestrial impacts has become a strategic national priority, as it affects human life, including communication, transportation, the power grid, national defense, space travel, and more. This dissertation explores new machine learning and computer vision techniques to tackle this difficult task. Specifically, the dissertation addresses four interrelated problems in solar physics: magnetic flux tracking, fibril tracing, Stokes inversion, and vector magnetogram generation.

    First, the dissertation presents a new deep learning method, named SolarUnet, to identify and track solar magnetic flux elements in observed vector magnetograms. The method consists of a data preprocessing component that prepares training data from a physics-based tool, a deep learning model implemented as a U-shaped convolutional neural network for fast and accurate image segmentation, and a postprocessing component that prepares tracking results. The tracking results can be used to derive statistical parameters of the local and global solar dynamo, allowing for sophisticated analyses of solar activities in the solar corona and solar wind.

    Second, the dissertation presents another new deep learning method, named FibrilNet, for tracing chromospheric fibrils in Hα images of solar observations. FibrilNet is a Bayesian convolutional neural network that adopts the Monte Carlo dropout sampling technique for probabilistic image segmentation with uncertainty quantification, capable of handling both aleatoric and epistemic uncertainty. The traced Hα fibril structures provide the direction of magnetic fields, where the orientations of the fibrils can be used as a constraint to improve the nonlinear force-free extrapolation of coronal fields.

    Third, the dissertation presents a stacked deep neural network (SDNN) for inferring line-of-sight (LOS) velocities and Doppler widths from Stokes profiles collected by GST/NIRIS at the Big Bear Solar Observatory. Experimental results show that the SDNN is faster, while producing smoother and cleaner LOS velocity and Doppler width maps, than a widely used physics-based method. Furthermore, the results demonstrate that the SDNN has better learning capability than several related machine learning algorithms. The high-quality velocity fields obtained through Stokes inversion can be used to understand solar activity and predict solar eruptions.

    Fourth, the dissertation presents a generative adversarial network, named MagNet, for generating vector components to create synthetic vector magnetograms of solar active regions. MagNet expands the availability of photospheric vector magnetograms to the period from 1996 to the present, covering solar cycles 23 and 24, even though observed photospheric vector magnetograms were not available prior to 2010. The synthetic vector magnetograms can be used as input to physics-based models to derive important physical parameters for studying the triggering mechanisms of solar eruptions and for forecasting eruptive events.

    Finally, implementations of some of the deep learning-based methods using Jupyter notebooks and Google Colab with GitHub are presented and discussed.
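
    A minimal sketch of the Monte Carlo dropout technique FibrilNet adopts for uncertainty quantification: dropout stays active at inference, the network is sampled several times, and the per-pixel mean and standard deviation yield a prediction together with an epistemic-uncertainty map. The tiny network and sample count below are illustrative assumptions, not FibrilNet itself.

```python
# Monte Carlo dropout sketch: sample a dropout-equipped network T times at
# inference; the mean is the prediction, the std an uncertainty map.
# The toy network and T=20 are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in for a segmentation CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                   # stays active during MC sampling
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

def mc_dropout_predict(model, x, n_samples=20):
    model.train()                          # keep dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty

x = torch.randn(1, 1, 64, 64)              # placeholder image patch
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)
```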