
    Using reconfigurable computing technology to accelerate matrix decomposition and applications

    Matrix decomposition plays an increasingly significant role in many scientific and engineering applications. Among numerous techniques, Singular Value Decomposition (SVD) and Eigenvalue Decomposition (EVD) are widely used as factorization tools to perform Principal Component Analysis for dimensionality reduction and pattern recognition in image processing, text mining, and wireless communications, while QR Decomposition (QRD) and sparse LU Decomposition (LUD) are employed to solve dense or sparse linear systems of equations in bioinformatics, power systems, and computer vision. Matrix decompositions are computationally expensive, and their sequential implementations often fail to meet the requirements of many time-sensitive applications. The emergence of reconfigurable computing has provided a flexible and low-cost opportunity to pursue high-performance parallel designs, and the use of FPGAs has shown promise in accelerating this class of computation. In this research, we have proposed and implemented several highly parallel FPGA-based architectures to accelerate matrix decompositions and their applications in data mining and signal processing. Specifically, in this dissertation we describe the following contributions:
    • We propose an efficient FPGA-based double-precision floating-point architecture for EVD, which can efficiently analyze large-scale matrices.
    • We implement a floating-point Hestenes-Jacobi architecture for SVD, which is capable of analyzing arbitrarily sized matrices.
    • We introduce a novel deeply pipelined reconfigurable architecture for QRD, which can be dynamically configured to perform either Householder transformation or Givens rotation in a manner that takes advantage of the strengths of each (a software sketch of the Givens kernel follows this abstract).
    • We design a configurable architecture for sparse LUD that supports both symmetric and asymmetric sparse matrices with arbitrary sparsity patterns.
    • By further extending the proposed hardware solution for SVD, we parallelize a popular text mining tool, Latent Semantic Indexing, with an FPGA-based architecture.
    • We present a configurable architecture to accelerate Homotopy l1-minimization, in which a modification of the proposed FPGA architecture for sparse LUD is used at its core to parallelize both Cholesky decomposition and rank-1 update.
    Our experimental results using an FPGA-based acceleration system indicate the efficiency of our proposed novel architectures, with application- and dimension-dependent speedups over an optimized software implementation that range from 1.5× to 43.6× in terms of computation time.
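    Of the kernels above, the Givens-rotation mode of the QRD architecture is compact enough to illustrate in software. Below is a minimal NumPy sketch of QR factorization by Givens rotations; the function name and structure are illustrative and not taken from the dissertation, and the hardware pipeline is not modeled, only the numerical recurrence it implements.

    ```python
    import numpy as np

    def givens_qr(A):
        """QR decomposition via Givens rotations: zero out subdiagonal
        entries one at a time, accumulating the rotations into Q."""
        A = A.astype(float)
        m, n = A.shape
        Q = np.eye(m)
        R = A.copy()
        for j in range(n):
            for i in range(m - 1, j, -1):
                a, b = R[i - 1, j], R[i, j]
                r = np.hypot(a, b)
                if r == 0:
                    continue
                c, s = a / r, b / r
                G = np.array([[c, s], [-s, c]])      # rotates (a, b) to (r, 0)
                R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
                Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
        return Q, R

    A = np.random.rand(5, 3)
    Q, R = givens_qr(A)
    assert np.allclose(Q @ R, A)
    ```

    Each rotation zeroes a single subdiagonal entry and touches only two rows at a time, which is part of what makes the method attractive for a deeply pipelined hardware design.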

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    The Admission of DNA Evidence in State and Federal Courts


    Computational Approaches for Monitoring of Health Parameters and Their Evaluation for Application in Clinical Setting.

    The algorithms and mathematical methods developed in this work focus on using computational approaches for low-cost solutions to health care problems and better patient outcomes. Furthermore, the evaluation of those approaches for clinical application, considering the risks and benefits in a clinical setting, is studied. Those risks and benefits are discussed in terms of sensitivity, specificity, and area under the receiver operating characteristic curve. With the rising cost of health care and a growing aging population, there is a need for innovative, low-cost solutions to health care problems. In this work, algorithms and mathematical techniques for solving problems related to physiological parameter monitoring have been explored, and approaches for evaluating them in a clinical setting have been studied. The physiological parameters include affective state, pain level, heart rate, oxygen saturation, hemoglobin level, and blood pressure. To develop the mathematical basis for different data-intensive problems, eigenvalue-based methods, along with others, have been used in designing innovative solutions for health care problems and new algorithms for smart monitoring of patients, from home monitoring to combat casualty situations. Eigenvalue-based methods already have wide applications in many areas, such as stability analysis in control systems, search algorithms (Google PageRank), Eigenface methods for face recognition, and principal component analysis for data compression and pattern recognition. Here, the research work in 1) multi-parameter monitoring of affective state, 2) creating a smartphone-based pain detection tool from facial images, 3) early detection of hemorrhage from arterial blood pressure data, 4) noninvasive measurement of physiological signals including hemoglobin level, and 5) evaluation of the results for clinical application is presented.
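    As a concrete instance of the eigenvalue-based methods cited above, here is a minimal sketch of principal component analysis via eigendecomposition of a covariance matrix; the function and the toy data are illustrative only and do not reproduce the dissertation's algorithms.

    ```python
    import numpy as np

    def pca_eig(X, k):
        """PCA via eigendecomposition of the covariance matrix:
        project centered data onto the k leading eigenvectors."""
        Xc = X - X.mean(axis=0)
        cov = np.cov(Xc, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
        order = np.argsort(vals)[::-1][:k]     # indices of the k largest
        return Xc @ vecs[:, order]

    # e.g., reduce 10-channel physiological recordings to 3 components
    X = np.random.default_rng(0).normal(size=(200, 10))
    scores = pca_eig(X, 3)
    ```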

    A demand driven multiprocessor.


    Combining Synthesis of Cardiorespiratory Signals and Artifacts with Deep Learning for Robust Vital Sign Estimation

    Healthcare has been remarkably morphing on account of Big Data. As Machine Learning (ML) consolidates its place in simpler clinical chores, more complex Deep Learning (DL) algorithms have struggled to keep up, despite their superior capabilities. This is mainly attributed to the need for large amounts of training data, which the scientific community is unable to satisfy. The number of promising DL algorithms is considerable, but solutions directly targeting the shortage of data are lacking. Currently, dynamical generative models are the best bet, but they focus on single, classical modalities and grow significantly more complicated with the number of physiological effects they simulate. This thesis aims at providing and validating a framework that specifically addresses the data deficit in the scope of cardiorespiratory signals. Firstly, a multimodal statistical synthesizer was designed to generate large, annotated artificial signals. By expressing data through coefficients of pre-defined, fitted functions and describing their dependence with Gaussian copulas, inter- and intra-modality associations were learned. Thereafter, new coefficients are sampled to generate artificial, multimodal signals with the original physiological dynamics. Moreover, normal and pathological beats, along with artifacts, were included by employing Markov models. Secondly, a convolutional neural network (CNN) was conceived with a novel sensor-fusion architecture and trained with synthesized data under real-world experimental conditions to evaluate how its performance is affected. Both the synthesizer and the CNN not only performed at a state-of-the-art level but also innovated, with multiple types of generated data and detection error improvements, respectively. Cardiorespiratory data augmentation corrected performance drops when not enough data was available, and enhanced the CNN’s ability to perform on noisy signals and to carry out new tasks when introduced to otherwise unavailable types of data. Ultimately, the framework was successfully validated, showing potential to leverage future DL research in cardiology into clinical standards.
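    To make the synthesizer's core step concrete, the sketch below shows the standard Gaussian-copula recipe the abstract refers to: map fitted coefficients to uniform pseudo-observations, estimate a correlation in Gaussian space, then sample new dependent coefficients. All names and the toy data are illustrative; the thesis's fitted basis functions and Markov artifact models are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    def fit_gaussian_copula(U):
        """Estimate the copula correlation matrix from uniform
        pseudo-observations by mapping them to standard normals."""
        Z = stats.norm.ppf(np.clip(U, 1e-6, 1 - 1e-6))
        return np.corrcoef(Z, rowvar=False)

    def sample_gaussian_copula(R, n, rng=None):
        """Draw n dependent uniform vectors whose dependence follows R."""
        rng = rng or np.random.default_rng()
        Z = rng.multivariate_normal(np.zeros(len(R)), R, size=n)
        return stats.norm.cdf(Z)

    # toy stand-in for fitted waveform coefficients rescaled to [0, 1]
    U = np.random.default_rng(1).uniform(size=(500, 4))
    R = fit_gaussian_copula(U)
    new_coeffs = sample_gaussian_copula(R, 1000, np.random.default_rng(2))
    # new_coeffs would then be mapped back through the fitted marginals
    # and basis functions to render artificial multimodal signals
    ```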

    The severity of stages estimation during hemorrhage using error correcting output codes method

    As a beneficial component with critical impact, computer-aided decision making systems have infiltrated many fields, such as economics, medicine, architecture, and agriculture. Their latent capability to facilitate human work propels the high-speed development of such systems. Effective decisions provided by such systems greatly reduce the expense of labor, energy, budget, etc. The computer-aided decision making system for traumatic injuries is one such system; it supplies suggestive opinions when dealing with injuries resulting from accidents, battle, or illness. Its functions may involve judging the type of illness, allocating the wounded according to battle injuries, deciding the severity of symptoms for illness or injuries, managing resources in the context of traumatic events, etc. The proposed computer-aided decision making system aims at estimating the severity of blood volume loss. Specifically, accompanying many traumatic injuries, severe hemorrhage, a potentially life-threatening condition that requires immediate treatment, is a significant ongoing loss of blood volume that results in decreased blood and oxygen perfusion of vital organs. Hemorrhage and blood loss can occur at different levels, such as mild, moderate, or severe. Our proposed system will assist physicians by estimating information such as the severity of blood volume loss and hemorrhage, so that timely measures can be taken to not only save lives but also reduce long-term complications as well as the cost caused by mismatched operations and treatments. The general framework of the proposed research contains three tasks, and many novel and transformative concepts are integrated into the system. First is the preprocessing of the raw signals: in this stage, adaptive filtering is adopted and customized to filter noise, and two detection algorithms (QRS complex detection and systolic/diastolic wave detection) are designed. The second task is feature extraction: the proposed system combines features from the time domain, frequency domain, nonlinear analysis, and multi-model analysis to better represent the patterns that emerge when hemorrhage happens. Third, a machine learning algorithm is designed for pattern classification: a novel algorithm, a new version of error correcting output codes (ECOC), is designed and investigated for high accuracy and real-time decision making. The features and characteristics of this machine learning method are essential for the proposed computer-aided trauma decision making system. The proposed system is tested against the Lower Body Negative Pressure (LBNP) dataset, and the results indicate the accuracy and reliability of the proposed system.
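    The abstract's ECOC variant is novel and not specified here; as a baseline illustration of the standard error-correcting-output-codes scheme it builds on, the sketch below uses scikit-learn's OutputCodeClassifier, with synthetic features standing in for the extracted time/frequency/nonlinear ones.

    ```python
    import numpy as np
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.svm import LinearSVC

    # synthetic stand-in for features extracted from LBNP recordings
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 12))
    y = rng.integers(0, 3, size=300)  # 0 = mild, 1 = moderate, 2 = severe

    # ECOC: each class is assigned a binary codeword; one binary classifier
    # is trained per codeword bit, and prediction picks the class whose
    # codeword is nearest to the vector of binary outputs
    ecoc = OutputCodeClassifier(LinearSVC(), code_size=4, random_state=0)
    ecoc.fit(X, y)
    print(ecoc.predict(X[:5]))
    ```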