32 research outputs found

    Blind deconvolution of medical ultrasound images: parametric inverse filtering approach

    Get PDF
    DOI: 10.1109/TIP.2007.910179. The problem of reconstruction of ultrasound images by means of blind deconvolution has long been recognized as one of the central problems in medical ultrasound imaging. In this paper, this problem is addressed by proposing a blind deconvolution method which is innovative in several ways. In particular, the method is based on parametric inverse filtering, whose parameters are optimized using two-stage processing. At the first stage, some partial information on the point spread function is recovered. Subsequently, this information is used to explicitly constrain the spectral shape of the inverse filter. From this perspective, the proposed methodology can be viewed as a "hybridization" of two standard strategies in blind deconvolution, which are based on either concurrent or successive estimation of the point spread function and the image of interest. Moreover, evidence is provided that the "hybrid" approach can outperform the standard ones in a number of important practical cases. Additionally, the present study introduces a different approach to parameterizing the inverse filter. Specifically, we propose to model the inverse transfer function as a member of a principal shift-invariant subspace. It is shown that such a parameterization results in considerably more stable reconstructions as compared to standard parameterization methods. Finally, it is shown how the inverse filters designed in this way can be used to deconvolve the images in a nonblind manner so as to further improve their quality. The usefulness and practicability of all the introduced innovations are proven in a series of both in silico and in vivo experiments. Finally, it is shown that the proposed deconvolution algorithms are capable of improving the resolution of ultrasound images by factors of 2.24 or 6.52 (as judged by the autocorrelation criterion), depending on the type of regularization method used.
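
    As a rough illustration of the frequency-domain mechanics behind inverse filtering of RF ultrasound data, the hedged sketch below first smooths the log-spectrum in the cepstral domain to obtain a crude point-spread-function estimate and then applies a regularized (Wiener-type) inverse filter. It is only a generic baseline under these assumptions; the paper's two-stage parametric optimization and principal shift-invariant subspace parameterization are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def estimate_psf_spectrum(rf_image, cutoff=20):
    """Crude PSF magnitude-spectrum estimate via cepstral low-pass smoothing.
    Illustrative only; the paper recovers partial PSF information with a
    two-stage parametric procedure instead."""
    log_mag = np.log(np.abs(np.fft.fft2(rf_image)) + 1e-12)
    cepstrum = np.fft.ifft2(log_mag)
    mask = np.zeros(cepstrum.shape)
    mask[:cutoff, :cutoff] = mask[:cutoff, -cutoff:] = 1.0
    mask[-cutoff:, :cutoff] = mask[-cutoff:, -cutoff:] = 1.0
    smoothed_log_mag = np.real(np.fft.fft2(cepstrum * mask))
    return np.exp(smoothed_log_mag)

def inverse_filter(rf_image, psf_mag, reg=1e-2):
    """Apply a regularized inverse filter whose spectral shape is constrained
    by the estimated PSF magnitude (phase is ignored in this sketch)."""
    spectrum = np.fft.fft2(rf_image)
    inv = psf_mag / (psf_mag ** 2 + reg)
    return np.real(np.fft.ifft2(spectrum * inv))

# Usage on a hypothetical 2-D RF image (random stand-in data):
rf = np.random.default_rng(0).normal(size=(256, 256))
restored = inverse_filter(rf, estimate_psf_spectrum(rf))
```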

    Approaches for Outlier Detection in Sparse High-Dimensional Regression Models

    Get PDF
    Modern regression studies often encompass a very large number of potential predictors, possibly larger than the sample size, and sometimes growing with the sample size itself. This increases the chances that a substantial portion of the predictors is redundant, as well as the risk of data contamination. Tackling these problems is of utmost importance to facilitate scientific discoveries, since model estimates are highly sensitive both to the choice of predictors and to the presence of outliers. In this thesis, we contribute to this area by considering the problem of robust model selection in a variety of settings, where outliers may arise both in the response and in the predictors. Our proposals simplify model interpretation, guarantee predictive performance, and allow us to study and control the influence of outlying cases on the fit. First, we consider the co-occurrence of multiple mean-shift and variance-inflation outliers in low-dimensional linear models. We rely on robust estimation techniques to identify outliers of each type, exclude mean-shift outliers, and use restricted maximum likelihood estimation to down-weight and accommodate variance-inflation outliers into the model fit. Second, we extend our setting to high-dimensional linear models. We show that mean-shift and variance-inflation outliers can be modeled as additional fixed and random components, respectively, and evaluated independently. Specifically, we perform feature selection and mean-shift outlier detection through a robust class of nonconcave penalization methods, and variance-inflation outlier detection through the penalization of the restricted posterior mode. The resulting approach satisfies a robust oracle property for feature selection in the presence of data contamination – which allows the number of features to increase exponentially with the sample size – and detects truly outlying cases of each type with asymptotic probability one. This provides an optimal trade-off between a high breakdown point and efficiency. Third, focusing on high-dimensional linear models affected by mean-shift outliers, we develop a general framework in which L0-constraints coupled with mixed-integer programming techniques are used to perform simultaneous feature selection and outlier detection with provably optimal guarantees. In particular, we provide necessary and sufficient conditions for a robustly strong oracle property, where again the number of features can increase exponentially with the sample size, and prove optimality for parameter estimation and the resulting breakdown point. Finally, we consider generalized linear models and rely on logistic slippage to perform outlier detection and removal in binary classification. Here we use L0-constraints and mixed-integer conic programming techniques to solve the underlying double combinatorial problem of feature selection and outlier detection, and the framework again allows us to pursue optimality guarantees. For all the proposed approaches, we also provide computationally lean heuristic algorithms, tuning procedures, and diagnostic tools which help to guide the analysis. We consider several real-world applications, including the study of the relationships between childhood obesity and the human microbiome, and of the main drivers of honey bee loss. All methods developed and data used, as well as the source code to replicate our analyses, are publicly available.
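
    The mean-shift formulation described above can be illustrated with a standard convex surrogate: augment the design matrix with an identity block so that each observation gets its own shift parameter, and let a sparsity penalty decide which shifts are nonzero. The sketch below uses a plain L1 penalty for brevity; the thesis instead relies on nonconcave penalties and L0-constrained mixed-integer programs, so this is only a simplified stand-in with illustrative names and parameters.

```python
import numpy as np
from sklearn.linear_model import Lasso

def mean_shift_outlier_fit(X, y, alpha=0.05):
    """Fit y = X beta + gamma + noise, where the sparse vector gamma has one
    entry per case and its nonzero entries flag candidate mean-shift outliers.
    Simplified convex sketch (L1 penalty on beta and gamma jointly); the
    thesis uses nonconcave and L0/mixed-integer formulations instead."""
    n, p = X.shape
    X_aug = np.hstack([X, np.eye(n)])           # identity block: one column per case
    fit = Lasso(alpha=alpha, fit_intercept=True, max_iter=20000).fit(X_aug, y)
    beta, gamma = fit.coef_[:p], fit.coef_[p:]
    return beta, gamma, np.flatnonzero(np.abs(gamma) > 1e-8)

# Toy example: inject three mean-shift outliers and check that they are flagged
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)
y[[3, 17, 42]] += 8.0
beta, gamma, flagged = mean_shift_outlier_fit(X, y)
print(flagged)
```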

    Non-Invasive Electrocardiographic Imaging of Ventricular Activities: Data-Driven and Model-Based Approaches

    Get PDF
    This thesis examines selected aspects of forward modelling, for example the simulation of electro- and magnetocardiograms in the case of an electrically silent ischemia, as well as the adaptation of the electric potentials under variation of the conductivities. Particular focus is placed on the development of new regularization algorithms, as well as on the application and evaluation of currently used methods in realistic in silico and clinical studies.
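
    For readers unfamiliar with the regularization step in electrocardiographic imaging, the hedged sketch below shows a generic zeroth-order Tikhonov solution of a linear forward model y = A x (body-surface potentials generated by cardiac sources), with the regularization parameter chosen by a crude L-curve heuristic. It illustrates the type of inverse-problem machinery the thesis evaluates, not the specific algorithms developed in it; all names, sizes, and the parameter grid are illustrative.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Zeroth-order Tikhonov estimate: argmin ||A x - y||^2 + lam ||x||^2,
    a common baseline for the ill-posed ECG imaging inverse problem."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def l_curve_lambda(A, y, lambdas):
    """Pick lambda at the 'corner' of the log residual-norm vs. log
    solution-norm curve, approximated by the point farthest from the chord."""
    pts = []
    for lam in lambdas:
        x = tikhonov_solve(A, y, lam)
        pts.append([np.log(np.linalg.norm(A @ x - y)), np.log(np.linalg.norm(x))])
    pts = np.array(pts)
    chord = pts[-1] - pts[0]
    chord = chord / np.linalg.norm(chord)
    d = pts - pts[0]
    dist = np.abs(d[:, 0] * chord[1] - d[:, 1] * chord[0])  # distance to the chord
    return lambdas[int(np.argmax(dist))]

# Usage with a random stand-in for the lead-field matrix A
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 200))                    # 64 electrodes, 200 sources
x_true = rng.normal(size=200)
y = A @ x_true + 0.05 * rng.normal(size=64)
lam = l_curve_lambda(A, y, np.logspace(-4, 2, 30))
x_hat = tikhonov_solve(A, y, lam)
```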

    Holistic Robust Data-Driven Decisions

    Full text link
    The design of data-driven formulations for machine learning and decision-making with good out-of-sample performance is a key challenge. The observation that good in-sample performance does not guarantee good out-of-sample performance is generally known as overfitting. Practical overfitting can typically not be attributed to a single cause but is instead caused by several factors at once. We consider here three overfitting sources: (i) statistical error as a result of working with finite sample data, (ii) data noise, which occurs when the data points are measured only with finite precision, and finally (iii) data misspecification, in which a small fraction of all data may be wholly corrupted. We argue that although existing data-driven formulations may be robust against one of these three sources in isolation, they do not provide holistic protection against all overfitting sources simultaneously. We design a novel data-driven formulation which does guarantee such holistic protection and is furthermore computationally viable. Our distributionally robust optimization formulation can be interpreted as a combination of a Kullback-Leibler and a Lévy-Prokhorov robust optimization formulation, and is novel in its own right. We also show that, in the context of classification and regression problems, several popular regularized and robust formulations reduce to particular cases of our proposed formulation. Finally, we apply the proposed holistic robust (HR) formulation to a portfolio selection problem with real stock data, and analyze its risk/return trade-off against several benchmark formulations. Our experiments show that our novel ambiguity set provides a significantly better risk/return trade-off.
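
    To make the Kullback-Leibler ingredient of the ambiguity set concrete, the sketch below evaluates a worst-case expected loss over all distributions within a KL ball around the empirical distribution, using the standard dual identity sup over {Q : KL(Q||P) <= r} of E_Q[l] = inf over lam > 0 of lam*r + lam*log E_P[exp(l/lam)]. This covers only one of the two components of the paper's HR formulation (the Lévy-Prokhorov part and the combined set are not shown), and the function names and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_robust_loss(losses, radius):
    """Worst-case expected loss over {Q : KL(Q || P_hat) <= radius}, with P_hat
    the empirical distribution of `losses`, computed via the standard dual
    inf_{lam > 0} lam * radius + lam * log mean(exp(losses / lam)).
    Only the KL part of the paper's holistic (KL + Levy-Prokhorov) set."""
    losses = np.asarray(losses, dtype=float)

    def dual(log_lam):
        lam = np.exp(log_lam)                      # parameterize so that lam > 0
        z = losses / lam
        log_mean_exp = z.max() + np.log(np.mean(np.exp(z - z.max())))  # stable
        return lam * radius + lam * log_mean_exp

    return minimize_scalar(dual, bounds=(-10.0, 10.0), method="bounded").fun

# Example: robustified estimate of an expected portfolio loss
sample = np.random.default_rng(1).normal(loc=0.02, scale=0.05, size=500)
print(sample.mean(), kl_robust_loss(sample, radius=0.1))   # robust value >= mean
```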

    Sparse machine learning methods with applications in multivariate signal processing

    Get PDF
    This thesis details theoretical and empirical work that draws from two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity will be employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application-specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals. In addition to testing on benchmark datasets, a series of empirical evaluations on real-world datasets was carried out. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from Electroencephalography (EEG) recordings; and classification of the genre of musical pieces to which a listener is attending from Magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions of future research.
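
    The compressed sensing application mentioned above (reducing the sampling rate of a digital radar) rests on recovering a sparse signal from far fewer random measurements than Nyquist sampling would require. The sketch below is a generic illustration of that principle with a random Gaussian measurement matrix and an L1 (Lasso) reconstruction; it is not the thesis's algorithm, and all sizes and parameters are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Sparse signal: length 512 with only 10 nonzero coefficients
n, k, m = 512, 10, 128                    # m << n measurements
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Random Gaussian measurement matrix (stands in for a reduced-rate sampler)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x + 0.01 * rng.normal(size=m)   # noisy compressed measurements

# L1 reconstruction (basis-pursuit-denoising style, via Lasso)
lasso = Lasso(alpha=0.001, fit_intercept=False, max_iter=50000)
lasso.fit(Phi, y)
x_hat = lasso.coef_

print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.05))
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```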

    Innovations in Quantitative Risk Management

    Get PDF
    Quantitative Finance; Game Theory, Economics, Social and Behav. Sciences; Finance/Investment/Banking; Actuarial Science

    Semi-supervised and unsupervised kernel-based novelty detection with application to remote sensing images

    Get PDF
    The main challenge of new information technologies is to retrieve intelligible information from the large volume of digital data gathered every day. Among the variety of existing data sources, the satellites continuously observing the surface of the Earth are key to the monitoring of our environment. The new generation of satellite sensors is tremendously increasing the possibilities of applications, but also the need for efficient processing methodologies in order to extract information relevant to the users' needs in an automatic or semi-automatic way. This is where machine learning comes into play to transform complex data into simplified products such as maps of land-cover changes or classes, by learning from data examples annotated by experts. These annotations, also called labels, may actually be difficult or costly to obtain since they are established on the basis of ground surveys. As an example, it is extremely difficult to access a region recently flooded or affected by wildfires. In these situations, the detection of changes has to be done with only annotations from unaffected regions. In a similar way, it is difficult to have information on all the land-cover classes present in an image while being interested in the detection of a single one. These challenging situations are called novelty detection or one-class classification in machine learning. In these situations, the learning phase has to rely only on a very limited set of annotations, but can exploit the large set of unlabeled pixels available in the images. This setting, called semi-supervised learning, allows the detection to be significantly improved. In this thesis we address the development of methods for novelty detection and one-class classification with little or no labeled information. The proposed methodologies build upon kernel methods, which provide a principled but flexible framework for learning from data with potentially non-linear feature relations. The thesis is divided into two parts, each one making a different assumption on the data structure and both addressing unsupervised (automatic) and semi-supervised (semi-automatic) learning settings. The first part assumes the data to be formed by arbitrarily shaped and overlapping clusters and studies the use of kernel machines, such as Support Vector Machines or Gaussian Processes. An emphasis is put on the robustness to noise and outliers and on the automatic retrieval of parameters. Experiments on multi-temporal multispectral images for change detection are carried out using only information from unchanged regions or none at all. The second part assumes high-dimensional data to lie on multiple low-dimensional structures, called manifolds. We propose a method seeking a sparse and low-rank representation of the data mapped in a non-linear feature space. This representation allows us to build a graph, which is cut into several groups using spectral clustering. For the semi-supervised case where few labels of one class of interest are available, we study several approaches incorporating the graph information. The class labels can either be propagated on the graph, constrain spectral clustering, or be used to train a one-class classifier regularized by the given graph. Experiments on the unsupervised and one-class classification of hyperspectral images demonstrate the effectiveness of the proposed approaches.
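
    As a minimal example of kernel-based one-class classification in the change-detection setting described above (training data available only from unchanged regions), the sketch below fits an RBF one-class SVM with scikit-learn and flags departures from the learned support as changes. It illustrates the baseline family of methods only; the thesis's robust parameter selection and graph-based semi-supervised extensions are not shown, and the simulated "pixels" are purely illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training pixels: spectral features drawn from "unchanged" regions only
unchanged = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# Test pixels: a mix of unchanged and changed (spectrally shifted) samples
test_unchanged = rng.normal(loc=0.0, scale=1.0, size=(100, 4))
test_changed = rng.normal(loc=3.0, scale=1.0, size=(100, 4))
test = np.vstack([test_unchanged, test_changed])

# RBF one-class SVM: learns the support of the "normal" class only
detector = OneClassSVM(kernel="rbf", gamma=0.25, nu=0.05).fit(unchanged)
pred = detector.predict(test)            # +1 = normal, -1 = novelty (change)

print("flagged as change:", int(np.sum(pred == -1)), "of", len(test))
```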

    Noise Modelling for GRACE Follow-On Observables in the Celestial Mechanics Approach

    Get PDF
    A key to understanding the dynamic system Earth in its current state is the continuous observation of its time-variable gravity field. The satellite missions Gravity Recovery And Climate Experiment (GRACE) and its successor GRACE Follow-On take an exceptional position in sensing these time-variable components because of their unique observing concept, which is based on ultra-precise measurements of distance changes between a pair of satellites separated by a few hundred kilometres. These observations allow for a modelling of the Earth's gravity field, typically on the basis of monthly snapshots. One of the key components of any model is the accurate specification of its quality. In temporal gravity field modelling from GRACE Follow-On data one has to cope with several noise sources contaminating not only the observations but also the observation equations, via mis-modellings in the underlying background force models. When employing the Celestial Mechanics Approach (CMA), developed at the Astronomical Institute of the University of Bern (AIUB), for gravity field modelling from satellite data, a Least-Squares Adjustment (LSQA) is performed to compute monthly models of the Earth's gravity field. However, as a consequence of the various contaminations with noise, the jointly estimated formal errors usually do not reflect the error level that could be expected, but provide much lower error estimates. One way to deal with such deficiencies in the observations and modelling is to extend the parameter space, i.e., the model, by additional quantities, such as pseudo-stochastic parameters, which are co-estimated in the LSQA. These parameters are meant to absorb any kind of noise while retaining the signal in the gravity field and orbit parameters. In the CMA such pseudo-stochastic parameters are typically set up as Piece-wise Constant Accelerations (PCAs) at regular intervals of, e.g., 15 min. The stochastic behaviour of these parameters is unknown because they reflect an accumulation of a variety of noise sources. In the CMA, fictitious zero-observations are appended to the vector of observations, together with an empirically determined variance, to introduce a stochastic model for the PCAs. In order to also co-estimate a stochastic model for the pseudo-stochastic parameters in the LSQA, Variance Component Estimation (VCE) is used in this work as a well-established tool to assign variance components to individual groups of observations. In the simplest case the magnitude of the constraints on the pseudo-stochastic parameters can be determined fully automatically. Additionally, VCE is applied as an on-the-fly data screening method to account for gross outliers in the observations. Addressing the problem of noise contamination from the point of view of the GRACE Follow-On satellite mission's observations, this work presents the incorporation of several noise models into the CMA, to obtain not only high-quality time-variable gravity field models but also an accurate description of their stochastic behaviour. The noise models applied stem from pre-launch simulations or from the formal covariance propagation of a kinematic point positioning process. Furthermore, the derivation and application of empirical noise models obtained from post-fit residuals between the final GRACE Follow-On orbits, which are co-estimated together with the gravity field, and the observations, expressed as position residuals with respect to the kinematic positions and as inter-satellite link range-rate residuals, is implemented.
Additionally, the current operational processing scheme of GRACE Follow-On data is expounded, including the normal equation handling in the CMA with BLAS and LAPACK routines. All implementations are compared with and validated against the operational GRACE Follow-On processing at the AIUB by examining the stochastic behaviour of the respective post-fit residuals and by investigating areas on Earth where low noise is expected. Finally, the influence and behaviour of the different noise modelling techniques is investigated in a combination of monthly gravity fields computed by various institutions, as done by the Combination Service for Time-variable Gravity fields (COST-G).
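
    To illustrate the variance component estimation step in a least-squares adjustment, the hedged sketch below iterates Foerstner-type variance components for two uncorrelated observation groups that share one parameter vector, using the redundancy contributions r_i = n_i - trace(N^{-1} N_i). It is a minimal stand-in under these assumptions; the CMA applies VCE to full GRACE Follow-On normal equation systems with many observation groups, including the pseudo-stochastic parameter constraints, and the function name is illustrative.

```python
import numpy as np

def vce_two_groups(A1, y1, A2, y2, n_iter=20):
    """Iterative variance component estimation for two uncorrelated observation
    groups sharing the parameter vector x. Returns x and the variance
    components (s1, s2). Minimal sketch of the general VCE mechanism only."""
    s1, s2 = 1.0, 1.0                               # initial variance components
    for _ in range(n_iter):
        N1, N2 = A1.T @ A1 / s1, A2.T @ A2 / s2     # weighted normal matrices
        b = A1.T @ y1 / s1 + A2.T @ y2 / s2
        N = N1 + N2
        x = np.linalg.solve(N, b)
        Ninv = np.linalg.inv(N)
        v1, v2 = A1 @ x - y1, A2 @ x - y2           # post-fit residuals
        # redundancy contributions r_i = n_i - trace(N^-1 N_i)
        r1 = len(y1) - np.trace(Ninv @ N1)
        r2 = len(y2) - np.trace(Ninv @ N2)
        s1 = float(v1 @ v1 / r1)
        s2 = float(v2 @ v2 / r2)
    return x, s1, s2

# Toy example: two groups with different (unknown) noise levels
rng = np.random.default_rng(0)
x_true = rng.normal(size=4)
A1, A2 = rng.normal(size=(200, 4)), rng.normal(size=(150, 4))
y1 = A1 @ x_true + 0.5 * rng.normal(size=200)
y2 = A2 @ x_true + 2.0 * rng.normal(size=150)
x_est, s1, s2 = vce_two_groups(A1, y1, A2, y2)
print(np.sqrt(s1), np.sqrt(s2))    # should approach 0.5 and 2.0
```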

    Image Guided Respiratory Motion Analysis: Time Series and Image Registration.

    Full text link
    The efficacy of image-guided radiation therapy (IGRT) systems relies on accurately extracting, modeling and predicting tumor movement with imaging techniques. This thesis investigates two key problems associated with such systems: motion modeling and image processing. For thoracic and upper abdominal tumors, respiratory motion is the dominant factor for tumor movement. We have studied several specially structured time series analysis techniques to incorporate the semi-periodic characteristics of respiratory motion. The proposed methods are robust towards large variations among fractions and populations; the algorithms perform stably in the presence of sparse radiographic observations with noise. We have proposed a subspace projection method to quantitatively evaluate the semi-periodicity of a given observation trace; a nonparametric local regression approach for real-time prediction of respiratory motion; a state augmentation scheme to model hysteresis; and an ellipse tracking algorithm to estimate the trend of respiratory motion in real time. For image processing, we have focused on designing regularizations to account for prior information in image registration problems. We investigated a penalty function design that accommodates tissue-type-dependent elasticity information. We studied a class of discontinuity-preserving regularizers that yield smooth deformation estimates in most regions, yet allow discontinuities supported by the data. We have further proposed a discriminative regularizer that preserves shear discontinuity, but discourages folding or vacuum-generating flows. In addition, we have initiated a preliminary principled study on the fundamental performance limit of image registration problems. We proposed a statistical generative model to account for noise effects in both source and target images, and investigated the approximate performance of the maximum-likelihood estimator corresponding to the generative model and of the commonly adopted M-estimator. A simple example suggests that the approximation is reasonably accurate. Our studies in both time series analysis and image registration constitute essential building blocks for clinical applications such as adaptive treatment. Besides their theoretical interest, it is our sincere hope that with further justification, the proposed techniques will realize their clinical value and improve the quality of life for patients.
    Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/60673/1/druan_1.pd
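
    As a toy illustration of nonparametric local regression for real-time respiratory motion prediction, the sketch below maps the most recent lagged displacements to a value a few samples ahead with Gaussian kernel weights (Nadaraya-Watson style). It is a hedged simplification; the thesis's estimator, state augmentation for hysteresis, and evaluation protocol are not reproduced, and all parameters and names are illustrative.

```python
import numpy as np

def local_regression_predict(history, horizon=5, order=3, bandwidth=0.5):
    """Predict the breathing displacement `horizon` samples ahead from a 1-D
    trace, using Nadaraya-Watson style kernel regression on lagged states.
    Minimal sketch of nonparametric local regression for respiratory motion."""
    x = np.asarray(history, dtype=float)
    # Build (state, future value) training pairs from the observed trace
    states, targets = [], []
    for t in range(order, len(x) - horizon):
        states.append(x[t - order:t])
        targets.append(x[t + horizon])
    states, targets = np.array(states), np.array(targets)

    query = x[-order:]                              # most recent state
    d2 = np.sum((states - query) ** 2, axis=1)      # squared distances to query
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
    return float(np.sum(w * targets) / (np.sum(w) + 1e-12))

# Example on a noisy semi-periodic trace
t = np.arange(0, 60, 0.2)
trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(local_regression_predict(trace, horizon=5))
```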