Multiresolution Moment Filters: Theory and Applications
We introduce local weighted geometric moments that are computed from an image within a sliding window at multiple scales. When the window function satisfies a two-scale relation, we prove that lower-order moments can be computed efficiently at dyadic scales by using a multiresolution wavelet-like algorithm. We show that B-splines are well-suited window functions because, in addition to being refinable, they are positive, symmetric, separable, and very nearly isotropic (Gaussian shape). We present three applications of these multiscale local moments. The first is a feature-extraction method for detecting and characterizing elongated structures in images. The second is a noise-reduction method that can be viewed as a multiscale extension of Savitzky-Golay filtering. The third is a multiscale optical-flow algorithm that uses a local affine model for the motion field, extending the Lucas-Kanade optical-flow method. The results obtained in all cases are promising.
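The moment computation described above has a direct brute-force counterpart: the order-p local moment is a correlation of the signal with a moment-weighted window. A minimal 1D numpy sketch follows; the window, signal, and helper name are illustrative, and the paper's fast wavelet-like dyadic-scale algorithm is not reproduced here.

```python
import numpy as np

def local_moments(signal, window, max_order=2):
    """Local weighted moments of a 1D signal inside a sliding window:
    m_p(x) = sum_u window(u) * u**p * signal(x + u),
    computed as a correlation of the signal with moment-weighted windows.
    Hypothetical helper; a 2D version would use separable windows."""
    half = len(window) // 2
    u = np.arange(-half, half + 1)            # window coordinate
    moments = []
    for p in range(max_order + 1):
        kernel = window * u**p                # moment-weighted window
        # np.convolve convolves, so flip the kernel to get correlation
        moments.append(np.convolve(signal, kernel[::-1], mode="same"))
    return moments

# A symmetric, positive, normalized window (binomial weights as a
# stand-in for a true refinable B-spline window).
w = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
f = np.linspace(0.0, 1.0, 11)                 # a linear ramp
m0, m1, m2 = local_moments(f, w)
# Away from the borders, m0 reproduces the ramp and m1 is proportional
# to its constant slope.
```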
Variational Image Reconstruction from Arbitrarily Spaced Samples: A Fast Multiresolution Spline Solution
We propose a novel method for image reconstruction from nonuniform samples with no constraints on their locations. We adopt a variational approach where the reconstruction is formulated as the minimizer of a cost that is a weighted sum of two terms: 1) the sum of squared errors at the specified points and 2) a quadratic functional that penalizes the lack of smoothness. We search for a solution that is a uniform spline and show how it can be determined by solving a large, sparse system of linear equations. We interpret the solution of our approach as an approximation of the analytical solution, which involves radial basis functions, and demonstrate the computational advantages of our approach. Using the two-scale relation for B-splines, we derive an algebraic relation that links the linear systems of equations specifying reconstructions at different levels of resolution. We use this relation to develop a fast multigrid algorithm. We demonstrate the effectiveness of our approach on several image reconstruction examples.
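The variational formulation above can be illustrated with a small discrete 1D analogue: a diagonal data term plus a second-difference smoothness penalty, solved as one linear system. This is a hypothetical simplification; the paper works in a B-spline basis and accelerates the solve with multigrid.

```python
import numpy as np

def reconstruct_1d(sample_idx, sample_val, n, lam=0.1):
    """Minimize sum_k (c[x_k] - y_k)^2 + lam * ||D2 c||^2, where D2 is
    the second-difference operator (a stand-in for the spline smoothness
    penalty). Solves the normal equations A c = b directly; the paper's
    sparse spline system would be solved with multigrid instead."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, y in zip(sample_idx, sample_val):   # data term at sample sites
        A[i, i] += 1.0
        b[i] += y
    D2 = np.zeros((n - 2, n))                  # second differences
    for r in range(n - 2):
        D2[r, r:r + 3] = [1.0, -2.0, 1.0]
    A += lam * D2.T @ D2                       # smoothness term
    return np.linalg.solve(A, b)

# Three scattered samples lying on a line: the smoothness penalty
# fills the gaps with the straight line through them.
c = reconstruct_1d([0, 4, 9], [0.0, 4.0, 9.0], n=10, lam=0.01)
```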
Myocardial Motion Analysis from B-Mode Echocardiograms
The quantitative assessment of cardiac motion is fundamental for evaluating ventricular dysfunction. We present a new optical-flow-based method for estimating heart motion from two-dimensional echocardiographic sequences. To account for typical heart motions, such as contraction/expansion and shear, we analyze the images locally by using a local-affine model for the velocity in space and a linear model in time. The regional motion parameters are estimated in the least-squares sense inside a sliding spatiotemporal B-spline window. Robustness and spatial adaptability are achieved by estimating the model parameters at multiple scales within a coarse-to-fine multiresolution framework. We use a wavelet-like algorithm for computing B-spline-weighted inner products and moments at dyadic scales to increase computational efficiency. In order to characterize myocardial contractility and to simplify the detection of myocardial dysfunction, the radial component of the velocity with respect to a reference point is color-coded and visualized inside a time-varying region of interest. The algorithm was first validated on synthetic data sets that simulate a beating heart with the speckle-like appearance of echocardiograms. The ability to estimate motion from real ultrasound sequences was demonstrated by a rotating-phantom experiment. The method was also applied to a set of in vivo echocardiograms from an animal study. Motion estimation results were in good agreement with the expert echocardiographic reading.
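The window-local least-squares estimation of the affine motion parameters can be sketched as follows, assuming idealized brightness-constancy data; the positions, gradients, and constant ground-truth flow are synthetic stand-ins for real image derivatives inside a B-spline window.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic window data: random spatial gradients at random positions,
# with temporal derivatives generated from a known constant motion so
# the brightness-constancy equation Ix*vx + Iy*vy + It = 0 holds
# exactly at every pixel.
n = 50
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
Ix, Iy = rng.normal(size=n), rng.normal(size=n)
true_v = np.array([0.5, -0.2])                 # constant (vx, vy)
It = -(Ix * true_v[0] + Iy * true_v[1])

# Local affine model v(x, y) = (a1*x + a2*y + b1, a3*x + a4*y + b2):
# each pixel contributes one linear equation in the six parameters.
M = np.column_stack([Ix * x, Ix * y, Iy * x, Iy * y, Ix, Iy])
params, *_ = np.linalg.lstsq(M, -It, rcond=None)
a1, a2, a3, a4, b1, b2 = params
# For a purely translational flow, the affine part vanishes and
# (b1, b2) recovers the flow.
```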
Multiscale optical flow computation from the monogenic signal
We have developed an algorithm for the estimation of cardiac motion from medical images. The algorithm exploits monogenic signal theory, recently introduced as an N-dimensional generalization of the analytic signal. The displacement is computed locally by assuming the conservation of the monogenic phase over time. A local affine displacement model replaces the standard translation model to account for more complex motions such as contraction/expansion and shear. A coarse-to-fine B-spline scheme allows a robust and effective computation of the model's parameters, and a pyramidal refinement scheme helps handle large motions. Robustness against noise is increased by replacing the standard pointwise computation of the monogenic orientation with a more robust least-squares orientation estimate. This paper reviews the results obtained on simulated cardiac images from different modalities, namely 2D and 3D cardiac ultrasound and tagged magnetic resonance. We also show how the proposed algorithm represents a valuable alternative to state-of-the-art algorithms in the respective fields.
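The monogenic signal itself can be computed directly in the Fourier domain via the Riesz transform. A minimal numpy sketch, which omits the B-spline filtering, the least-squares orientation estimate, and the affine displacement model used by the algorithm:

```python
import numpy as np

def monogenic(f):
    """Monogenic signal of a 2D image via the Riesz transform computed
    in the Fourier domain: rx, ry are the two Riesz components, from
    which local amplitude, phase, and orientation follow."""
    ny, nx = f.shape
    wx = np.fft.fftfreq(nx)[None, :]
    wy = np.fft.fftfreq(ny)[:, None]
    norm = np.hypot(wx, wy)
    norm[0, 0] = 1.0                      # avoid division by zero at DC
    F = np.fft.fft2(f)
    rx = np.real(np.fft.ifft2(-1j * wx / norm * F))   # Riesz x-part
    ry = np.real(np.fft.ifft2(-1j * wy / norm * F))   # Riesz y-part
    amplitude = np.sqrt(f**2 + rx**2 + ry**2)
    phase = np.arctan2(np.hypot(rx, ry), f)           # in [0, pi]
    orientation = np.arctan2(ry, rx)
    return amplitude, phase, orientation

# A horizontal cosine pattern: the amplitude should be ~1 everywhere
# and the local orientation should point along x.
y, x = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * 4 * x / 64)
amp, ph, ori = monogenic(img)
```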
Full Motion and Flow Field Recovery from Echo Doppler Data
We present a new computational method for reconstructing a vector velocity field from scattered, pulsed-wave ultrasound Doppler data. The main difficulty is that the Doppler measurements are incomplete, because they capture only the velocity component along the beam direction. We thus propose to combine measurements from different beam directions. However, this is not yet sufficient to make the problem well posed, because 1) the angle between the directions is typically small and 2) the data are noisy and nonuniformly sampled. We propose to solve this reconstruction problem in the continuous domain using regularization. The reconstruction is formulated as the minimizer of a cost that is a weighted sum of two terms: 1) the sum of squared differences between the Doppler data and the projected velocities and 2) a quadratic regularization functional that imposes some smoothness on the velocity field. We express the solution of this minimization problem in a B-spline basis, obtaining a sparse system of equations that can be solved efficiently. Using synthetic phantom data, we demonstrate the importance of tuning the regularization according to a priori knowledge about the physical properties of the motion. Next, we validate our method using real phantom data for which the ground truth is known. We then present reconstruction results obtained from clinical data that originate from 1) blood flow in the carotid bifurcation and 2) cardiac wall motion.
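The core difficulty, recovering a vector from its beam-direction projections, can be illustrated pointwise: plain least squares works for well-separated beams but becomes badly conditioned for nearly parallel ones, which is what motivates the smoothness prior. A sketch with invented beam directions and velocity:

```python
import numpy as np

def recover_velocity(directions, measurements):
    """Least-squares recovery of one velocity vector from its
    projections onto several beam directions (a pointwise toy version;
    the paper solves for a whole field with B-spline regularization)."""
    D = np.asarray(directions, dtype=float)    # one unit vector per row
    m = np.asarray(measurements, dtype=float)
    v, *_ = np.linalg.lstsq(D, m, rcond=None)
    return v, np.linalg.cond(D)

true_v = np.array([1.0, 2.0])

def beam(angle):
    return np.array([np.cos(angle), np.sin(angle)])

# Well-separated beams: stable recovery.
D_wide = [beam(0.0), beam(np.pi / 2)]
v_wide, cond_wide = recover_velocity(D_wide, [d @ true_v for d in D_wide])

# Nearly parallel beams: same formula, but badly conditioned, which is
# why the full problem needs regularization.
D_narrow = [beam(0.0), beam(0.05)]
v_narrow, cond_narrow = recover_velocity(D_narrow,
                                         [d @ true_v for d in D_narrow])
```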
Pigments for Coloring Building Materials
Ceramic pigments were obtained using a technogenic silica-containing waste, a spent vanadium catalyst. Along with the predominant mullite phase, corundum is identified in the pigments. X-ray phase analysis established that chromium and iron oxides are incorporated into the structure up to a concentration of 10 wt % and do not separate out in free form. In cobalt-containing pigments, the spinel CoAl2O4 forms. The developed pigments withstand a firing temperature of 1200 °C and can be recommended for producing ceramic paints and colored glazes and for coloring building materials.
Handling Label Uncertainty on the Example of Automatic Detection of Shepherd's Crook RCA in Coronary CT Angiography
Coronary artery disease (CAD) is often treated minimally invasively with a catheter being inserted into the diseased coronary vessel. If a patient exhibits a Shepherd's Crook (SC) Right Coronary Artery (RCA), an anatomical norm variant of the coronary vasculature, the complexity of this procedure is increased. Automated reporting of this variant from coronary CT angiography screening would ease prior risk assessment. We propose a 1D convolutional neural network which leverages a sequence of residual dilated convolutions to automatically determine this norm variant from a previously extracted vessel centerline. As the SC RCA is not clearly defined with respect to concrete measurements, labeling also includes qualitative aspects. Therefore, 4.23% of the samples in our dataset of 519 RCA centerlines were labeled as unsure SC RCAs, with 5.97% being labeled as sure SC RCAs. We explore measures to handle this label uncertainty, namely global/model-wise random assignment, exclusion, and soft label assignment. Furthermore, we evaluate how this uncertainty can be leveraged for the determination of a rejection class. With our best configuration, we reach an area under the receiver operating characteristic curve (AUC) of 0.938 on confident labels. Moreover, we observe an increase of up to 0.020 AUC when rejecting 10% of the data and leveraging the labeling uncertainty information in the exclusion process. Comment: Accepted at ISBI 202
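The residual dilated convolutions mentioned above can be sketched in plain numpy; the kernel values, dilations, and block structure below are illustrative stand-ins, not the trained network.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1D convolution with a dilated kernel (a plain
    numpy stand-in for the network's building block)."""
    k = len(kernel)
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

def residual_block(x, kernel, dilation):
    """Residual dilated convolution: output = x + ReLU(conv(x))."""
    return x + np.maximum(dilated_conv1d(x, kernel, dilation), 0.0)

# Stacking blocks with dilations 1, 2, 4, ... grows the receptive
# field quickly, letting the model see long centerline contexts:
# an impulse spreads to a half-width of 1 + 2 + 4 = 7 samples.
signal = np.zeros(16)
signal[8] = 1.0
y = signal
for d in (1, 2, 4):
    y = residual_block(y, np.array([0.25, 0.5, 0.25]), d)
```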
Spatio-Temporal Nonrigid Registration for Ultrasound Cardiac Motion Estimation
We propose a new spatio-temporal elastic registration algorithm for motion reconstruction from a series of images. The specific application is to estimate displacement fields from two-dimensional ultrasound sequences of the heart. The basic idea is to find a spatio-temporal deformation field that effectively compensates for the motion by minimizing a difference with respect to a reference frame. The key features of our method are the use of a semi-local spatio-temporal parametric spline model for the deformation and the reformulation of the registration task as a global optimization problem. The scale of the spline model controls the smoothness of the displacement field. Our algorithm uses a multiresolution optimization strategy to improve speed and robustness. We evaluated the accuracy of our algorithm using a synthetic sequence generated with an ultrasound simulation package, together with a realistic cardiac motion model. We compared our new global multiframe approach with a previous method based on pairwise registration of consecutive frames to demonstrate the benefits of introducing temporal consistency. Finally, we applied the algorithm to the regional analysis of the left ventricle. Displacement and strain parameters were evaluated, showing significant differences between the normal and pathological segments and thereby illustrating the clinical applicability of our method.
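The role of the spline scale can be illustrated with a toy 1D displacement model: control-point coefficients spaced every `scale` samples, expanded with a linear B-spline (hat) basis. This is a simplified stand-in; the method uses higher-order splines and spatio-temporal fields.

```python
import numpy as np

def spline_displacement(coeffs, n, scale):
    """Displacement field sampled at n points from control-point
    coefficients placed every `scale` samples, using a linear B-spline
    (hat) basis. Hypothetical helper: coarser `scale` means fewer
    parameters and a smoother field."""
    x = np.arange(n, dtype=float)
    u = np.zeros(n)
    for k, c in enumerate(coeffs):
        t = np.abs(x / scale - k)
        u += c * np.maximum(1.0 - t, 0.0)   # hat centered at k * scale
    return u

# Three control points spaced 8 samples apart over 17 samples:
# the field ramps linearly up to the middle control point and back.
u_coarse = spline_displacement([0.0, 1.0, 0.0], n=17, scale=8)
```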
The Technome - a predictive internal calibration approach for quantitative imaging biomarker research
The goal of radiomics is to convert medical images into a minable data space by extracting quantitative imaging features for clinically relevant analyses, e.g., prediction of a patient's survival time. One problem of radiomics from computed tomography is the impact of technical variation, such as variation of the reconstruction kernel within a study. Additionally, the impact of inter-patient technical variation resulting from patient characteristics is often neglected, even when scan and reconstruction parameters are constant. In our approach, measurements within 3D regions of interest (ROIs) are calibrated by further ROIs, such as air, adipose tissue, and liver, that are used as control regions (CRs). Our goal is to derive general rules for an automated internal calibration that enhances prediction, based on the analysed features and a set of CRs. We define qualification criteria, motivated by status-quo radiomics stability-analysis techniques, to collect only the information from the CRs that is relevant for the respective task. These criteria are used in an optimisation to automatically derive a suitable internal calibration for prediction tasks based on the CRs. Our calibration enhanced the performance of centrilobular-emphysema prediction in a COPD study and of predicting patients' one-year survival in an oncological study.
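The internal-calibration idea can be illustrated with a hypothetical two-point linear calibration against two control regions; the reference levels and the specific calibration form are assumptions for illustration, not the optimised rules derived in the paper.

```python
def calibrate(roi_value, cr_values):
    """Internal calibration of an ROI measurement against control
    regions: rescale the scan so the control regions land on fixed
    reference values (a hypothetical two-point linear calibration).

    cr_values: measured (air, adipose) intensities in this scan."""
    air_ref, fat_ref = -1000.0, -100.0     # assumed reference levels
    air, fat = cr_values
    scale = (fat_ref - air_ref) / (fat - air)
    return air_ref + (roi_value - air) * scale

# Two scans of the same tissue, one shifted and scaled by technical
# variation; after internal calibration the measurements agree.
scan_a = calibrate(-850.0, (-1000.0, -100.0))   # well-calibrated scan
scan_b = calibrate(-836.0, (-980.0, -116.0))    # drifted scan
```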