6,272 research outputs found
Segmentation of articular cartilage and early osteoarthritis based on the fuzzy soft thresholding approach driven by modified evolutionary ABC optimization and local statistical aggregation
Articular cartilage assessment, aimed at identifying cartilage loss, is a crucial task in the clinical practice of orthopedics. Conventional software (SW) instruments allow only visualization of the knee structure, without post-processing that would offer objective cartilage modeling. In this paper, we propose a multiregional segmentation method that aims to provide a mathematical model reflecting the physiological morphological structure of the cartilage, together with spots corresponding to early cartilage loss, which is poorly recognizable by the naked eye in magnetic resonance imaging (MRI). The proposed segmentation model consists of two pixel-classification stages. First, the image histogram is decomposed using a sequence of triangular fuzzy membership functions, whose localization is driven by a modified artificial bee colony (ABC) optimization algorithm utilizing a random sequence of candidate solutions based on real cartilage features. In the second stage, the original membership of each pixel in its segmentation class may be modified by local statistical aggregation, which takes into account spatial relationships with adjacent pixels. In this way, image noise and artefacts, which are commonly present in MR images, can be identified and eliminated, making the model robust with regard to distorting signals. We analyzed the proposed model on 2D spatial MR image records and show different clinical MR cases of articular cartilage segmentation with identification of cartilage loss. In the final part of the analysis, we compared our model's performance against selected conventional methods applied to MR image records corrupted by additive image noise.
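The two-stage idea from the abstract can be sketched in a few lines. This is a minimal toy, not the paper's method: the fuzzy class centres are fixed by hand (the paper places them with a modified ABC optimizer, omitted here), the "image" is synthetic, and a plain 3x3 neighbourhood mean stands in for the local statistical aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-region "image": left half dark, right half bright, plus noise.
truth = np.zeros((32, 32), dtype=int)
truth[:, 16:] = 1
img = np.where(truth == 1, 200.0, 50.0) + rng.normal(0, 40, size=(32, 32))

def tri_membership(x, centre, width):
    """Triangular fuzzy membership: 1 at the centre, falling to 0 at +-width."""
    return np.clip(1.0 - np.abs(x - centre) / width, 0.0, 1.0)

centres, width = [50.0, 200.0], 150.0  # assumed; the paper optimizes placement

# Stage 1: per-pixel memberships over the intensity range, hard labels by argmax.
mem = np.stack([tri_membership(img, c, width) for c in centres])  # (2, 32, 32)
labels_raw = mem.argmax(axis=0)

# Stage 2: aggregate memberships over each pixel's 3x3 neighbourhood before
# relabelling, so isolated noisy pixels are outvoted by their surroundings.
agg = np.zeros_like(mem)
for di in (-1, 0, 1):
    for dj in (-1, 0, 1):
        agg += np.roll(np.roll(mem, di, axis=1), dj, axis=2)
labels_agg = agg.argmax(axis=0)

acc_raw = (labels_raw == truth).mean()
acc_agg = (labels_agg == truth).mean()
```

The aggregation step is what gives the model its noise robustness: a pixel misclassified in stage 1 is usually relabelled correctly once its neighbours' memberships are pooled.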
Geometric reconstruction methods for electron tomography
Electron tomography is becoming an increasingly important tool in materials
science for studying the three-dimensional morphologies and chemical
compositions of nanostructures. The image quality obtained by many current
algorithms is seriously affected by the problems of missing wedge artefacts and
nonlinear projection intensities due to diffraction effects. The former refers
to the fact that data cannot be acquired over the full tilt range;
the latter implies that for some orientations, crystalline structures can show
strong contrast changes. To overcome these problems we introduce and discuss
several algorithms from the mathematical fields of geometric and discrete
tomography. The algorithms incorporate geometric prior knowledge (mainly
convexity and homogeneity), which also in principle considerably reduces the
number of tilt angles required. Results are discussed for the reconstruction of
an InAs nanowire.
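The effect of a discrete-tomography prior can be seen in a tiny sketch. Everything here is an assumption of the toy: a binary phantom is projected along only two directions (rows and columns, an extreme "missing wedge"), reconstructed by gradient descent on the data misfit, and then snapped to {0, 1} using the homogeneity prior; real electron tomography uses many tilt angles and far more sophisticated algorithms.

```python
import numpy as np

n = 16
phantom = np.zeros((n, n))
phantom[4:12, 5:11] = 1.0  # a convex, homogeneous object

row_sums = phantom.sum(axis=1)
col_sums = phantom.sum(axis=0)

def forward(x):
    # Projections along two directions only: row sums and column sums.
    return x.sum(axis=1), x.sum(axis=0)

def backward(u, v):
    # Adjoint of `forward`: spread each projection value back over its line.
    return u[:, None] + v[None, :]

x = np.zeros((n, n))
step = 1.0 / (2 * n)  # 2n bounds the largest eigenvalue of A^T A here
for _ in range(200):
    ru, rv = forward(x)
    x -= step * backward(ru - row_sums, rv - col_sums)

# Homogeneity prior: the object is known to be binary, so threshold at 0.5.
recon = (x > 0.5).astype(float)
```

The unconstrained least-squares solution is badly blurred with only two projection directions, but thresholding with the binary prior recovers the convex phantom exactly, illustrating why such priors can drastically reduce the number of tilt angles needed.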
Near real-time flood detection in urban and rural areas using high resolution Synthetic Aperture Radar images
A near real-time flood detection algorithm giving a synoptic overview of the extent of flooding in both urban and rural areas, and capable of working by day or night even in the presence of cloud, could be a useful tool for operational flood relief management. The paper describes an automatic algorithm using high resolution Synthetic Aperture Radar (SAR) satellite data that builds on existing approaches, including the use of image segmentation techniques prior to object classification to cope with the very large number of pixels in these scenes. Flood detection in urban areas is guided by the flood extent derived in adjacent rural areas. The algorithm assumes that high resolution topographic height data are available for at least the urban areas of the scene, so that a SAR simulator may be used to estimate areas of radar shadow and layover. The algorithm proved capable of detecting flooding in rural areas using TerraSAR-X with good accuracy, classifying 89% of flooded pixels correctly, with an associated false positive rate of 6%. Of the urban water pixels visible to TerraSAR-X, 75% were correctly detected, with a false positive rate of 24%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 57% and 18% respectively.
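One building block of such a pipeline is choosing a backscatter threshold that separates dark open water from land, while leaving shadow/layover pixels unclassified. The sketch below is a loose illustration under stated assumptions: the synthetic backscatter statistics, the shadow/layover fraction, and the use of Otsu's method are all choices made for the toy; the paper's algorithm adds segmentation, object classification, and rural-to-urban guidance on top of simple thresholding.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000
is_water = rng.random(n) < 0.3
# Backscatter in dB: flooded pixels are markedly darker than land.
sigma0 = np.where(is_water, rng.normal(-15, 2, n), rng.normal(-5, 2, n))
# Mask of pixels in radar shadow or layover (in practice from a SAR
# simulator driven by topographic height data).
unreliable = rng.random(n) < 0.05

def otsu_threshold(x, bins=256):
    """Threshold maximizing between-class variance of a bimodal histogram."""
    hist, edges = np.histogram(x, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    w0 = np.cumsum(p)               # cumulative weight of the lower class
    m0 = np.cumsum(p * centres)     # cumulative mean of the lower class
    m_total = m0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m_total * w0 - m0) ** 2 / (w0 * (1.0 - w0))
    between = np.nan_to_num(between)
    return centres[np.argmax(between)]

t = otsu_threshold(sigma0[~unreliable])
flood = (sigma0 < t) & ~unreliable

valid = ~unreliable
detected = (flood & is_water & valid).sum() / (is_water & valid).sum()
false_pos = (flood & ~is_water & valid).sum() / (~is_water & valid).sum()
```

Excluding the shadow/layover mask before thresholding matters: those pixels are dark for geometric reasons and would otherwise inflate the false positive rate, which is exactly why the paper treats them separately in urban areas.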
Painless Breakups -- Efficient Demixing of Low Rank Matrices
Assume we are given a sum of linear measurements of several different low-rank
matrices. When and under which conditions is it possible to extract (demix) the
individual matrices from the single measurement vector? And can we do the
demixing
numerically efficiently? We present two computationally efficient algorithms
based on hard thresholding to solve this low rank demixing problem. We prove
that under suitable conditions these algorithms are guaranteed to converge to
the correct solution at a linear rate. We discuss applications in connection
with quantum tomography and the Internet-of-Things. Numerical simulations
demonstrate empirically the performance of the proposed algorithms.
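The demixing-by-hard-thresholding idea can be sketched on a toy instance. All specifics below are assumptions of the sketch, not the paper's setup: two rank-1 matrices observed through a single vector of random Gaussian linear measurements, recovered by alternating gradient steps each followed by projection back onto the rank-1 matrices (hard thresholding of singular values).

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 8, 1, 400  # matrix size, rank, number of measurements (assumed)

def rand_low_rank():
    return np.outer(rng.standard_normal(n), rng.standard_normal(n))

X1, X2 = rand_low_rank(), rand_low_rank()
# Known measurement maps acting on vectorized matrices; the single observed
# vector is y = A1 vec(X1) + A2 vec(X2).
A1 = rng.standard_normal((m, n * n)) / np.sqrt(m)
A2 = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A1 @ X1.ravel() + A2 @ X2.ravel()

def hard_threshold(x, rank):
    """Project a vectorized matrix onto the set of rank-`rank` matrices."""
    U, s, Vt = np.linalg.svd(x.reshape(n, n), full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

Z1 = np.zeros((n, n))
Z2 = np.zeros((n, n))
for _ in range(300):
    res = y - A1 @ Z1.ravel() - A2 @ Z2.ravel()
    Z1 = hard_threshold(Z1.ravel() + A1.T @ res, r)
    res = y - A1 @ Z1.ravel() - A2 @ Z2.ravel()
    Z2 = hard_threshold(Z2.ravel() + A2.T @ res, r)

err1 = np.linalg.norm(Z1 - X1) / np.linalg.norm(X1)
err2 = np.linalg.norm(Z2 - X2) / np.linalg.norm(X2)
```

Because A1 and A2 are distinct known operators, the two components are individually identifiable; with enough random measurements the iteration shrinks the error geometrically, which is the linear convergence behaviour the paper establishes under its conditions.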
A concave pairwise fusion approach to subgroup analysis
An important step in developing individualized treatment strategies is to
correctly identify subgroups of a heterogeneous population, so that specific
treatment can be given to each subgroup. In this paper, we consider the
situation with samples drawn from a population consisting of subgroups with
different means, along with certain covariates. We propose a penalized approach
for subgroup analysis based on a regression model, in which heterogeneity is
driven by unobserved latent factors and thus can be represented by using
subject-specific intercepts. We apply concave penalty functions to pairwise
differences of the intercepts. This procedure automatically divides the
observations into subgroups. We develop an alternating direction method of
multipliers algorithm with concave penalties to implement the proposed approach
and demonstrate its convergence. We also establish the theoretical properties
of our proposed estimator and determine the order requirement of the minimal
difference of signals between groups in order to recover them. These results
provide a sound basis for making statistical inference in subgroup analysis.
Our proposed method is further illustrated by simulation studies and analysis
of the Cleveland heart disease dataset.
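The pairwise-fusion idea can be sketched in the simplest setting: an intercept-only model y_i = mu_i + e_i, with a minimax concave penalty (MCP) on every pairwise difference mu_i - mu_j, solved by ADMM. The data, the tuning constants (lam, gamma, rho), and the omission of covariates are all assumptions of this toy; the paper treats the general regression model and establishes convergence and theoretical properties.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
mu_true = np.r_[np.zeros(10), 3.0 * np.ones(10)]  # two latent subgroups
y = mu_true + 0.3 * rng.standard_normal(n)

# Pairwise difference operator D: row p encodes mu_i - mu_j for pair (i, j).
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
D = np.zeros((len(pairs), n))
for p, (i, j) in enumerate(pairs):
    D[p, i], D[p, j] = 1.0, -1.0

lam, gamma, rho = 0.6, 3.0, 1.0  # this MCP prox requires gamma * rho > 1

def mcp_prox(z):
    """Proximal map of the MCP at augmented parameter rho: soft-threshold
    then rescale for small |z|, leave large differences unpenalized."""
    soft = np.sign(z) * np.maximum(np.abs(z) - lam / rho, 0.0)
    return np.where(np.abs(z) <= gamma * lam,
                    soft / (1.0 - 1.0 / (gamma * rho)), z)

M = np.linalg.inv(np.eye(n) + rho * D.T @ D)  # fixed mu-update system
mu, eta, v = y.copy(), D @ y, np.zeros(len(pairs))
for _ in range(200):
    mu = M @ (y + D.T @ (rho * eta - v))   # mu-step: ridge-type solve
    eta = mcp_prox(D @ mu + v / rho)       # eta-step: concave thresholding
    v += rho * (D @ mu - eta)              # dual ascent on the constraint
```

The concavity matters: small within-group differences are driven exactly to zero (fusing observations into subgroups), while the large between-group differences fall on the flat part of the MCP and escape shrinkage, so the estimated group separation is nearly unbiased.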
Covariance Estimation: The GLM and Regularization Perspectives
Finding an unconstrained and statistically interpretable reparameterization
of a covariance matrix is still an open problem in statistics. Its solution is
of central importance in covariance estimation, particularly in the recent
high-dimensional data environment where enforcing the positive-definiteness
constraint could be computationally expensive. We provide a survey of the
progress made in modeling covariance matrices from two relatively complementary
perspectives: (1) generalized linear models (GLM) or parsimony and use of
covariates in low dimensions, and (2) regularization or sparsity for
high-dimensional data. An emerging, unifying and powerful trend in both
perspectives is that of reducing a covariance estimation problem to that of
estimating a sequence of regression problems. We point out several instances of
the regression-based formulation. A notable case is in sparse estimation of a
precision matrix or a Gaussian graphical model leading to the fast graphical
LASSO algorithm. Some advantages and limitations of the regression-based
Cholesky decomposition relative to the classical spectral (eigenvalue) and
variance-correlation decompositions are highlighted. The former provides an
unconstrained and statistically interpretable reparameterization, and
guarantees the positive-definiteness of the estimated covariance matrix. It
reduces the unintuitive task of covariance estimation to that of modeling a
sequence of regressions at the cost of imposing an a priori order among the
variables. Elementwise regularization of the sample covariance matrix such as
banding, tapering and thresholding has desirable asymptotic properties and the
sparse estimated covariance matrix is positive definite with probability
tending to one for large samples and dimensions.

Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/11-STS358 by the Institute of Mathematical Statistics (http://www.imstat.org).
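The regression-based Cholesky reparameterization described above admits a short sketch: regress each variable on its predecessors, collect the coefficients in a unit lower triangular T and the residual variances in a diagonal D; then T S T' = D, so S = T^{-1} D T^{-T} is recovered exactly and is positive definite by construction. The data and dimensions are assumptions of the toy; replacing the OLS fits with regularized regressions (e.g. the lasso) is what yields sparse high-dimensional estimators, at the cost of imposing an a priori variable order.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n  # sample covariance

T = np.eye(p)      # unit lower triangular factor
d = np.zeros(p)    # residual variances
d[0] = np.mean(Xc[:, 0] ** 2)
for j in range(1, p):
    # Regress variable j on variables 0..j-1.
    phi, *_ = np.linalg.lstsq(Xc[:, :j], Xc[:, j], rcond=None)
    resid = Xc[:, j] - Xc[:, :j] @ phi
    T[j, :j] = -phi
    d[j] = np.mean(resid ** 2)

Tinv = np.linalg.inv(T)
Sigma_hat = Tinv @ np.diag(d) @ Tinv.T  # reconstructs S up to rounding
```

The reparameterization (phi, d) is unconstrained: any real regression coefficients and positive residual variances map back to a valid positive-definite covariance matrix, which is exactly the property the survey highlights.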