Evolution-Operator-Based Single-Step Method for Image Processing
This work proposes an evolution-operator-based single-time-step
method for image and signal processing. The key component of the
proposed method is a local spectral evolution kernel (LSEK) that
analytically integrates a class of evolution partial differential
equations (PDEs). From the PDE point of view, the LSEK provides
the analytical solution in a single time step, achieves spectral
accuracy, and is free of stability constraints. From the
image/signal-processing point of view, the LSEK gives rise to a family of
lowpass filters with controllable time delay and amplitude
scaling. The new evolution-operator-based method is constructed
by pointwise adaptation of anisotropy in the coefficients of the
LSEK. Perona-Malik-type anisotropic diffusion schemes are
incorporated into the LSEK for image denoising; a forward-backward
diffusion process is adapted to the LSEK for image deblurring
and sharpening; and a coupled PDE system is modified
for image edge detection. The resulting edge map is utilized for
image enhancement. Extensive computer experiments are carried out
to demonstrate the performance of the proposed method. The major
advantages of the proposed method are its single-step solution and
readiness for multidimensional data analysis.
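The Perona-Malik mechanism that the LSEK coefficients adapt can be illustrated with a conventional explicit iteration. The sketch below is the standard iterative scheme, not the paper's single-step operator; the edge-stopping parameter `kappa`, the time step `dt`, and the periodic boundaries (via `np.roll`) are all illustrative assumptions:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik anisotropic diffusion on a 2-D image.

    The conductivity g(d) = 1 / (1 + (d/kappa)^2) slows diffusion
    across strong edges while smoothing flat regions. Boundaries are
    periodic (np.roll), which is a simplification for this sketch.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductivities
        cn = 1.0 / (1.0 + (dn / kappa) ** 2)
        cs = 1.0 / (1.0 + (ds / kappa) ** 2)
        ce = 1.0 / (1.0 + (de / kappa) ** 2)
        cw = 1.0 / (1.0 + (dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

With `dt = 0.2` the update is a convex combination of the pixel and its neighbours, so the explicit scheme stays stable; the single-step LSEK approach described above avoids this iteration count entirely.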
Segmentation of ultrasound images of thyroid nodule for assisting fine needle aspiration cytology
The incidence of thyroid nodules is very high and generally increases with
age. A thyroid nodule may presage the emergence of thyroid cancer, and it
can be completely cured if detected early. Fine needle aspiration
cytology is a recognized method for the early diagnosis of thyroid nodules.
Fine needle aspiration cytology still has some limitations, however, and
ultrasound has become the first choice for the auxiliary examination of
thyroid nodular disease. If we could combine medical
imaging technology and fine needle aspiration cytology, the diagnostic rate of
thyroid nodules would be improved significantly. The physical properties of
ultrasound degrade image quality, which makes it difficult for physicians to
recognize nodule edges. Image segmentation based on graph theory has become a
research hotspot at present. Normalized cut (Ncut) is a representative one,
which is suitable for segmenting the salient parts of medical images. However,
solving the normalized cut is itself a challenge: it requires large memory and
heavy computation of the weight matrix, and it often produces over-segmentation
or under-segmentation, which leads to inaccurate results. Speckle noise further
degrades the quality of B-mode ultrasound images of thyroid tumors. In light of
these characteristics, we
combine the anisotropic diffusion model with the normalized cut in this paper.
The anisotropic diffusion model removes the noise in the B-mode ultrasound
image while preserving the important edges and local details. This reduces the
amount of computation needed to construct the weight matrix of the improved
normalized cut and improves the accuracy of the final segmentation results. The
feasibility of the method is demonstrated by the experimental results.
Comment: 15 pages, 13 figures
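As a rough illustration of the normalized-cut stage, the following is a generic Shi-Malik-style two-way Ncut on a toy grayscale image. The affinity parameters `sigma_i`, `sigma_x`, and neighbourhood radius `r` are illustrative, and this plain version omits the paper's diffusion-based preprocessing and other improvements:

```python
import numpy as np

def ncut_bipartition(img, sigma_i=0.2, sigma_x=4.0, r=5):
    """Two-way normalized cut on a small grayscale image.

    Builds the affinity matrix from intensity similarity gated by
    spatial proximity, then thresholds the second-smallest
    generalized eigenvector of (D - W) y = lambda * D y at zero,
    computed via the symmetric normalized Laplacian.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)
    vals = img.ravel().astype(float)
    # pairwise squared spatial distances and intensity affinities
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = np.exp(-(vals[:, None] - vals[None, :]) ** 2 / sigma_i ** 2)
    W = np.where(d2 < r ** 2, W * np.exp(-d2 / sigma_x ** 2), 0.0)
    d = W.sum(1)
    # symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
    d_isqrt = 1.0 / np.sqrt(d)
    L_sym = np.eye(len(d)) - d_isqrt[:, None] * W * d_isqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)
    y = d_isqrt * vecs[:, 1]  # second-smallest eigenvector
    return (y > 0).reshape(h, w)
```

Even on this tiny example the dense weight matrix has `(h*w)^2` entries, which is exactly the memory and computation burden the abstract describes; denoising first keeps the affinities cleaner so fewer spurious edges inflate the cut.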
Geometrical-based algorithm for variational segmentation and smoothing of vector-valued images
An optimisation method based on a nonlinear functional is considered for segmentation and smoothing of vector-valued images. An edge-based approach is proposed to initially segment the image using geometrical properties such as the metric tensor of the linearly smoothed image. The nonlinear functional is then minimised for each segmented region to yield the smoothed image. In contrast with the Mumford-Shah functional for vector-valued images, the functional is characterised by a unique solution. An operator for edge detection is introduced as a result of this unique solution. This operator is calculated analytically, and its detection performance and localisation are compared with those of the DroG operator. The implementations are applied to colour images as examples of vector-valued images, and the results demonstrate robust performance in noisy environments.
High-Level Object Oriented Genetic Programming in Logistic Warehouse Optimization
This work is focused on work-flow optimization in logistic warehouses and distribution centers. The main aim is to optimize process planning, scheduling, and dispatching. The problem has received much attention in recent years. It belongs to the NP-hard class of problems, and it is computationally very demanding to find an optimal solution. The main motivation for solving this problem is to fill the gap between the new optimization methods developed by researchers in the academic world and the methods used in the business world. The core of the optimization algorithm is built on genetic programming driven by a context-free grammar.
The main contribution of the thesis is a) to propose a new optimization algorithm which respects the makespan, the resource utilization, and the congestion of aisles which may occur during task processing, b) to analyze historical operational data from a warehouse and to develop a set of benchmarks which can serve as baseline reference results for further research, and c) to try to outperform the baseline results set by a skilled and trained operational manager of one of the biggest warehouses in Central Europe.
Highly corrupted image inpainting through hypoelliptic diffusion
We present a new image inpainting algorithm, the Averaging and Hypoelliptic
Evolution (AHE) algorithm, inspired by the one presented in [SIAM J. Imaging
Sci., vol. 7, no. 2, pp. 669--695, 2014] and based upon a semi-discrete
variation of the Citti-Petitot-Sarti model of the primary visual cortex V1. The
AHE algorithm is based on a suitable combination of sub-Riemannian hypoelliptic
diffusion and ad-hoc local averaging techniques. In particular, we focus on
reconstructing highly corrupted images (i.e. where more than 80% of the
image is missing), for which we obtain reconstructions comparable with the
state of the art.
Comment: 15 pages, 10 figures
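The AHE algorithm relies on sub-Riemannian hypoelliptic diffusion lifted to the cortex-inspired model; as a much simpler stand-in, the sketch below performs isotropic heat-equation inpainting, diffusing known pixel values into the corrupted region while re-imposing the known data after every step. All parameters are illustrative:

```python
import numpy as np

def diffuse_inpaint(img, known_mask, n_iter=500, dt=0.2):
    """Fill unknown pixels by isotropic heat diffusion.

    Known pixels are re-imposed after every step (Dirichlet data), so
    information flows from the boundary into the corrupted region.
    A crude isotropic stand-in for the hypoelliptic diffusion in AHE.
    """
    # initialise the hole with the mean of the known values
    u = np.where(known_mask, img, img[known_mask].mean())
    for _ in range(n_iter):
        # 5-point discrete Laplacian (periodic via np.roll)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * lap
        u[known_mask] = img[known_mask]  # re-impose known data
    return u
```

Plain isotropic diffusion blurs across the hole and cannot continue contours; the hypoelliptic diffusion of the Citti-Petitot-Sarti model diffuses along lifted orientations, which is what makes reconstruction feasible when over 80% of the image is missing.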
An Augmented Lagrangian Approach to the Constrained Optimization Formulation of Imaging Inverse Problems
We propose a new fast algorithm for solving one of the standard approaches to
ill-posed linear inverse problems (IPLIP), where a (possibly non-smooth)
regularizer is minimized under the constraint that the solution explains the
observations sufficiently well. Although the regularizer and constraint are
usually convex, several particular features of these problems (huge
dimensionality, non-smoothness) preclude the use of off-the-shelf optimization
tools and have stimulated a considerable amount of research. In this paper, we
propose a new efficient algorithm to handle one class of constrained problems
(often known as basis pursuit denoising) tailored to image recovery
applications. The proposed algorithm, which belongs to the family of augmented
Lagrangian methods, can be used to deal with a variety of imaging IPLIP,
including deconvolution and reconstruction from compressive observations (such
as MRI), using either total-variation or wavelet-based (or, more generally,
frame-based) regularization. The proposed algorithm is an instance of the
so-called "alternating direction method of multipliers", for which convergence
sufficient conditions are known; we show that these conditions are satisfied by
the proposed algorithm. Experiments on a set of image restoration and
reconstruction benchmark problems show that the proposed algorithm is a strong
contender for the state of the art.
Comment: 13 pages, 8 figures, 8 tables. Submitted to the IEEE Transactions on
Image Processing
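For intuition, the constrained formulation can be sketched in its simplest special case, with the observation operator taken as the identity (an assumed toy setting, not the paper's algorithm): consensus ADMM splits the variable into one copy handled by soft-thresholding (the l1 regularizer) and one handled by Euclidean projection onto the epsilon-ball (the data-fit constraint):

```python
import numpy as np

def project_ball(v, center, eps):
    """Euclidean projection of v onto the ball ||x - center||_2 <= eps."""
    d = v - center
    n = np.linalg.norm(d)
    return v if n <= eps else center + eps * d / n

def bpdn_admm(y, eps, rho=1.0, n_iter=300):
    """Consensus ADMM sketch for  min ||x||_1  s.t.  ||x - y||_2 <= eps.

    This is the A = I special case of the constrained formulation:
    two copies z1, z2 of x are maintained, one per term.
    """
    x = np.zeros_like(y)
    z1 = np.zeros_like(y); z2 = np.zeros_like(y)
    u1 = np.zeros_like(y); u2 = np.zeros_like(y)
    for _ in range(n_iter):
        x = 0.5 * (z1 - u1 + z2 - u2)            # average the copies
        # z1: prox of the l1 norm = soft-threshold
        z1 = np.sign(x + u1) * np.maximum(np.abs(x + u1) - 1.0 / rho, 0.0)
        # z2: enforce the constraint by projection
        z2 = project_ball(x + u2, y, eps)
        u1 = u1 + x - z1                          # dual updates
        u2 = u2 + x - z2
    return z2
```

The returned iterate `z2` is feasible by construction; ADMM's convergence guarantees are what the abstract verifies for the full problem with general observation operators and frame-based or TV regularizers.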
Fast Image Recovery Using Variable Splitting and Constrained Optimization
We propose a new fast algorithm for solving one of the standard formulations
of image restoration and reconstruction which consists of an unconstrained
optimization problem where the objective includes a data-fidelity
term and a non-smooth regularizer. This formulation accommodates both
wavelet-based (with orthogonal or frame-based representations) and
total-variation regularization. Our approach is based on a variable splitting
to obtain an equivalent constrained optimization formulation, which is then
addressed with an augmented Lagrangian method. The proposed algorithm is an
instance of the so-called "alternating direction method of multipliers", for
which convergence has been proved. Experiments on a set of image restoration
and reconstruction benchmark problems show that the proposed algorithm is
faster than the current state-of-the-art methods.
Comment: Submitted; 11 pages, 7 figures, 6 tables
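The split-then-ADMM recipe can be sketched for the l1-regularized least-squares special case. This generic loop (with illustrative `lam`, `rho`, and iteration count, and a dense solve rather than the FFT-based tricks a fast implementation would use) shows the structure variable splitting produces: a quadratic x-update plus a separable soft-threshold z-update:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for  min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1.

    Variable splitting z = x yields a quadratic x-update and a
    soft-threshold z-update; u is the scaled dual variable.
    """
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    # pre-factor the (constant) x-update system once
    M = np.linalg.inv(AtA + rho * np.eye(n))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        x = M @ (Aty + rho * (z - u))            # quadratic step
        z = soft_threshold(x + u, lam / rho)     # shrinkage step
        u = u + x - z                            # dual ascent
    return z
```

For deconvolution with a circulant `A`, the matrix inverse above becomes a pointwise division in the Fourier domain, which is the kind of structure exploitation that makes this family of methods fast in practice.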
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate the study of vascular development in the neonatal period, a set of
image analysis algorithms is developed to automatically extract and model
cerebral vessel trees. The whole process consists of cerebral vessel tracking
from automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical
Time-of-Flight (TOF) MR angiographic datasets.
To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
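The EM backbone of such a segmentation can be sketched as a plain two-class Gaussian-mixture EM on voxel intensities. The explicit correction for mislabelled partial-volume voxels described above is omitted here, and the initialisation and iteration count are illustrative assumptions:

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Two-class Gaussian-mixture EM on voxel intensities.

    A bare-bones version of the EM backbone; the method described
    above adds an explicit partial-volume correction on top of this.
    """
    x = np.asarray(intensities, dtype=float)
    mu = np.array([x.min(), x.max()])           # crude initialisation
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior class responsibilities
        lik = (pi / np.sqrt(2 * np.pi * var) *
               np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
        resp = lik / lik.sum(1, keepdims=True)
        # M-step: update mixture weights, means, variances
        nk = resp.sum(0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
    return resp.argmax(1), mu
```

In the neonatal setting, voxels mixing grey and white matter fall between the two Gaussians and get confidently mislabelled by this vanilla model, which is precisely the failure mode the partial-volume correction targets.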
To facilitate the study of cortical development, a cortical surface registration
algorithm is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.