Segmentation of ultrasound images of thyroid nodule for assisting fine needle aspiration cytology
The incidence of thyroid nodules is high and generally increases with age. A thyroid nodule may presage the emergence of thyroid cancer, yet it can be completely cured if detected early. Fine needle aspiration cytology is a recognized method for the early diagnosis of thyroid nodules, but it still has limitations, and ultrasound has become the first-choice auxiliary examination for thyroid nodular disease. Combining medical imaging technology with fine needle aspiration cytology could significantly improve the diagnostic rate of thyroid nodules. However, the physical properties of ultrasound degrade image quality, making it difficult for physicians to recognize lesion edges. Image segmentation based on graph theory is currently a research hotspot; normalized cut (Ncut) is a representative method, well suited to segmenting feature regions of medical images. Solving the normalized cut, however, requires a large memory footprint and heavy computation of the weight matrix, and it often produces over- or under-segmentation, leading to inaccurate results. Speckle noise in B-mode ultrasound images of thyroid tumors further deteriorates image quality. In light of these characteristics, this paper combines an anisotropic diffusion model with the normalized cut. The anisotropic diffusion step removes noise from the B-mode ultrasound image while preserving important edges and local details. This reduces the computation needed to construct the weight matrix of the improved normalized cut and improves the accuracy of the final segmentation results. The feasibility of the method is demonstrated experimentally.
Comment: 15 pages, 13 figures
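The abstract's denoising step is anisotropic diffusion; a minimal Perona-Malik style sketch is below. The parameter names and values (`kappa`, `lam`, iteration count) are illustrative assumptions, not the paper's settings, and the edge-stopping function shown is one common choice:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.15):
    """Anisotropic (Perona-Malik style) diffusion: smooths speckle-like
    noise while largely blocking diffusion across strong edges, making
    it a plausible pre-processing step before a graph cut.
    kappa: edge threshold; lam: step size (<= 0.25 for stability)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four axial neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping weight g(d) = exp(-(d/kappa)^2): ~1 in flat
        # regions (noise is diffused), ~0 across large jumps (edges kept)
        u = u + lam * (np.exp(-(dn / kappa) ** 2) * dn
                       + np.exp(-(ds / kappa) ** 2) * ds
                       + np.exp(-(de / kappa) ** 2) * de
                       + np.exp(-(dw / kappa) ** 2) * dw)
    return u
```

On a noisy step image, the flat regions are smoothed while the step itself survives, which is exactly the property that shrinks the effective weight matrix work for the subsequent cut.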
Task-based Augmented Reeb Graphs with Dynamic ST-Trees
This paper presents, to the best of our knowledge, the first parallel algorithm for the computation of the augmented Reeb graph of piecewise linear scalar data. Such augmented Reeb graphs have a wide range of applications, including contour seeding and feature-based segmentation. Our approach targets shared-memory multi-core workstations. For this, it completely revisits the optimal, but sequential, Reeb graph algorithm, which is capable of handling data in arbitrary dimension with optimal time complexity. We take advantage of Fibonacci heaps to exploit the ST-Tree data structure through independent local propagations, while maintaining the optimal, linearithmic time complexity of the sequential reference algorithm. These independent propagations can be expressed using OpenMP tasks, hence benefiting in parallel from the dynamic load balancing of the task runtime, while a dual sweep further increases the degree of parallelism. We present performance results on triangulated surfaces and tetrahedral meshes, provide comparisons to related work, and show that our new algorithm achieves superior time performance in practice, both sequentially and in parallel. An open-source C++ implementation is provided for reproducibility.
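For intuition about what such algorithms compute, here is a minimal sequential union-find sweep that builds a join tree (one of the two sweeps a Reeb graph computation combines). This is a simplified sketch of the standard technique, not the paper's task-parallel, ST-Tree based algorithm:

```python
def join_tree(values, edges):
    """Sweep vertices from highest to lowest scalar value, merging
    connected components with union-find. An arc is recorded each time
    a vertex attaches to (or merges) existing components; contracting
    the degree-2 chains of the result yields the join tree."""
    n = len(values)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent = list(range(n))          # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    seen = [False] * n
    top = list(range(n))   # lowest vertex reached so far per component
    arcs = []
    for v in sorted(range(n), key=lambda i: -values[i]):
        seen[v] = True
        for w in adj[v]:
            if seen[w]:
                rw, rv = find(w), find(v)
                if rw != rv:
                    arcs.append((top[rw], v))   # component continues at v
                    parent[rw] = rv
        top[find(v)] = v
    return arcs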
Monotone Pieces Analysis for Qualitative Modeling
Building qualitative models of industrial applications is a crucial task for model-based diagnosis. A model abstraction procedure is designed to automatically transform a quantitative model into a qualitative one. If the data are monotone, the behavior can be easily abstracted using the corners of the bounding rectangle; hence, many existing model abstraction approaches rely on monotonicity. But robustly detecting monotone pieces in scattered data obtained from numerical simulation or experiments is not a trivial problem. This paper introduces an approach based on scale-dependent monotonicity: the notion that monotonicity can be defined relative to a scale. Real-valued functions defined on a finite set of reals, e.g. simulation results, can be partitioned into quasi-monotone segments. The end points of the monotone segments serve as the initial set of landmarks for qualitative model abstraction, which proceeds as an iterative refinement process starting from those initial landmarks. The monotonicity analysis presented here can be used in constructing many other kinds of qualitative models; it is robust and computationally efficient.
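The core idea, monotonicity relative to a scale, can be sketched in a few lines: sweep the samples, and only cut a segment when the value reverses from the running extremum by more than the scale. This is an assumed minimal reading of the abstract, not the paper's exact algorithm:

```python
def quasi_monotone_landmarks(ys, scale):
    """Indices of the end points of quasi-monotone pieces of ys.
    Reversals no larger than `scale` are treated as noise and do
    not end a segment; larger reversals cut at the extremum."""
    landmarks = [0]
    direction = 0              # +1 rising, -1 falling, 0 undecided
    ext_i, ext_v = 0, ys[0]    # running extremum of the current piece
    for i in range(1, len(ys)):
        y = ys[i]
        if direction == 0:
            if abs(y - ext_v) > scale:        # first clear trend
                direction = 1 if y > ext_v else -1
                ext_i, ext_v = i, y
        elif direction * (y - ext_v) >= 0:    # trend continues
            ext_i, ext_v = i, y
        elif direction * (ext_v - y) > scale: # significant reversal
            landmarks.append(ext_i)           # cut at the extremum
            direction = -direction
            ext_i, ext_v = i, y
    if landmarks[-1] != len(ys) - 1:
        landmarks.append(len(ys) - 1)
    return landmarks
```

Shrinking the scale exposes smaller wiggles as extra landmarks, which matches the paper's picture of a scale-dependent partition feeding the iterative abstraction.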
Effects of non-Hermitian perturbations on Weyl Hamiltonians with arbitrary topological charges
We provide a systematic study of non-Hermitian topologically charged systems.
Starting from a Hermitian Hamiltonian supporting Weyl points with arbitrary
topological charge, adding a non-Hermitian perturbation transforms the Weyl
points into one-dimensional exceptional contours. We analytically prove that
the topological charge is preserved on the exceptional contours. In contrast
to Hermitian systems, the addition of gain and loss allows for a new class of
topological phase transition: when two oppositely charged exceptional contours
touch, the topological charge can dissipate without opening a gap. These
effects can be demonstrated in realistic photonic and acoustic systems.
Comment: 11 pages, 9 figures
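For context, the topological charge the abstract refers to is commonly defined as the Chern number of the Berry flux through a closed surface $S$ in momentum space enclosing the Weyl point. The formulas below are the standard textbook definition, not the paper's derivation; in the non-Hermitian case the Berry connection is usually built from left and right eigenvectors:

```latex
C = \frac{1}{2\pi} \oint_{S} \mathbf{F}(\mathbf{k}) \cdot \mathrm{d}\mathbf{S},
\qquad
\mathbf{F} = \nabla_{\mathbf{k}} \times \mathbf{A},
\qquad
\mathbf{A}(\mathbf{k}) = i \,\langle u^{L}_{\mathbf{k}} \mid \nabla_{\mathbf{k}} \mid u^{R}_{\mathbf{k}} \rangle .
```

The preservation claim then says this integral is unchanged when $S$ encloses the exceptional contour that the Weyl point spreads into under the perturbation.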
Shape Recognition: A Landmark-Based Approach
Shape recognition has applications in computer vision tasks such as industrial automated inspection and automatic target recognition. When objects are occluded, many recognition methods that rely on global information fail. To recognize partially occluded objects, we represent each object by a set of landmarks: points of interest with important shape attributes, usually obtained from the object boundary. In this study, we use high-curvature points along an object's boundary as its landmarks. Given a scene consisting of partially occluded objects, the hypothesis that a model object appears in the scene is verified by matching the landmarks of the model with those in the scene. This matching requires a measure of similarity between two landmarks, one from a model and the other from a scene. One such local shape measure is the sphericity of the triangular transformation mapping the model landmark and its two neighboring landmarks to the scene landmark and its two neighboring landmarks. Sphericity is defined in general for a diffeomorphism; its invariance under a group of transformations, namely translation, rotation, and scaling, is derived. The sphericity of a triangular transformation is shown to be a robust local shape measure, in the sense that minor distortion of the landmarks does not significantly alter its value. To match landmarks between a model and a scene, a compatibility table is constructed, in which each entry is the sphericity value derived from the mapping of a model landmark to a scene landmark. A hopping dynamic programming procedure, which switches between forward and backward dynamic programming, guides the landmark matching through the compatibility table. The location of the model in the scene is estimated with a least squares fit over the matched landmarks, and a heuristic measure is then computed to decide whether the model is present in the scene.
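A triangular transformation between landmark triples is affine, so its distortion can be scored from its linear part. The sketch below uses one common sphericity of an affine map, $2\lvert\det A\rvert / \lVert A\rVert_F^2$, which is 1 exactly for similarities (translation, rotation, uniform scaling) and smaller otherwise; this specific formula is an assumption for illustration, not necessarily the paper's definition:

```python
import numpy as np

def sphericity(model_tri, scene_tri):
    """Sphericity of the affine map taking a model landmark triangle
    (landmark plus its two boundary neighbors) to a scene triangle.
    Equals 2*sigma1*sigma2 / (sigma1^2 + sigma2^2) for the singular
    values of the linear part A, i.e. 1 for a pure similarity and
    less than 1 as shape distortion grows."""
    m = np.asarray(model_tri, float)
    s = np.asarray(scene_tri, float)
    # edge vectors about the middle landmark determine the linear part A
    M = np.column_stack([m[0] - m[1], m[2] - m[1]])
    S = np.column_stack([s[0] - s[1], s[2] - s[1]])
    A = S @ np.linalg.inv(M)           # translation cancels out
    return 2 * abs(np.linalg.det(A)) / np.sum(A ** 2)
```

Filling a model-by-scene table with such values gives exactly the kind of compatibility table the abstract's hopping dynamic programming procedure would traverse.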