Hierarchical Image Segmentation Using the Watershed Algorithm with a Streaming Implementation
We have implemented a graphical user interface (GUI) based semi-automatic hierarchical segmentation scheme, which works in three stages. In the first stage, we process the original image by filtering and thresholding the gradient to reduce the level of noise. In the second stage, we compute the watershed segmentation of the image using the rainfalling simulation approach. In the third stage, we apply two region merging schemes, namely implicit region merging and seeded region merging, to the result of the watershed algorithm. Both region merging schemes are based on the watershed depth of regions and serve to reduce the over-segmentation produced by the watershed algorithm. Implicit region merging automatically produces a hierarchy of regions. In seeded region merging, a selected seed region can be grown from the watershed result, producing a hierarchy. A meaningful segmentation can then simply be chosen from the hierarchy produced.
We have also proposed and tested a streaming algorithm based on the watershed algorithm, which computes the segmentation of an image without iterative processing of adjacent blocks. We have proved that the streaming algorithm produces the same result as the serial watershed algorithm. We have also discussed the extensibility of the streaming algorithm to efficient parallel implementations.
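The rainfalling simulation at the heart of the second stage can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, 8-connectivity, and tie-breaking are assumptions. Each pixel follows the path of steepest descent over the gradient image, and all pixels draining to the same local minimum receive the same region label:

```python
import numpy as np

def rainfalling_watershed(grad):
    """Rainfalling simulation: follow the steepest-descent path from each
    pixel; pixels draining to the same local minimum share a label."""
    h, w = grad.shape
    labels = np.full((h, w), -1, dtype=int)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]  # 8-connectivity
    next_label = 0
    for y in range(h):
        for x in range(w):
            path, cy, cx = [], y, x
            while True:
                lab = labels[cy, cx]
                if lab >= 0:               # reached an already-labeled pixel
                    break
                path.append((cy, cx))
                by, bx = cy, cx            # find the steepest-descent neighbor
                for dy, dx in offsets:
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < h and 0 <= nx < w and grad[ny, nx] < grad[by, bx]:
                        by, bx = ny, nx
                if (by, bx) == (cy, cx):   # local minimum: new catchment basin
                    lab = next_label
                    next_label += 1
                    break
                cy, cx = by, bx
            for py, px in path:            # label the whole descent path
                labels[py, px] = lab
    return labels
```

Over-segmentation arises because every local minimum of the noisy gradient spawns a basin, which is exactly what the watershed-depth region merging of the third stage is designed to counteract.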
Color and depth based image segmentation using a game-theoretic approach
In this thesis a new game theoretic approach to image segmentation is proposed.
It aims to contribute to an emerging research area in image processing that seeks to improve image segmentation by combining information about appearance (e.g. color) with information about spatial arrangement.
The proposed algorithm first partitions the image into small subsets of pixels, in order to reduce the computational complexity of the subsequent phases. Two different distance measures between each pair of pixel subsets are then computed, one based on color information and one based on spatial-geometric information. A similarity measure between each pair of pixel subsets is then computed, exploiting both color and spatial data. Finally, the pixel subsets are cast into an evolutionary game in order to group similar pixels into meaningful segments.
After a brief review of image segmentation approaches, the proposed algorithm is described and different experimental tests are carried out to evaluate its segmentation performance.
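One standard way to realize such an evolutionary grouping game is replicator dynamics on a pairwise similarity matrix; the abstract does not specify its exact dynamics, so the following is an illustrative sketch (function name and payoff matrix are assumptions). Mass concentrates on a mutually similar group of elements, whose support can be read off as one segment:

```python
import numpy as np

def extract_cluster(A, iters=200, tol=1e-6):
    """Replicator dynamics on a symmetric similarity matrix A (zero
    diagonal): the support of the converged distribution is a cohesive
    group of mutually similar elements."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)    # multiplicative replicator update
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x
```

Elements with non-negligible mass in the returned distribution would be grouped into one segment; the procedure can be peeled off and repeated on the remaining elements to obtain further segments.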
QuickCSG: Fast Arbitrary Boolean Combinations of N Solids
QuickCSG computes the result for general N-polyhedron boolean expressions
without an intermediate tree of solids. We propose a vertex-centric view of the
problem, which simplifies the identification of final geometric contributions,
and facilitates its spatial decomposition. The problem is then cast in a single
KD-tree exploration, geared toward the result by early pruning of any region of
space not contributing to the final surface. We assume strong regularity
properties on the input meshes and that they are in general position. This
simplifying assumption, in combination with our vertex-centric approach,
improves the speed of the approach. Complemented with a task-stealing
parallelization, the algorithm achieves breakthrough performance, one to two
orders of magnitude speedups with respect to state-of-the-art CPU algorithms,
on boolean operations over two to dozens of polyhedra. The algorithm also
outperforms GPU implementations with approximate discretizations, while
producing an output without redundant facets. Despite the restrictive
assumptions on the input, we show the usefulness of QuickCSG for applications
with large CSG problems and strong temporal constraints, e.g. modeling for 3D
printers, reconstruction from visual hulls, and collision detection.
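The vertex-centric view can be illustrated for the special case of a union: an input vertex contributes to the output surface only if it lies strictly inside no other solid. The sketch below is a drastic simplification using axis-aligned boxes rather than the paper's KD-tree exploration of general polyhedra (all names are illustrative), and, like the paper, it assumes the inputs are in general position:

```python
from itertools import product

def corners(box):
    """The 8 corner vertices of an axis-aligned box ((x0,y0,z0), (x1,y1,z1))."""
    (x0, y0, z0), (x1, y1, z1) = box
    return list(product((x0, x1), (y0, y1), (z0, z1)))

def strictly_inside(p, box):
    lo, hi = box
    return all(l < c < h for c, l, h in zip(p, lo, hi))

def union_surface_vertices(boxes):
    """Vertex-centric membership test for the union of N solids: keep an
    input vertex iff it is not strictly inside any *other* solid."""
    kept = []
    for i, b in enumerate(boxes):
        for v in corners(b):
            if not any(strictly_inside(v, boxes[j])
                       for j in range(len(boxes)) if j != i):
                kept.append(v)
    return kept
```

For general boolean expressions the per-vertex test becomes evaluating the expression on the vertex's inside/outside vector, and the KD-tree lets entire regions of space be pruned when that evaluation is constant over them.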
Model-Driven Engineering in the Large: Refactoring Techniques for Models and Model Transformation Systems
Model-Driven Engineering (MDE) is a software engineering paradigm that
aims to increase the productivity of developers by raising the
abstraction level of software development. It envisions the use of
models as key artifacts during design, implementation and deployment.
From the recent arrival of MDE in large-scale industrial software
development – a trend we refer to as MDE in the large –, a set of
challenges emerges: First, models are now developed at distributed
locations, by teams of teams. In such highly collaborative settings, the
presence of large monolithic models gives rise to certain issues, such
as their proneness to editing conflicts. Second, in large-scale system
development, models are created using various domain-specific modeling
languages. Combining these models in a disciplined manner calls for
adequate modularization mechanisms. Third, the development of models is
handled systematically by expressing the involved operations using model
transformation rules. Such rules are often created by cloning, a
practice related to performance and maintainability issues.
In this thesis, we contribute three refactoring techniques, each aiming
to tackle one of these challenges. First, we propose a technique to
split a large monolithic model into a set of sub-models. The aim of this
technique is to enable a separation of concerns within models, promoting
a concern-based collaboration style: Collaborators operate on the
submodels relevant for their task at hand. Second, we suggest a
technique to encapsulate model components by introducing modular
interfaces in a set of related models. The goal of this technique is to
establish modularity in these models. Third, we introduce a refactoring
to merge a set of model transformation rules exhibiting a high degree of
similarity. The aim of this technique is to improve maintainability and
performance by eliminating the drawbacks associated with cloning. The
refactoring creates variability-based rules, a novel type of rule
that captures variability by using annotations.
The refactoring techniques contributed in this work help to reduce the
manual effort during the refactoring of models and transformation rules
to a large extent. As indicated in a series of realistic case studies,
the output produced by the techniques is comparable to, or in the case of
transformation rules partly even preferable to, the result of manual
refactoring, yielding a promising outlook on the applicability in
real-world settings.
Surface-Only Simulation of Fluids
Surface-only simulation methods for fluid dynamics are those that perform computation only on a surface representation, without relying on any volumetric discretization. Such methods have superior asymptotic time and memory complexity compared with traditional volumetric discretization approaches, and are thus more tractable for simulation of complex fluid phenomena. Although for most computer graphics applications and many engineering applications the interior flow inside the fluid phases is typically not of interest, the vast majority of existing numerical techniques still rely on discretization of the volumetric domain. My research first tackles the mesh-based surface tracking problem in the multimaterial setting, and then proposes surface-only simulation solutions for two scenarios: soap films and bubbles, and general 3D liquids. Throughout these simulation approaches, all computation takes place on the surface, and volumetric discretization is entirely eliminated.
Topology-Aware Neighborhoods for Point-Based Simulation and Reconstruction
Particle-based simulations are widely used in computer graphics. In this field, several recent results have improved the simulation itself or improved the tension of the final fluid surface. In current particle-based implementations, the particle neighborhood is computed by considering the Euclidean distance between fluid particles only. Thus particles from different fluid components interact, which generates both locally incorrect behavior in the simulation and blending artifacts in the reconstructed fluid surface. Our method introduces a better neighborhood computation for both the physical simulation and surface reconstruction steps. We track and store the local fluid topology around each particle using a graph structure. In this graph, only particles within the same local fluid component are neighbors, and other disconnected fluid particles are inserted only if they come into contact. The graph connectivity also takes into account the asymmetric behavior of particles when they merge and split, and the fluid surface is reconstructed accordingly, thus avoiding their blending at distance before a merge. In the simulation, this neighborhood information is exploited for better control of the fluid density and the force interactions in the vicinity of its boundaries. For instance, it prevents the introduction of collision events when two distinct fluid components are crossing without contact, and it avoids fluid interactions through thin waterproof walls. This leads to an overall more consistent fluid simulation and reconstruction.
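The core idea, restricting Euclidean neighbors to the same connected fluid component, can be sketched in a few lines. This is an illustrative brute-force version (all names are assumptions); the paper's graph is maintained incrementally across time steps rather than rebuilt from scratch:

```python
import numpy as np
from collections import deque

def connected_components(pos, contact_r):
    """Component id per particle, via BFS on the contact graph
    (particles closer than contact_r are connected)."""
    n = len(pos)
    comp = [-1] * n
    c = 0
    for s in range(n):
        if comp[s] >= 0:
            continue
        comp[s] = c
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if comp[v] < 0 and np.linalg.norm(pos[u] - pos[v]) <= contact_r:
                    comp[v] = c
                    q.append(v)
        c += 1
    return comp

def topology_aware_neighbors(pos, i, support_r, comp):
    """Neighbors of particle i: within the kernel support radius AND in
    the same fluid component, so separate components never interact."""
    return [j for j in range(len(pos))
            if j != i and comp[j] == comp[i]
            and np.linalg.norm(pos[i] - pos[j]) <= support_r]
```

A plain Euclidean query would return particles from a nearby but disconnected component; filtering by component id is what prevents density and force contributions from leaking across a gap or a thin wall.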
Two and three dimensional segmentation of multimodal imagery
The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years due to accelerated scientific advances made in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, by using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included by the dynamic generation of segments as the algorithm progresses to generate an initial region map. Subsequently, texture modeling is performed and the obtained gradient, texture and intensity information, along with the aforementioned initial partition map, are used to perform a multivariate refinement procedure, fusing groups with similar characteristics to yield the final output segmentation. Experimental results obtained in comparison to published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, for the purpose of achieving improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images.
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
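The initial partitioning step, grouping low-gradient pixels into labeled regions while leaving edge pixels aside, can be sketched as follows. This is illustrative only (names are assumptions): it uses a scalar gradient magnitude on a single-channel image rather than the vector gradient operator used in the work, and omits the later texture modeling and multivariate refinement:

```python
import numpy as np
from collections import deque

def initial_partition(img, thresh):
    """Label low-gradient pixels by 4-connected components; pixels on
    edges (gradient magnitude >= thresh) are left unlabeled (0)."""
    gy, gx = np.gradient(img.astype(float))
    flat = np.hypot(gx, gy) < thresh       # True for edge-free pixels
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    nxt = 1
    for y in range(h):
        for x in range(w):
            if flat[y, x] and labels[y, x] == 0:
                labels[y, x] = nxt         # flood-fill a new region
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                                flat[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = nxt
                            q.append((ny, nx))
                nxt += 1
    return labels
```

The unlabeled high-gradient pixels correspond to those that the full framework later absorbs through the dynamic generation of segments and the multivariate refinement; the 3-D extension replaces the 4-neighborhood with its 6-connected volumetric counterpart.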