Accelerated hardware video object segmentation: From foreground detection to connected components labelling
This is the preprint version of the article. Copyright © 2010 Elsevier. This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models, and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit gray-scale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized because the number of run-lengths is typically less than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
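The core idea of labelling on a run-length representation can be sketched in software; the following is a minimal illustration with union-find and 4-connectivity, where the helper names and the connectivity choice are assumptions for the sketch, not details taken from the paper's hardware design:

```python
def run_length_encode(row):
    """Return (start, end) pairs for the foreground runs in one binary row."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def label_components(image):
    """Label connected components by merging runs that overlap the row above."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    labels, next_label, prev = {}, 0, []
    for y, row in enumerate(image):
        cur = []
        for s, e in run_length_encode(row):
            lab = None
            for ps, pe, pl in prev:
                if ps <= e and s <= pe:      # run touches a run in the previous row
                    if lab is None:
                        lab = pl
                    else:
                        union(lab, pl)       # two provisional labels meet: merge
            if lab is None:
                lab, parent[next_label], next_label = next_label, next_label, next_label + 1
            cur.append((s, e, lab))
        prev = cur
        for s, e, l in cur:
            labels[(y, s, e)] = l
    # collapse equivalent provisional labels to their union-find roots
    return {k: find(v) for k, v in labels.items()}
```

Working on runs rather than pixels is what keeps the sequential part short: each row contributes only as many items as it has runs.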
On the Convergence and Consistency of the Blurring Mean-Shift Process
The mean-shift algorithm is a popular algorithm in computer vision and image
processing. It can also be cast as a minimum gamma-divergence estimation. In
this paper we focus on the "blurring" mean shift algorithm, which is one
version of the mean-shift process that successively blurs the dataset. The
analysis of the blurring mean-shift is more complicated than that of the
nonblurring version, yet the convergence of the algorithm and the consistency
of the estimator have not been well studied in the literature. In this paper we
prove both the convergence and the consistency of the blurring mean-shift. We
also perform simulation studies to compare the efficiency of the blurring and
the nonblurring versions of the mean-shift algorithms. Our results show that
the blurring mean-shift is more efficient.
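The distinction the abstract draws can be made concrete: in the blurring variant the dataset itself is replaced by its kernel-weighted means at every iteration, rather than shifting query points against a fixed dataset. A minimal sketch with a Gaussian kernel (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def blurring_mean_shift(X, bandwidth=1.0, n_iter=20):
    """Blurring mean shift: every iteration, each point moves to the
    kernel-weighted mean of the *current* (already blurred) dataset."""
    X = np.asarray(X, dtype=float)
    for _ in range(n_iter):
        # pairwise squared distances between the current points
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * bandwidth ** 2))      # Gaussian kernel weights
        X = (W @ X) / W.sum(axis=1, keepdims=True)  # dataset is overwritten
    return X
```

The nonblurring version would keep the original dataset fixed inside the loop and move only the query points; overwriting `X` is exactly the successive blurring the abstract describes, and it is why well-separated groups collapse to near-single points quickly.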
A novel Bayesian approach to adaptive mean shift segmentation of brain images
We present a novel adaptive mean shift (AMS) algorithm for the segmentation of tissues in magnetic resonance (MR) brain images. In particular we introduce a novel Bayesian approach for the estimation of the adaptive kernel bandwidth and investigate its impact on segmentation accuracy. We studied the three-class problem where the brain tissues are segmented into white matter, gray matter and cerebrospinal fluid. The segmentation experiments were performed on both multi-modal simulated and real patient T1-weighted MR volumes with different noise characteristics and spatial inhomogeneities. The performance of the algorithm was evaluated relative to several competing methods using real and synthetic data. Our results demonstrate the efficacy of the proposed algorithm and that it can outperform competing methods, especially when the noise and spatial intensity inhomogeneities are high.
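The "adaptive" part means each data point carries its own kernel bandwidth instead of one global value. The paper's Bayesian bandwidth estimator is not reproduced here; as a stand-in, the sketch below derives per-point bandwidths from k-nearest-neighbour distances (a common heuristic) just to show where adaptive bandwidths enter the mean-shift update. All names are illustrative, and the data are 1-D for brevity:

```python
import numpy as np

def knn_bandwidths(X, k=3):
    """Per-point bandwidth = distance to the k-th nearest neighbour.
    (Stand-in for the paper's Bayesian estimator.)"""
    d = np.abs(X[:, None] - X[None, :])
    return np.sort(d, axis=1)[:, k]

def adaptive_mean_shift(X, k=3, n_iter=30):
    """Mean shift where each sample point contributes with its own bandwidth."""
    X = np.asarray(X, dtype=float)
    h = np.maximum(knn_bandwidths(X, k), 1e-3)   # avoid zero bandwidths
    Y = X.copy()
    for _ in range(n_iter):
        for i in range(len(Y)):
            # sample-point estimator: bandwidth h[j] is attached to data point X[j]
            w = np.exp(-((X - Y[i]) ** 2) / (2 * h ** 2))
            Y[i] = (w @ X) / w.sum()
    return Y
```

Tight clusters get small bandwidths and sparse regions get large ones, which is what lets an adaptive scheme track tissue classes with very different local densities.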
Bandwidth selection for kernel estimation in mixed multi-dimensional spaces
Kernel estimation techniques, such as mean shift, suffer from one major
drawback: kernel bandwidth selection. The bandwidth can be fixed for the whole
data set or can vary at each point. Automatic bandwidth selection becomes
a real challenge in the case of multidimensional heterogeneous features. This paper
presents a solution to this problem. It is an extension of \cite{Comaniciu03a}
which was based on the fundamental property of normal distributions regarding
the bias of the normalized density gradient. The selection is done iteratively
for each type of features, by looking for the stability of local bandwidth
estimates across a predefined range of bandwidths. A pseudo balloon mean shift
filtering and partitioning are introduced. The validity of the method is
demonstrated in the context of color image segmentation based on a
5-dimensional space.
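The stability criterion the abstract mentions can be illustrated in miniature: run a mode estimate across a range of candidate bandwidths and keep the bandwidth at which the estimate changes least between neighbouring candidates. This is a 1-D sketch of that idea only; the function names, the plain k-means-of-one-mode setup, and the single-feature case are assumptions, not the paper's full multi-feature iterative scheme:

```python
import numpy as np

def mode_estimate(X, x0, bandwidth, n_iter=50):
    """Plain (nonblurring) mean-shift mode estimate started from x0."""
    x = float(x0)
    for _ in range(n_iter):
        w = np.exp(-((X - x) ** 2) / (2 * bandwidth ** 2))
        x = (w @ X) / w.sum()
    return x

def select_bandwidth(X, x0, bandwidths):
    """Pick the candidate bandwidth whose mode estimate is most stable,
    i.e. changes least relative to the next candidate in the range."""
    modes = [mode_estimate(X, x0, h) for h in bandwidths]
    diffs = [abs(modes[i + 1] - modes[i]) for i in range(len(modes) - 1)]
    i = int(np.argmin(diffs))
    return bandwidths[i], modes[i]
```

The underlying intuition matches the abstract: near a well-supported mode, the local estimate barely moves as the bandwidth is perturbed, so a plateau in the estimate signals a good bandwidth.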
Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes
Glucometers present an important self-monitoring tool for diabetes patients
and therefore must exhibit high accuracy as well as good usability features.
Based on an invasive, photometric measurement principle that drastically
reduces the volume of the blood sample needed from the patient, we present a
framework that is capable of dealing with small blood samples, while
maintaining the required accuracy. The framework consists of two major parts:
1) image segmentation; and 2) convergence detection. Step 1) is based on
iterative mode-seeking methods to estimate the intensity value of the region of
interest. We present several variations of these methods and give theoretical
proofs of their convergence. Our approach is able to deal with changes in the
number and position of clusters without any prior knowledge. Furthermore, we
propose a method based on sparse approximation to decrease the computational
load, while maintaining accuracy. Step 2) is achieved by employing temporal
tracking and prediction, herewith decreasing the measurement time, and, thus,
improving usability. Our framework is validated on several real data sets with
different characteristics. We show that we are able to estimate the underlying
glucose concentration from much smaller blood samples than is currently
state-of-the-art with sufficient accuracy according to the most recent ISO
standards, and reduce measurement time significantly compared to
state-of-the-art methods.
Tree-Based Overlay Networks for Scalable Applications
The increasing availability of high-performance computing systems with thousands, tens of thousands, and even hundreds of thousands of computational nodes is driving the demand for programming models and infrastructures that allow effective use of such large-scale environments. Tree-based Overlay Networks (TBŌNs) have proven to provide such a model for distributed tools like performance profilers, parallel debuggers, system monitors and system administration tools. We demonstrate that the extensibility and flexibility of the TBŌN distributed computing model, along with its performance characteristics, make it surprisingly general, particularly for applications outside the tool domain. We describe many interesting applications and commonly used algorithms for which TBŌNs are well-suited and provide a new (non-tool) case study, a distributed implementation of the mean-shift algorithm commonly used in computer vision to delineate arbitrarily shaped clusters in complex, multi-modal feature spaces.
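One reason mean shift fits a tree-based overlay is that each update is a ratio of sums over data points, and sums are associative: leaves can emit partial (weighted sum, total weight) pairs that internal tree nodes merge in any shape. A minimal single-process sketch of that reduction, with illustrative names not taken from the paper:

```python
import numpy as np

def partial_aggregate(points, x, bandwidth=1.0):
    """Leaf-side work: kernel-weighted sum and total weight for one data shard."""
    points = np.asarray(points, dtype=float)
    w = np.exp(-((points - x) ** 2) / (2 * bandwidth ** 2))
    return w @ points, w.sum()

def merge(a, b):
    """Internal-node work: combine two partial aggregates.
    Associativity is what lets the tree apply this at every level."""
    return a[0] + b[0], a[1] + b[1]

def tree_mean_shift_step(shards, x, bandwidth=1.0):
    """One mean-shift update computed as a pairwise tree reduction over shards."""
    parts = [partial_aggregate(s, x, bandwidth) for s in shards]
    while len(parts) > 1:  # each pass = one level of the reduction tree
        parts = [merge(parts[i], parts[i + 1]) if i + 1 < len(parts) else parts[i]
                 for i in range(0, len(parts), 2)]
    num, den = parts[0]
    return num / den
```

Because `merge` is associative and commutative, the result is independent of the tree topology, so the overlay is free to shape the reduction tree for fan-in and load balance rather than correctness.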