Accelerated hardware video object segmentation: From foreground detection to connected components labelling
This is the preprint version of the article. Copyright © 2010 Elsevier. This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized because the number of run-lengths is typically less than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
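The run-length-based labelling idea can be sketched compactly in software. Below is a minimal Python sketch, assuming a binary foreground mask as input; the two-pass union-find over runs and all names are illustrative, not the paper's FPGA design:

```python
def run_length_encode(row):
    """Encode one binary row as (start, end) runs of foreground pixels."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def label_runs(mask):
    """Union-find connected component labelling over runs instead of
    pixels (8-connectivity); mask is a list of binary rows."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    labels, prev, next_id = [], [], 0
    for row in mask:
        cur = []
        for (s, e) in run_length_encode(row):
            rid, next_id = next_id, next_id + 1
            parent[rid] = rid
            # merge with any run in the previous row that touches this one
            for (ps, pe, pid) in prev:
                if ps <= e + 1 and pe >= s - 1:  # 8-connected overlap
                    union(rid, pid)
            cur.append((s, e, rid))
        labels.append(cur)
        prev = cur
    # second pass: resolve provisional ids to final labels
    return [[(s, e, find(rid)) for (s, e, rid) in row] for row in labels]
```

Working on runs rather than pixels is what makes the sequential merge step cheap: the inner loop touches one entry per run, not one per pixel.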
Fluid Data Compression and ROI Detection Using Run Length Method
It is difficult to visualize large-scale time-varying data directly, even with supercomputers. Data compression and ROI (Region of Interest) detection are often used to improve the efficiency of visualizing numerical data. Run Length encoding is well known as a good technique for compressing data in which the same sequence appears repeatedly, such as an image with little change or a set of smooth fluid data. Another advantage of Run Length encoding is that it can be applied to each dimension of the data separately, so the method can easily be implemented as a parallel processing algorithm. We propose two different Run Length based methods. When the Run Length method is used to compress a data set, the size may increase after compression if the data does not contain many repeated parts; we therefore apply compression only where the data can be compressed effectively. By checking the compression ratio, we can also detect ROIs. The effectiveness and efficiency of the proposed methods are demonstrated by comparison with several existing compression methods on different sets of fluid data.
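The compress-or-flag logic described here is simple to illustrate. A minimal Python sketch follows, assuming a 1-D block of values and an arbitrary threshold on the compression ratio (both the block layout and the threshold are assumptions, not the paper's parameters):

```python
def rle(values, tol=0.0):
    """Run-length encode a 1-D sequence; values within tol join a run."""
    out = []
    for v in values:
        if out and abs(out[-1][0] - v) <= tol:
            out[-1][1] += 1
        else:
            out.append([v, 1])  # [run value, run length]
    return out

def compress_or_flag(block, tol=1e-6, ratio_threshold=0.5):
    """Keep the RLE form only if it actually shrinks the block;
    otherwise flag the block as a region of interest (ROI)."""
    encoded = rle(block, tol)
    ratio = (2 * len(encoded)) / max(len(block), 1)  # 2 numbers per run
    if ratio <= ratio_threshold:
        return ("compressed", encoded)
    return ("roi", block)  # poorly compressible: likely rich structure
```

The same routine serves both purposes: a block that fails to compress is exactly a block where the field varies a lot, which is what the ROI detection exploits.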
A Quasi-Linear Time Algorithm Deciding Whether Weak Büchi Automata Reading Vectors of Reals Recognize Saturated Languages
This work considers weak deterministic Büchi automata reading encodings of non-negative vectors of reals in a fixed base. A saturated language is a language which contains all encodings of the elements of a set of vectors of reals. A Real Vector Automaton is an automaton which recognizes a saturated language. It is explained how to decide in quasi-linear time whether a minimal weak deterministic Büchi automaton is a Real Vector Automaton. The problem is solved for both of the standard encodings of vectors of numbers: the sequential encoding and the parallel encoding. The algorithm runs in linear time for minimal weak Büchi automata accepting sets of reals. Finally, the same problem is also solved for the parallel encoding of automata reading vectors of signed reals.
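For readers unfamiliar with the two encodings named in the abstract, a small Python sketch may help. The digit-extraction and padding conventions here are assumptions chosen for illustration, not the paper's exact definitions:

```python
def base2_digits(x, int_len, frac_len):
    """Most-significant-first base-2 digits of x >= 0: int_len integer
    digits followed by frac_len fractional digits (assumes x < 2**int_len)."""
    ip = int(x)
    digits = [(ip >> i) & 1 for i in range(int_len - 1, -1, -1)]
    frac = x - ip
    for _ in range(frac_len):
        frac *= 2
        digits.append(int(frac))
        frac -= int(frac)
    return digits

def parallel_encoding(vec, int_len=4, frac_len=4):
    """One tuple of digits per position: all components advance together."""
    cols = [base2_digits(x, int_len, frac_len) for x in vec]
    return list(zip(*cols))

def sequential_encoding(vec, int_len=4, frac_len=4):
    """Digits of the components interleaved one at a time, flattened."""
    cols = [base2_digits(x, int_len, frac_len) for x in vec]
    return [d for pos in zip(*cols) for d in pos]

# parallel_encoding([2.5, 1.25]) ->
#   [(0, 0), (0, 0), (1, 0), (0, 1), (1, 0), (0, 1), (0, 0), (0, 0)]
```

In the parallel encoding the automaton reads one tuple of digits per step (the alphabet is digit vectors); in the sequential encoding it reads single digits, cycling through the components.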
VolumeEVM: A new surface/volume integrated model
Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface rendering approach. However, integrating the advantages of surface rendering with the superior visual exploration offered by volume rendering would produce a much more complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. VolumeEVM maintains the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-based volumes of interest (VoIs). A function relating the interior voxels of the EVM to the set of densities had to be defined. This report presents the definition of this new surface/volume integrated model based on the well-known EVM encoding and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
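A minimal Python sketch of the surface/volume coupling just described, assuming a plain lexicographic scan order as the function relating interior voxels to the density list (the actual ordering function is a design point of the model, so this is only an illustrative stand-in):

```python
class VolumeEVMSketch:
    def __init__(self, inside, densities):
        # inside: nested 3-D boolean grid derived from the EVM boundary
        # densities: densities of interior voxels, here assumed to be
        # stored in lexicographic voxel order
        self.inside = inside
        self.densities = densities
        # map each interior voxel to its index in the density list
        self.offset, k = {}, 0
        for i, plane in enumerate(inside):
            for j, row in enumerate(plane):
                for l, v in enumerate(row):
                    if v:
                        self.offset[(i, j, l)] = k
                        k += 1

    def density(self, i, j, l):
        """Density of voxel (i, j, l); None if it lies outside the model."""
        idx = self.offset.get((i, j, l))
        return None if idx is None else self.densities[idx]
```

The point of the pairing is that the boundary model alone answers inside/outside queries, while the density list adds the per-voxel scalar data that direct volume rendering needs.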
Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization
Today's HPC applications are producing extremely large amounts of data, such
that data storage and analysis are becoming more challenging for scientific
research. In this work, we design a new error-controlled lossy compression
algorithm for large-scale scientific data. Our key contribution is
significantly improving the prediction hitting rate (or prediction accuracy)
for each data point based on its nearby data values along multiple dimensions.
We derive a series of multilayer prediction formulas and their unified formula
in the context of data compression. One serious challenge is that the data
prediction has to be performed based on the preceding decompressed values
during the compression in order to guarantee the error bounds, which may
degrade the prediction accuracy in turn. We explore the best layer for the
prediction by considering the impact of compression errors on the prediction
accuracy. Moreover, we propose an adaptive error-controlled quantization
encoder, which can further improve the prediction hitting rate considerably.
The data size can be reduced significantly after performing the variable-length
encoding because of the uneven distribution produced by our quantization
encoder. We evaluate the new compressor on production scientific data sets and
compare it with many other state-of-the-art compressors: GZIP, FPZIP, ZFP,
SZ-1.1, and ISABELA. Experiments show that our compressor is the best in class,
especially with regard to compression factors (or bit-rates) and compression
errors (including RMSE, NRMSE, and PSNR). Our solution is better than the
second-best solution by more than a 2x increase in the compression factor and
3.8x reduction in the normalized root mean squared error on average, with
reasonable error bounds and user-desired bit-rates.

Comment: Accepted by IPDPS'17, 11 pages, 10 figures, double column
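The prediction-plus-quantization pipeline can be illustrated with a toy version. Below is a hedged Python sketch using a one-layer 2-D Lorenzo-style predictor in place of the paper's multilayer formulas; the names and the bin layout are assumptions:

```python
import numpy as np

def compress(data, err_bound):
    """Return integer quantization codes for a 2-D float array; each
    decoded value is guaranteed to be within err_bound of the original."""
    dec = np.zeros(data.shape)              # decompressed values so far
    codes = np.zeros(data.shape, dtype=np.int64)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            # predict from preceding *decompressed* values, as required
            # to keep compressor and decompressor in lockstep
            a = dec[i - 1, j] if i > 0 else 0.0
            b = dec[i, j - 1] if j > 0 else 0.0
            c = dec[i - 1, j - 1] if i > 0 and j > 0 else 0.0
            pred = a + b - c                # one-layer Lorenzo prediction
            # linear quantization: bins of width 2 * err_bound, so the
            # reconstruction error never exceeds err_bound
            q = int(np.round((data[i, j] - pred) / (2 * err_bound)))
            codes[i, j] = q
            dec[i, j] = pred + q * 2 * err_bound
    return codes
```

When the predictor hits, the codes cluster heavily around zero, which is the uneven distribution that makes the subsequent variable-length (entropy) encoding effective.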