Enhancement in Visualization of Parallel Coordinates using Curves
In this paper I analyse techniques for visualizing large data sets with parallel coordinates. Parallel coordinates is an interesting method used widely around the world, not only in research but also in fields such as business, marketing, and finance. The aim of this work is to implement parallel coordinates and to refine them using curves. In standard parallel coordinates, a data set is visualized using straight lines; here, the lines are replaced with collections of smooth curves across the attribute axes, allowing individual data elements to be traced under conditions normally made impossible by the “crossing problem”. The notion of spreading out points on axes with few discrete values is then introduced, which leads to a simple filtering technique when the user selects values on such an axis. Parallel coordinates were proposed by Alfred Inselberg as a new way to represent multidimensional information. A parallel-coordinates visualization assigns one vertical axis to each variable and spaces these axes evenly in the horizontal direction. This is in contrast to the traditional Cartesian coordinate system, in which all axes are mutually perpendicular. By drawing the axes parallel to one another, one can represent data in far more than three dimensions. Each variable is plotted on its own axis, and the values of the variables on adjacent axes are connected by straight lines. Thus, a point in an n-dimensional space becomes a polygonal line laid out across the n parallel axes, with n-1 line segments connecting the n data values. In this way, the search for relations among the variables is transformed into a 2-D pattern-recognition problem, and the variables become amenable to visualization.
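The core mapping described in the abstract — one evenly spaced vertical axis per variable, with an n-dimensional point becoming a polyline of n-1 segments — can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the function name and the per-axis min-max normalization are my own choices:

```python
import numpy as np

def parallel_coordinates_polyline(point, mins, maxs):
    """Map one n-dimensional data point to a polyline across n evenly
    spaced vertical axes (axis i placed at x = i), in the style of
    Inselberg's parallel coordinates.  Each value is min-max
    normalized to [0, 1] so every axis spans the same vertical range."""
    point = np.asarray(point, dtype=float)
    mins = np.asarray(mins, dtype=float)
    maxs = np.asarray(maxs, dtype=float)
    y = (point - mins) / (maxs - mins)      # normalized height on each axis
    x = np.arange(len(point), dtype=float)  # evenly spaced axis positions
    # n data values -> n vertices -> n-1 connecting line segments
    return list(zip(x, y))

# a 3-dimensional point becomes 3 vertices joined by 2 segments
verts = parallel_coordinates_polyline([3.0, 10.0, 0.5], [0, 0, 0], [6, 20, 1])
```

The curve refinement the abstract proposes would then replace each pair of consecutive vertices with a smooth interpolating curve instead of a straight segment, so that lines crossing between two axes remain individually traceable.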
Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy
In this paper we present a simple and robust method for self-correction of
camera distortion using single images of scenes which contain straight lines.
Since the most common distortion can be modelled as radial distortion, we
illustrate the method using the Harris radial distortion model, but the method
is applicable to any distortion model. The method is based on transforming the
edgels of the distorted image to a 1-D angular Hough space, and optimizing the
distortion correction parameters which minimize the entropy of the
corresponding normalized histogram. Properly corrected imagery will have fewer
curved lines, and therefore less spread in Hough space. Since the method does
not rely on any image structure beyond the existence of edgels sharing some
common orientations and does not use edge fitting, it is applicable to a wide
variety of image types. For instance, it can be applied equally well to images
of texture with weak but dominant orientations, or images with strong vanishing
points. Finally, the method is performed on both synthetic and real data
revealing that it is particularly robust to noise.

Comment: 9 pages, 5 figures. Corrected errors in equation 1.
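The objective the abstract minimizes — the entropy of the normalized histogram of edgel orientations — is easy to sketch. This is a hedged illustration, not the paper's code: the function name, bin count, and angle convention are my own assumptions, and a full self-calibration would wrap this in a search over the distortion-model parameters:

```python
import numpy as np

def orientation_entropy(angles, n_bins=180):
    """Shannon entropy of the normalized histogram of edgel
    orientations (a 1-D angular Hough space, folded to [0, pi)).
    Correctly undistorted images have straighter lines, which
    concentrate orientations into fewer bins and lower the entropy."""
    hist, _ = np.histogram(np.mod(angles, np.pi), bins=n_bins,
                           range=(0.0, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]                 # drop empty bins: 0 * log 0 := 0
    return float(-np.sum(p * np.log(p)))

# all edgels sharing one orientation: minimal (zero) entropy
aligned = orientation_entropy(np.zeros(1000))
# orientations spread uniformly: entropy near the maximum, log(n_bins)
spread = orientation_entropy(np.linspace(0.0, np.pi, 1000, endpoint=False))
```

In the method described above, the distortion correction parameters (e.g. of the Harris radial model) would be optimized so that this entropy, computed on the undistorted edgels, is minimal.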
Scaling detection in time series: diffusion entropy analysis
The methods currently used to determine the scaling exponent of a complex
dynamic process described by a time series are based on the numerical
evaluation of variance. This means that all of them can be safely applied only
to the case where ordinary statistical properties hold true even if strange
kinetics are involved. We illustrate a method of statistical analysis based on
the Shannon entropy of the diffusion process generated by the time series,
called Diffusion Entropy Analysis (DEA). We adopt artificial Gaussian and Lévy
time series as prototypes of ordinary and anomalous statistics, respectively,
and we analyse them with the DEA and four ordinary methods of analysis, some of
which are very popular. We show that the DEA determines the correct scaling
exponent even when the statistical properties, as well as the dynamic
properties, are anomalous. The other four methods produce correct results in
the Gaussian case but fail to detect the correct scaling in the case of Lévy
statistics.

Comment: 21 pages, 10 figures, 1 table.
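The DEA procedure sketched in the abstract — build the diffusion variable by summing the series over windows of length t, histogram it with a fixed cell size, and read the scaling exponent delta from S(t) = A + delta ln t — can be illustrated with a minimal numpy sketch. This is my own simplified rendering, not the paper's code; the function name, the fixed bin width, and the window lengths are assumptions:

```python
import numpy as np

def diffusion_entropy(series, window_lengths, bin_width=0.5):
    """Diffusion Entropy Analysis sketch: for each window length t,
    sum the series over all overlapping windows to form the diffusion
    variable x(t), then compute the Shannon entropy S(t) of its
    histogram taken with a FIXED cell size.  For a scaling process
    S(t) = A + delta * ln(t), so the slope of S versus ln(t)
    estimates the scaling exponent delta."""
    series = np.asarray(series, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(series)))
    entropies = []
    for t in window_lengths:
        x = csum[t:] - csum[:-t]            # overlapping window sums
        edges = np.arange(x.min(), x.max() + bin_width, bin_width)
        hist, _ = np.histogram(x, bins=edges)
        p = hist / hist.sum()
        p = p[p > 0]                        # 0 * log 0 := 0
        entropies.append(float(-np.sum(p * np.log(p))))
    return np.array(entropies)

# ordinary (Gaussian) diffusion should give delta close to 0.5
rng = np.random.default_rng(0)
gauss = rng.standard_normal(50_000)
ts = np.array([10, 20, 40, 80, 160])
S = diffusion_entropy(gauss, ts)
delta = np.polyfit(np.log(ts), S, 1)[0]
```

The fixed cell size matters: with a bin count that rescales with the spread of x(t), the ln(t) growth of the entropy would be cancelled and the slope lost.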
The curvelet transform for image denoising
We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
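The denoising step the abstract relies on — simple thresholding of transform coefficients — can be illustrated without reimplementing the curvelet transform itself. The sketch below is my own stand-in: it uses a real FFT of a 1-D signal in place of the curvelet transform, purely to show how hard thresholding against the noise level in the transform domain suppresses white noise while keeping the sparse signal coefficients:

```python
import numpy as np

def hard_threshold(coeffs, sigma, k=3.0):
    """Zero every transform coefficient whose magnitude falls below
    k * sigma, where sigma is the noise standard deviation in the
    transform domain; coefficients above the threshold are kept."""
    out = coeffs.copy()
    out[np.abs(out) < k * sigma] = 0.0
    return out

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
clean = np.sin(2.0 * np.pi * 5.0 * t / n)       # signal sparse in Fourier
noisy = clean + 0.5 * rng.standard_normal(n)     # embed in white noise

coeffs = np.fft.rfft(noisy)
sigma_f = 0.5 * np.sqrt(n / 2.0)                 # white-noise std per rfft coefficient
denoised = np.fft.irfft(hard_threshold(coeffs, sigma_f), n)
```

The same logic applies coefficient-wise in the curvelet domain, where edges and curvilinear features (rather than pure frequencies) are the sparsely represented structures.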