Generating Surface Geometry in Higher Dimensions using Local Cell Tilers
In two dimensions, contour elements surround two-dimensional objects; in three dimensions, surfaces surround three-dimensional objects; and in four dimensions, hypersurfaces surround hyperobjects. These surfaces can be represented by a collection of connected simplices; hence, a continuous n-dimensional surface can be represented by a lattice of connected (n-1)-dimensional simplices. The lattice of connected simplices can be calculated over a set of adjacent n-dimensional cubes, for example via the Marching Cubes algorithm. Such algorithms are often called local cell tilers. We propose that the local-cell-tiling method can be usefully applied to four dimensions and potentially to N dimensions. We present an algorithm for the generation of major cases (cases that are topologically invariant under standard geometrical transformations) and introduce the notion of a sub-case, which simplifies their representation. Each sub-case can be easily subdivided into simplices for rendering, and we describe a backtracking tetrahedronization algorithm for the four-dimensional case. An implementation for surfaces from the fourth dimension is presented, and we describe and discuss ambiguities inherent in this and related algorithms.
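The cell-classification step common to all local cell tilers can be sketched in two dimensions (marching squares), the simplest instance of the family the abstract generalizes. The function and the partial case table below are illustrative assumptions, not the paper's four-dimensional tables:

```python
# Illustrative sketch of a local cell tiler in 2D (marching squares):
# classify a cell by which corners lie inside the object, then look up
# the contour segments for that case. Only a few of the 16 cases are
# shown; the table entries here are for illustration only.

def cell_case(corner_values, iso):
    """Pack the inside/outside state of the 4 cell corners into a 4-bit index."""
    case = 0
    for bit, v in enumerate(corner_values):  # corners ordered (00, 10, 11, 01)
        if v >= iso:
            case |= 1 << bit
    return case

# Each case maps to pairs of cell edges (numbered 0..3 around the cell)
# that the contour crosses.
SEGMENTS = {
    0b0000: [],        # all corners outside: no contour in this cell
    0b0001: [(3, 0)],  # one corner inside: one segment cuts the corner
    0b0011: [(3, 1)],  # two adjacent corners inside: segment spans the cell
    0b1111: [],        # all corners inside: no contour in this cell
}

print(cell_case([1.0, 0.2, 0.1, 0.3], iso=0.5))  # only corner 0 inside -> case 1
```

In three and four dimensions the same index-then-lookup structure applies, with 2^8 and 2^16 raw cases respectively, which is why the paper's reduction to topologically invariant major cases and sub-cases matters.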
A Parallel Rendering Algorithm for MIMD Architectures
Applications such as animation and scientific visualization demand high-performance rendering of complex three-dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software that effectively use the hardware parallelism. A rendering algorithm targeted to distributed-memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared-memory architectures as well.
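One common way to realize the pixel-level parallelism the abstract mentions is a static, interleaved assignment of framebuffer tiles to processors, which balances load when scene complexity varies across the screen. The sketch below is a generic illustration under that assumption, not the paper's iPSC/860 scheme:

```python
# Hedged sketch of pixel-level parallelism: statically assign framebuffer
# tiles to processors in round-robin (interleaved) order. Tile size and
# processor count are illustrative values, not taken from the paper.

def assign_tiles(width, height, tile, nprocs):
    """Return a dict: processor id -> list of (x, y) tile origins."""
    assignment = {p: [] for p in range(nprocs)}
    i = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            assignment[i % nprocs].append((x, y))
            i += 1
    return assignment

tiles = assign_tiles(width=256, height=256, tile=64, nprocs=4)
# 16 tiles spread evenly across 4 processors
print([len(v) for v in tiles.values()])  # -> [4, 4, 4, 4]
```

Interleaving adjacent tiles across different processors is a simple hedge against hot spots, since neighboring screen regions tend to have similar rendering cost.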
Image Understanding and Robotics Research at Columbia University
The research investigations of the Vision/Robotics Laboratory at Columbia University reflect the diversity of interests of its four faculty members, two staff programmers, and 15 Ph.D. students. Several of the projects involve either a visiting computer science post-doc, other faculty members in the department or the university, or researchers at AT&T Bell Laboratories or Philips Laboratories. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative.
Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a videotape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August 1988, was co-edited by one of us (John Kender [27]). And the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1989, is co-program chaired by one of us (John Kender [23]).
Efficient Evaluation of the Number of False Alarm Criterion
This paper proposes a method for efficiently computing the significance of a parametric pattern inside a binary image. On the one hand, a-contrario strategies avoid user involvement in tuning detection thresholds and allow one to account fairly for different pattern sizes. On the other hand, a-contrario criteria become intractable as the pattern complexity, in terms of parametrization, increases. In this work, we introduce a strategy that relies on a cumulative space of reduced dimensionality, derived from coupling a classic (Hough) cumulative space with an integral-histogram trick. This space allows us to store the partial computations required by the a-contrario criterion and to evaluate the significance at a lower computational cost than a straightforward approach. The method is illustrated on synthetic examples with patterns of various parametrizations up to five dimensions. To demonstrate how to apply this generic concept in a real scenario, we consider a difficult crack-detection task in still images, which has been addressed in the literature with various local and global detection strategies. We model cracks as bounded segments, detected by the proposed a-contrario criterion, which allows us to introduce additional spatial constraints based on their relative alignment. On this application, the proposed strategy yields state-of-the-art results and underlines its potential for handling complex pattern-detection tasks.
Spin Glasses on the Hypercube
We present a mean-field model for spin glasses with a natural notion of distance built in, namely, the Edwards-Anderson model on the diluted D-dimensional unit hypercube in the limit of large D. We show that finite-D effects depend strongly on the connectivity, being much smaller for a fixed coordination number. We solve the nontrivial problem of generating these lattices. Afterwards, we numerically study the nonequilibrium dynamics of the mean-field spin glass. Our three main findings are: (i) the dynamics is ruled by an infinite number of time sectors; (ii) the aging dynamics consists in the growth of coherent domains with a non-vanishing surface-to-volume ratio; and (iii) the propagator in Fourier space follows the p^4 law. We also study finite-D effects in the nonequilibrium dynamics, finding that a naive finite-size scaling ansatz works surprisingly well. Comment: 14 pages, 22 figures.
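The geometry underlying the model can be made concrete: vertices of the D-dimensional unit hypercube are D-bit integers, and two vertices are neighbors iff they differ in exactly one bit. The sketch below generates that lattice and a simple random dilution keeping each edge with probability z/D (average coordination z); the harder fixed-coordination-number construction the paper solves is not reproduced here:

```python
import random

# Hypercube lattice as bit arithmetic: vertex v in {0, ..., 2^D - 1},
# neighbors obtained by flipping one of the D bits.

def neighbors(vertex, D):
    """All D hypercube neighbors of a vertex, one per flipped bit."""
    return [vertex ^ (1 << d) for d in range(D)]

def diluted_edges(D, z, seed=0):
    """Keep each hypercube edge independently with probability z/D,
    giving average coordination z (a simple dilution, not the paper's
    fixed-coordination-number ensemble)."""
    rng = random.Random(seed)
    edges = set()
    for v in range(1 << D):
        for w in neighbors(v, D):
            if v < w and rng.random() < z / D:  # count each edge once
                edges.add((v, w))
    return edges

print(len(neighbors(0, 8)))  # -> 8 neighbors in D = 8
```

At fixed z the fraction z/D of surviving edges shrinks as D grows, which is how the model stays finitely connected in the large-D mean-field limit.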
SU(2) Lattice Gauge Theory Simulations on Fermi GPUs
In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, NVIDIA's Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (the G200 and Fermi architectures) are also presented. To obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T, and for the Wilson loop. We also present results for the potential using many configurations () without smearing and almost configurations with APE smearing. With two Fermi GPUs we have achieved excellent performance over one CPU, around 110 Gflops/s in single precision. We also find that, using the Fermi architecture, double-precision computations for the static quark-antiquark potential are not much slower (less than slower) than single-precision computations. Comment: 20 pages, 11 figures, 3 tables; accepted in Journal of Computational Physics.
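A representation detail that matters for GPU memory bandwidth, and is commonly used in SU(2) codes of this kind (though the abstract does not spell it out), is that an SU(2) element is four real numbers (a0, a1, a2, a3) of unit norm, group multiplication is quaternion multiplication, and Tr U = 2*a0. The plaquette observable then reduces to a few such products; names and conventions below are illustrative assumptions:

```python
# Hedged sketch: SU(2) elements stored as unit quaternions (a0, a1, a2, a3),
# so a link costs 4 reals instead of 8 for a full complex 2x2 matrix.
# Conventions (corner ordering, sign convention) are illustrative only.

def su2_mul(u, v):
    """Quaternion (= SU(2)) multiplication."""
    a0, a1, a2, a3 = u
    b0, b1, b2, b3 = v
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 + a2*b0 + a3*b1 - a1*b3,
            a0*b3 + a3*b0 + a1*b2 - a2*b1)

def su2_dagger(u):
    """Hermitian conjugate = quaternion conjugate for unit quaternions."""
    a0, a1, a2, a3 = u
    return (a0, -a1, -a2, -a3)

def plaquette_trace(u_mu, u_nu, u_mu_shift, u_nu_shift):
    """Re Tr of U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag = 2*a0 of the product."""
    p = su2_mul(su2_mul(u_mu, u_nu_shift),
                su2_mul(su2_dagger(u_mu_shift), su2_dagger(u_nu)))
    return 2.0 * p[0]

identity = (1.0, 0.0, 0.0, 0.0)
print(plaquette_trace(identity, identity, identity, identity))  # -> 2.0 for unit links
```

On a GPU the payoff of this parametrization is halved global-memory traffic per link, which is exactly the kind of memory-hierarchy optimization the abstract says the code must exploit.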