Dimensionality and design of isotropic interactions that stabilize honeycomb, square, simple cubic, and diamond lattices
We use inverse methods of statistical mechanics and computer simulations to
investigate whether an isotropic interaction designed to stabilize a given
two-dimensional (2D) lattice will also favor an analogous three-dimensional
(3D) structure, and vice versa. Specifically, we determine the 3D ordered
lattices favored by isotropic potentials optimized to exhibit stable 2D
honeycomb (or square) periodic structures, as well as the 2D ordered structures
favored by isotropic interactions designed to stabilize 3D diamond (or simple
cubic) lattices. We find a remarkable `transferability' of isotropic potentials
designed to stabilize analogous morphologies in 2D and 3D, irrespective of the
exact interaction form, and we discuss the basis of this cross-dimensional
behavior. Our results suggest that the discovery of interactions that drive
assembly into certain 3D periodic structures of interest can be assisted by
less computationally intensive optimizations targeting the analogous 2D
lattices.
Comment: 22 pages (preprint version; includes supplementary information), 5 figures, 3 tables
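As a rough illustration of the kind of lattice comparison involved, the sketch below evaluates the lattice-sum energy per particle of a 2D honeycomb lattice against a triangular lattice at equal density under a hypothetical isotropic repulsive-shoulder pair potential. The potential form, cutoff, and densities are illustrative assumptions, not the optimized interactions from the paper:

```python
import numpy as np

def lattice_points(a1, a2, basis, n=8):
    """Points of a periodic 2D lattice: integer combinations of the
    lattice vectors a1, a2, plus each basis offset."""
    pts = [i * a1 + j * a2 + b
           for i in range(-n, n + 1)
           for j in range(-n, n + 1)
           for b in basis]
    return np.array(pts)

def energy_per_particle(points, pair_potential, r_cut=6.0):
    """Half the lattice-sum pair energy felt by the particle at the origin."""
    r = np.linalg.norm(points, axis=1)
    r = r[(r > 1e-9) & (r < r_cut)]      # exclude self, apply cutoff
    return 0.5 * pair_potential(r).sum()

# Hypothetical repulsive-shoulder potential, a stand-in for the optimized
# interactions discussed in the abstract (NOT the paper's actual potential).
def shoulder(r):
    return r**-12 + 0.5 / (1.0 + np.exp(10.0 * (r - 1.4)))

# Honeycomb: triangular Bravais lattice with a two-point basis (bond length 1).
a1 = np.array([np.sqrt(3.0), 0.0])
a2 = np.array([np.sqrt(3.0) / 2.0, 1.5])
basis = [np.zeros(2), np.array([np.sqrt(3.0) / 2.0, 0.5])]
hc = lattice_points(a1, a2, basis)

# Triangular lattice at the same number density (spacing sqrt(1.5) here).
d = np.sqrt(1.5)
tri = lattice_points(np.array([d, 0.0]),
                     np.array([d / 2.0, d * np.sqrt(3.0) / 2.0]),
                     [np.zeros(2)])

e_hc = energy_per_particle(hc, shoulder)
e_tri = energy_per_particle(tri, shoulder)
```

Inverse-design methods of the kind described above search over the parameters of such a potential until the target lattice's energy (or free energy) beats the competing structures at the density of interest.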
Stencil Computation with Vector Outer Product
Current architectures are equipped with matrix computation units to accelerate AI and high-performance computing applications. Matrix multiplication and the vector outer product are two basic instruction types; the latter is lighter-weight, since its inputs are vectors. It therefore offers more flexibility for developing algorithms beyond dense linear algebra and more opportunities to optimize the implementation.
Stencil computations represent a common class of nested loops in scientific and
engineering applications. This paper proposes a novel stencil algorithm using
vector outer products. Unlike previous work, the new algorithm arises from the
stencil definition in the scatter mode and is initially expressed with formulas
of vector outer products. The implementation incorporates a set of
optimizations to improve the memory reference pattern, execution pipeline and
data reuse by considering various algorithmic options and the data sharing
between input vectors. Evaluation on a simulator shows that our design achieves a substantial speedup over a vectorized stencil algorithm.
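As a minimal sketch of the scatter-mode idea (using an assumed separable 3x3 stencil as the example, not the paper's actual kernels or hardware instructions), each input element contributes a rank-1 outer-product update to its output neighborhood:

```python
import numpy as np

def stencil_outer_scatter(x, cx, cy):
    """Scatter-mode stencil: each input element x[i, j] scatters the rank-1
    update x[i, j] * outer(cx, cy) onto its output neighborhood. For a
    separable stencil this reproduces the usual gather-mode result."""
    rx, ry = len(cx) // 2, len(cy) // 2
    out = np.zeros((x.shape[0] + 2 * rx, x.shape[1] + 2 * ry))
    tap = np.outer(cx, cy)                       # rank-1 stencil coefficients
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i:i + len(cx), j:j + len(cy)] += x[i, j] * tap
    return out[rx:-rx, ry:-ry]                   # interior (same-size) region
```

On hardware with a vector outer-product instruction, the `x[i, j] * tap` update maps naturally onto that unit; the optimizations mentioned in the abstract (memory reference pattern, pipeline, data reuse) would then decide how the loop nest is blocked and which input vectors are shared.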
Enabling Neural Radiance Fields (NeRF) for Large-scale Aerial Images -- A Multi-tiling Approach and the Geometry Assessment of NeRF
Neural Radiance Fields (NeRF) offer the potential to benefit 3D
reconstruction tasks, including aerial photogrammetry. However, the scalability
and accuracy of the inferred geometry are not well-documented for large-scale
aerial assets, since such datasets usually result in very high memory consumption and slow convergence. In this paper, we aim to scale NeRF to large-scale aerial datasets and provide a thorough geometry assessment of NeRF.
Specifically, we introduce a location-specific sampling technique as well as a
multi-camera tiling (MCT) strategy that reduces RAM consumption during image loading and GPU memory consumption during representation training, and increases the convergence rate within tiles. MCT decomposes a large-frame image into multiple
tiled images with different camera models, allowing these small-frame images to
be fed into the training process as needed for specific locations without a
loss of accuracy. We implement our method on a representative approach,
Mip-NeRF, and compare its geometric performance with three photogrammetric MVS
pipelines on two typical aerial datasets against LiDAR reference data. Both
qualitative and quantitative results suggest that the proposed NeRF approach
produces better completeness and object details than traditional approaches,
although, as of now, it still falls short in terms of accuracy.
Comment: 9 figures
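A minimal sketch of the tiling idea above, using a hypothetical pinhole camera model (the paper's MCT implementation is more involved): cropping a pinhole image only shifts the principal point, so each tile can be treated as an independent small-frame camera and loaded into training only when its location is sampled:

```python
import numpy as np

def tile_camera(K, tile_x, tile_y):
    """Intrinsics of a sub-image (tile) cut from a large frame. For a pinhole
    model, cropping shifts the principal point; focal length is unchanged,
    so the tiles stay geometrically consistent with the original frame."""
    Kt = K.copy()
    Kt[0, 2] -= tile_x   # shift principal point by the tile's origin
    Kt[1, 2] -= tile_y
    return Kt

def split_into_tiles(image, K, tile_w, tile_h):
    """Decompose a large-frame image into tiles, each with its own camera."""
    tiles = []
    H, W = image.shape[:2]
    for y in range(0, H, tile_h):
        for x in range(0, W, tile_w):
            sub = image[y:y + tile_h, x:x + tile_w]
            tiles.append((sub, tile_camera(K, x, y)))
    return tiles
```

Because each tile carries its own (shifted) intrinsics, rays cast for a tile are identical to the rays the full frame would have produced, which is why the decomposition loses no accuracy.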
Parameterizing Quasiperiodicity: Generalized Poisson Summation and Its Application to Modified-Fibonacci Antenna Arrays
The fairly recent discovery of "quasicrystals", whose X-ray diffraction
patterns reveal certain peculiar features which do not conform with spatial
periodicity, has motivated studies of the wave-dynamical implications of
"aperiodic order". Within the context of the radiation properties of antenna
arrays, an instructive novel (canonical) example of wave interactions with
quasiperiodic order is illustrated here for one-dimensional (1-D) array
configurations based on the "modified-Fibonacci" sequence, with utilization of
a two-scale generalization of the standard Poisson summation formula for
periodic arrays. This allows for a "quasi-Floquet" analytic parameterization of
the radiated field, which provides instructive insights into some of the basic
wave mechanisms associated with quasiperiodic order, highlighting similarities
and differences with the periodic case. Examples are shown for quasiperiodic
infinite and spatially-truncated arrays, with brief discussion of computational
issues and potential applications.
Comment: 29 pages, 10 figures. To be published in IEEE Trans. Antennas Propagat., vol. 53, No. 6, June 200
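The array geometry described above can be sketched as follows. The two interval lengths and the wavelength below are illustrative assumptions (the paper's modified-Fibonacci sequence is parameterized more generally); the array factor is the standard sum of phased element contributions:

```python
import numpy as np

def fibonacci_word(n_iter):
    """Binary Fibonacci word via the substitution rule A -> AB, B -> A."""
    w = "A"
    for _ in range(n_iter):
        w = "".join("AB" if c == "A" else "A" for c in w)
    return w

def element_positions(word, d_a, d_b):
    """Element positions along the line: cumulative sums of the two
    inter-element spacings, one per letter of the word."""
    steps = np.array([d_a if c == "A" else d_b for c in word])
    return np.concatenate(([0.0], np.cumsum(steps)[:-1]))

def array_factor(x, u, wavelength=1.0):
    """Array factor F(u) = sum_n exp(j k x_n u), with u = sin(theta)."""
    k = 2.0 * np.pi / wavelength
    return np.exp(1j * k * np.outer(u, x)).sum(axis=1)

# Illustrative two-scale array: spacings 1 and ~1/tau (tau = golden ratio).
positions = element_positions(fibonacci_word(8), 1.0, 0.618)
```

The quasi-Floquet parameterization in the paper replaces the brute-force sum above with a two-scale Poisson-summation expansion, which is what exposes the Bragg-like discrete spectrum of such arrays analytically.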
Analytical cost metrics: days of future past
2019 Summer. Includes bibliographical references. Future exascale high-performance computing (HPC) systems are expected to be increasingly heterogeneous, consisting of several multi-core CPUs and a large number of accelerators: special-purpose hardware that increases the computing power of the system in a very energy-efficient way. Specialized, energy-efficient accelerators are also an important component in many diverse systems beyond HPC: gaming machines, general-purpose workstations, tablets, phones, and other media devices. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has expanded to also incorporate power/energy efficiency. This work builds analytical cost models for metrics such as time, energy, memory access, and silicon area. These models are used to predict application performance, to guide performance tuning, and to inform chip design. The idea is to work with domain-specific accelerators, where analytical cost models can be applied accurately for performance optimization. The performance-optimization problems are formulated as mathematical optimization problems. This work explores the analytical cost modeling and mathematical optimization approach in several ways. For stencil applications and GPU architectures, analytical cost models are developed for execution time as well as energy. The models are used for performance tuning on existing architectures, and are coupled with silicon-area models of GPU architectures to generate highly efficient architecture configurations. For matrix chain products, analytical closed-form solutions for off-chip data movement are derived and used to minimize the total data-movement cost of a minimum-op-count tree.
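A toy example of an analytical cost model of the kind described above, in roofline style; all machine parameters in the usage are made-up placeholders, not measurements of any real device:

```python
def time_model(n_points, flops_per_point, bytes_per_point,
               peak_flops, bandwidth):
    """Execution-time estimate as the max of the compute-bound and
    memory-bound terms (a roofline-style analytical model)."""
    t_compute = n_points * flops_per_point / peak_flops
    t_memory = n_points * bytes_per_point / bandwidth
    return max(t_compute, t_memory)

def energy_model(n_points, flops_per_point, bytes_per_point,
                 energy_per_flop, energy_per_byte):
    """Energy estimate: per-operation energies times operation counts."""
    return n_points * (flops_per_point * energy_per_flop
                       + bytes_per_point * energy_per_byte)

# Placeholder machine: 1 TFLOP/s peak, 100 GB/s bandwidth; a stencil doing
# 10 flops and moving 8 bytes per grid point is memory-bound on it.
t = time_model(10**6, 10, 8, 1e12, 1e11)
```

Because both models are closed-form in the tuning parameters (tile sizes, per-point flop and byte counts), minimizing them becomes a mathematical optimization problem, which is the approach the work takes.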
Real Time Airborne Monitoring for Disaster and Traffic Applications
Remote sensing applications such as disaster or mass-event monitoring need the acquired data and extracted information within a very short time span. Airborne sensors can acquire the data quickly, and on-board processing combined with a data downlink is the fastest way to meet this requirement. For this purpose, a new low-cost airborne frame camera system, named the 3K-camera, has been developed at the German Aerospace Center (DLR). The pixel size and swath width range from 15 cm to 50 cm and from 2.5 km to 8 km, respectively. Within two minutes, an area of approximately 10 km x 8 km can be monitored. Image data are processed on board on five computers using data from a real-time GPS/IMU system, including direct georeferencing. Due to the high-frequency image acquisition (3 images per second), moving objects such as vehicles and people can be monitored, enabling detailed wide-area traffic monitoring.
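The quoted coverage figure can be checked with simple arithmetic; the ground speed below is an assumed value, roughly typical for a survey aircraft, not a number from the abstract:

```python
def area_covered_km2(swath_km, ground_speed_mps, seconds):
    """Monitored area = swath width x along-track distance flown."""
    along_track_km = ground_speed_mps * seconds / 1000.0
    return swath_km * along_track_km

# With the maximum 8 km swath and an assumed ~83 m/s ground speed, two
# minutes of flight covers roughly the quoted 10 km x 8 km area.
area = area_covered_km2(8.0, 83.0, 120.0)   # ~79.7 km^2
```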
Robust Machine Learning Applied to Astronomical Datasets I: Star-Galaxy Classification of the SDSS DR3 Using Decision Trees
We provide classifications for all 143 million non-repeat photometric objects
in the Third Data Release of the Sloan Digital Sky Survey (SDSS) using decision
trees trained on 477,068 objects with SDSS spectroscopic data. We demonstrate
that these star/galaxy classifications are expected to be reliable for
approximately 22 million objects with r < ~20. The general machine learning
environment Data-to-Knowledge and supercomputing resources enabled extensive
investigation of the decision tree parameter space. This work presents the
first public release of objects classified in this way for an entire SDSS data
release. The objects are classified as either galaxy, star or nsng (neither
star nor galaxy), with an associated probability for each class. To demonstrate
how to effectively make use of these classifications, we perform several
important tests. First, we detail selection criteria within the probability
space defined by the three classes to extract samples of stars and galaxies to
a given completeness and efficiency. Second, we investigate the efficacy of the
classifications and the effect of extrapolating from the spectroscopic regime
by performing blind tests on objects in the SDSS, 2dF Galaxy Redshift and 2dF
QSO Redshift (2QZ) surveys. Given the photometric limits of our spectroscopic
training data, we effectively begin to extrapolate past our star-galaxy
training set at r ~ 18. By comparing the number counts of our training sample
with the classified sources, however, we find that our efficiencies appear to
remain robust to r ~ 20. As a result, we expect our classifications to be
accurate for 900,000 galaxies and 6.7 million stars, and remain robust via
extrapolation for a total of 8.0 million galaxies and 13.9 million stars.
[Abridged]
Comment: 27 pages, 12 figures, to be published in ApJ, uses emulateapj.cl
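A self-contained miniature of the approach, with a hand-rolled CART-style tree on synthetic data standing in for the Data-to-Knowledge environment and the real SDSS photometry: leaves store class frequencies, which play the role of the per-class probabilities described above:

```python
import numpy as np

def gini(y, classes):
    """Gini impurity of a label array."""
    p = np.array([(y == c).mean() for c in classes])
    return 1.0 - (p ** 2).sum()

def best_split(X, y, classes):
    """Exhaustive axis-aligned split minimizing weighted Gini impurity."""
    best_f, best_t, best_score = None, None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all():            # degenerate split, skip
                continue
            score = (left.mean() * gini(y[left], classes)
                     + (~left).mean() * gini(y[~left], classes))
            if score < best_score:
                best_f, best_t, best_score = f, t, score
    return best_f, best_t, best_score

def build_tree(X, y, classes, depth=3):
    """Recursive tree; leaves store class probabilities (class frequencies)."""
    probs = np.array([(y == c).mean() for c in classes])
    if depth == 0:
        return ("leaf", probs)
    f, t, score = best_split(X, y, classes)
    if f is None or score >= gini(y, classes):
        return ("leaf", probs)        # no split improves impurity
    left = X[:, f] <= t
    return ("node", f, t,
            build_tree(X[left], y[left], classes, depth - 1),
            build_tree(X[~left], y[~left], classes, depth - 1))

def predict_proba(tree, x):
    """Walk the tree and return the leaf's class-probability vector."""
    while tree[0] == "node":
        _, f, t, lo, hi = tree
        tree = lo if x[f] <= t else hi
    return tree[1]
```

In the paper's setting, the features would be photometric measurements, the three classes galaxy/star/nsng, and the probability vector is what the selection criteria in probability space operate on to trade completeness against efficiency.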
Deep-image-matching: A toolbox for multiview image matching of complex scenarios
Finding corresponding points between images is a fundamental step in photogrammetry and computer vision tasks. Traditionally, image matching has relied on hand-crafted algorithms such as SIFT or ORB. However, these algorithms face challenges when dealing with multi-temporal images, varying radiometry and content, as well as significant viewpoint differences. Recently, the computer vision community has proposed several deep learning-based approaches that are trained for challenging illumination and wide viewing-angle scenarios. However, they suffer from certain limitations, such as sensitivity to rotations, and they are not applicable to high-resolution images due to computational constraints. In addition, they are not widely used by the photogrammetric community due to limited integration with standard photogrammetric software packages. To overcome these challenges, this paper introduces Deep-Image-Matching, an open-source toolbox designed to match images using different matching strategies, ranging from traditional hand-crafted to deep-learning methods (https://github.com/3DOM-FBK/deep-image-matching). The toolbox accommodates high-resolution datasets, e.g. data acquired with full-frame or aerial sensors, and addresses known rotation-related problems of the learned features. The toolbox provides image correspondence outputs that are directly compatible with commercial and open-source software packages, such as COLMAP and openMVG, for bundle adjustment. The paper also includes a series of cultural-heritage case studies presenting challenging conditions under which traditional hand-crafted approaches typically fail.
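As a tiny illustration of one matching step shared by hand-crafted and learned pipelines alike (the descriptors below are made-up toy vectors, not SIFT or learned features), mutual nearest-neighbor filtering keeps only the matches that are each other's best candidates:

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Mutual nearest-neighbor descriptor matching: keep pair (i, j) only
    if j is i's nearest neighbor in B AND i is j's nearest neighbor in A.
    Returns an array of (index_in_A, index_in_B) pairs."""
    # Pairwise Euclidean distance matrix between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab = d.argmin(axis=1)                       # best match in B for each A
    ba = d.argmin(axis=0)                       # best match in A for each B
    keep = ba[ab] == np.arange(len(desc_a))     # symmetric-best filter
    return np.stack([np.arange(len(desc_a))[keep], ab[keep]], axis=1)
```

Correspondences filtered this way (plus outlier rejection such as RANSAC) are what a toolbox like the one described exports to bundle-adjustment packages like COLMAP or openMVG.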