3D oil reservoir visualisation using octree compression techniques utilising logical grid co-ordinates
Octree compression techniques have been used for several years to compress large three-dimensional data sets into homogeneous regions. This compression technique is ideally suited to datasets in which similar values occur in clusters. Oil engineers represent reservoirs as a three-dimensional grid in which hydrocarbons occur naturally in clusters. This research examines the efficiency of storing these grids using octree compression techniques, where grid cells are divided into active and inactive regions. Initial experiments yielded high compression ratios, as only active leaf nodes and their ancestor (header) nodes are stored as a bitstream on disk. Savings in computational time and memory were possible at decompression, as only active leaf nodes are sent to the graphics card, eliminating the need to reconstruct the original matrix. This results in a more compact vertex table, which can be loaded into the graphics card more quickly, yielding shorter refresh delay times.
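The active/inactive decomposition described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a boolean activity grid is recursively split into octants, homogeneous octants become single leaf nodes, and only the tree (rather than the full cell matrix) would need to be serialised.

```python
import numpy as np

def build_octree(grid, x0, y0, z0, size):
    """Recursively compress a cubic region of a boolean activity grid.
    Returns True/False for homogeneous regions (leaf nodes), or a list
    of 8 children for mixed regions (header nodes)."""
    block = grid[x0:x0+size, y0:y0+size, z0:z0+size]
    if block.all():
        return True          # homogeneous active leaf
    if not block.any():
        return False         # homogeneous inactive leaf
    h = size // 2
    return [build_octree(grid, x0+dx, y0+dy, z0+dz, h)
            for dx in (0, h) for dy in (0, h) for dz in (0, h)]

def count_nodes(node):
    """Total stored nodes: headers plus leaves."""
    if isinstance(node, list):
        return 1 + sum(count_nodes(c) for c in node)
    return 1

# Hypothetical reservoir grid: active cells cluster in one corner.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[:4, :4, :4] = True
tree = build_octree(grid, 0, 0, 0, 8)
print(count_nodes(tree), "octree nodes vs", grid.size, "cells")
```

Because the active cells form one homogeneous octant, the 512-cell grid collapses to a handful of nodes, which is the source of the high compression ratios reported above.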
Subsampling in Smoothed Range Spaces
We consider smoothed versions of geometric range spaces, so an element of the
ground set (e.g. a point) can be contained in a range with a non-binary value
in [0,1]. Similar notions have been considered for kernels; we extend them to
more general types of ranges. We then consider approximations of these range
spaces through ε-nets and ε-samples (aka ε-approximations). We characterize
when size bounds for ε-samples on kernels can be extended to these more
general smoothed range spaces. We also describe new generalizations for ε-nets
to these range spaces and show when results from binary range spaces can carry
over to these smoothed ones.
Comment: This is the full version of the paper which appeared in ALT 2015. 16
pages, 3 figures. In Algorithmic Learning Theory, pp. 224-238. Springer
International Publishing, 2015
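The ε-sample condition for a smoothed range space can be illustrated with a small experiment: a subsample S is an ε-sample if, for every smoothed range, the average range value over S is within ε of the average over the ground set. This is a hedged sketch, not the paper's construction; the Gaussian kernel, bandwidth, and test centers are arbitrary choices, and a uniform random subsample stands in for a carefully built ε-sample.

```python
import random, math

def kernel_value(p, center, sigma=0.2):
    """Smoothed range: membership takes a non-binary value in [0,1]."""
    return math.exp(-((p - center) ** 2) / (2 * sigma ** 2))

def discrepancy(ground, sample, center):
    """|average range value over ground set - average over sample|."""
    avg = lambda pts: sum(kernel_value(p, center) for p in pts) / len(pts)
    return abs(avg(ground) - avg(sample))

random.seed(0)
ground = [random.random() for _ in range(2000)]   # 1-D ground set
sample = random.sample(ground, 200)               # candidate eps-sample
centers = [i / 10 for i in range(11)]             # a few test ranges
worst = max(discrepancy(ground, sample, c) for c in centers)
print(f"worst discrepancy over test centers: {worst:.3f}")
```

A random subsample of size m typically achieves discrepancy on the order of 1/sqrt(m) here, which is the kind of bound the paper studies for these smoothed ranges.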
CARMA Large Area Star Formation Survey: Project Overview with Analysis of Dense Gas Structure and Kinematics in Barnard 1
We present details of the CARMA Large Area Star Formation Survey (CLASSy),
while focusing on observations of Barnard 1. CLASSy is a CARMA Key Project that
spectrally imaged N2H+, HCO+, and HCN (J=1-0 transitions) across over 800
square arcminutes of the Perseus and Serpens Molecular Clouds. The observations
have angular resolution near 7" and spectral resolution near 0.16 km/s. We
imaged ~150 square arcminutes of Barnard 1, focusing on the main core, and the
B1 Ridge and clumps to its southwest. N2H+ shows the strongest emission, with
morphology similar to cool dust in the region, while HCO+ and HCN trace several
molecular outflows from a collection of protostars in the main core. We
identify a range of kinematic complexity, with N2H+ velocity dispersions
ranging from ~0.05-0.50 km/s across the field. Simultaneous continuum mapping
at 3 mm reveals six compact objects, three of which are new detections. A new
non-binary dendrogram algorithm is used to analyze dense gas
structures in the N2H+ position-position-velocity (PPV) cube. The projected
sizes of dendrogram-identified structures range from about 0.01-0.34 pc.
Size-linewidth relations using those structures show that non-thermal
line-of-sight velocity dispersion varies weakly with projected size, while rms
variation in the centroid velocity rises steeply with projected size. Comparing
these relations, we propose that all dense gas structures in Barnard 1 have
comparable depths into the sky, around 0.1-0.2 pc; this suggests that
over-dense, parsec-scale regions within molecular clouds are better described
as flattened structures rather than spherical collections of gas. Science-ready
PPV cubes for Barnard 1 molecular emission are available for download.
Comment: Accepted to The Astrophysical Journal (ApJ), 51 pages, 27 figures
(some with reduced resolution in this preprint); Project website is at
http://carma.astro.umd.edu/class
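The size-linewidth analysis above amounts to fitting a power law σ = A R^γ to the dendrogram structures in log-log space. The sketch below uses illustrative, hypothetical numbers (not CLASSy measurements) just to show the fit itself:

```python
import numpy as np

# Hypothetical dendrogram structures: projected size R (pc) and
# non-thermal velocity dispersion sigma (km/s); values are illustrative only.
R = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.34])
sigma = 0.25 * R ** 0.15     # a weak dependence on size, as described above

# Fit sigma = A * R**gamma by linear regression in log-log space.
gamma, logA = np.polyfit(np.log10(R), np.log10(sigma), 1)
print(f"fitted power-law index gamma = {gamma:.2f}")
```

A small γ for the line-of-sight dispersion versus a steep rise in centroid-velocity variation is the contrast the abstract uses to argue for flattened structures.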
Initial Conditions for Large Cosmological Simulations
This technical paper describes a software package that was designed to
produce initial conditions for large cosmological simulations in the context of
the Horizon collaboration. These tools generalize E. Bertschinger's Grafic1
software to distributed parallel architectures and offer a flexible alternative
to the Grafic2 software for "zoom" initial conditions, at the price of large
cumulative CPU and memory usage. The codes have been validated up to resolutions
of 4096^3 and were used to generate the initial conditions of large
hydrodynamical and dark matter simulations. They also provide means to generate
constrained realisations, producing initial conditions compatible with, e.g.,
the Local Group or the SDSS catalog.
Comment: 12 pages, 11 figures, submitted to ApJ
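The core of Grafic-style initial-condition generation is drawing a Gaussian random field with a prescribed power spectrum by scaling white noise in Fourier space. A minimal serial sketch follows; the power-law spectrum, grid size, and seed are placeholders, not the Horizon setup:

```python
import numpy as np

def gaussian_field(n, power_index=-1.0, seed=42):
    """Generate an n^3 Gaussian random field with P(k) ~ k^power_index
    by scaling white noise in Fourier space (Grafic-style, serial sketch)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n, n))
    k = np.fft.fftfreq(n) * n                       # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = 1.0                             # avoid division by zero at DC
    amp = kmag ** (power_index / 2.0)               # sqrt(P(k)) scaling
    amp[0, 0, 0] = 0.0                              # zero the mean mode
    field = np.fft.ifftn(np.fft.fftn(noise) * amp).real
    return field

delta = gaussian_field(32)
print(delta.shape, f"mean = {delta.mean():.2e}")
```

The distributed codes described above parallelise exactly this kind of transform across many nodes, which is where the large cumulative CPU and memory cost comes from.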
ColDICE: a parallel Vlasov-Poisson solver using moving adaptive simplicial tessellation
Numerically solving the Vlasov-Poisson equations for initially cold systems can
be reduced to following the evolution of a three-dimensional sheet evolving in
six-dimensional phase-space. We describe a public parallel numerical algorithm
that represents the phase-space sheet with a conforming, self-adaptive
simplicial tessellation whose vertices follow the Lagrangian equations of
motion. The algorithm is implemented in both six- and four-dimensional
phase-space. Refinement of the tessellation mesh is performed using the
bisection method and a local second-order representation of the phase-space
sheet, relying on additional tracers created when needed at runtime. To best
preserve the Hamiltonian nature of the system, refinement is anisotropic and
constrained by measurements of local Poincaré invariants. The Poisson equation
is solved using the fast Fourier method on a regular rectangular grid,
similarly to particle-in-cell codes. To compute the density projected onto this
grid, the intersection of the tessellation and the grid is calculated using the
method of Franklin and Kankanhalli (1993), generalised to linear order. As
preliminary tests of the code, we study, in four-dimensional phase-space, the
evolution of an initially small patch in a chaotic potential and the
cosmological collapse of a fluctuation composed of two sinusoidal waves. We
also perform a "warm" dark matter simulation in six-dimensional phase-space
that we use to check the parallel scaling of the code.
Comment: Code and illustration movies available at:
http://www.vlasix.org/index.php?n=Main.ColDICE - Article submitted to Journal
of Computational Physics
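The FFT-based Poisson solve mentioned above is standard in particle-in-cell-style codes. Below is a minimal periodic sketch (not the ColDICE implementation), checked against a single plane-wave source, for which the spectral solution is exact:

```python
import numpy as np

def solve_poisson(rho, box=1.0):
    """Solve nabla^2 phi = rho on a periodic n^3 grid with the fast
    Fourier method: divide by -k^2 in Fourier space."""
    n = rho.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid divide-by-zero at the DC mode
    phi_k = -np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0              # potential is defined up to a constant
    return np.fft.ifftn(phi_k).real

# Plane-wave source rho = sin(2*pi*x): exact solution is -rho / (2*pi)^2.
n = 32
x = np.arange(n) / n
rho = np.sin(2 * np.pi * x)[:, None, None] * np.ones((1, n, n))
phi = solve_poisson(rho)
exact = -rho / (2 * np.pi) ** 2
print("max error:", np.abs(phi - exact).max())
```

In the code described above, the density fed to such a solver comes from projecting the simplicial tessellation onto the grid rather than from depositing particles.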
High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference
We propose a data-driven method for recovering missing parts of 3D shapes.
Our method is based on a new deep learning architecture consisting of two
sub-networks: a global structure inference network and a local geometry
refinement network. The global structure inference network incorporates a long
short-term memorized context fusion module (LSTM-CF) that infers the global
structure of the shape based on multi-view depth information provided as part
of the input. It also includes a 3D fully convolutional (3DFCN) module that
further enriches the global structure representation according to volumetric
information in the input. Under the guidance of the global structure network,
the local geometry refinement network takes as input local 3D patches around
missing regions, and progressively produces a high-resolution, complete surface
through a volumetric encoder-decoder architecture. Our method jointly trains
the global structure inference and local geometry refinement networks in an
end-to-end manner. We perform qualitative and quantitative evaluations on six
object categories, demonstrating that our method outperforms existing
state-of-the-art work on shape completion.
Comment: 8 pages paper, 11 pages supplementary material, ICCV spotlight paper
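The volumetric encoder-decoder flow can be illustrated, at the level of tensor shapes only, with plain pooling and upsampling. This toy sketch has no learned weights and is not the proposed network; it only shows how a local 3D patch is compressed to a coarse volumetric code and expanded back:

```python
import numpy as np

def max_pool3d(v, f=2):
    """Downsample a cubic voxel grid by factor f (encoder-style pooling)."""
    n = v.shape[0]
    return v.reshape(n//f, f, n//f, f, n//f, f).max(axis=(1, 3, 5))

def upsample3d(v, f=2):
    """Nearest-neighbour upsampling (decoder-style expansion)."""
    return v.repeat(f, 0).repeat(f, 1).repeat(f, 2)

# Hypothetical 32^3 occupancy patch around a missing region.
patch = np.zeros((32, 32, 32), dtype=np.uint8)
patch[8:24, 8:24, 8:24] = 1
code = max_pool3d(max_pool3d(patch))      # 32^3 -> 8^3 coarse volume
recon = upsample3d(upsample3d(code))      # 8^3  -> 32^3 output volume
print(patch.shape, code.shape, recon.shape)
```

In the actual architecture, learned 3D convolutions replace the pooling/upsampling here, and the global structure network conditions the decoder so that the output is a completed surface rather than a blocky copy of the input.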
In-Situ Defect Detection in Laser Powder Bed Fusion by Using Thermography and Optical Tomography: Comparison to Computed Tomography
Among additive manufacturing (AM) technologies, laser powder bed fusion (L-PBF) is one of the most important for producing metallic components. The layer-wise build-up of components and the complex process conditions increase the probability of defects occurring. However, due to the iterative nature of its manufacturing process, and in contrast to conventional manufacturing technologies such as casting, L-PBF offers unique opportunities for in-situ monitoring. In this study, two cameras were successfully tested simultaneously as a machine-manufacturer-independent process monitoring setup: a high-frequency infrared camera and a camera for long-time exposure, working in the visible and infrared spectrum and equipped with a near-infrared filter. An AISI 316L stainless steel specimen with integrated artificial defects was monitored during the build. The acquired camera data were compared to data obtained by computed tomography. A promising and easy-to-use examination method for data analysis was developed, and correlations between measured signals and defects were identified. Moreover, sources of possible data misinterpretation were specified. Lastly, attempts at automatic data analysis by data integration are presented.
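Comparing an in-situ camera signal against computed tomography reduces, at its simplest, to comparing two defect masks, e.g. via intersection-over-union. The sketch below uses synthetic, hypothetical data (not the study's measurements) to show that comparison step:

```python
import numpy as np

def detect(signal, thresh):
    """Flag pixels whose in-situ thermal/optical signal exceeds a threshold."""
    return signal > thresh

def iou(a, b):
    """Intersection-over-union between two boolean defect masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Hypothetical layer: CT ground-truth defect mask and a noisy camera signal.
rng = np.random.default_rng(1)
ct_mask = np.zeros((64, 64), dtype=bool)
ct_mask[20:30, 20:30] = True                       # artificial defect
signal = rng.normal(0.0, 0.1, (64, 64)) + 1.0 * ct_mask
detected = detect(signal, thresh=0.5)
print(f"IoU vs computed tomography: {iou(detected, ct_mask):.2f}")
```

In practice the threshold choice is exactly where the misinterpretation sources mentioned above enter: too low and noise is flagged as defects, too high and small defects are missed.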