Analysing Magnetism Using Scanning SQUID Microscopy
Scanning superconducting quantum interference device microscopy (SSM) is a
scanning probe technique that images local magnetic flux, which allows for
mapping of magnetic fields with high field and spatial accuracy. Many SSM
studies have been published in the last decades, using the technique to make
qualitative statements about magnetism. However, quantitative analysis using
SSM has received less attention. In this work, we discuss several aspects of
interpreting SSM images and methods to improve quantitative analysis. First, we
analyse the spatial resolution and how it depends on several factors. Second,
we discuss the analysis of SSM scans and the information obtained from the SSM
data. Using simulations, we show how signals evolve as a function of changing
scan height, SQUID loop size, magnetization strength and orientation. We also
investigate 2-dimensional autocorrelation analysis to extract information
about the size, shape and symmetry of magnetic features. Finally, we provide an
outlook on possible future applications and improvements. Comment: 16 pages, 10 figures.
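The 2-dimensional autocorrelation analysis mentioned above can be sketched with a short FFT-based computation. This is only an illustrative sketch, not the authors' analysis code; the stripe "scan" stands in for real SSM data.

```python
import numpy as np

def autocorrelation_2d(image):
    """Normalized 2-D autocorrelation computed via FFT (Wiener-Khinchin:
    the autocorrelation is the inverse transform of the power spectrum).
    Side peaks away from zero lag reveal the characteristic spacing and
    symmetry of repeating magnetic features."""
    img = image - image.mean()                 # remove the DC offset
    power = np.abs(np.fft.fft2(img)) ** 2      # power spectrum
    acf = np.fft.ifft2(power).real
    acf = np.fft.fftshift(acf)                 # put zero lag at the centre
    return acf / acf.max()                     # normalize so ACF(0, 0) = 1

# toy "scan": stripes with an 8-pixel period stand in for a stripe-domain
# pattern; the ACF then shows side peaks at lags that are multiples of 8
y, x = np.mgrid[0:64, 0:64]
acf = autocorrelation_2d(np.sin(2 * np.pi * x / 8))
```

For a 64-pixel-wide scan, zero lag sits at index 32 after `fftshift`, and the first side peak along x appears at lag 8, directly exposing the stripe period.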
Multi-domain grid refinement for lattice-Boltzmann simulations on heterogeneous platforms
The main contribution of the present work consists of several parallel approaches for grid refinement based on a multi-domain decomposition for lattice-Boltzmann simulations. The proposed method for discretizing the fluid incorporates different regular Cartesian grids with non-homogeneous spatial domains, which need to communicate with each other. Three different parallel approaches are proposed: homogeneous Multicore, homogeneous GPU, and heterogeneous Multicore-GPU. Although the homogeneous implementations exhibit satisfactory results, the heterogeneous approach achieves up to 30% extra efficiency, in terms of Millions of Fluid Lattice Updates per Second (MFLUPS), by overlapping some of the steps on both architectures, Multicore and GPU.
LBM-HPC - An open-source tool for fluid simulations. Case study: Unified parallel C (UPC-PGAS)
The main motivation of this work is the evaluation of the Unified Parallel C (UPC) model for Boltzmann-fluid simulations. UPC is one of the current models in the so-called Partitioned Global Address Space (PGAS) paradigm, which attempts to increase the simplicity of codes while achieving better efficiency and scalability. Two different UPC-based implementations, explicit and implicit, are presented and evaluated. We compare the fundamental features of our UPC implementations with another parallel programming model, hybrid MPI-OpenMP. In particular, each of the major steps of any LBM code, i.e., Boundary Conditions, Communication, and the LBM solver, is analyzed.
A Non-uniform Staggered Cartesian Grid approach for Lattice-Boltzmann method
We propose a numerical approach based on the Lattice-Boltzmann method (LBM) for dealing with mesh refinement on a Non-uniform Staggered Cartesian Grid. We explain, in detail, the strategy for mapping the LBM over such geometries. The main benefit of this approach, compared to others, consists of solving all fluid units only once per time step, and of considerably reducing the complexity of the communication and memory management between different refinement levels. It is also better suited to parallel processors. To validate our method, we analyze several standard test scenarios, reaching satisfactory results with respect to other state-of-the-art methods. The performance evaluation shows that our approach not only provides a simpler and more efficient scheme for dealing with mesh refinement, but also faster resolution, even in those scenarios where it needs to use a larger number of fluid units.
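As a rough illustration of the kind of solver these refinement schemes accelerate (not of the staggered multi-level scheme itself), a minimal D2Q9 BGK collide-and-stream step on a single uniform periodic grid can be sketched as follows; the relaxation time and grid size are arbitrary choices.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def lbm_step(f, tau=0.6):
    """One collide-and-stream step of the BGK lattice-Boltzmann method
    on a periodic D2Q9 grid; f has shape (9, ny, nx)."""
    rho = f.sum(axis=0)                                # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho   # x-velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho   # y-velocity
    # second-order equilibrium distribution
    usq = ux**2 + uy**2
    feq = np.empty_like(f)
    for i in range(9):
        cu = c[i, 0] * ux + c[i, 1] * uy
        feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f = f + (feq - f) / tau                            # BGK collision
    # streaming: shift each population along its lattice velocity
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))
    return f

# a uniform fluid at rest is an equilibrium state: the step must conserve
# mass and leave the distribution unchanged
f0 = np.tile(w[:, None, None], (1, 16, 16))
f1 = lbm_step(f0)
```

A refinement scheme like the one in the abstract couples several such grids at different resolutions; the difficulty it addresses is the interface communication this single-grid sketch avoids entirely.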
Deconvolving Instrumental and Intrinsic Broadening in Excited State X-ray Spectroscopies
Intrinsic and experimental mechanisms frequently lead to broadening of
spectral features in excited-state spectroscopies. For example, intrinsic
broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy
elements where the core-hole lifetime is very short. On the other hand,
nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are
more limited by instrumental resolution. Here, we demonstrate that the
Richardson-Lucy (RL) iterative algorithm provides a robust method for
deconvolving instrumental and intrinsic resolutions from typical XAS and XRS
data. For the K-edge XAS of Ag, we find nearly complete removal of ~9.3 eV FWHM
broadening from the combined effects of the short core-hole lifetime and
instrumental resolution. We are also able to remove nearly all instrumental
broadening in an XRS measurement of diamond, with the resulting improved
spectrum comparing favorably with prior soft x-ray XAS measurements. We present
a practical methodology for implementing the RL algorithm to these problems,
emphasizing the importance of testing for stability of the deconvolution
process against noise amplification, perturbations in the initial spectra, and
uncertainties in the core-hole lifetime. Comment: 35 pages, 13 figures.
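The RL iteration itself is compact. A minimal 1-D sketch follows; it is illustrative only (the spectrum, kernel width, and iteration count are made-up stand-ins, and the stability tests the abstract emphasizes are omitted).

```python
import numpy as np

def richardson_lucy(observed, kernel, n_iter=100):
    """Richardson-Lucy deconvolution of a 1-D spectrum.
    observed: measured (broadened) spectrum, non-negative;
    kernel: broadening function (e.g. a Lorentzian for the core-hole
    lifetime or a Gaussian for the instrument), any positive profile."""
    kernel = kernel / kernel.sum()          # normalize to unit area
    kernel_mirror = kernel[::-1]
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    for _ in range(n_iter):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-0
        # multiplicative update keeps the estimate non-negative
        estimate = estimate * np.convolve(ratio, kernel_mirror, mode="same")
    return estimate

# toy example: a sharp absorption edge broadened by a Gaussian "lifetime"
x = np.linspace(-10, 10, 201)
true_edge = 1.0 * (x > 0)                   # idealized step-like XAS edge
g = np.exp(-x**2 / (2 * 1.5**2))            # broadening kernel
broadened = np.convolve(true_edge, g / g.sum(), mode="same")
recovered = richardson_lucy(broadened, g)
```

The recovered edge is markedly steeper than the measured one, which is the effect the abstract reports for the Ag K-edge; in practice the iteration count must be chosen against noise amplification, as the authors stress.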
Towards HPC-Embedded Case Study: Kalray and Message-Passing on NoC
Today, one of the most important challenges in HPC is the development of computers with low power consumption. In this context, new embedded many-core systems have recently emerged. One of them is Kalray. Unlike other many-core architectures, Kalray is not a co-processor: it is self-hosted. One interesting feature of the Kalray architecture is its Network on Chip (NoC) interconnect. Typically, communication in many-core architectures is carried out via shared memory; in Kalray, however, communication among processing elements can also take place via message passing on the NoC. One of the main motivations of this work is to present the main constraints of programming the Kalray architecture. In particular, we focus on memory management and communication, assessing the use of the NoC and of shared memory on Kalray. Unlike shared memory, the implementation of message passing on the NoC is not transparent from the programmer's point of view. Synchronization among processing elements and the NoC is another of the challenges to deal with on the Kalray processor. Although synchronization using message passing is more complex and time-consuming than using shared memory, we obtain an overall speedup close to 6 when using message passing on the NoC with respect to the use of shared memory. Additionally, we have measured the power consumption of both approaches. Despite being faster, the use of the NoC presents a higher power consumption than the approach that exploits shared memory; this additional consumption is about 50% in watts. However, the reduction in execution time achieved by using the NoC has an important impact on the overall energy consumption as well.
Algorithms for Visualizing Phylogenetic Networks
We study the problem of visualizing phylogenetic networks, which are
extensions of the Tree of Life in biology. We use a space filling visualization
method, called DAGmaps, in order to obtain clear visualizations using limited
space. In this paper, we restrict our attention to galled trees and galled
networks and present linear time algorithms for visualizing them as DAGmaps. Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).
Reconstructing phylogenetic level-1 networks from nondense binet and trinet sets
Binets and trinets are phylogenetic networks with two and three leaves, respectively. Here we consider the problem of deciding if there exists a binary level-1 phylogenetic network displaying a given set T of binary binets or trinets over a taxon set X, and constructing such a network whenever it exists. We show that this is NP-hard for trinets but polynomial-time solvable for binets. Moreover, we show that the problem is still polynomial-time solvable for inputs consisting of binets and trinets as long as the cycles in the trinets have size three. Finally, we present an O(3^{|X|} poly(|X|)) time algorithm for general sets of binets and trinets. The latter two algorithms generalise to instances containing level-1 networks with arbitrarily many leaves, and thus provide some of the first supernetwork algorithms for computing networks from a set of rooted level-1 phylogenetic networks.
FliPpr: A Prettier Invertible Printing System
When implementing a programming language, we often write
a parser and a pretty-printer. However, manually writing both programs
is not only tedious but also error-prone; it may happen that a pretty-printed
result is not correctly parsed. In this paper, we propose FliPpr,
which is a program transformation system that uses program inversion
to produce a CFG parser from a pretty-printer. This novel approach has the
advantages of fine-grained control over pretty-printing and easy reuse of
existing efficient pretty-printer and parser implementations.
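FliPpr itself is realized as a program transformation over printer definitions; as a language-agnostic illustration of the property it establishes, that the derived parser inverts the printer, a hand-written printer/parser pair for a toy expression language can be sketched. This pair only demonstrates the roundtrip law `parse(pprint(e)) == e`; FliPpr's contribution is deriving the parser mechanically so the two cannot drift apart.

```python
def pprint(e):
    """Pretty-print a nested-tuple AST: int | ('add', left, right)."""
    if isinstance(e, int):
        return str(e)
    op, left, right = e
    return f"({pprint(left)} + {pprint(right)})"

def parse(s):
    """Recursive-descent parser matching the printer's output format."""
    expr, rest = _expr(s)
    assert rest == "", f"trailing input: {rest!r}"
    return expr

def _expr(s):
    if s.startswith("("):
        left, s = _expr(s[1:])
        assert s.startswith(" + ")          # the printer's separator
        right, s = _expr(s[3:])
        assert s.startswith(")")
        return ("add", left, right), s[1:]
    i = 0
    while i < len(s) and s[i].isdigit():    # an integer literal
        i += 1
    return int(s[:i]), s[i:]
```

Writing the two halves by hand, as here, is exactly the tedious and error-prone duplication the abstract describes: any edit to `pprint` silently invalidates `_expr`.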
Considerations for an AC Dipole for the LHC
Following successful experience at the BNL AGS, FNAL Tevatron, and CERN SPS,
an AC Dipole will be adopted at the LHC for rapid measurements of ring optics.
This paper describes some of the parameters of the AC dipole for the LHC,
scaling from performance of the FNAL and BNL devices. Comment: Proceedings of the 2007 Particle Accelerator Conference.