Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts
Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain and often provide unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches to providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks and an extension toward the object-oriented modeling paradigm.
Pushing Light-Sheet Microscopy to Greater Depths
Light-sheet fluorescence microscopy (LSFM) has established itself as an irreplaceable imaging technique in developmental biology over the past two decades. With its emergence, the extended recording of in toto datasets of developing organisms across scales became possible. Remarkably, LSFM opened the door to new spatio-temporal domains in biology, offering cellular resolution on the one hand, and temporal resolution on the order of seconds on the other. As in any fluorescence microscopy technique, LSFM is also affected by image degradation at greater tissue depths. Thus far, this has been addressed by the suppression of scattered light, use of fluorophores emitting in the far-red spectrum, multi-view detection and fusion, adaptive optics, as well as different illumination schemes. In this work, I investigate for the first time in vivo optical aberration reduction via refractive index matching in LSFM. Examples are shown on common model organisms such as Arabidopsis thaliana, Oryzias latipes, Mus musculus, as well as Drosophila. Additionally, I present a novel open-top light-sheet microscope, tailored for high-throughput imaging of mammalian samples, such as early-stage mouse embryos. It is based on a three-objective geometry, encompassing two opposing detection objective lenses with high light collection efficiency, and an invertedly mounted illumination lens. It bridges the spatial scale between samples by employing an extendible light-sheet illumination via a tunable acoustic gradient index lens. Both parts of this work improve the image quality across the 3D volume of specimens, paving the way for more quantitative recordings at greater tissue depths.
Algorithms for Triangles, Cones & Peaks
Three different geometric objects are at the center of this dissertation: triangles, cones and peaks.
In computational geometry, triangles are the most basic shape for planar subdivisions.
Particularly, Delaunay triangulations are widely used in manifold applications in engineering, geographic information systems, telecommunication networks, etc.
We present two novel parallel algorithms to construct the Delaunay triangulation of a given point set.
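The parallel algorithms themselves are the subject of the thesis and are not reproduced here, but the property that defines a Delaunay triangulation, namely that no point lies strictly inside the circumcircle of any triangle, can be sketched in a few lines (a minimal illustration, not the thesis's implementation):

```python
import math

def circumcircle(a, b, c):
    """Circumcenter and circumradius of the triangle abc (2-D points)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def is_delaunay(tri, others):
    """A triangle is locally Delaunay if no other point lies strictly
    inside its circumcircle (the empty-circumcircle property)."""
    center, r = circumcircle(*tri)
    return all(math.hypot(p[0] - center[0], p[1] - center[1]) >= r - 1e-12
               for p in others)

# The triangle (0,0),(2,0),(0,2) is Delaunay w.r.t. a far-away point,
# but not w.r.t. a point near its circumcenter (1,1).
print(is_delaunay([(0, 0), (2, 0), (0, 2)], [(5, 5)]))   # True
print(is_delaunay([(0, 0), (2, 0), (0, 2)], [(1, 1)]))   # False
```

Algorithms such as incremental insertion restore exactly this property by flipping edges whenever the circumcircle test fails.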
Yao graphs are geometric spanners that connect each point of a given set to its nearest neighbor in each of a fixed number of cones drawn around it.
They are used to aid the construction of Euclidean minimum spanning trees
or in wireless networks for topology control and routing.
We present the first implementation of an optimal-time sweepline algorithm to construct Yao graphs.
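The definition above translates directly into a naive quadratic-time construction, sketched below for illustration (the cone count k=6 is a common choice, not taken from the thesis; the thesis's contribution is an optimal sweepline algorithm instead):

```python
import math

def yao_graph(points, k=6):
    """Naive O(n^2) Yao graph: from each point, add a directed edge to
    the nearest other point within each of k equal-angle cones."""
    edges = set()
    for i, p in enumerate(points):
        nearest = [None] * k          # nearest neighbor per cone
        best = [math.inf] * k
        for j, q in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k))
            d = math.dist(p, q)
            if d < best[cone]:
                best[cone], nearest[cone] = d, j
        for j in nearest:
            if j is not None:
                edges.add((i, j))
    return edges

pts = [(0, 0), (1, 0), (0, 1), (2, 2)]
print(sorted(yao_graph(pts)))
```

Note that the graph is directed: a point keeps only one outgoing edge per cone, which is what bounds the total edge count by k times the number of points.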
One metric to quantify the importance of a mountain peak is its isolation.
Isolation measures the distance between a peak and the closest point of higher elevation.
Computing this metric from high-resolution digital elevation models (DEMs) requires efficient algorithms.
We present a novel sweep-plane algorithm that can calculate the isolation of all peaks on Earth in mere minutes.
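The brute-force version of the isolation metric makes the definition concrete and shows why an efficient algorithm is needed at DEM scale (a toy sketch on a tiny grid, not the thesis's sweep-plane method):

```python
import math

def isolation(dem):
    """Naive O(n^2) isolation: for every cell, the distance to the
    closest cell of strictly higher elevation (infinite for the
    global maximum, which has no higher ground anywhere)."""
    cells = [(r, c, h) for r, row in enumerate(dem) for c, h in enumerate(row)]
    result = {}
    for r, c, h in cells:
        dists = [math.hypot(r - r2, c - c2)
                 for r2, c2, h2 in cells if h2 > h]
        result[(r, c)] = min(dists) if dists else math.inf
    return result

dem = [
    [1, 2, 1],
    [2, 5, 2],
    [1, 2, 3],
]
iso = isolation(dem)
print(iso[(1, 1)])  # inf: the global maximum
print(iso[(2, 2)])  # sqrt(2): diagonal distance to the 5
```

On a high-resolution global DEM this quadratic scan is hopeless, which is exactly the gap the sweep-plane algorithm closes.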
GEAR-RT: Towards Exa-Scale Moment Based Radiative Transfer For Cosmological Simulations Using Task-Based Parallelism And Dynamic Sub-Cycling with SWIFT
The development and implementation of GEAR-RT, a radiative transfer solver
using the M1 closure in the open source code SWIFT, is presented, and validated
using standard tests for radiative transfer. GEAR-RT is modeled after RAMSES-RT
(Rosdahl et al. 2013) with some key differences. Firstly, while RAMSES-RT uses
Finite Volume methods and an Adaptive Mesh Refinement (AMR) strategy, GEAR-RT
employs particles as discretization elements and solves the equations using a
Finite Volume Particle Method (FVPM). Secondly, GEAR-RT makes use of the
task-based parallelization strategy of SWIFT, which allows for optimized load
balancing, increased cache efficiency, asynchronous communications, and a
domain decomposition based on work rather than on data. GEAR-RT is able to
perform sub-cycles of radiative transfer steps w.r.t. a single hydrodynamics
step. Radiation requires much smaller time step sizes than hydrodynamics, and
sub-cycling allows calculations that are not strictly necessary to be skipped.
Indeed, in a test case with gravity, hydrodynamics, and radiative transfer,
the sub-cycling is able to reduce the runtime of a simulation by over 90%.
Allowing only a part of the involved physics to be sub-cycled is an intricate
matter when task-based parallelism is involved, and is an entirely novel
feature in SWIFT.
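The time-stepping pattern can be sketched as follows. This is a hypothetical illustration of sub-cycling in general (the step sizes, loop structure, and names are invented here and are not GEAR-RT's actual scheme, which additionally has to interact with SWIFT's task graph):

```python
# Toy sub-cycling loop: radiation, which needs much smaller time steps,
# advances several times per hydrodynamics step, so the expensive
# hydro/gravity work is not repeated at the radiation cadence.

def run(t_end, dt_hydro, dt_rad):
    t = 0.0
    hydro_steps = rt_steps = 0
    while t < t_end:
        # one hydrodynamics (+ gravity) step ...
        hydro_steps += 1
        # ... followed by dt_hydro / dt_rad radiation sub-cycles
        n_sub = max(1, round(dt_hydro / dt_rad))
        rt_steps += n_sub
        t += dt_hydro
    return hydro_steps, rt_steps

# Binary-exact step sizes so the loop count is unambiguous.
hydro, rt = run(t_end=1.0, dt_hydro=0.125, dt_rad=0.03125)
print(hydro, rt)  # 8 hydro steps, 32 radiation sub-steps
```

The payoff is visible in the counters: the hydrodynamics solver runs 8 times instead of the 32 times it would run if every physics module were forced onto the radiation time step.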
Since GEAR-RT uses an FVPM, a detailed introduction to Finite Volume methods
and Finite Volume Particle Methods is presented. In the astrophysical
literature, two FVPM formulations have been described: Hopkins (2015)
implemented one in the GIZMO code, while the one presented in Ivanova et al.
(2013) has not been used to date. In this work, I test an implementation of
the Ivanova et al. (2013) version, and conclude that in its current form, it
is not suitable for use with particles which are co-moving with the fluid,
which in turn is an essential feature for cosmological simulations.
Comment: PhD thesis
Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics
Three recent breakthroughs due to AI in arts and science serve as motivation:
an award-winning digital image, protein folding, and fast matrix multiplication.
Many recent developments in artificial neural networks, particularly deep
learning (DL), applied and relevant to computational mechanics (solid, fluids,
finite-element technology) are reviewed in detail. Both hybrid and pure machine
learning (ML) methods are discussed. Hybrid methods combine traditional PDE
discretizations with ML methods either (1) to help model complex nonlinear
constitutive relations, (2) to nonlinearly reduce the model order for efficient
simulation (turbulence), or (3) to accelerate the simulation by predicting
certain components in the traditional integration methods. Here, methods (1)
and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3)
relying on convolutional neural networks. Pure ML methods to solve (nonlinear)
PDEs are represented by Physics-Informed Neural network (PINN) methods, which
could be combined with attention mechanism to address discontinuous solutions.
Both LSTM and attention architectures, together with modern and generalized
classic optimizers to include stochasticity for DL networks, are extensively
reviewed. Kernel machines, including Gaussian processes, are provided to
sufficient depth for more advanced works such as shallow networks with infinite
width. The review does not address only experts: readers are assumed to be
familiar with computational mechanics, but not with DL, whose concepts and
applications are built up from the basics, aiming to bring first-time learners
quickly to the forefront of research. History and limitations of AI are
recounted and discussed, with particular attention to pointing out
misstatements or misconceptions of the classics, even in well-known references.
Positioning and pointing control of a large-deformable beam is given as an
example.
Comment: 275 pages, 158 figures. Appeared online on 2023.03.01 in
CMES-Computer Modeling in Engineering & Sciences
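Among the topics the review builds up from the basics are the classic optimizers generalized to include stochasticity. A minimal sketch of that pattern, SGD with heavy-ball momentum on a one-dimensional quadratic, where the added Gaussian noise stands in for mini-batch gradient sampling (an illustration of the general technique, not code from the review):

```python
import random

def sgd_momentum(grad, x0, lr=0.1, beta=0.9, steps=200, noise=0.0, seed=0):
    """SGD with heavy-ball momentum: the gradient is perturbed by
    Gaussian noise to mimic the stochasticity of mini-batch sampling."""
    rng = random.Random(seed)
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x) + noise * rng.gauss(0.0, 1.0)
        v = beta * v - lr * g   # momentum accumulates a velocity
        x = x + v
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_det = sgd_momentum(lambda x: 2 * (x - 3.0), x0=0.0)
x_sto = sgd_momentum(lambda x: 2 * (x - 3.0), x0=0.0, noise=0.1)
print(x_det)  # essentially 3.0
print(x_sto)  # close to 3.0, jittered by the gradient noise
```

The noiseless run converges to the minimizer, while the stochastic run hovers in a neighborhood of it, the same qualitative behavior DL training exhibits at scale.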
The interplay of gas, dust, and magnetorotational instability in magnetized protoplanetary disks
The rich diversity of exoplanets discovered in various physical environments
clearly shows that planet formation is an efficient process with multiple
outcomes. To understand the emergence of newborn planets, one can rewind the
clock of planetary systems by investigating the formation and evolution of
their natal environment, the so-called protoplanetary disks. In the core
accretion scenario, rocky planets such as the Earth are thought to be formed
from cosmic dust particles that grow into pebbles and planetesimals, the
building blocks of planets, later assembling together. An intricate puzzle in
this theory is how exactly these building blocks are formed and kept long
enough in the natal protoplanetary disk.
Protoplanetary disks are weakly magnetized accretion disks that are subject to
the magnetorotational instability (MRI), to date one of the main candidates
for explaining their turbulence and angular momentum transport. Nonideal
magnetohydrodynamic effects prevent the MRI from operating everywhere in the
protoplanetary disk, leading to MRI-active regions with high turbulence and
non-MRI regions with low turbulence. It has been hypothesized that these
variations in the disk turbulence can lead to pressure maxima where dust
particles can be trapped. In these so-called dust traps, dust particles can
grow efficiently into pebbles and potentially planetesimals. Yet, it is still
an open question how this MRI-powered mechanism shapes the secular evolution
of protoplanetary disks, and how it is involved in the first steps of planet
formation, because the interplay of gas evolution, dust evolution (dynamics
and grain growth processes combined), and MRI-driven turbulence over millions
of years has never been investigated.
The central goal of this thesis is to bridge this gap in the core accretion
scenario of planet formation by building the very first unified disk evolution
framework that self-consistently captures this interplay. The unique approach
adopted in this thesis leads to an exciting new pathway for the generation of
spontaneous dust traps everywhere in the protoplanetary disk, which can be
potential birth sites for planets by forming and keeping their necessary
building blocks.
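The trapping mechanism itself can be illustrated with a toy model (not the thesis's framework; the pressure profile, drift prefactor, and step sizes below are invented for illustration): dust drifts in the direction of the radial pressure gradient, so test particles on either side of a local pressure bump converge onto it.

```python
import math

def pressure(r):
    # Toy gas pressure: smooth decreasing background plus a Gaussian
    # bump near r = 2.0, standing in for an MRI-induced maximum.
    return r**-2 + 1.5 * math.exp(-((r - 2.0) ** 2) / 0.1)

def dP_dr(r, h=1e-6):
    # Central finite difference of the pressure profile.
    return (pressure(r + h) - pressure(r - h)) / (2 * h)

def drift(r0, eta=0.05, steps=2000, dt=0.01):
    """Evolve a test dust particle whose radial drift velocity is
    taken proportional to the local pressure gradient."""
    r = r0
    for _ in range(steps):
        r += eta * dP_dr(r) * dt
    return r

# Particles started inside and outside the bump both end up near it.
print(round(drift(1.7), 2), round(drift(2.4), 2))
```

Both particles settle just inside r = 2, where the bump's gradient balances the background gradient, which is the defining behavior of a dust trap.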
LIPIcs, Volume 274, ESA 2023, Complete Volume
The Fifteenth Marcel Grossmann Meeting
The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories including recent developments in string theory, to precision tests of general relativity including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics.
Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma ray bursts, supernovas, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma ray astronomy, cosmic rays and the history of general relativity.
Comparing the Performance of Julia on CPUs versus GPUs and Julia-MPI versus Fortran-MPI: a case study with MPAS-Ocean (Version 7.1)
Some programming languages are easy to develop at the cost of slow execution, while others are fast at runtime but much more difficult to write. Julia is a programming language that aims to be the best of both worlds – a development and production language at the same time. To test Julia's utility in scientific high-performance computing (HPC), we built an unstructured-mesh shallow water model in Julia and compared it against an established Fortran-MPI ocean model, the Model for Prediction Across Scales–Ocean (MPAS-Ocean), as well as a Python shallow water code. Three versions of the Julia shallow water code were created: for single-core CPU, graphics processing unit (GPU), and Message Passing Interface (MPI) CPU clusters. Comparing identical simulations revealed that our first version of the Julia model was 13 times faster than Python using NumPy, where both used an unthreaded single-core CPU. Further Julia optimizations, including static typing and removing implicit memory allocations, provided an additional 10–20× speed-up of the single-core CPU Julia model. The GPU-accelerated Julia code was almost identical in terms of performance to the MPI parallelized code on 64 processes, an unexpected result for such different architectures. Parallelized Julia-MPI performance was identical to Fortran-MPI MPAS-Ocean for low processor counts and ranged from 2× faster to 2× slower for higher processor counts. Our experience is that Julia development is fast and convenient for prototyping but that Julia requires further investment and expertise to be competitive with compiled codes. We provide advice on Julia code optimization for HPC systems.
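One of the optimizations credited above, removing implicit memory allocations from the time loop, is a language-agnostic pattern. A sketch of the same idea in Python (illustrative only; the function names are invented and this is not the MPAS-Ocean code): reuse a preallocated buffer instead of building a fresh array every step.

```python
def step_allocating(h, flux):
    # Allocates a brand-new list on every call -- the pattern that
    # becomes expensive when repeated inside a long time loop.
    return [hi - f for hi, f in zip(h, flux)]

def step_inplace(h, flux, out):
    # Writes into a caller-provided buffer: no per-step allocation,
    # so the garbage collector stays out of the hot loop.
    for i in range(len(h)):
        out[i] = h[i] - flux[i]
    return out

h = [1.0, 2.0, 3.0]
flux = [0.1, 0.1, 0.1]
buf = [0.0] * len(h)    # allocated once, before the loop

print(step_allocating(h, flux))
print(step_inplace(h, flux, buf))  # same values, reused storage
```

In Julia the analogous move is writing into preallocated arrays (e.g. with broadcasting assignment) rather than creating temporaries, which, together with type stability, is what the 10–20× single-core speed-up above rests on.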