Predilection Muscles and Physical Condition of Raccoon Dogs (Nyctereutes procyonoides) Experimentally Infected with Trichinella spiralis and Trichinella nativa
The predilection muscles of Trichinella spiralis and T. nativa were studied in 2 experimental groups of 6 raccoon dogs (Nyctereutes procyonoides), with a third group serving as a control for clinical signs. The infection dose for both parasites was 1 larva/g body weight. After 12 weeks, the animals were euthanized and 13 sampling sites were analysed by the digestion method. Larvae were found in all sampled skeletal muscles of the infected animals, but not in the specimens from the heart or intestinal musculature. Both parasite species reproduced equally well in the raccoon dog. The median density of infection in positive tissues was 353 larvae per gram (lpg) with T. spiralis and 343 lpg with T. nativa. All the infected animals had the highest larval counts in the carpal flexors (M. flexor carpi ulnaris); the tongue and eye muscles also had high infection levels. There were no significant differences in the predilection sites between these 2 parasite species. Trichinellosis increased the relative amount of fat, but not the body weight, in the captive raccoon dogs. Thus, Trichinella as a muscle parasite might have a catabolic effect on these animals.
A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations
Many scientific and engineering problems require performing Bayesian inference in function spaces, where the unknowns are infinite-dimensional. In such problems, choosing an appropriate prior distribution is an important task. In particular, we consider problems where the function to infer is subject to sharp jumps, which render the commonly used Gaussian measures unsuitable. On the other hand, the so-called total variation (TV) prior can only be defined in a finite-dimensional setting and does not lead to a well-defined posterior measure in function spaces. In this work we present a TV-Gaussian (TG) prior to address such problems: the TV term is used to detect sharp jumps of the function, and the Gaussian distribution is used as a reference measure, so that the prior results in a well-defined posterior measure in the function space. We also present an efficient Markov chain Monte Carlo (MCMC) algorithm to draw samples from the posterior distribution under the TG prior. With numerical examples we demonstrate the performance of the TG prior and the efficiency of the proposed MCMC algorithm.
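The idea of combining a Gaussian reference measure with a TV penalty can be illustrated with a preconditioned Crank-Nicolson (pCN) sampler, whose proposal preserves the Gaussian reference and is therefore dimension-robust. The following is a minimal sketch on a toy 1D deconvolution problem; the forward operator, covariance, noise level, and TV weight are all illustrative assumptions, and the paper's actual algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (all choices here are illustrative, not from the paper) ---
n = 100
x = np.linspace(0, 1, n)
u_true = (x > 0.3).astype(float) - (x > 0.7)            # piecewise-constant truth
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)   # smoothing forward operator
A /= A.sum(axis=1, keepdims=True)
sigma = 0.05
y = A @ u_true + sigma * rng.normal(size=n)

# Gaussian reference measure: squared-exponential covariance (assumed)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02) + 1e-6 * np.eye(n)
L = np.linalg.cholesky(C)
lam = 5.0                                               # TV weight (hypothetical)

def Phi(u):
    """Negative log-density w.r.t. the Gaussian reference:
    data misfit plus the TV term of the TG prior."""
    misfit = 0.5 * np.sum((y - A @ u) ** 2) / sigma**2
    tv = lam * np.sum(np.abs(np.diff(u)))
    return misfit + tv

# --- pCN MCMC: the proposal preserves the reference measure, so the
# acceptance ratio involves only Phi, independent of discretization level ---
beta = 0.02
u = np.zeros(n)
accepted = 0
n_iter = 3000
for it in range(n_iter):
    v = np.sqrt(1 - beta**2) * u + beta * (L @ rng.normal(size=n))
    if np.log(rng.uniform()) < Phi(u) - Phi(v):         # pCN acceptance rule
        u, accepted = v, accepted + 1

print("acceptance rate:", accepted / n_iter)
```

Because the acceptance probability depends only on Phi and not on the Gaussian reference density, refining the grid does not degrade the sampler, which is the motivation for defining the TV term relative to a Gaussian measure.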
Low-loss singlemode PECVD silicon nitride photonic wire waveguides for 532-900 nm wavelength window fabricated within a CMOS pilot line
PECVD silicon nitride photonic wire waveguides have been fabricated in a CMOS pilot line. Both clad and unclad single-mode wire waveguides were measured at lambda = 532, 780, and 900 nm. The dependence of loss on wire width, wavelength, and cladding is discussed in detail. Clad multimode and singlemode waveguides show loss well below 1 dB/cm in the 532-900 nm wavelength range. For singlemode unclad waveguides, losses < 1 dB/cm were achieved at lambda = 900 nm, whereas losses of 1-3 dB/cm were measured at lambda = 780 and 532 nm.
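Propagation loss figures like those quoted in dB/cm are commonly extracted by measuring insertion loss for several waveguide lengths and fitting a line (the cut-back method): the slope is the propagation loss and the intercept is the total fiber-chip coupling loss. A minimal sketch with entirely made-up readings (the paper does not publish this raw data):

```python
import numpy as np

# Hypothetical cut-back measurement: insertion loss (dB) vs. waveguide length (cm).
lengths_cm = np.array([0.5, 1.0, 2.0, 4.0])
insertion_loss_db = np.array([10.4, 10.9, 11.8, 13.7])   # illustrative readings

# Linear fit: slope = propagation loss (dB/cm),
#             intercept = total coupling loss of the two facets (dB).
slope, intercept = np.polyfit(lengths_cm, insertion_loss_db, 1)
print(f"propagation loss ~ {slope:.2f} dB/cm, coupling loss ~ {intercept:.2f} dB")
```

Fitting across several lengths separates the length-dependent propagation loss from the length-independent coupling loss, which a single-length measurement cannot do.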
Fast Gibbs sampling for high-dimensional Bayesian inversion
Solving ill-posed inverse problems by Bayesian inference has recently
attracted considerable attention. Compared to deterministic approaches, the
probabilistic representation of the solution by the posterior distribution can
be exploited to explore and quantify its uncertainties. In applications where
the inverse solution is subject to further analysis procedures, this can be a
significant advantage. Alongside theoretical progress, various new
computational techniques allow sampling from very high-dimensional posterior
distributions: In [Lucka2012], a Markov chain Monte Carlo (MCMC) posterior
sampler was developed for linear inverse problems with ℓ1-type priors. In
this article, we extend this single-component Gibbs-type sampler to a wide
range of priors used in Bayesian inversion, such as general ℓp priors
with additional hard constraints. Besides a fast computation of the
conditional, single component densities in an explicit, parameterized form, a
fast, robust and exact sampling from these one-dimensional densities is key to
obtain an efficient algorithm. We demonstrate that a generalization of slice
sampling can utilize their specific structure for this task and illustrate the
performance of the resulting slice-within-Gibbs samplers on different computed
examples. These new samplers allow us to perform sample-based Bayesian
inference in high-dimensional scenarios with certain priors for the first time,
including the inversion of computed tomography (CT) data with the popular
isotropic total variation (TV) prior. (Comment: submitted to "Inverse Problems".)
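The key ingredient described above is exact sampling from one-dimensional conditional densities inside a single-component Gibbs sweep. The generic stepping-out/shrinkage slice sampler below (after Neal's formulation) illustrates that task on a single conditional of a Gaussian likelihood with an ℓ1-type prior; it is a stand-in for the specialized exact 1D samplers the paper develops, and the density and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_density(x, lam=2.0):
    # 1D conditional of a Gaussian likelihood with an l1-type prior (illustrative)
    return -0.5 * x**2 - lam * abs(x)

def slice_sample_1d(x0, logp, w=1.0, n_samples=20000):
    """Basic slice sampler with stepping-out and shrinkage: draw a height
    under the density, find a horizontal slice, and sample uniformly on it."""
    xs = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        log_y = logp(x) + np.log(rng.uniform())      # auxiliary height
        left = x - w * rng.uniform()                 # random initial bracket
        right = left + w
        while logp(left) > log_y:                    # step out until outside slice
            left -= w
        while logp(right) > log_y:
            right += w
        while True:                                  # shrink bracket on rejection
            xp = rng.uniform(left, right)
            if logp(xp) > log_y:
                x = xp
                break
            if xp < x:
                left = xp
            else:
                right = xp
        xs[i] = x
    return xs

samples = slice_sample_1d(0.0, log_density)
print("sample mean ~", samples.mean())               # symmetric density, mean near 0
```

Because slice sampling only evaluates the (unnormalized) conditional density, the same machinery applies when the conditional has a non-smooth prior term, which is exactly the situation in Gibbs sweeps over ℓp-type posteriors.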
A modelling framework for the assessment of the impacts of alternative policy and management options on the sustainability of Finnish agrifood systems
Recently, a new project focussing on integrated assessment modelling of agrifood systems (IAM-Tools) has been launched at MTT Agrifood Research Finland to gather, evaluate, refine and develop these component models and to link them in an IAM framework for Finnish conditions.
Laboratory Experiments of Model-based Reinforcement Learning for Adaptive Optics Control
Direct imaging of Earth-like exoplanets is one of the most prominent
scientific drivers of the next generation of ground-based telescopes.
Typically, Earth-like exoplanets are located at small angular separations from
their host stars, making their detection difficult. Consequently, the adaptive
optics (AO) system's control algorithm must be carefully designed to
distinguish the exoplanet from the residual light produced by the host star.
A promising new avenue of research to improve AO control builds on
data-driven control methods such as reinforcement learning (RL). RL is an
active branch of machine learning research in which control of a system
is learned through interaction with the environment. Thus, RL can be seen as an
automated approach to AO control, making its usage essentially a turnkey
operation. In particular, model-based reinforcement learning (MBRL) has been
shown to cope with both temporal and misregistration errors. Similarly, it has
been demonstrated to adapt to non-linear wavefront sensing while being
efficient in training and execution.
In this work, we implement and adapt an RL method called Policy Optimization
for AO (PO4AO) to the GHOST test bench at ESO headquarters, where we
demonstrate strong performance of the method in a laboratory environment. Our
implementation allows the training to be performed parallel to inference, which
is crucial for on-sky operation. In particular, we study the predictive and
self-calibrating aspects of the method. The new implementation on GHOST running
PyTorch introduces only around 700 microseconds on top of hardware,
pipeline, and Python interface latency. We open-source well-documented code for
the implementation and specify the requirements for the RTC pipeline. We also
discuss the important hyperparameters of the method, the source of the latency,
and the possible paths for a lower-latency implementation. (Comment: Accepted for publication in JATI)
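The model-based RL loop described above — interact, fit a dynamics model, then act through the model — can be sketched on a deliberately tiny problem. The scalar dynamics, noise level, and greedy one-step planner below are all assumptions for illustration; PO4AO itself models high-dimensional wavefront residuals with neural networks and a learned policy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the AO control loop: unknown scalar residual dynamics
#   x_next = A_true * x + B_true * u + noise   (hypothetical system)
A_true, B_true, noise = 0.9, 0.5, 0.01

def step(x, u):
    return A_true * x + B_true * u + noise * rng.normal()

# 1) Exploration: interact with random actions and record transitions.
X, U, Xn = [], [], []
x = 1.0
for _ in range(200):
    u = rng.uniform(-1, 1)
    xn = step(x, u)
    X.append(x); U.append(u); Xn.append(xn)
    x = xn

# 2) Model learning: least-squares fit of the dynamics parameters [A, B].
F = np.column_stack([X, U])
A_hat, B_hat = np.linalg.lstsq(F, np.array(Xn), rcond=None)[0]

# 3) Control: act to drive the *predicted* next residual to zero
#    (greedy one-step planning through the learned model).
x = 1.0
for _ in range(50):
    u = -A_hat * x / B_hat            # minimizes |A_hat * x + B_hat * u|
    x = step(x, u)

print(f"learned A~{A_hat:.2f}, B~{B_hat:.2f}, final residual |x|~{abs(x):.3f}")
```

The same separation appears in the paper's on-sky design: model fitting (training) can run asynchronously while the latency-critical path only evaluates the current policy, which is why training in parallel to inference matters.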