The origin of organic emission in NGC 2071
Context: The physical origin behind organic emission in embedded low-mass
star formation has been fiercely debated in the last two decades. A multitude
of scenarios have been proposed, from a hot corino to PDRs on cavity walls to
shock excitation.
Aims: The aim of this paper is to determine the location and the
corresponding physical conditions of the gas responsible for organic emission
lines. The outflows around the small protocluster NGC 2071 are an ideal testbed
for differentiating between the various scenarios.
Methods: Observations of CH3OH, H2CO and CH3CN emission lines over a wide
range of excitation energies were obtained with Herschel-HIFI and the SMA.
Comparisons to a grid of radiative transfer models provide constraints on the
physical conditions (a toy version of such a grid fit is sketched after this
abstract). Comparison to the H2O line shape distinguishes gas-phase synthesis
from a sputtered origin.
Results: Emission of organics originates in three spots: the continuum
sources IRS 1 ('B') and IRS 3 ('A') as well as an outflow position ('F').
Densities are above 10^7 cm^-3 and temperatures are between 100 and 200 K.
CH3OH emission observed with HIFI originates in all three regions and cannot be
associated with a single region. Very little organic emission originates
outside of these regions.
Conclusions: Although the three regions are small (<1,500 AU), gas-phase
organics likely originate from sputtering of ices due to outflow activity. The
derived high densities (>10^7 cm^-3) are likely a requirement for organic
molecules to survive destruction by shock products. The lack of spatially
extended emission confirms that organic molecules cannot (re)form through
gas-phase synthesis, as opposed to H2O, which shows strong line-wing emission.
The lack of CH3CN emission at 'F' is evidence for a different ice-processing
history, owing to the absence of a protostar at that location and recent ice
mantle evaporation.
Comment: 10 pages, 8 figures. Accepted for Astronomy and Astrophysics.
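The "grid of radiative transfer models" comparison in the Methods amounts to a chi-square search over density and temperature. A minimal sketch of that kind of grid fit, with invented line fluxes and a toy stand-in for the actual radiative transfer code (which is not specified in the abstract):

```python
import numpy as np

# Invented line fluxes and uncertainties standing in for the observed
# CH3OH / H2CO / CH3CN lines (arbitrary units).
obs_flux = np.array([1.2, 0.8, 0.35])
obs_err = np.array([0.10, 0.08, 0.05])

def model_flux(density, temperature):
    """Toy stand-in for a real radiative transfer model: predicted flux
    grows with density and temperature, with a different scale per line."""
    return np.array([0.10, 0.07, 0.03]) * np.log10(density) * (temperature / 100.0)

densities = np.logspace(5, 9, 41)        # cm^-3
temperatures = np.linspace(50, 300, 26)  # K

# Evaluate chi-square on every grid point and keep the minimum.
chi2 = np.empty((densities.size, temperatures.size))
for i, n in enumerate(densities):
    for j, T in enumerate(temperatures):
        resid = (model_flux(n, T) - obs_flux) / obs_err
        chi2[i, j] = np.sum(resid ** 2)

i_best, j_best = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: n = {densities[i_best]:.1e} cm^-3, T = {temperatures[j_best]:.0f} K")
```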
Geant4 Monte Carlo simulation study of the secondary radiation fields at the laser-driven ion source LION
At the Center for Advanced Laser Applications (CALA), Garching, Germany, the LION (Laser-driven ION Acceleration) experiment is being commissioned, aiming at the production of laser-driven bunches of protons and light ions with multi-MeV energies and repetition frequencies up to 1 Hz. A Geant4 Monte Carlo-based study of the secondary neutron and photon fields expected during LION's different commissioning phases is presented. The goal of this study is the characterization of the secondary radiation environment present inside and outside the LION cave. Three different primary proton spectra, taken from experimental results reported in the literature and representative of three future stages of LION's commissioning path, are used. Together with protons, electrons are also emitted through the laser-target interaction and are likewise responsible for the production of secondary radiation. For the electron component of the three source terms, a simplified exponential model is used. Moreover, in order to reduce simulation complexity, a simplified two-component geometrical model of the proton and electron sources is proposed. It was found that the radiation environment inside the experimental cave is dominated by either photons or neutrons, depending on the position in the room and the source term used. The higher the intensity of the source, the higher the neutron contribution to the total dose at all scored positions. Maximum neutron and photon ambient dose equivalent values normalized to 10^9 simulated incident primaries were calculated at the exit of the vacuum chamber, where values of about 85 nSv (10^9 primaries)^-1 and 1.0 µSv (10^9 primaries)^-1 were found.
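The "simplified exponential model" for the electron source term is, in effect, an exponential energy spectrum. A minimal sketch of drawing primaries from such a spectrum by inverse-transform sampling, with a hypothetical temperature parameter that is not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical "temperature" of an exponential spectrum dN/dE ~ exp(-E/kT);
# the parameters actually used in the study are not quoted here.
kT_MeV = 2.0
n_primaries = 10 ** 6

# Inverse-transform sampling: for uniform u in [0, 1), E = -kT * ln(1 - u).
u = rng.random(n_primaries)
energies_MeV = -kT_MeV * np.log1p(-u)

print(f"mean energy: {energies_MeV.mean():.3f} MeV (expectation: {kT_MeV} MeV)")
```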
Formally Defining and Iterating Infinite Models
The wide adoption of MDE raises new situations where we need to manipulate very large models or even infinite model streams gathered at runtime. These new use cases for MDE raise challenges that were unforeseen at the time standard modeling frameworks were designed. This paper proposes a formal definition of an infinite model, as well as a formal framework for reasoning about queries over infinite models. This formal query definition aims at supporting the design and verification of operations that manipulate infinite models. First, we precisely identify the parts of MOF that must be refined to support infinite structures. Then, we provide a formal coinductive definition dealing with unbounded and potentially infinite graph-based structures.
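A lazy, generator-based stream is one concrete way to picture the "potentially infinite" structures the paper formalises. A minimal sketch, in Python rather than the paper's formal framework, of a query that stays productive over an infinite model stream:

```python
from itertools import count, islice

def model_elements():
    """An endless stream of model elements gathered at runtime; each element
    here is a fake dict, invented for illustration."""
    for i in count():
        yield {"id": i, "type": "Sensor" if i % 2 == 0 else "Actuator"}

def select(stream, predicate):
    """A query over an infinite model must stay lazy: it yields matches as
    they arrive instead of materialising the whole model."""
    return (e for e in stream if predicate(e))

# Only a finite prefix of the (conceptually infinite) answer is ever forced.
sensors = select(model_elements(), lambda e: e["type"] == "Sensor")
print(list(islice(sensors, 5)))
```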
High-throughput, automated quantification of white matter neurons in mild malformation of cortical development in epilepsy
Introduction
In epilepsy, the diagnosis of mild Malformation of Cortical Development type II (mMCD II) predominantly relies on the histopathological assessment of heterotopic neurons in the white matter. The exact diagnostic criteria for mMCD II are still ill-defined, mainly because findings from previous studies were contradictory due to small sample sizes and the use of different stains and quantitative systems. Advances in technology, leading to the development of whole slide imaging with high-throughput, automated quantitative analysis (WSA), may overcome these differences and provide objective, rapid, and reliable quantitation of white matter neurons in epilepsy. This study quantified the density of NeuN-immunopositive neurons in the white matter of up to 142 epilepsy and control cases using WSA. Quantitative data from WSA were compared to two other systems, semi-automated quantitation and the widely accepted method of stereology, to assess the reliability and quality of the WSA results.
Results
All quantitative systems showed a higher density of white matter neurons in epilepsy cases compared to controls (P = 0.002). We found that WSA with a user-defined region of interest (manual) was, in particular, superior to the other methods in sample size, ease of use, time required, and accuracy of region selection and cell recognition. Using the results from WSA manual, we propose a threshold value for the classification of mMCD II (a toy version of this threshold rule is sketched after this abstract), under which 78% of patients now classified with mMCD II were seizure-free at the second postoperative follow-up.
Conclusion
This study confirms the potential role of WSA in future quantitative diagnostic histology, especially for the histopathological diagnosis of mMCD II.
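The proposed classification reduces to a density cut-off over the quantified neuron counts. A minimal sketch, with an invented threshold and invented counts, since the study's actual cut-off value is not quoted in this abstract:

```python
# Invented NeuN-positive neuron counts and sampled white-matter areas (mm^2);
# the threshold below is illustrative, not the value derived in the study.
cases = {
    "case_01": (5200, 310.0),
    "case_02": (1100, 290.0),
}

MMCD_II_THRESHOLD = 10.0  # neurons per mm^2, hypothetical

for name, (n_neurons, area_mm2) in cases.items():
    density = n_neurons / area_mm2
    label = "mMCD II" if density > MMCD_II_THRESHOLD else "below threshold"
    print(f"{name}: {density:.1f} neurons/mm^2 -> {label}")
```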
Enabling the Reuse of Stored Model Transformations Through Annotations
With the increasing adoption of MDE, model transformations, one of its core concepts together with metamodeling, stand out as a valuable asset. Therefore, a mechanism to annotate and store existing model transformations appears as a critical need for their efficient exploitation and reuse. Unfortunately, although several reuse mechanisms have been proposed for software artifacts in general and models in particular, none of them is specially tailored to the domain of model transformations. In order to fill this gap, we present here such a mechanism. Our approach is composed of two elements: 1) a new DSL specially conceived for describing model transformations in terms of their functional and non-functional properties, and 2) a semi-automatic process for annotating and querying (repositories of) model transformations using the properties of our DSL as criteria. We validate the feasibility of our approach through a prototype implementation that integrates it with a GitHub repository.
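The annotate-then-query process can be pictured as structured property records attached to each stored transformation plus a filter over them. A minimal sketch, with property names invented for illustration rather than taken from the actual DSL:

```python
from dataclasses import dataclass, field

@dataclass
class TransformationAnnotation:
    """Functional and non-functional properties of a stored transformation.
    Field names are invented for illustration, not the DSL's vocabulary."""
    name: str
    source_metamodel: str
    target_metamodel: str
    tags: set = field(default_factory=set)

repository = [
    TransformationAnnotation("Class2Relational", "UML", "SQL", {"exogenous"}),
    TransformationAnnotation("UMLRefactor", "UML", "UML", {"endogenous", "refactoring"}),
]

def query(repo, source=None, tag=None):
    """Filter stored transformations by the requested properties."""
    return [t for t in repo
            if (source is None or t.source_metamodel == source)
            and (tag is None or tag in t.tags)]

print([t.name for t in query(repository, source="UML", tag="endogenous")])
```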
The JCMT Gould Belt survey: Dense core clusters in Orion B
The James Clerk Maxwell Telescope Gould Belt Legacy Survey obtained SCUBA-2 observations of dense cores within three sub-regions of Orion B: LDN 1622, NGC 2023/2024, and NGC 2068/2071, all of which contain clusters of cores. We present an analysis of the clustering properties of these cores, including the two-point correlation function and Cartwright's Q parameter. We identify individual clusters of dense cores across all three regions using a minimal spanning tree technique, and find that in each cluster the most massive cores tend to be centrally located. We also apply the independent M-Σ technique and find a strong correlation between core mass and the local surface density of cores. These two lines of evidence jointly suggest that some amount of mass segregation in clusters has already happened at the dense core stage.
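A minimal sketch of the minimal-spanning-tree step, using fake core positions and a hypothetical critical branch length (the survey's actual catalogue and cut are not given here): clusters fall out as the connected components left after cutting MST edges longer than the threshold.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
# Fake dense-core positions (e.g. offsets in arcmin) forming two loose groups.
cores = np.vstack([rng.normal(0.0, 1.0, (15, 2)), rng.normal(8.0, 1.0, (12, 2))])

# Minimal spanning tree over the complete distance graph of the cores.
mst = minimum_spanning_tree(squareform(pdist(cores))).toarray()

# Cutting branches longer than a (hypothetical) critical length splits the
# tree into the individual clusters.
critical_length = 3.0
mst[mst > critical_length] = 0.0

n_clusters, labels = connected_components(mst, directed=False)
print(f"{n_clusters} clusters, sizes: {np.bincount(labels)}")
```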
Distributed Model-to-Model Transformation with ATL on MapReduce
Efficient processing of very large models is a key requirement for the adoption of Model-Driven Engineering (MDE) in some industrial contexts. One of the central operations in MDE is rule-based model transformation (MT). It is used to specify manipulation operations over structured data coming in the form of model graphs. However, being based on computationally expensive operations like subgraph isomorphism, MT tools face issues with both memory occupancy and execution time when dealing with increasing model size and complexity. One way to overcome these issues is to exploit the wide availability of distributed clusters in the Cloud for the distributed execution of MT. In this paper, we propose an approach to automatically distribute the execution of model transformations written in a popular MT language, ATL, on top of a well-known distributed programming model, MapReduce. We show how the execution semantics of ATL can be aligned with the MapReduce computation model. We describe the extensions to the ATL transformation engine that enable distribution, and we experimentally demonstrate the scalability of this solution in a reverse-engineering scenario.
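The alignment of a transformation with MapReduce can be pictured as: map tasks apply rules locally to disjoint partitions of source elements, and the reduce step assembles the target model. A minimal sketch with a single toy rule, not the actual ATL engine extensions:

```python
from collections import defaultdict

# Toy source model: elements with an id and a type.
source_model = [{"id": i, "type": "Class" if i % 3 else "Attribute"}
                for i in range(9)]

def map_phase(partition):
    """Apply the (single, toy) rule locally to one partition of the source
    model, emitting (key, target-element) pairs as a MapReduce mapper would."""
    return [("target", {"id": e["id"], "type": "Table"})
            for e in partition if e["type"] == "Class"]  # rule guard

def reduce_phase(pairs):
    """Group the emitted elements by key and assemble the target model."""
    grouped = defaultdict(list)
    for key, element in pairs:
        grouped[key].append(element)
    return grouped

# Split the source model across three "workers" and run the two phases.
partitions = [source_model[k::3] for k in range(3)]
emitted = [pair for part in partitions for pair in map_phase(part)]
print(reduce_phase(emitted)["target"])
```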
Parallel model validation with Epsilon
Traditional model management programs, such as transformations, often perform poorly when dealing with very large models. Although many such programs are inherently parallelisable, the execution engines of popular model management languages were not designed for concurrency. We propose a scalable data- and rule-parallel solution for an established and feature-rich model validation language (EVL). We highlight the challenges encountered in retrofitting concurrency support and our solutions to these challenges. We evaluate the correctness of our implementation through rigorous automated tests. Our results show up to linear performance improvements with more threads and larger models, with significantly faster execution compared to interpreted OCL.
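The data- and rule-parallel scheme can be sketched as evaluating independent (rule, element) pairs on a thread pool. A toy constraint stands in for real EVL here, and all names are invented:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model and constraints; real EVL rules carry guards, messages and
# fixes, all omitted here.
model = [{"name": f"e{i}", "size": i - 1} for i in range(1000)]
rules = {
    "HasName": lambda e: bool(e["name"]),
    "SizeNonNegative": lambda e: e["size"] >= 0,
}

def check(job):
    rule_name, element = job
    return rule_name, element["name"], rules[rule_name](element)

# Each (rule, element) pair is independent, so the jobs can be evaluated
# in any order and on any number of threads.
jobs = [(r, e) for r in rules for e in model]
with ThreadPoolExecutor(max_workers=8) as pool:
    failures = [res for res in pool.map(check, jobs) if not res[2]]

print(f"{len(failures)} violation(s) out of {len(jobs)} checks")
```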
The impact of brain-derived neurotrophic factor Val66Met polymorphism on cognition and functional brain networks in patients with intractable partial epilepsy
INTRODUCTION: Medial temporal lobe epilepsy (mTLE) is the most common refractory focal epilepsy in adults. Around 30-40% of patients have prominent memory impairment and experience significant postoperative memory and language decline after surgical treatment. The BDNF Val66Met polymorphism has been associated with cognition and with variability in structural and functional hippocampal indices in healthy controls and some patient groups. AIMS: We examined whether BDNF Val66Met variation was associated with cognitive impairment in mTLE. METHODS: We investigated the association of the Val66Met polymorphism with cognitive performance (n = 276), postoperative cognitive change (n = 126), and fMRI activation patterns during memory-encoding and language paradigms in two groups of patients with mTLE (n = 37 and 34). RESULTS: mTLE patients carrying the Met allele performed more poorly on memory tasks than Val/Val patients and showed reduced medial temporal lobe activation and reduced task-related deactivation within the default mode network in both the fMRI memory and language tasks. CONCLUSIONS: Although cognitive impairment in epilepsy is the result of a complex interaction of factors, our results suggest a role for genetic factors in cognitive impairment in mTLE.
- …