303 research outputs found

    Consumption of Methane and CO_2 by Methanotrophic Microbial Mats from Gas Seeps of the Anoxic Black Sea

    The deep anoxic shelf of the northwestern Black Sea has numerous gas seeps, which are populated by methanotrophic microbial mats in and above the seafloor. Above the seafloor, the mats can form tall reef-like structures composed of porous carbonate and microbial biomass. Here, we investigated the spatial patterns of CH_4 and CO_2 assimilation in relation to the distribution of ANME groups and their associated bacteria in mat samples obtained from the surface of a large reef structure. A combination of different methods, including radiotracer incubation, beta microimaging, secondary ion mass spectrometry, and catalyzed reporter deposition fluorescence in situ hybridization, was applied to sections of mat obtained from the large reef structure to locate hot spots of methanotrophy and to identify the responsible microbial consortia. In addition, CO_2 reduction to methane was investigated in the presence or absence of methane, sulfate, and hydrogen. The mat had an average δ^(13)C carbon isotopic signature of −67.1‰, indicating that methane was the main carbon source. Regions dominated by ANME-1 had isotope signatures that were significantly heavier (−66.4 ± 3.9‰ [mean ± standard deviation; n = 7]) than those of the more central regions dominated by ANME-2 (−72.9 ± 2.2‰; n = 7). Incorporation of ^(14)C from radiolabeled CH_4 or CO_2 revealed one hot spot for methanotrophy and CO_2 fixation close to the surface of the mat and a low assimilation efficiency (1 to 2% of the methane oxidized). Replicate incubations of the mat with ^(14)CH_4 or ^(14)CO_2 revealed that there was interconversion of CH_4 and CO_2. The level of CO_2 reduction was about 10% of the level of anaerobic oxidation of methane. However, since considerable methane formation was observed only in the presence of methane and sulfate, the process appeared to be a reverse reaction of anaerobic oxidation of methane rather than net methanogenesis.
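As a rough illustration of the kind of comparison behind the "significantly heavier" claim, a two-sample (Welch) t statistic can be computed for two groups of n = 7 isotope signatures. The values below are invented to echo only the reported means; they are not the study's data.

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical delta13C values (permil), chosen so the group means match the
# reported -66.4 (ANME-1) and -72.9 (ANME-2), n = 7 each.
anme1 = [-62.1, -64.0, -65.5, -66.4, -67.3, -69.0, -70.5]
anme2 = [-70.0, -71.5, -72.3, -72.9, -73.5, -74.6, -75.5]

t = welch_t(anme1, anme2)
print(f"t = {t:.2f}")  # |t| well above ~2 suggests a real difference at n = 7
```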

    Pkhd1

    Autosomal-recessive polycystic kidney disease (ARPKD; MIM #263200) is a severe, hereditary, hepato-renal fibrocystic disorder that causes early childhood morbidity and mortality. Mutations in the polycystic kidney and hepatic disease 1 (PKHD1) gene, which encodes the fibrocystin/polyductin complex (FPC), cause all typical forms of ARPKD. Several mouse lines carrying diverse, genetically engineered disruptions in the orthologous Pkhd1 gene have been generated, but none expresses the classic ARPKD renal phenotype. In the current study, we characterized a spontaneous mouse Pkhd1 mutation that is transmitted as a recessive trait and causes cystic liver (cyli), similar to the hepato-biliary disease in ARPKD, but which is exacerbated by age, sex, and parity. We mapped the mutation to Chromosome 1 and determined that an insertion/deletion mutation causes a frameshift within Pkhd1 exon 48, which is predicted to result in a premature termination codon (UGA).
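The frameshift mechanism described above can be illustrated with a toy translation routine: deleting a single nucleotide shifts the reading frame downstream of the deletion and can bring a UGA stop codon into frame prematurely. The sequence here is invented for illustration only; it is not the actual Pkhd1 exon 48 sequence.

```python
# Stop codons of the standard genetic code (mRNA alphabet).
STOP = {"UAA", "UAG", "UGA"}

def translate_to_stop(mrna):
    """Return the codons read in frame until (and including) the first stop."""
    codons = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        codons.append(codon)
        if codon in STOP:
            break
    return codons

normal = "AUGGCUCUGACCAAUU"       # in frame: AUG GCU CUG ACC AAU, no stop
mutant = normal[:3] + normal[4:]  # delete one nucleotide -> frameshift

print(translate_to_stop(normal))
print(translate_to_stop(mutant))  # the shift brings a UGA into frame early
```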

    Making in the moment: The dynamic cognition of musicians-in-action

    Watching highly skilled experts in the midst of improvised performance can be a source of mystification and wonder. Understanding this mystery in more detail, especially for music, is a major motivation for my dissertation. Moreover, while there are multiple avenues through which one could explore improvisation, I will primarily utilize the tools of embodied cognitive science to help better understand it in general and bebop jazz improvisation in particular. I furthermore consider possible ways to define improvisation as an essential precondition of my project. In what follows, I will not defend one type of embodied approach over all alternatives. Instead, I will consider improvisation in light of three different strands of research: shared intentions, ecological psychology (especially in regard to a theory of affordances), and predictive processing. The focus on these strands, taken both as individual research programs and as a single unit of analysis, closely mirrors the essential core commitment of embodiment by providing a dynamic account that spans brain, body, and world. It likewise does so in ways that reject any neat partition of inputs, cognition, and outputs. For shared intentions, the main issue I will explore concerns how to best account for the dynamic moment-to-moment engagement of musicians with each other, especially in light of improvisation as an intentional activity, and covering the differences between novices and experts in bebop performance. For a theory of affordances, the focus will be on the interplay between a skilled agent and a structured environment as an essential part of musical perception and action. Finally, for predictive processing, a picture of the brain as a predictive, anticipatory engine takes center stage to explain how musicians can respond to the extremely fast time constraints that are part of musical performance. I will also consider how novelty can be accounted for on predictive processing accounts.
    Through considerations of these different areas, the impacts of embodied approaches will be further clarified, helping us to better model, understand, and appreciate the cognition of musicians-in-action.

    Systems Engineering

    The book "Systems Engineering: Practice and Theory" is a collection of articles written by developers and researchers from all around the globe. Most present methodologies for individual Systems Engineering processes; others consider issues in adjacent knowledge areas and sub-areas that significantly contribute to systems development, operation, and maintenance. Case studies include aircraft, spacecraft, and space systems development, as well as post-analysis of data collected during the operation of large systems. Important issues related to the "bottlenecks" of Systems Engineering, such as the complexity, reliability, and safety of different kinds of systems; the creation, operation, and maintenance of services; system-human communication; and the management tasks carried out during system projects, are addressed in the collection. This book is for people who are interested in the modern state of the Systems Engineering knowledge area and for systems engineers involved in its different activities. Some articles may be a valuable source for university lecturers and students; most of the case studies can be used directly in Systems Engineering courses as illustrative materials.

    Virtual Runtime Application Partitions for Resource Management in Massively Parallel Architectures

    This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize the on-chip resources. As the dark silicon era approaches, in which power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e., computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g., energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e., the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e., VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work, several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments.
    Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
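As a loose sketch of one abstract resource from the VRAP idea (the voltage/frequency operating point), the snippet below assigns each application the lowest-power point that still meets its deadline and then sums the platform power. All names, operating points, and workloads are hypothetical; the thesis's actual algorithms and hardware support are far more involved.

```python
OPERATING_POINTS = [            # (label, relative speed, power in mW)
    ("low",  0.5,  80),
    ("mid",  0.75, 140),
    ("high", 1.0,  250),
]

def pick_point(work_cycles, deadline_cycles_at_full_speed):
    """Lowest-power point whose speed still fits the work within the deadline."""
    for label, speed, power in OPERATING_POINTS:  # sorted by power, ascending
        if work_cycles / speed <= deadline_cycles_at_full_speed:
            return label, power
    raise ValueError("deadline infeasible even at the fastest point")

# Hypothetical applications: (work in cycles, deadline in full-speed cycles).
apps = {"decoder": (40, 100), "filter": (90, 100), "control": (70, 80)}
chosen = {name: pick_point(*req) for name, req in apps.items()}
total_power = sum(p for _, p in chosen.values())
print(chosen, total_power)
```

The greedy per-application choice works here because the points are scanned in ascending power order; a platform-wide budget check (as VRAP's runtime management implies) would sit on top of this.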

    Design optimization of the ¯PANDA Micro-Vertex-Detector for high performance spectroscopy in the charm quark sector

    The ¯PANDA experiment is one of the key projects at the future FAIR facility, which is currently under construction at GSI Darmstadt. Measurements will be performed with antiprotons using a fixed-target setup. The main scope of ¯PANDA is the study of the strong interaction in the charm quark sector. Therefore, high-precision spectroscopy of hadronic systems in this energy domain is a prerequisite. The Micro-Vertex-Detector (MVD), as the innermost part of the tracking system, plays an important role in achieving this goal. At present, the ¯PANDA project has moved beyond the initial phase of conceptual design studies. Based on these results, an optimization of the individual detector subsystems, and thus also of the MVD, is necessary to continue the overall detector development towards commissioning. Therefore, a comprehensive and realistic detector model must be developed, which on the one hand fulfils the physics requirements but on the other hand also includes feasible engineering solutions. This task is the main scope of the present work. The outcome of these studies will deliver important contributions to the technical design report for the ¯PANDA MVD, which is the next step towards the final detector assembly. In the first part of this work, the main physics aspects of charm spectroscopy are highlighted and a complete review of the experimental status in this field is given. Afterwards, all relevant details of the ¯PANDA experiment are summarized. The conceptual design and associated hardware developments for the MVD are discussed separately in the following chapters. They deliver basic input for the performed detector optimization, which is presented in the central part. Furthermore, this section describes the development of a comprehensive detector model for the MVD and its introduction into the physics simulation framework of ¯PANDA. The final part contains a compilation of extended simulations with the developed detector model.
    This includes the determination of basic detector parameters as well as the full simulation of physics channels. The obtained results demonstrate compliance with all the requirements that warrant the desired physics performance.

    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. Moore’s law and Dennard scaling worked in tandem for five decades to deliver commensurate exponential performance gains, first through single-core and later multi-core designs. However, recalculating Dennard scaling for recent small technology nodes shows that the ongoing multi-core growth demands exponentially increasing thermal design power to achieve a linear performance increase. This trend runs into a power wall, which raises the amount of dark or dim silicon on future multi-/many-core chips more and more. Furthermore, as the number of transistors on a single chip increases, susceptibility to internal defects and aging phenomena, both exacerbated by high chip thermal density, makes monitoring and managing chip reliability before and after its activation a necessity. The proposed approaches and experimental investigations in this thesis focus on two main tracks: 1) power awareness and 2) reliability awareness in the dark silicon era; the two tracks are later combined. In the first track, the main goal is to maximize returns in terms of key design metrics, such as performance and throughput, while honoring a maximum power limit. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of proceeding along Moore’s law can still be achieved, albeit to a lesser extent. Via the track of reliability awareness in the dark silicon era, we show that dark silicon can be treated as an opportunity to be exploited for benefits such as lifetime extension and online testing.
    We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a certain target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing on the cores. After demonstrating power and reliability awareness in the presence of dark silicon, two approaches are discussed as case studies in which the two are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management. The second approach provides a trade-off between workload performance and system reliability by simultaneously honoring the given power budget and the target reliability.
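The dark/dim silicon trade-off described above can be illustrated with a toy budget problem: given a fixed power budget, choose how many cores run at full frequency ("bright"), at reduced frequency ("dim"), or stay powered off ("dark") so as to maximize throughput. The power and throughput figures below are invented for the sketch and do not come from the thesis.

```python
FULL_POWER, FULL_PERF = 4.0, 1.0   # watts and relative throughput per core
DIM_POWER,  DIM_PERF  = 1.5, 0.45  # a dim core is slower but more efficient

def best_mix(n_cores, budget):
    """Exhaustively pick the bright/dim split maximizing total throughput."""
    best = (0.0, 0, 0)                        # (throughput, bright, dim)
    for bright in range(n_cores + 1):
        for dim in range(n_cores - bright + 1):
            power = bright * FULL_POWER + dim * DIM_POWER
            if power <= budget:
                perf = bright * FULL_PERF + dim * DIM_PERF
                best = max(best, (perf, bright, dim))
    return best

perf, bright, dim = best_mix(n_cores=16, budget=20.0)
print(perf, bright, dim, 16 - bright - dim)   # remaining cores stay dark
```

With these (made-up) numbers the dim point is the more efficient per watt, so the budget is best spent on many dim cores with the rest dark, which is exactly the kind of outcome the dim-silicon discussion anticipates.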

    Behavioural, molecular, and cellular bases governing ambiguous learning and memory in Drosophila

    Animals' survival heavily relies on their ability to establish causal relationships within their environment.
    That is made possible through learning experiences during which animals build associative links between the events they are exposed to. Most of the stimuli encountered are actually compounds, the constituents of which may have been reinforced (i.e., associated with a pleasant or unpleasant stimulus) in a different, sometimes opposite way. How compounds are perceived and processed is a central topic in the field of associative learning. In theory, a given compound AB may be learnt as the sum of its components (A+B), which is referred to as "elemental learning", but it may also be learnt as a distinct stimulus (which is called "configural learning"). Finally, AB may bear both constituent-related features and compound-specific features called "unique cues" (AB = A+B+u). Configural and unique-cue processing enable the resolution of ambiguous tasks such as Negative Patterning (NP), in which A and B are reinforced when presented alone but not in the compound AB. Although the neural correlates of simple associative learning are well described, those involved in non-elemental learning remain unclear. In this project, we rework a typical olfactory conditioning protocol based on semi-automated odour/electric-shock association, allowing us to demonstrate for the first time that Drosophila is able to solve NP tasks. A behavioural study of NP solving shows that its resolution relies on training repetition leading to a gradual change in the representation of the compound AB, which shifts away from its constituents and thus becomes easier to distinguish. Next, we develop a computational model of olfactory associative learning in Drosophila based on structural and functional in vivo data. Exploratory simulations of the model allow us to identify a theoretical mechanism enabling NP acquisition, the validity of which can be tested in vivo using neurogenetic tools only available in Drosophila.
    We propose that during NP training, flies first acquire associative links between A, B, and their reinforcement, which induces an ambiguity because the compound AB is presented without reinforcement. Over the course of the training cycles, however, the representation of the non-reinforced stimulus is inhibited while the representations of the reinforced stimuli are consolidated. This differential modulation eventually leads to a shift in odour representations that allows flies to better distinguish between the constituents and their compound, thus facilitating NP resolution. We identify the APL (Anterior Paired Lateral) neurons as a plausible implementation of this theoretical mechanism, as APL inhibitory activity is specifically engaged during the presentation of the non-reinforced stimulus, which is necessary for NP acquisition but dispensable for non-ambiguous forms of learning. Lastly, we explore the role of APL in a broader context of ambiguity resolution. In conclusion, our work validates Drosophila as a robust model for investigating non-elemental learning and presents a promising model of the underlying neural mechanisms using a combination of behaviour, modelling, and neurogenetic tools. We believe this opens the way to numerous interesting projects focused on understanding how animals extract robust associations in a complex world.
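Negative Patterning is structurally similar to the XOR problem: no linear ("elemental") sum of A and B can respond strongly to each element alone yet weakly to the compound, whereas adding a compound-specific "unique cue" feature makes the task learnable. The toy delta-rule learner below is a generic illustration of that point, not the thesis's circuit model.

```python
def train(patterns, n_features, epochs=2000, lr=0.1):
    """Delta-rule (LMS) training of a linear response model."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for x, target in patterns:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def responses(w, stims):
    return [sum(wi * xi for wi, xi in zip(w, x)) for x in stims]

# Features: [A, B]               A+          B+          AB-
elemental = [([1, 0], 1), ([0, 1], 1), ([1, 1], 0)]
# Features: [A, B, u], where u is a unique cue present only for the compound.
unique = [([1, 0, 0], 1), ([0, 1, 0], 1), ([1, 1, 1], 0)]

w_e = train(elemental, 2)
w_u = train(unique, 3)
print(responses(w_e, [x for x, _ in elemental]))  # cannot reach 1, 1, 0
print(responses(w_u, [x for x, _ in unique]))     # converges to ~1, ~1, ~0
```

The elemental model settles on a compromise (all three responses near 1/3 to 2/3), while the unique-cue model fits the contingencies exactly, mirroring why a compound-specific signal (such as the proposed AB-specific APL engagement) is needed for NP.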

    Environmental Emissions

    Today, the issue of environmental emissions is more important than ever before. Air pollution with particulates, soot, carbon, aerosols, heavy metals, and so on is causing adverse effects on human health as well as the environment. This book presents new research and findings related to environmental emissions, pollution, and future sustainability. Written by experts in the field, chapters cover such topics as health effects, emission monitoring and mitigation, and emission composition and measurement.