The compatible solutes ectoine and 5-hydroxyectoine: Catabolism and regulatory mechanisms
To cope with osmotic stress, many microorganisms make use of short, osmotically active organic compounds, the so-called compatible solutes. Especially effective members of this class of molecules are the tetrahydropyrimidines ectoine and 5-hydroxyectoine. Both molecules are produced by a large number of microorganisms, not only to fend off osmotic stress but also, for example, to counter low- and high-temperature challenges. The biosynthetic pathway used by these organisms to synthesize ectoines has already been studied intensively, and the enzymes involved are well characterized, both biochemically and structurally. However, synthesis of ectoines is only half the story. Inevitably, ectoines are frequently released from producer cells in different environmental settings. Especially in highly competitive habitats such as the upper ocean layers, some bacteria have specialized in exactly this niche. The model organism used in this work is such a species: the marine bacterium Ruegeria pomeroyi DSS-3, which belongs to the Roseobacter clade. Roseobacter species are heterotrophic Proteobacteria that can live in symbiosis with phytoplankton as well as turn against them in a bacterial-warfare fashion to scavenge valuable nutrients. Ectoines can be imported by R. pomeroyi DSS-3 in a high-affinity fashion and used as energy, carbon, and nitrogen sources. To achieve this, the rings of both ectoines are opened by the hydrolase EutD, and the resulting products are deacetylated by the deacetylase EutE. The first hydrolysis products, α-ADABA (from ectoine) and hydroxy-α-ADABA (from hydroxyectoine), are deacetylated to DABA and hydroxy-DABA, which are transformed in further biochemical reactions into aspartate to fuel the cell's central metabolism. The role and functioning of the EutD and EutE enzymes, which work in a concerted fashion, are a central aspect of this work. Both enzymes could be biochemically and structurally characterized, and the architecture of the metabolic pathway could be illuminated. α-ADABA and hydroxy-α-ADABA are central not only to ectoine catabolism but also to the regulatory mechanisms associated with it. Both molecules serve as inducers of the central regulatory protein of this pathway, the MocR-/GabR-type regulator protein EnuR. In the framework of this dissertation, molecular details could be clarified that enable the EnuR repressor to sense both molecules with high affinity and subsequently derepress the genes for the import and catabolism of ectoines.
Performance, memory efficiency and programmability: the ambitious triptych of combining vertex-centricity with HPC
The field of graph processing has grown significantly due to the flexibility and wide applicability of the graph data structure. In the meantime, so too has interest from the community in developing new approaches to graph processing applications. In 2010, Google introduced the vertex-centric programming model through their framework Pregel. This consists of expressing computation from the perspective of a vertex, whilst inter-vertex communications are achieved via data exchanges along incoming and outgoing edges, using the message-passing abstraction provided. Pregel's high-level programming interface, designed around a set of simple functions, provides ease of programmability to the user. The aim is to enable the development of graph processing applications without requiring expertise in optimisation or parallel programming. Such challenges are instead abstracted from the user and offloaded to the underlying framework. However, fine-grained synchronisation, unpredictable memory access patterns and multiple sources of load imbalance make it difficult to implement the vertex-centric model efficiently on high-performance computing platforms without sacrificing programmability.
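To make the vertex-centric abstraction concrete, here is a minimal sketch of a Pregel-style PageRank in Python. The `Vertex` base class, its `compute` hook and the messaging helpers are hypothetical simplifications modelled on the published Pregel interface; they are not the actual API of Pregel or iPregel.

```python
# Minimal sketch of a Pregel-style, vertex-centric PageRank.
# The framework API below (Vertex, compute, send_to_neighbours,
# vote_to_halt) is a hypothetical simplification, not a real library.

class Vertex:
    def __init__(self, vid, out_edges):
        self.vid = vid
        self.out_edges = out_edges   # ids of neighbours along outgoing edges
        self.value = 0.0             # current PageRank estimate
        self.active = True

    def send_to_neighbours(self, msg, outbox):
        # Messages travel along outgoing edges and are delivered at the
        # start of the next superstep.
        for dst in self.out_edges:
            outbox.setdefault(dst, []).append(msg)

    def vote_to_halt(self):
        self.active = False


class PageRankVertex(Vertex):
    DAMPING = 0.85
    MAX_SUPERSTEPS = 30

    def compute(self, superstep, messages, outbox, num_vertices):
        # User code only sees "this vertex" and its incoming messages;
        # synchronisation and message delivery belong to the framework.
        if superstep == 0:
            self.value = 1.0 / num_vertices
        else:
            self.value = (1 - self.DAMPING) / num_vertices \
                         + self.DAMPING * sum(messages)
        if superstep < self.MAX_SUPERSTEPS:
            share = self.value / max(len(self.out_edges), 1)
            self.send_to_neighbours(share, outbox)
        else:
            self.vote_to_halt()


def run(vertices):
    # Bulk-synchronous driver: one compute call per active vertex per superstep.
    inbox, superstep = {}, 0
    while any(v.active for v in vertices.values()):
        outbox = {}
        for v in vertices.values():
            v.compute(superstep, inbox.get(v.vid, []), outbox, len(vertices))
        inbox = outbox
        superstep += 1
    return {vid: v.value for vid, v in vertices.items()}
```

A driver like `run` makes the superstep barrier and the per-vertex message buffers explicit; it is precisely this fine-grained synchronisation and irregular message traffic that the abstract identifies as the obstacle to an efficient HPC implementation.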
This research focuses on combining vertex-centric and High-Performance Computing (HPC), resulting in the development of a shared-memory framework, iPregel, which demonstrates that performance and memory efficiency similar to those of non-vertex-centric approaches can be achieved while preserving the programmability benefits of vertex-centric. Non-volatile memory is then explored to extend single-node capabilities, during which multiple versions of iPregel are implemented to experiment with the various data movement strategies.
Then, distributed-memory parallelism is investigated to overcome the resource limitations of single-node processing. A second framework named DiP, which ports the applicable iPregel optimisations to distributed memory, prioritises performance to achieve high scalability.
This research has resulted in a set of techniques and optimisations illustrated through a shared-memory framework, iPregel, and a distributed-memory framework, DiP. The former closes a gap of several orders of magnitude in both performance and memory efficiency, and is even able to process a graph of 750 billion edges using non-volatile memory. The latter has proved that this competitiveness can also be scaled beyond a single node, enabling the processing of the largest graph generated in this research, comprising 1.6 trillion edges. Most importantly, both frameworks achieved these performance and capability gains whilst also preserving programmability, which is the cornerstone of the vertex-centric programming model. This research therefore demonstrates that, by combining vertex-centricity and High-Performance Computing (HPC), it is possible to maintain performance, memory efficiency and programmability.
High-Energy Gamma-Ray Astronomy
This volume celebrates the 30th anniversary of the first very-high-energy (VHE) gamma-ray source detection: the Crab Nebula, observed by the pioneering ground-based Cherenkov telescope Whipple at teraelectronvolt (TeV) energies in 1989. As we entered a new era in TeV astronomy, with the imminent start of operations of the Cherenkov Telescope Array (CTA) and new facilities such as LHAASO and the proposed Southern Wide-Field Gamma-ray Observatory (SWGO), we conceived of this volume as a broad reflection on how far we have evolved in the astrophysics topics that dominated the field of TeV astronomy for much of recent history. In the past two decades, H.E.S.S., MAGIC and VERITAS pushed the field of TeV astronomy forward, consolidating TeV astrophysics and taking it from a few to hundreds of TeV emitters. Today, this is a mature field, covering almost every topic of modern astrophysics. TeV astrophysics is also at the center of the multi-messenger astrophysics revolution, as the extreme photon energies involved provide an effective probe of cosmic-ray acceleration, propagation and interaction, and of dark matter and exotic physics searches. The improvements that CTA will bring, together with the fact that CTA will operate as the first open observatory in the field, mean that gamma-ray astronomy is about to enter a new, precise and productive era. This book aims to serve as an introduction to the field and its state of the art, presenting a series of authoritative reviews on a broad range of topics to which TeV astronomy has provided essential contributions, and where some of the most relevant questions for future research lie.
The Fifteenth Marcel Grossmann Meeting
The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories, including recent developments in string theory, to precision tests of general relativity, including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma-ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics. Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma-ray bursts, supernovae, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large-scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma-ray astronomy, cosmic rays and the history of general relativity.
Technologies and Applications for Big Data Value
This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications for the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part, "Technologies and Methods", contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, "Processes and Applications", details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.
Languages and Compilers for Writing Efficient High-Performance Computing Applications
Many everyday applications, such as web search, speech recognition, and weather prediction, are executed on high-performance systems containing thousands of Central Processing Units (CPUs) and Graphics Processing Units (GPUs). These applications can be written either in low-level programming languages, such as NVIDIA CUDA, or in domain-specific languages, like Halide for image processing and PyTorch for machine learning programs. Despite the popularity of these languages, programmers face several challenges when developing efficient high-performance computing applications. First, since every hardware platform supports a different low-level programming model, programmers need to rewrite their applications in another programming language to utilize new hardware. Second, writing efficient code involves restructuring the computation to ensure (i) regular memory access patterns, (ii) non-divergent control flow, and (iii) complete utilization of different programmer-managed caches. Furthermore, since these low-level optimizations are known only to hardware experts, it is difficult for a domain expert to write optimized code for new computations. Third, existing domain-specific languages suffer from optimization barriers in their language constructs that prevent new optimizations and hence provide sub-optimal performance. To address these challenges, this thesis presents the following novel abstractions and compiler techniques for writing image processing and machine learning applications that can run efficiently on a variety of high-performance systems. First, this thesis presents techniques to optimize image processing programs on GPUs using the features of modern GPUs. These techniques improve the concurrency and register usage of the generated code to provide better performance than the state of the art. Second, this thesis presents NextDoor, the first system to provide an abstraction for writing graph sampling applications and efficiently executing them on GPUs. Third, this thesis presents CoCoNet, a domain-specific language to co-optimize communication and computation in distributed machine learning workloads. By breaking the optimization barriers in existing domain-specific languages, these techniques help programmers write correct and efficient code for diverse high-performance computing workloads.
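As an illustration of what a graph-sampling abstraction of the kind described above can look like, the sketch below expresses a fixed-length random walk as a per-sample user function. The `next_step`/`run_sampling` API is invented for this example and is not NextDoor's actual interface.

```python
# Hypothetical per-sample graph-sampling API, loosely in the spirit of
# NextDoor: the user writes only the per-step transition logic, and the
# framework is free to batch samples for parallel (e.g. GPU) execution.
import random

def next_step(graph, walk):
    # User-defined transition: pick a random out-neighbour of the last
    # vertex in the walk, or stop at a dead end.
    neighbours = graph.get(walk[-1], [])
    return random.choice(neighbours) if neighbours else None

def run_sampling(graph, start_vertices, walk_length):
    # Reference (sequential) executor; a GPU backend would instead run
    # all walks in parallel, e.g. one thread group per sample.
    walks = []
    for start in start_vertices:
        walk = [start]
        for _ in range(walk_length):
            nxt = next_step(graph, walk)
            if nxt is None:
                break
            walk.append(nxt)
        walks.append(walk)
    return walks

# Example: two random walks of length 3 on a toy graph (adjacency lists).
graph = {0: [1, 2], 1: [2], 2: [0]}
print(run_sampling(graph, start_vertices=[0, 1], walk_length=3))
```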
Understanding Quantum Technologies 2022
Understanding Quantum Technologies 2022 is a creative-commons ebook that
provides a unique 360-degree overview of quantum technologies from science and
technology to geopolitical and societal issues. It covers quantum physics
history, quantum physics 101, gate-based quantum computing, quantum computing
engineering (including quantum error corrections and quantum computing
energetics), quantum computing hardware (all qubit types, including quantum
annealing and quantum simulation paradigms, history, science, research,
implementation and vendors), quantum enabling technologies (cryogenics, control
electronics, photonics, components fabs, raw materials), quantum computing
algorithms, software development tools and use cases, unconventional computing
(potential alternatives to quantum and classical computing), quantum
telecommunications and cryptography, quantum sensing, quantum technologies
around the world, quantum technologies societal impact and even quantum fake
sciences. The main audience is computer science engineers, developers and IT
specialists as well as quantum scientists and students who want to acquire a
global view of how quantum technologies work, and particularly quantum
computing. This version is an extensive update to the 2021 edition published in
October 2021. (1132 pages, 920 figures, Letter format.)
Event Generators for High-Energy Physics Experiments
We provide an overview of the status of Monte Carlo event generators for high-energy particle physics. Guided by the experimental needs and requirements, we highlight areas of active development and opportunities for future improvements. Particular emphasis is given to physics models and algorithms that are employed across a variety of experiments. These common themes in event generator development lead to a more comprehensive understanding of physics at the highest energies and intensities, and allow models to be tested against a wealth of data that have been accumulated over the past decades. A cohesive approach to event generator development will allow these models to be further improved and systematic uncertainties to be reduced, directly contributing to future experimental success. Event generators are part of a much larger ecosystem of computational tools. They typically involve a number of unknown model parameters that must be tuned to experimental data, while maintaining the integrity of the underlying physics models. Making both these data and the analyses with which they have been obtained accessible to future users is an essential aspect of open science and data preservation. It ensures the consistency of physics models across a variety of experiments.