
    Origin of the Golden Mass of Galaxies and Black Holes

    We address the origin of the golden mass and time for galaxy formation and the onset of rapid black-hole growth. The preferred dark-halo mass of ~10^12 M_⊙ is translated to a characteristic epoch, z ~ 2, at which the typical forming halos have a comparable mass. We put together a coherent picture based on existing and new simple analytic modeling and cosmological simulations. We describe how the golden mass arises from two physical mechanisms that suppress gas supply and star formation below and above the golden mass: supernova feedback and virial shock heating of the circum-galactic medium (CGM), respectively. Cosmological simulations reveal that these mechanisms are responsible for a similar favored mass for the dramatic events of gaseous compaction into compact star-forming "blue nuggets", caused by mergers, counter-rotating streams or other mechanisms. This triggers inside-out quenching of star formation, which is then maintained by the hot CGM, leading to today's passive early-type galaxies. The blue-nugget phase is responsible for transitions in the galaxy's structural, kinematic and compositional properties, e.g., from dark-matter to baryon central dominance and from prolate to oblate shape. The growth of the central black hole is suppressed by supernova feedback below the critical mass, and is free to proceed once the halo is massive enough to lock the supernova ejecta by its deep potential well and the hot CGM. A compaction near the golden mass makes the black hole sink to the galactic center and triggers rapid black-hole growth. This ignites feedback by the Active Galactic Nucleus, which helps keep the CGM hot and maintain long-term quenching.
    Comment: 9 pages, 12 figures

    Towards Logical Architecture and Formal Analysis of Dependencies Between Services

    This paper presents a formal approach to modelling and analysing data- and control-flow dependencies between services within remotely deployed distributed systems of services. Our work aims at determining, for a concrete system, which parts of the system (or system model) are necessary to check a given property. The approach allows service decomposition oriented towards efficient checking of system properties, as well as analysis of dependencies within a system.
    Comment: Preprint, The 2014 Asia-Pacific Services Computing Conference (APSCC 2014)

    Efficient Lifelong Learning with A-GEM

    In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches in terms of sample complexity and computational and memory cost. Towards this end, we first introduce a new, more realistic evaluation protocol, whereby learners observe each example only once and hyper-parameter selection is done on a small and disjoint set of tasks, which is not used for the actual learning experience and evaluation. Second, we introduce a new metric measuring how quickly a learner acquires a new skill. Third, we propose an improved version of GEM (Lopez-Paz & Ranzato, 2017), dubbed Averaged GEM (A-GEM), which matches or exceeds the performance of GEM while being almost as computationally and memory efficient as EWC (Kirkpatrick et al., 2016) and other regularization-based methods. Finally, we show that all algorithms, including A-GEM, can learn even more quickly if they are provided with task descriptors specifying the classification tasks under consideration. Our experiments on several standard lifelong learning benchmarks demonstrate that A-GEM has the best trade-off between accuracy and efficiency.
    Comment: Published as a conference paper at ICLR 2019
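    The core of A-GEM is a single gradient projection: if the proposed update conflicts with the average gradient computed on the episodic memory, the conflicting component is removed. The following NumPy sketch illustrates that projection step in isolation (the function name and toy vectors are ours, not from the paper):

```python
import numpy as np

def a_gem_project(g, g_ref):
    """A-GEM-style gradient projection.

    g     -- gradient of the loss on the current task
    g_ref -- reference gradient on the episodic memory of past tasks

    If g points away from g_ref (their dot product is negative), the
    update would increase the memory loss, so we project g onto the
    half-space where the memory loss does not grow.
    """
    dot = np.dot(g, g_ref)
    if dot >= 0.0:
        return g  # no conflict: use the gradient unchanged
    # subtract the component of g that opposes g_ref
    return g - (dot / np.dot(g_ref, g_ref)) * g_ref

# A conflicting gradient gets its opposing component removed.
g = np.array([1.0, -1.0])
g_ref = np.array([0.0, 1.0])
print(a_gem_project(g, g_ref))  # [1. 0.]
```

    Because only one averaged reference gradient is kept, this costs a single extra dot product and vector update per step, which is where A-GEM's efficiency advantage over GEM's per-task quadratic program comes from.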

    Measuring scope 3 carbon emissions: water and waste: a guide to good practice


    Redefining the boundaries of interplanetary coronal mass ejections from observations at the ecliptic plane

    On 2015 January 6-7, an interplanetary coronal mass ejection (ICME) was observed at L1. This event, which can be associated with a weak and slow coronal mass ejection, allows us to discuss the differences between the boundaries of the magnetic cloud and the compositional boundaries. A fast stream from a solar coronal hole surrounding this ICME offers a unique opportunity to test the boundary-definition process and to explain the differences between the boundaries. Using Wind and ACE data, we perform a complementary analysis involving compositional, magnetic, and kinematic observations, providing relevant information regarding the evolution of the ICME as it travels away from the Sun. We propose erosion, at least at the front boundary of the ICME, as the main reason for the difference between the boundaries, and compositional signatures as the most precise diagnostic tool for the boundaries of ICMEs.
    Comment: 9 pages and 7 figures in the original format

    Symbolic Automata with Memory: a Computational Model for Complex Event Processing

    We propose an automaton model which is a combination of symbolic and register automata, i.e., we enrich symbolic automata with memory. We call such automata Register Match Automata (RMA). RMA extend the expressive power of symbolic automata by allowing formulas to be applied not only to the last element read from the input string, but to multiple elements stored in their registers. RMA also extend register automata by allowing arbitrary formulas, beyond equality predicates. We study the closure properties of RMA under union, concatenation, Kleene+, complement and determinization, and show that RMA, contrary to symbolic automata, are not determinizable when viewed as recognizers, without taking the output of transitions into account. However, when a window operator, a quintessential feature in Complex Event Processing, is used, RMA are indeed determinizable even when viewed as recognizers. We present detailed algorithms for constructing deterministic RMA from regular expressions extended with n-ary constraints. We show how RMA can be used in Complex Event Processing in order to detect patterns upon streams of events, using a framework that provides denotational and compositional semantics, and that allows for a systematic treatment of such automata.
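    The key idea, predicates that relate the current input element to values stored in registers, can be sketched in a few lines. The following toy runner is illustrative only (it is not the paper's formal construction, and the state names, register names and example pattern are ours):

```python
# Toy "register match automaton": each transition carries a predicate
# over the current element x and the registers r, plus an optional
# register to write x into.

def run(transitions, start, accept, stream):
    """Run the nondeterministic automaton; return True if any run accepts."""
    configs = [(start, {})]  # a configuration is (state, registers)
    for x in stream:
        nxt = []
        for state, regs in configs:
            for src, pred, write, dst in transitions:
                if src == state and pred(x, regs):
                    new_regs = dict(regs)
                    if write is not None:
                        new_regs[write] = x  # store the element in a register
                    nxt.append((dst, new_regs))
        configs = nxt
    return any(state in accept for state, _ in configs)

# Pattern with an n-ary constraint relating two stream elements:
# an event followed, later, by an event more than 5 larger than it.
transitions = [
    ("q0", lambda x, r: True,             "r1", "q1"),  # store first element
    ("q1", lambda x, r: x <= r["r1"] + 5, None, "q1"),  # keep waiting
    ("q1", lambda x, r: x > r["r1"] + 5,  None, "q2"),  # constraint satisfied
]
print(run(transitions, "q0", {"q2"}, [3, 4, 10]))  # True: 10 > 3 + 5
```

    A plain symbolic automaton could only test each element against a fixed formula; the register is what lets the last transition compare the current element against an earlier one.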

    MorphoSys: efficient colocation of QoS-constrained workloads in the cloud

    In hosting environments such as IaaS clouds, desirable application performance is usually guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated for unencumbered use for proper operation. Arbitrary colocation of applications with different SLAs on a single host may result in inefficient utilization of the host’s resources. In this paper, we propose that periodic resource allocation and consumption models -- often used to characterize real-time workloads -- be used for a more granular expression of SLAs. Our proposed SLA model has the salient feature that it exposes flexibilities that enable the infrastructure provider to safely transform SLAs from one form to another for the purpose of achieving more efficient colocation. Towards that goal, we present MORPHOSYS: a framework for a service that allows the manipulation of SLAs to enable efficient colocation of arbitrary workloads in a dynamic setting. We present results from extensive trace-driven simulations of colocated Video-on-Demand servers in a cloud setting. These results show that a potentially significant reduction in wasted resources (by as much as 60%) is possible using MORPHOSYS.
    National Science Foundation (0720604, 0735974, 0820138, 0952145, 1012798)
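    To make the periodic SLA idea concrete, a minimal sketch under our own simplifying assumptions: read an SLA as a pair (C, T) meaning "C resource units guaranteed every T time units", and use the classical EDF utilization bound (total utilization ≤ 1) as a sufficient colocation test. The paper's actual model and transformations are richer; this only illustrates why periodic SLAs make colocation decisions checkable:

```python
from fractions import Fraction

def utilization(slas):
    """Total utilization of periodic SLAs given as (C, T) pairs:
    C resource units guaranteed every T time units."""
    return sum(Fraction(c, t) for c, t in slas)

def can_colocate(slas):
    """Sufficient schedulability test under EDF: total utilization <= 1."""
    return utilization(slas) <= 1

# Two workloads fit together on one host (1/4 + 2/5 = 0.65)...
print(can_colocate([(1, 4), (2, 5)]))          # True
# ...but adding a third (3/7 ~ 0.43) would overcommit it.
print(can_colocate([(1, 4), (2, 5), (3, 7)]))  # False
```

    Note that a transformation such as (1, 4) -> (2, 8) leaves utilization unchanged while coarsening the allocation granularity; exposing which such rewrites are *safe* for a given workload is the kind of flexibility the SLA model above is designed to capture.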

    A compiler approach to scalable concurrent program design

    The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support. The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
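    The notion of defining abstractions as source-to-source transformations over a core notation can be sketched as a small rewrite system. Everything here (the `parmap` abstraction, the `spawn_each`/`join` core forms) is invented for illustration; it is not the paper's actual notation:

```python
# Minimal source-to-source transformation system: rules rewrite
# programmer-defined abstraction forms into a small core notation,
# applied recursively until no rule matches anywhere.

def rewrite(term, rules):
    """Apply rules bottom-up until a fixed point is reached."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)  # rewrite subterms first
    for rule in rules:
        out = rule(term)
        if out is not None:
            return rewrite(out, rules)  # a rule fired: keep rewriting
    return term

# Hypothetical abstraction ("parmap", f, xs): apply f to each element
# concurrently. One rule lowers it into core spawn/join primitives.
def parmap_rule(term):
    if isinstance(term, tuple) and term[:1] == ("parmap",):
        _, f, xs = term
        return ("join", ("spawn_each", f, xs))
    return None

prog = ("parmap", "square", ("stream", "inputs"))
print(rewrite(prog, [parmap_rule]))
# ('join', ('spawn_each', 'square', ('stream', 'inputs')))
```

    The same `rewrite` driver can then apply a second set of rules, the compilation transformations, to lower the core notation further toward an abstract concurrent machine, which is the layering the paper describes.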

    Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    Get PDF
    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.