47 research outputs found

    THE HIGH CADENCE TRANSIENT SURVEY (HITS). I. SURVEY DESIGN AND SUPERNOVA SHOCK BREAKOUT CONSTRAINTS

    Indexing: Web of Science; Scopus. We present the first results of the High Cadence Transient Survey (HiTS), a survey whose objective is to detect and follow up optical transients with characteristic timescales from hours to days, especially the earliest hours of supernova (SN) explosions. HiTS uses the Dark Energy Camera and a custom pipeline for image subtraction, candidate filtering, and candidate visualization, which runs in real time so that we can react rapidly to new transients. We discuss the survey design, the technical challenges associated with the real-time analysis of these large volumes of data, and our first results. In our 2013, 2014, and 2015 campaigns, we detected more than 120 young SN candidates, but we did not find a clear signature from the short-lived SN shock breakouts (SBOs) originating after the core collapse of red supergiant stars, which was the initial science aim of this survey. Using the empirical distribution of limiting magnitudes from our observational campaigns, we measured the expected recovery fraction of randomly injected SN light curves, which included SBO optical peaks produced with models from Tominaga et al. (2011) and Nakar & Sari (2010). From this analysis, we cannot rule out the models from Tominaga et al. (2011) under any reasonable distribution of progenitor masses, but we can marginally rule out the brighter and longer-lived SBO models from Nakar & Sari (2010) under our best-guess distribution of progenitor masses. Finally, we highlight the implications of this work for future massive data sets produced by astronomical observatories such as LSST.
    http://iopscience.iop.org/article/10.3847/0004-637X/832/2/155/meta
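    The recovery-fraction measurement described above can be sketched as a Monte Carlo: inject synthetic light curves at random explosion times, compare each observing epoch against a limiting magnitude drawn from the survey's empirical distribution, and count the fraction recovered. A minimal toy version follows; the Gaussian limiting-magnitude distribution, cadence, and triangular light-curve shape are illustrative assumptions, not the HiTS data or the SBO models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the empirical limiting-magnitude distribution
# (illustrative values only, not the actual HiTS measurements).
limiting_mags = rng.normal(loc=23.0, scale=0.3, size=1000)

def recovery_fraction(peak_mag, rise_days, cadence_hours=2.0,
                      survey_days=5.0, n_trials=2000, n_detections=2):
    """Fraction of injected transients detected in at least `n_detections`
    epochs, using a toy light curve that rises linearly to `peak_mag`."""
    epochs = np.arange(0.0, survey_days, cadence_hours / 24.0)
    recovered = 0
    for _ in range(n_trials):
        t0 = rng.uniform(-rise_days, survey_days)  # random explosion time
        phase = epochs - t0
        mag = np.full_like(epochs, np.inf)         # undetectable by default
        rising = (phase > 0) & (phase < rise_days)
        mag[rising] = peak_mag + 2.0 * (1.0 - phase[rising] / rise_days)
        mag[phase >= rise_days] = peak_mag         # flat after peak (toy)
        limits = rng.choice(limiting_mags, size=epochs.size)
        if np.count_nonzero(mag < limits) >= n_detections:
            recovered += 1
    return recovered / n_trials
```

    Sweeping `peak_mag` and `rise_days` over a grid of model light curves is what turns per-epoch limiting magnitudes into the kind of recovery-fraction constraint the abstract describes.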

    Pulseq: A rapid and hardware-independent pulse sequence prototyping framework

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/136354/1/mrm26235.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/136354/2/mrm26235_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/136354/3/mrm26235-sup-0001-suppinfo.pd

    One-sided versus two-sided stochastic descriptions

    It is well known that discrete-time finite-state Markov chains, which are described by one-sided conditional probabilities in which dependence on the past reduces to dependence on the present, can also be described as one-dimensional Markov fields, that is, nearest-neighbour Gibbs measures for finite-spin models, which are described by two-sided conditional probabilities. In such Markov fields the time interpretation of past and future is replaced by the space interpretation of an interior volume, surrounded by an exterior to the left and to the right. If we relax the Markov requirement to weak dependence, that is, continuous dependence, either on the past (generalising the Markov-chain description) or on the external configuration (generalising the Markov-field description), it turns out that this equivalence breaks down, and neither class contains the other. In one direction this result has been known for a few years; in the opposite direction a counterexample was found recently. Our counterexample is based on the phenomenon of entropic repulsion in long-range Ising (or "Dyson") models. Comment: 13 pages, contribution for "Statistical Mechanics of Classical and Disordered Systems"
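    In the classical Markov case the equivalence is explicit: for a stationary chain with transition matrix P, the two-sided conditional probability of the middle symbol given both neighbours follows from the Markov property, P(X_i = b | X_{i-1} = a, X_{i+1} = c) = P(a,b)P(b,c)/P²(a,c). A small sketch (the two-state transition matrix is an arbitrary example, not from the paper):

```python
import numpy as np

# Transition matrix of a two-state Markov chain (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def two_sided(P, a, b, c):
    """P(X_i = b | X_{i-1} = a, X_{i+1} = c) for a stationary chain.
    By the Markov property this is P[a,b] * P[b,c] / (P @ P)[a,c]."""
    return P[a, b] * P[b, c] / (P @ P)[a, c]

# Sanity check: for fixed neighbours, the two-sided kernel sums to 1 over b,
# so the chain is simultaneously a one-dimensional Markov field.
for a in range(2):
    for c in range(2):
        total = sum(two_sided(P, a, b, c) for b in range(2))
        assert abs(total - 1.0) < 1e-12
```

    The paper's point is that once the Markov property is weakened to continuous dependence, no such mechanical translation between the one-sided and two-sided kernels exists.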

    Compassion as a practical and evolved ethic for conservation

    © The Author(s) 2015. Published by Oxford University Press on behalf of the American Institute of Biological Sciences. The ethical position underpinning decision-making is an important concern for conservation biologists when setting priorities for interventions. The recent debate on how best to protect nature has centered on contrasting intrinsic and aesthetic values against utilitarian and economic values, driven by an inevitable global rise in conservation conflicts. These discussions have primarily been targeted at the success of species and ecosystems, without explicitly expressing concern for the intrinsic value and welfare of individual animals. In part, this is because animal welfare has historically been thought of as an impediment to conservation. However, practical implementations of conservation that provide good welfare outcomes for individuals are no longer conceptually challenging; they have become reality. This reality, included under the auspices of "compassionate conservation," reflects an evolved ethic for sharing space with nature and is a major step forward for conservation.

    Pathways towards coexistence with large carnivores in production systems

    Coexistence between livestock grazing and carnivores in rangelands is a major challenge in terms of sustainable agriculture, animal welfare, species conservation and ecosystem function. Many effective non-lethal tools exist to protect livestock from predation, yet their adoption remains limited. Using a social-ecological transformations framework, we present two qualitative models that depict transformative change in rangelands grazing. Developed through participatory processes with stakeholders from South Africa and the United States of America, the models articulate drivers of change and the essential pathways to transition from routine lethal management of carnivores towards mutually beneficial coexistence. The pathways define broad actions that incorporate multiple values in grazing systems, including changes to livestock management practices, financial support, industry capacity building, research, improved governance and marketing initiatives. A key finding is the new concept of ‘Predator Smart Farming’, a holistic and conscientious approach to agriculture, which increases the resilience of landscapes, animals (domesticated and wild) and rural livelihoods. Implementation of these multiple pathways would lead to a future system that ensures thriving agricultural communities, secure livelihoods, reduced violence toward animals, and landscapes that are productive and support species conservation and coexistence.

    Applying Time Warp to CPU Design

    No full text
    This paper exemplifies the similarities between Time Warp and computer architecture concepts and terminology, and the continued trend toward convergence of ideas in these two areas. Time Warp can provide a means to describe the complex mechanisms being used to allow the instruction execution window to be enlarged. Furthermore, it can extend the current mechanisms, which do not scale, in a scalable manner. The issues involved in implementing Time Warp in a CPU design are also examined, and illustrated with reference to the Wisconsin Multiscalar machine and the Waikato WarpEngine. Finally, the potential performance gains of such a system are briefly discussed. 1. Introduction Computer designers currently face a very interesting set of challenges. The steady increase in the number of transistors on a chip and the speed at which a chip can be clocked has continued its inexorable progress. In 1997, chips with millions of transistors and clock speeds of hundreds of MHz are in routine production an..
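    Time Warp's core mechanism, optimistic execution with rollback when a straggler arrives, can be sketched in a few lines. This is a toy illustrating state saving and restoration only; a full Time Warp implementation would also re-execute rolled-back events from its queue and send anti-messages, which are omitted here.

```python
class TimeWarpProcess:
    """Toy optimistic execution: events are applied eagerly in arrival order;
    a straggler (timestamp below local virtual time) rolls state back to the
    snapshot taken before the earliest invalidated event."""

    def __init__(self):
        self.lvt = 0           # local virtual time
        self.state = 0
        self.snapshots = []    # (timestamp, state before that event)

    def handle(self, timestamp, delta):
        if timestamp < self.lvt:  # straggler detected: roll back
            while self.snapshots and self.snapshots[-1][0] >= timestamp:
                _, self.state = self.snapshots.pop()
            # NOTE: the undone events are simply discarded in this toy;
            # real Time Warp would re-execute them after the straggler.
        self.snapshots.append((timestamp, self.state))
        self.state += delta
        self.lvt = timestamp
```

    The analogy to CPU design is that the snapshot list plays the role of the speculative state buffer, and a straggler corresponds to a mispredicted or mis-speculated instruction forcing a squash.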

    Constraints on Parallelism Beyond 10 Instructions Per Cycle

    The problem of extracting Instruction-Level Parallelism at levels of 10 instructions per clock and higher is considered. Two different architectures which use speculation on memory accesses to achieve this level of performance are reviewed. It is pointed out that while this form of speculation gives high potential parallelism, it is necessary to retain execution state so that incorrect speculation can be detected and subsequently squashed. Simulation results show that the space to store such state is a critical resource in obtaining good speedup. To make good use of the space it is essential that state be stored efficiently and that it be retired as soon as possible. A number of techniques for extracting the best usage from the available state storage are introduced. Keywords: instruction level parallelism, speculation 1 Introduction Increasingly computer architects and system designers are seeking to extract more computer performance by making use of parallelism. There are a number of ..
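    The claim that state storage is the critical resource can be illustrated with a toy occupancy model: each in-flight speculative instruction holds one state-buffer slot from issue until in-order retirement, so a small buffer serialises execution while a larger one allows overlap. The latencies and buffer sizes below are arbitrary illustrations, not the paper's simulation parameters.

```python
import random

def toy_ipc(state_slots, n_instr=20000, max_latency=8, seed=1):
    """Toy model of speculative execution: every in-flight instruction
    occupies one state-buffer slot from issue until in-order retirement.
    Returns instructions per cycle; more slots allow more overlap."""
    rng = random.Random(seed)
    latencies = [rng.randint(1, max_latency) for _ in range(n_instr)]
    inflight = []          # completion cycles, kept in program order
    issued = 0
    cycle = 0
    while issued < n_instr or inflight:
        cycle += 1
        # Retire completed instructions from the head, strictly in order.
        while inflight and inflight[0] <= cycle:
            inflight.pop(0)
        # Issue new instructions into whatever state slots are free.
        while issued < n_instr and len(inflight) < state_slots:
            inflight.append(cycle + latencies[issued])
            issued += 1
    return n_instr / cycle
```

    Comparing `toy_ipc(4)` against `toy_ipc(64)` shows the effect the abstract describes: with few slots the head-of-line instruction blocks retirement and issue stalls, so extracted parallelism tracks buffer size rather than the parallelism available in the instruction stream.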

    Switching Circuit Optimization for Matrix Gradient Coils

    Matrix gradient coils with up to 84 coil elements were recently introduced for magnetic resonance imaging. Ideally, each element is driven by a dedicated amplifier, which may be technically and financially infeasible. Instead, several elements can be connected in series (called a “cluster”) and driven by a single amplifier. In previous works, a set of clusters, called a “configuration,” was sought to approximate a target field shape. Because a magnetic resonance pulse sequence requires several distinct field shapes, a mechanism to switch between configurations is needed. This can be achieved by a hypothetical switching circuit connecting all terminals of all elements with each other and with the amplifiers. For a predefined set of configurations, a switching circuit can be designed to require only a limited number of switches. Here we introduce an algorithm to minimize the number of switches without affecting the ability of the configurations to accurately create the desired fields. The problem is modeled using graph theory and split into 2 sequential combinatorial optimization problems that are solved using simulated annealing. For the investigated cases, the results show that compared to unoptimized switching circuits, the reduction of switches in optimized circuits ranges from 8% up to 44% (average of 31%). This substantial reduction is achieved without impeding circuit functionality. This study shows how the technical effort associated with implementation and operation of a matrix gradient coil is related to different hardware setups and how to reduce this effort.
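    The optimization described, minimizing distinct switches over a fixed set of cluster configurations, can be sketched with a generic simulated-annealing loop. The encoding below (element orderings and orientations within clusters as the search moves, and one switch per distinct terminal pair as the cost) is a simplified assumption for illustration, not the paper's two-stage graph-theoretic formulation.

```python
import math
import random

def connections(cluster):
    """Terminal pairs wired in series within one cluster. Each entry is
    (element_id, flipped); terminals are (element_id, 'A' or 'B')."""
    pairs = set()
    for (e1, f1), (e2, f2) in zip(cluster, cluster[1:]):
        out_terminal = (e1, 'A' if f1 else 'B')
        in_terminal = (e2, 'B' if f2 else 'A')
        pairs.add(frozenset((out_terminal, in_terminal)))
    return pairs

def switch_count(configs):
    """One switch per distinct terminal pair used by any configuration."""
    used = set()
    for config in configs:
        for cluster in config:
            used |= connections(cluster)
    return len(used)

def anneal(configs, steps=20000, t0=2.0, seed=0):
    """Reorder/flip elements inside clusters to maximise connection reuse
    across configurations, reducing the number of switches needed."""
    rng = random.Random(seed)
    cost = switch_count(configs)
    best = cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9
        cluster = rng.choice(rng.choice(configs))
        if len(cluster) < 2:
            continue
        backup = list(cluster)
        i, j = rng.randrange(len(cluster)), rng.randrange(len(cluster))
        if rng.random() < 0.5:
            cluster[i], cluster[j] = cluster[j], cluster[i]  # reorder move
        else:
            e, f = cluster[i]
            cluster[i] = (e, not f)                          # flip move
        new_cost = switch_count(configs)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            best = min(best, cost)
        else:
            cluster[:] = backup                              # reject move
    return best
```

    The Metropolis acceptance rule (`exp((cost - new_cost) / temp)`) lets the search occasionally take uphill moves early on, which is what distinguishes simulated annealing from greedy local search on this kind of combinatorial landscape.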

    Space Constraints on High Levels of ILP

    ILP is one way of effectively using the large number of transistors available on modern CPUs. Two different architectures which use speculation on memory accesses to do this are reviewed. While this form of speculation gives high potential parallelism, it is necessary to retain execution state so that incorrect speculation can be detected and subsequently squashed. It is shown by theoretical arguments and simulation that the space to store such state is a critical resource in obtaining good speedup. The state must be stored efficiently and retired as soon as possible. It is also shown that larger problem sizes may achieve lower extracted parallelism, despite having a higher potential parallelism. 1 Introduction Increasingly, computer architects and system designers seek to extract more computer performance by making use of parallelism. There are a number of ways of approaching this. This paper considers the problem of extracting instruction level parallelism (ILP), that is, paral..