Predicted chance that global warming will temporarily exceed 1.5 °C
The Paris Agreement calls for efforts to limit anthropogenic global warming to less than 1.5 °C above preindustrial levels. However, natural internal variability may compound anthropogenic warming to produce temporary excursions above 1.5 °C. Such excursions would not necessarily constitute a breach of the Paris Agreement, but would provide a warning that the threshold is being approached. Here we develop a new capability to predict the probability that global temperature will exceed 1.5 °C above preindustrial levels in the coming 5 years. For the period 2017 to 2021 we predict a 38% chance of monthly temperatures and a 10% chance of yearly temperatures exceeding 1.5 °C, with virtually no chance of the 5-year mean being above the threshold. Our forecasts will be updated annually to provide policy makers with advance warning of the evolving probability and duration of future warming events.
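A forecast like this can be summarized as exceedance probabilities over an ensemble of predicted temperature trajectories. The sketch below is a minimal illustration of that bookkeeping; the ensemble size, values, and spread are synthetic and invented for illustration, not the paper's forecast data.

```python
import random

random.seed(0)

# Hypothetical ensemble: 100 members, each 60 monthly global-mean
# anomalies (°C above preindustrial) for a 5-year window. The values
# are synthetic, chosen only to illustrate the bookkeeping.
ensemble = [[random.gauss(1.1, 0.15) for _ in range(60)] for _ in range(100)]

THRESHOLD = 1.5

def yearly_means(monthly):
    """Collapse 60 monthly values into 5 calendar-year means."""
    return [sum(monthly[i * 12:(i + 1) * 12]) / 12 for i in range(5)]

# Fraction of members with at least one month above the threshold
p_month = sum(max(m) > THRESHOLD for m in ensemble) / len(ensemble)
# ... with at least one yearly mean above the threshold
p_year = sum(max(yearly_means(m)) > THRESHOLD for m in ensemble) / len(ensemble)
# ... with the full 5-year mean above the threshold
p_5yr = sum(sum(m) / 60 > THRESHOLD for m in ensemble) / len(ensemble)

print(f"monthly: {p_month:.2f}, yearly: {p_year:.2f}, 5-year: {p_5yr:.2f}")
```

Averaging tightens the distribution, so the monthly exceedance probability always bounds the yearly one from above, and the yearly bounds the 5-year one, mirroring the 38% / 10% / near-zero ordering reported in the abstract.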
Distributed Component System Based On Architecture Description: The SOFA Experience
In this paper, the authors share the experience gathered during the design and implementation of a runtime environment for the SOFA component system. They focus on the issues of mapping the SOFA component definition language to C++ and of integrating CORBA middleware into the SOFA component system, with the aim of supporting transparently distributed applications in a real-life environment. The experience highlights general problems related to the type systems of architecture description languages and middleware implementations, the mapping of the type system into the implementation language, and the support for dynamic changes to the application architecture.
Benchmark precision and random initial state
The applications of software benchmarks place an obvious demand on the precision of the benchmark results. An intuitive and frequently employed approach to obtaining sufficiently precise benchmark results is to have the benchmark collect a large number of samples that are simply averaged or otherwise statistically processed. We show that this approach ignores an inherent and unavoidable nondeterminism in the initial state of the system under evaluation, often leading to an implausible estimate of result precision. We proceed by outlining the sources of nondeterminism in a typical system and illustrating their impact on selected classes of benchmarks. Finally, we suggest a method for quantitatively assessing the influence of nondeterminism on a benchmark, as well as an approach that provides a plausible estimate of result precision in the face of the nondeterminism.
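The effect can be illustrated with a toy model in which each benchmark run draws a random initial state that shifts all of its samples by a common offset; the offsets and noise magnitudes below are invented for illustration. Averaging many samples from a single run then suggests far better precision than repeated runs actually deliver:

```python
import random
import statistics

random.seed(42)

def run_benchmark(n_samples):
    # The random initial state (e.g. a memory layout decided at startup)
    # shifts every sample in this run by a common offset.
    offset = random.gauss(0.0, 5.0)
    return [100.0 + offset + random.gauss(0.0, 1.0) for _ in range(n_samples)]

# Naive estimate: one long run, precision from per-sample scatter.
one_run = run_benchmark(1000)
naive_sem = statistics.stdev(one_run) / len(one_run) ** 0.5

# Plausible estimate: repeat the whole run, precision from run means.
run_means = [statistics.mean(run_benchmark(1000)) for _ in range(30)]
run_sem = statistics.stdev(run_means) / len(run_means) ** 0.5

print(f"naive SEM: {naive_sem:.3f}, run-to-run SEM: {run_sem:.3f}")
```

Collecting more samples shrinks the naive estimate toward zero, but the run-to-run estimate stays dominated by the initial-state offset, which no amount of within-run averaging can remove.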
Automated Detection of Performance Regressions: The Mono Experience
Engineering a large software project involves tracking the impact of development and maintenance changes on the software's performance. One approach to tracking this impact is regression benchmarking, which involves automated benchmarking and evaluation of performance at regular intervals. Regression benchmarking must tackle the nondeterminism inherent to contemporary computer systems and execution environments, and its impact on the results. Using the example of a fully automated regression benchmarking environment for the Mono open-source project, we show how the problems associated with nondeterminism can be tackled using statistical methods.
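One statistical treatment that fits this setting is to compare the distributions of run means between two revisions rather than individual samples. The sketch below uses a permutation test on hypothetical timing data; the timings, sample sizes, and significance level are invented, and the actual Mono environment may use different methods.

```python
import random
import statistics

random.seed(1)

# Hypothetical per-run mean timings (ms) from two revisions; the new
# revision carries a ~5% slowdown on top of run-to-run noise.
old_rev = [random.gauss(100.0, 2.0) for _ in range(20)]
new_rev = [random.gauss(105.0, 2.0) for _ in range(20)]

def permutation_test(a, b, n_perm=2000):
    """Two-sided permutation test on the difference of means."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

p_value = permutation_test(old_rev, new_rev)
print("regression detected" if p_value < 0.01 else "no significant change")
```

Because it makes no normality assumption, a permutation test tolerates the skewed, multi-modal timing distributions that nondeterministic systems often produce.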
All-memristive neuromorphic computing with level-tuned neurons
In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amounts of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture, along with the homogeneous neuro-synaptic dynamics implemented with nanoscale phase-change memristors, represents a significant step towards the development of ultrahigh-density neuromorphic co-processors.
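The pair-based STDP rule mentioned here has a compact form: a synapse is strengthened when a presynaptic spike shortly precedes a postsynaptic spike and weakened in the opposite order, with exponentially decaying influence. A minimal sketch, with illustrative parameter values rather than the paper's device parameters:

```python
import math

# Illustrative STDP parameters (not the paper's device values)
A_PLUS = 0.05    # potentiation amplitude
A_MINUS = 0.025  # depression amplitude
TAU = 20.0       # decay time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)   # pre before post: strengthen
    return -A_MINUS * math.exp(dt / TAU)      # post before pre: weaken

print(stdp_dw(10.0, 15.0))  # causal pair, positive change
print(stdp_dw(15.0, 10.0))  # anti-causal pair, negative change
```

In the architecture described here, such a weight change is realized physically through the state dynamics of the phase-change memristors rather than by an explicit formula.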
Learning spatio-temporal patterns in the presence of input noise using phase-change memristors
Neuromorphic systems increasingly attract research interest owing to their ability to provide biologically inspired methods of computing as an alternative to the classic von Neumann architecture. In these systems, computing relies on spike-based communication between neurons, and memory is represented by evolving states of the synaptic interconnections. In this work, we first demonstrate how spike-timing-dependent plasticity (STDP) based synapses can be realized using the crystal-growth dynamics of phase-change memristors. Then, we present a novel learning architecture comprising an integrate-and-fire neuron and an array of phase-change synapses that is capable of detecting temporal correlations in parallel input streams. We demonstrate a continuous re-learning operation on a sequence of binary 20×20 pixel images in the presence of significant background noise. Experimental results using an array of phase-change cells as synaptic elements confirm the functionality and performance of the proposed learning architecture.
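The correlation-detection task can be sketched in software with a leaky integrate-and-fire neuron and a simplified plasticity rule standing in for the phase-change synapses; all stream counts, rates, and learning parameters below are invented for illustration and do not reflect the experimental setup.

```python
import random

random.seed(7)

N_STREAMS = 16   # parallel binary input streams
N_CORR = 4       # the first 4 streams spike together (correlated)
RATE = 0.05      # spike probability per time step
STEPS = 5000

weights = [0.5] * N_STREAMS   # synaptic weights, clipped to [0, 1]
v = 0.0                       # membrane potential
V_TH, LEAK, ETA = 1.5, 0.5, 0.01

for _ in range(STEPS):
    shared = random.random() < RATE
    spikes = [shared if i < N_CORR else (random.random() < RATE)
              for i in range(N_STREAMS)]
    v = LEAK * v + sum(w for w, s in zip(weights, spikes) if s)
    if v >= V_TH:             # neuron fires and resets
        v = 0.0
        # Simplified plasticity standing in for STDP: inputs active at
        # the firing step are potentiated, the rest depressed.
        for i, s in enumerate(spikes):
            weights[i] = (min(1.0, weights[i] + ETA) if s
                          else max(0.0, weights[i] - 0.5 * ETA))

avg_corr = sum(weights[:N_CORR]) / N_CORR
avg_rest = sum(weights[N_CORR:]) / (N_STREAMS - N_CORR)
print(f"correlated: {avg_corr:.2f}, uncorrelated: {avg_rest:.2f}")
```

The weights of the correlated streams drift toward saturation while those of the independent streams decay, which is the qualitative behavior a correlation detector of this kind is meant to exhibit.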