    Tightening the uncertainty principle for stochastic currents

    We connect two recent advances in the stochastic analysis of nonequilibrium systems: the (loose) uncertainty principle for the currents, which states that statistical errors are bounded by thermodynamic dissipation, and the analysis of the thermodynamic consistency of the currents in the light of symmetries. Employing the large deviation techniques presented in [Gingrich et al., Phys. Rev. Lett. 2016] and [Pietzonka et al., Phys. Rev. E 2016], we provide a short proof of the loose uncertainty principle and prove a tighter uncertainty relation for a class of thermodynamically consistent currents $J$. Our bound involves a measure of partial entropy production, which we interpret as the least amount of entropy that a system sustaining the current $J$ can possibly produce at a given steady state. We provide a complete mathematical discussion of quadratic bounds that allows us to determine which of them are optimal, and finally we argue that the relation for the Fano factor of the entropy production rate, $\mathrm{var}\,\sigma / \mathrm{mean}\,\sigma \geq 2$, is the most significant realization of the loose bound. We base our analysis on the formalism both of diffusions and of Markov jump processes, in the light of Schnakenberg's cycle analysis. Comment: 13 pages, 4 figures.
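
    For orientation, the loose uncertainty principle referred to above is the thermodynamic uncertainty relation of the two cited papers. A standard way to state it (our notation, with $k_B = 1$; not a formula quoted from this abstract) is:

```latex
% Loose thermodynamic uncertainty relation (Gingrich et al. 2016;
% Pietzonka et al. 2016), stated for orientation in units k_B = 1.
% Sigma is the total entropy produced over the observation time t.
\[
  \frac{\operatorname{var} J}{\langle J \rangle^{2}} \;\geq\; \frac{2}{\Sigma},
  \qquad \Sigma = \langle \sigma \rangle\, t .
\]
% Choosing the current J to be the entropy production itself yields the
% Fano-factor bound var(sigma)/mean(sigma) >= 2 quoted in the abstract.
```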

    Network-aware design-space exploration of a power-efficient embedded application

    The paper presents the design and multi-parameter optimization of a networked embedded application for the health-care domain. Several hardware, software, and application parameters, such as the clock frequency, sensor sampling rate, and data packet rate, are tuned at design time and run time according to the application specifications and operating conditions, in order to optimize hardware requirements, packet loss, and power consumption. Experimental results show that further power efficiency can be achieved by also considering communication aspects during design-space exploration.
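
    To make the idea concrete, a design-space exploration of this kind can be sketched as an exhaustive sweep over parameter combinations against cost models. The parameter ranges and the two cost functions below are hypothetical placeholders, not values or models from the paper:

```python
# Minimal sketch of a multi-parameter design-space exploration loop.
# Ranges and cost models (estimate_power, estimate_loss) are toy
# placeholders, not taken from the paper.
from itertools import product

CLOCK_MHZ = [8, 16, 32]          # candidate clock frequencies
SAMPLE_HZ = [50, 100, 200]       # candidate sensor sampling rates
PACKET_HZ = [1, 5, 10]           # candidate data packet rates

def estimate_power(clk, fs, fp):
    # toy model: power grows with clock, sampling, and radio activity
    return 0.1 * clk + 0.02 * fs + 1.5 * fp

def estimate_loss(fs, fp):
    # toy model: loss rises when each packet must carry too many samples
    return max(0.0, 0.01 * (fs / fp - 20))

best = None
for clk, fs, fp in product(CLOCK_MHZ, SAMPLE_HZ, PACKET_HZ):
    if estimate_loss(fs, fp) > 0.05:      # application constraint
        continue                          # infeasible: too much packet loss
    p = estimate_power(clk, fs, fp)
    if best is None or p < best[0]:
        best = (p, clk, fs, fp)

print("lowest-power feasible point:", best)
```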

    Classifying visual field loss in glaucoma through baseline matching of stable reference sequences

    Glaucoma is a common disease of the eye that often results in partial blindness. The main symptom of glaucoma is progressive loss of sight in the visual field over time. The clinical management of glaucoma involves monitoring the progress of the disease using a sequence of regular visual field tests. However, there is currently no universally accepted standard method for classifying changes in the visual field test data. Sequence matching techniques typically rely on similarity measures. However, visual field measurements are very noisy, particularly in people with glaucoma. It is therefore difficult to establish a reference data set including both stable and progressive visual fields. This paper proposes a method that uses a "baseline" computed from a query sequence to match stable sequences in a database of visual field measurements collected from volunteers. The purpose of the new method is to classify a given query sequence as stable or progressive. The results suggest that the new method gives a significant improvement in accuracy for identifying progressive sequences, though at a small penalty in accuracy for stable sequences.
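
    A minimal sketch of the baseline-matching idea follows. The baseline construction, similarity measure, and threshold here are illustrative assumptions, not the paper's actual method:

```python
# Illustrative baseline matching against stable reference sequences.
# The baseline definition, drift measure, and margin are assumptions.
import numpy as np

def baseline(seq):
    """Baseline = mean of the first two visits of a sequence."""
    return np.mean(seq[:2], axis=0)

def classify(query, stable_db, margin=2.0):
    """Label a query 'progressive' if its last visit has drifted from its
    own baseline by more than stable reference sequences typically do."""
    drift = np.mean(np.abs(query[-1] - baseline(query)))
    ref = np.mean([np.mean(np.abs(s[-1] - baseline(s))) for s in stable_db])
    return "progressive" if drift > ref + margin else "stable"

# toy data: 5 visits x 54 visual-field locations (decibel sensitivities)
rng = np.random.default_rng(0)
stable_db = [30 + rng.normal(0, 1, (5, 54)) for _ in range(20)]
query = 30 + rng.normal(0, 1, (5, 54))
query[-1] -= 5                            # simulated progressive field loss
print(classify(query, stable_db))         # -> progressive
```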

    An optimizing C front-end for hardware synthesis

    Modern embedded systems must execute a variety of high-performance real-time tasks, such as audio and image compression and decompression, channel coding and encoding, etc. High hardware design and mask production costs dictate the need to re-use an architectural platform for as many applications as possible. Reconfigurable platforms can be very effective in these cases, because they allow one to re-use the architecture across a variety of applications. The efficient use of a reconfigurable platform requires a methodology, and tools supporting it, in order to extensively explore the hardware/software design space without requiring developers to have a deep knowledge of the underlying architecture, since they often have a software background and only limited hardware design skills. This paper describes a tool that fits into a complete design flow for a reconfigurable processor and that allows one to efficiently transform a high-level specification into a lower-level one, more suitable for synthesis on the reconfigurable array. The effectiveness of the methodology is demonstrated by a complete implementation of a turbo decoder.
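
    As a tiny, generic illustration of the kind of source-level rewriting an optimizing front-end performs, here is a constant-folding pass over expressions, sketched in Python rather than C (the paper's tool and its target reconfigurable array are not reproduced here):

```python
# Constant folding: a classic front-end optimization that evaluates
# constant sub-expressions at compile time. Generic sketch, not the
# paper's C front-end.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class Folder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)                  # fold children first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

src = "y = (2 * 8 + 1) * x"
tree = ast.fix_missing_locations(Folder().visit(ast.parse(src)))
print(ast.unparse(tree))                          # -> y = 17 * x
```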

    Educational aspects of VLSI training at postgraduate level

    This paper describes the way a VLSI circuit project is intended to be used for training in the Postgraduate School for Computer Aided Electrical Engineering in Bucharest, Romania. The emphasis is placed on the main design steps, on the use of the various facilities of the CADENCE Edge™ VLSI design environment, and on stimulating strong team collaboration.

    An exclusion process on a tree with constant aggregate hopping rate

    We introduce a model of a totally asymmetric simple exclusion process (TASEP) on a tree network in which the aggregate hopping rate is constant from level to level. With this choice of hopping rates the model shows the same phase diagram as the one-dimensional case. The potential applications of our model are in the area of distribution networks, where a single large source supplies material to a large number of small sinks via a hierarchical network. We show that the mean-field theory (MFT) for our model is identical to that of the one-dimensional TASEP, and that this mean-field theory is exact for the TASEP on a tree in the limit of large branching ratio $b$ (or, equivalently, large coordination number). We then present an exact solution for the two-level tree (or star network) that allows the computation of any correlation function, and we confirm how the mean-field results are recovered as $b \rightarrow \infty$. As an example we compute the steady-state current as a function of the branching ratio. We present simulation results that confirm these findings and indicate that the convergence to MFT with increasing branching ratio is quite rapid. Comment: 20 pages. Submitted to J. Phys.
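
    For context, the one-dimensional TASEP mean-field relations that the tree model is stated to reproduce are the standard ones, with entry rate $\alpha$ and exit rate $\beta$ (textbook results, quoted here for orientation rather than from the paper):

```latex
% Standard mean-field relations for the 1D TASEP with entry rate \alpha
% and exit rate \beta (textbook results, stated here for context).
\[
  J = \rho\,(1-\rho), \qquad
  J =
  \begin{cases}
    \alpha(1-\alpha) & \text{low-density phase } (\alpha < \beta,\ \alpha < \tfrac12),\\[2pt]
    \beta(1-\beta)   & \text{high-density phase } (\beta < \alpha,\ \beta < \tfrac12),\\[2pt]
    \tfrac14         & \text{maximal-current phase } (\alpha, \beta \geq \tfrac12).
  \end{cases}
\]
```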

    Performance and energy-efficient implementation of a smart city application on FPGAs

    The continuous growth of modern cities and the demand for a better quality of life, coupled with the increased availability of computing resources, have led to increased attention to smart city services. Smart cities promise to deliver a better life to their inhabitants while simultaneously reducing resource requirements and pollution. They are thus perceived as a key enabler of sustainable growth. Among the many issues facing most cities in the world, one of the major concerns is traffic, which leads to a huge waste of time and energy, and to increased pollution. To optimize traffic in cities, one of the first steps is to obtain accurate, real-time information about the traffic flows in the city. This can be achieved through the application of automated video analytics to the video streams provided by a set of cameras distributed throughout the city. Image sequence processing can be performed either peripherally or centrally. In this paper, we argue that, since centralized processing has several advantages in terms of availability, maintainability, and cost, it is a very promising strategy to enable effective traffic management even in large cities. However, the computational costs are enormous, and thus require an energy-efficient high-performance computing approach. Field Programmable Gate Arrays (FPGAs) provide computational resources comparable to CPUs and GPUs, yet require much lower amounts of energy per operation (around 6× and 10× for the application considered in this case study). They are thus preferred resources for reducing both energy supply and cooling costs in the huge datacenters that will be needed by smart cities. In this paper, we describe efficient implementations of high-performance algorithms that can process traffic camera image sequences to provide traffic flow information in real time at a low energy and power cost.
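
    As a purely illustrative sketch of the kind of image-sequence processing such a pipeline builds on (the paper's FPGA implementation and its actual algorithms are not reproduced here), a crude frame-differencing traffic-flow estimator can be written as:

```python
# Frame-differencing traffic-flow estimator: an illustrative sketch
# only, not the algorithms or FPGA implementation from the paper.
import numpy as np

def motion_fraction(prev, curr, threshold=25):
    """Fraction of pixels whose grey level changed between two frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(np.count_nonzero(diff > threshold)) / diff.size

def flow_index(frames, threshold=25):
    """Average motion fraction over an image sequence: a crude proxy
    for how much traffic is moving through the scene."""
    return np.mean([motion_fraction(a, b, threshold)
                    for a, b in zip(frames, frames[1:])])

# toy sequence: 10 frames of 120x160 8-bit grey video with a moving block
rng = np.random.default_rng(1)
frames = []
for t in range(10):
    f = rng.integers(0, 30, (120, 160), dtype=np.uint8)
    f[40:60, 10 * t:10 * t + 20] = 200        # "vehicle" sliding right
    frames.append(f)
print(f"flow index: {flow_index(frames):.3f}")
```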

    Rényi entropy of the totally asymmetric exclusion process

    The Rényi entropy is a generalisation of the Shannon entropy that is sensitive to the fine details of a probability distribution. We present results for the Rényi entropy of the totally asymmetric exclusion process (TASEP). We calculate explicitly an entropy whereby the squares of the configuration probabilities are summed, using the matrix product formalism to map the problem onto one involving a six-direction lattice walk in the upper quarter plane. We derive the generating function across the whole phase diagram using an obstinate kernel method. This gives the leading behaviour of the Rényi entropy, and corrections, in all phases of the TASEP. The leading behaviour is given by the result for a Bernoulli measure, and we conjecture that this holds for all Rényi entropies. Within the maximal-current phase the correction to the leading behaviour is logarithmic in the system size. Finally, we remark upon a special property of equilibrium systems whereby discontinuities in the Rényi entropy arise away from phase transitions, which we refer to as secondary transitions. We find no such secondary transition for this nonequilibrium system, supporting the notion that these are specific to equilibrium cases.
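
    For reference, the Rényi entropy of order $\alpha$, whose $\alpha = 2$ case (summing the squares of the configuration probabilities) is the one computed explicitly here, has the standard definition:

```latex
% Standard definition of the R\'enyi entropy of order \alpha over
% configurations C with probabilities p(C); the \alpha = 2 case is the
% one computed explicitly in the abstract above.
\[
  H_\alpha = \frac{1}{1-\alpha} \,\ln \sum_{\mathcal{C}} p(\mathcal{C})^{\alpha},
  \qquad
  H_2 = -\ln \sum_{\mathcal{C}} p(\mathcal{C})^{2},
\]
% with the Shannon entropy recovered in the limit \alpha -> 1.
```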