
    Convergence Speed of the Consensus Algorithm with Interference and Sparse Long-Range Connectivity

    We analyze the effect of interference on the convergence rate of average consensus algorithms, which iteratively compute the measurement average by message passing among nodes. It is usually assumed that these algorithms converge faster with a greater exchange of information (i.e., with increased network connectivity) in every iteration. However, when interference is taken into account, it is no longer clear whether the rate of convergence increases with network connectivity. We study this problem for randomly placed consensus-seeking nodes connected through an interference-limited network. We investigate the following questions: (a) How does the rate of convergence vary with increasing communication range of each node? and (b) How does this result change when each node is allowed to communicate with a few selected far-off nodes? When nodes schedule their transmissions to avoid interference, we show that the convergence speed scales as r^{2-d}, where r is the communication range and d is the number of dimensions. This scaling is the result of two competing effects of increasing r: a longer schedule for interference-free transmission versus the speed gain due to improved connectivity. Hence, although one-dimensional networks converge faster with a greater communication range despite the increased interference, the two effects exactly offset one another in two dimensions. In higher dimensions, increasing the communication range can actually degrade the rate of convergence. Our results thus underline the importance of factoring in the effect of interference in the design of distributed estimation algorithms.
    Comment: 27 pages, 4 figures
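    The r^{2-d} trade-off can be illustrated in a few lines of code. The sketch below is ours, not the authors': it assumes the per-iteration convergence gain grows like r^2 with connectivity while the interference-free schedule length grows like r^d, so their ratio reproduces the scaling in the abstract.

```python
# Illustrative sketch of the r^(2-d) scaling (our toy model, not the
# paper's code). Assumption: per-iteration convergence gain ~ r^2 from
# improved connectivity; TDMA schedule length ~ r^d to avoid interference.

def convergence_speed(r, d):
    """Convergence speed per unit time: per-iteration gain / schedule length."""
    per_iteration_gain = r ** 2    # speed-up from denser connectivity
    schedule_length = r ** d       # slots needed for interference-free transmission
    return per_iteration_gain / schedule_length   # = r^(2-d)

for d in (1, 2, 3):
    speeds = [round(convergence_speed(r, d), 3) for r in (1.0, 2.0, 4.0)]
    print(f"d={d}: speeds for r=1,2,4 -> {speeds}")
# d=1: speed grows with r; d=2: the two effects cancel; d=3: speed degrades.
```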

    High-throughput and high-precision laser micromachining with ps-pulses in synchronized mode with a fast polygon line scanner

    To be competitive in laser micromachining, high throughput is an important aspect. One way to increase productivity is to scale up the ablation process, i.e., to linearly increase the laser repetition rate together with the average power and the scan speed. In the MHz regime, this demands scan speeds that commercially available galvo scanners cannot provide. In this work we report results obtained with a polygon line scanner offering a maximum scan speed of 100 m/s and a 50 W ps-laser system, synchronized via the SuperSync™ technology. We show results for the removal rate and the surface quality when working at the optimum, i.e., most efficient, point at repetition rates up to 8.2 MHz.
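    The scale-up rule described above, raising repetition rate, average power, and scan speed together so that pulse energy and pulse spacing stay fixed, can be checked with simple arithmetic; the sketch below uses only the figures quoted in the abstract.

```python
# Back-of-envelope check of the synchronized scale-up, using the values
# quoted in the abstract (50 W, 8.2 MHz, 100 m/s).

avg_power = 50.0       # W, average power of the ps-laser system
rep_rate = 8.2e6       # Hz, maximum repetition rate reported
scan_speed = 100.0     # m/s, maximum speed of the polygon line scanner

pulse_energy = avg_power / rep_rate     # J per pulse, held fixed under scale-up
pulse_pitch = scan_speed / rep_rate     # m between consecutive pulse centers

print(f"pulse energy: {pulse_energy * 1e6:.2f} uJ")  # ~6.10 uJ
print(f"pulse pitch:  {pulse_pitch * 1e6:.2f} um")   # ~12.20 um
```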

    Effect of folds and pockets on the topology and propagation of premixed turbulent flames

    Propagation of premixed turbulent flames is examined using a hybrid Navier-Stokes/front-tracking methodology, within the context of a hydrodynamic model. The flame, treated as a surface of density discontinuity separating the burned and unburned gases, propagates relative to the fresh mixture at a speed that depends on the local mixture (through a Markstein length) and flow conditions (through the stretch rate), and the flow field is modified in turn by gas expansion; only positive Markstein lengths are considered, for which thermo-diffusive instabilities are absent. Depending on the Markstein length, we identified in a previous publication two modes of propagation, sub-critical and super-critical, based on whether the effects of the Darrieus-Landau instability are absent or dominant, respectively. Those results were limited to low turbulence intensities, where the mathematical representation of the flame front was based on an explicit single-valued function. In the present paper we utilize a generalized representation of the flame surface that allows for multivalued and disjointed interfaces, thus extending the results to higher turbulence intensities. We show that as the turbulence intensity increases, the influence of the Darrieus-Landau instability on the super-critical mode of propagation progressively decreases, and in the newly identified highly turbulent regime the flame is dominated completely by the turbulence for all values of the Markstein number, i.e., with no distinction between sub- and super-critical conditions. Primary importance is given to the determination of the turbulent flame speed and its dependence on turbulence intensity, which transitions from a quadratic to a sub-linear scaling as the turbulence level increases. Moreover, the exponent of the sub-linear scaling for the turbulent flame speed is generally lower than the corresponding exponent for the scaling of the flame surface area ratio, which is often used to determine the turbulent flame speed experimentally. We show that the leveling-off in the rate of increase of the turbulent flame speed with turbulence intensity is due to frequent flame folding and the detachment of pockets of unburned gas, which reduce the average area of the main flame surface, while the lower exponent in the scaling law for the turbulent flame speed compared to that of the flame surface area ratio is due to flame stretching. Disregarding the effect of flame stretch for mixtures of positive Markstein length results in overestimating the turbulent flame speed. Finally, we characterize the flame-turbulence interaction via quantities such as the mean vorticity and mean strain, illustrating the effects of the incoming turbulence on the flame and the modification of the flow by the flame on the unburned and burned sides.
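    As a reading aid for the scaling discussion, here is a minimal sketch (ours, with placeholder constants rather than the paper's fitted values) of a turbulent flame speed law that is quadratic in the turbulence intensity at low levels and crosses over to a sub-linear power law at high levels.

```python
# Illustrative two-regime scaling for the normalized turbulent flame
# speed increment S_T/S_L - 1 as a function of u = u'/S_L. The crossover
# point and the sub-linear exponent are placeholders, not fitted values.

def flame_speed_increment(u, n_sub=0.7, u_cross=1.0):
    """Quadratic below u_cross, sub-linear above; continuous at u_cross."""
    if u <= u_cross:
        return u ** 2
    # prefactor chosen so the two branches match at u = u_cross
    return u_cross ** (2.0 - n_sub) * u ** n_sub

for u in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"u'/S_L = {u:5}: S_T/S_L - 1 ~ {flame_speed_increment(u):.3f}")
```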

    Distributed Rate Scaling in Large-Scale Service Systems

    We consider a large-scale parallel-server system in which each server independently adjusts its processing speed in a decentralized manner. The objective is to minimize the overall cost, which comprises the average cost of maintaining the servers' processing speeds and a non-decreasing function of the tasks' sojourn times. The problem is compounded by the lack of knowledge of the task arrival rate and the absence of centralized control or communication among the servers. We draw on ideas from stochastic approximation and present a novel rate scaling algorithm that ensures convergence of all server processing speeds to the globally asymptotically optimal value as the system size increases. Apart from the algorithm design, a key contribution of our approach lies in demonstrating how concepts from the stochastic approximation literature can be leveraged to effectively tackle learning problems in large-scale, distributed systems. En route, we also analyze the performance of a fully heterogeneous parallel-server system, where each server has a distinct processing speed, which might be of independent interest.
    Comment: 32 pages, 4 figures
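    To make the stochastic-approximation idea concrete, here is a generic Robbins-Monro sketch of one server tuning its speed from a noisy cost signal; the cost model and constants are our assumptions, not the algorithm from the paper.

```python
# Generic Robbins-Monro sketch of decentralized speed tuning (ours; the
# paper's rate scaling algorithm and cost structure differ). A server
# only observes a noisy gradient of its own cost.
import random

ARRIVAL_RATE = 0.8   # lambda; unknown to the server in the actual setting

def noisy_cost_gradient(mu):
    """Noisy gradient of a hypothetical cost c(mu) = mu^2/2 + 1/(mu - lambda)."""
    true_grad = mu - 1.0 / (mu - ARRIVAL_RATE) ** 2
    return true_grad + random.gauss(0.0, 0.1)    # observation noise

mu = 2.0                               # initial processing speed
for n in range(1, 5001):
    step = 1.0 / (10 + n)              # step sizes: sum = inf, sum of squares < inf
    mu -= step * noisy_cost_gradient(mu)
    mu = max(mu, 0.9)                  # projection keeps the queue stable (mu > lambda)

print(f"speed after 5000 iterations: {mu:.3f}")   # near the cost minimizer, ~1.60
```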

    Dynamical Phase Transition in One Dimensional Traffic Flow Model with Blockage

    The effect of a bottleneck in a linear trafficway is investigated using a simple cellular automaton model. Introducing into the rule-184 cellular automaton a blockage site that transmits cars with some transmission probability, we observe three different phases with increasing car concentration: besides the free phase and the jam phase, which already exist in the pure rule-184 model, a mixed phase of the two appears at intermediate concentrations, with well-defined phase boundaries. This mixed phase, in which cars pile up behind the blockage to form a jam region, is characterized by a constant flow. In the thermodynamic limit, we obtain exact expressions for several characteristic quantities in terms of the car density and the transmission rate. These quantities depend strongly on the system size at the phase boundaries; we analyse these finite-size effects using finite-size scaling.
    Comment: 14 pages, LaTeX; 13 PostScript figures available upon request; OUCMT-94-
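    A minimal simulation of this model is easy to write down; the sketch below (our construction, with illustrative parameter values rather than the paper's) implements a rule-184 ring with one blockage site crossed with probability p, and measures the resulting flow.

```python
# Rule-184 traffic CA on a ring with a single blockage site that lets a
# car hop onward only with transmission probability p (our minimal
# sketch; L, density, p, and steps are illustrative values).
import random

L, density, p, steps = 200, 0.3, 0.5, 2000
road = [1 if random.random() < density else 0 for _ in range(L)]
BLOCKAGE = 0                           # index of the defect site

flow = 0
for _ in range(steps):
    # rule 184, parallel update: a car advances iff the next cell is empty
    movers = [i for i in range(L)
              if road[i] == 1 and road[(i + 1) % L] == 0
              and (i != BLOCKAGE or random.random() < p)]
    for i in movers:                   # move targets are disjoint, so safe
        road[i], road[(i + 1) % L] = 0, 1
        flow += 1

print(f"flow per site per step: {flow / (steps * L):.3f}")
```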

    Updating, Upgrading, Refining, Calibration and Implementation of Trade-Off Analysis Methodology Developed for INDOT

    As part of the ongoing evolution towards integrated highway asset management, the Indiana Department of Transportation (INDOT), through SPR studies in 2004 and 2010, sponsored research that developed an overall framework for asset management. The framework was intended to support decisions on alternative investments across the program areas on the basis of a broad range of performance measures, given the various alternative actions or spending amounts that could be applied to the different asset types in the different program areas. The 2010 study also developed theoretical constructs for scaling and amalgamating the different performance measures, and for analyzing the different kinds of trade-offs. The research products from the present study include this technical report, which shows how the theoretical underpinnings of the methodology developed for INDOT in 2010 have been updated, upgraded, and refined. The report also includes a case study that shows how the trade-off analysis framework has been calibrated using available data. Supplementing the report is Trade-IN Version 1.0, a set of flexible and easy-to-use spreadsheets that implement the trade-off framework. With this framework and using current or future data, INDOT's asset managers are better positioned to quantify and understand the relationships between budget levels and system-wide performance, the relationships between pairs of conflicting or non-conflicting performance measures under a given budget limit, and the consequences, in terms of system-wide performance, of funding shifts across the management systems or program areas.

    Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections

    Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic metaphors of short- and long-range synaptic connectivity. In fact, during SWA and AW, different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of local computation and communication loads. A balanced evaluation of simulation performance and robustness should therefore include tests in a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for bidimensional grids of neural populations, which reflect the modular organization of the cortex. We explored networks of up to 192x192 modules, each composed of 1250 integrate-and-fire neurons with spike-frequency adaptation, with exponentially decaying inter-modular synaptic connectivity of varying spatial decay constant. For the largest networks, the total number of synapses was over 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting 8 Intel Xeon Haswell processor cores at a 2.40 GHz clock rate. Network initialization time, memory usage, and execution time showed good scaling from 1 to 1024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds of between 2.3x10^9 and 4.1x10^9 synaptic events per second for both cortical states in the explored range of inter-modular interconnections.
    Comment: 22 pages, 9 figures, 4 tables
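    The quoted network sizes imply a concrete per-neuron and per-process load, which is worth a quick arithmetic check; the sketch below uses only the numbers given in the abstract.

```python
# Quick arithmetic check of the network scale quoted in the abstract.
modules = 192 * 192            # bidimensional grid of neural populations
neurons_per_module = 1250      # integrate-and-fire neurons with adaptation
total_neurons = modules * neurons_per_module
total_synapses = 70e9          # "over 70 billion" (lower bound)
peak_rate = 4.1e9              # synaptic events per second (upper figure)

print(f"neurons: {total_neurons:,}")                                  # 46,080,000
print(f"synapses per neuron: {total_synapses / total_neurons:,.0f}")  # ~1,519
print(f"events/s per process at 1024 procs: {peak_rate / 1024:.2e}")  # ~4.00e+06
```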