
    Asymptotically optimal declustering schemes for 2-dim range queries

    Declustering techniques have been widely adopted in parallel storage systems (e.g. disk arrays) to speed up bulk retrieval of multidimensional data. A declustering scheme distributes data items among multiple disks, thus enabling parallel data access and reducing query response time. We measure the performance of any declustering scheme as its worst-case additive deviation from the ideal scheme. The goal thus is to design declustering schemes with as small an additive error as possible. We describe a number of declustering schemes with additive error O(log M) for 2-dimensional range queries, where M is the number of disks. These are the first results giving an O(log M) upper bound for all values of M. Our second result is a lower bound on the additive error. It is known that, except for a few stringent cases, the additive error of any 2-dimensional declustering scheme is at least one. We strengthen this lower bound to Ω((log M)^((d−1)/2)) for d-dimensional schemes and to Ω(log M) for 2-dimensional schemes, thus proving that the 2-dimensional schemes described in this paper are (asymptotically) optimal. These results are obtained by establishing a connection to geometric discrepancy. We also present simulation results to evaluate the performance of these schemes in practice.
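
    As a concrete illustration of the performance measure, here is a minimal brute-force sketch (not from the paper; the scheme under test, grid size, and function names are illustrative) that computes the worst-case additive deviation over all 2-dimensional range queries on an N x N grid, using the classic Disk Modulo scheme disk(i, j) = (i + j) mod M:

    ```python
    from math import ceil
    from itertools import product

    def disk_modulo(i, j, M):
        """Classic Disk Modulo scheme: block (i, j) goes to disk (i + j) mod M."""
        return (i + j) % M

    def additive_error(scheme, N, M):
        """Worst-case additive deviation from the ideal response time,
        taken over all axis-aligned range queries on an N x N grid."""
        worst = 0
        for r0, r1, c0, c1 in product(range(N), repeat=4):
            if r1 < r0 or c1 < c0:
                continue
            counts = [0] * M
            for i in range(r0, r1 + 1):
                for j in range(c0, c1 + 1):
                    counts[scheme(i, j, M)] += 1
            size = (r1 - r0 + 1) * (c1 - c0 + 1)
            # the ideal scheme retrieves ceil(size / M) blocks per disk
            worst = max(worst, max(counts) - ceil(size / M))
        return worst

    for M in (2, 3, 5, 8):
        print(M, additive_error(disk_modulo, N=12, M=M))
    ```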

    Analysis of Power-aware Buffering Schemes in Wireless Sensor Networks

    We study the power-aware buffering problem in battery-powered sensor networks, focusing on the fixed-size and fixed-interval buffering schemes. The main motivation is to address the as-yet poorly understood effect of data-size variation on power-aware buffering schemes. Our theoretical analysis elucidates the fundamental differences between the fixed-size and fixed-interval buffering schemes in the presence of data-size variation. It shows that data-size variation has detrimental effects on the power expenditure of fixed-size buffering in general, and reveals that these effects can be either mitigated by a positive skewness or amplified by a negative skewness in the size distribution. By contrast, the fixed-interval buffering scheme has the obvious advantage of being eminently immune to data-size variation. Hence the fixed-interval buffering scheme is a risk-averse strategy, owing to its robustness across a variety of operational environments. In addition, based on the fixed-interval buffering scheme, we establish the power-consumption relationship between child nodes and their parent node in a static data-collection tree, and give an in-depth analysis of the impact of the child bandwidth distribution on the parent's power consumption. This study is of practical significance: it sheds new light on the relationship among the power consumption of buffering schemes, the power parameters of the radio module and memory bank, the data arrival rate, and data-size variation, thereby providing well-informed guidance for determining an optimal buffer size (or interval) to maximize the operational lifespan of sensor networks.
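
    As a toy illustration of why size variation affects fixed-size buffering but not fixed-interval buffering, here is a hypothetical Monte Carlo sketch; the energy constants, the lognormal size distribution, and all parameters are invented for illustration and are not the paper's analytical model:

    ```python
    import random

    # Illustrative energy model (all constants hypothetical): each flush
    # costs a fixed radio start-up energy E_START plus E_BYTE per byte.
    E_START, E_BYTE = 5e-3, 2e-6

    def fixed_size_flushes(sizes, buf_limit):
        """Flush whenever the buffered data would exceed buf_limit bytes;
        the flush count depends on the item-size distribution."""
        flushes, filled = 0, 0
        for s in sizes:
            if filled + s > buf_limit:
                flushes += 1
                filled = 0
            filled += s
        return flushes + (1 if filled else 0)

    def fixed_interval_flushes(n_items, rate, interval):
        """One flush per interval, independent of item sizes."""
        return int((n_items / rate) / interval) + 1

    random.seed(0)
    sizes = [max(1, int(random.lognormvariate(4, 0.8))) for _ in range(10000)]
    total = sum(sizes)
    fs = fixed_size_flushes(sizes, buf_limit=2048)
    fi = fixed_interval_flushes(len(sizes), rate=10.0, interval=20.0)
    for name, n in [("fixed-size", fs), ("fixed-interval", fi)]:
        print(name, n, "flushes, energy ~", n * E_START + total * E_BYTE)
    ```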

    Reconstruction of plasma density profiles by measuring spectra of radiation emitted from oscillating plasma dipoles

    We suggest a new method for characterising non-uniform density distributions of plasma by measuring the spectra of radiation emitted from a localised plasma dipole oscillator excited by colliding electromagnetic pulses. The density distribution can be determined by scanning the collision point in space. Two-dimensional particle-in-cell simulations demonstrate the reconstruction of linear and nonlinear density profiles corresponding to laser-produced plasma. The method can be applied to a wide range of plasmas, including fusion and low-temperature plasmas. It overcomes many of the disadvantages of existing methods, such as interferometry and spectroscopy, which only yield average densities along the path of the probe pulses.
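
    The premise is the standard plasma-frequency relation: a dipole at the collision point oscillates, and radiates, at the local plasma frequency, so the emitted spectrum encodes the local electron density. A minimal sketch of the inversion, using only textbook constants (the 30 THz example value is illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.constants import e, epsilon_0, m_e

    def density_from_frequency(f_hz):
        """Invert the plasma-frequency relation
        omega_p = sqrt(n_e * e**2 / (epsilon_0 * m_e))
        to get electron density n_e (in m^-3) from emitted frequency f (Hz)."""
        omega = 2 * np.pi * f_hz
        return omega**2 * epsilon_0 * m_e / e**2

    # e.g. emission at 30 THz corresponds to n_e of roughly 1.1e25 m^-3
    print(f"{density_from_frequency(30e12):.3e}")
    ```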

    Westerbork Ultra-Deep Survey of HI at z=0.2

    In this contribution, we present some preliminary observational results from the completed ultra-deep survey of 21cm emission from neutral hydrogen at redshifts z=0.164-0.224 with the Westerbork Synthesis Radio Telescope. In two separate fields, a total of 160 individual galaxies have been detected in neutral hydrogen, with HI masses varying from 1.1x10^9 to 4.0x10^10 Msun. The largest galaxies are spatially resolved by the synthesized beam of 23x37 arcsec^2, while the velocity resolution of 19 km/s allowed the HI emission lines to be well resolved. The large-scale structure in the surveyed volume is traced well in HI, apart from the highest-density regions such as the cores of galaxy clusters. All significant HI detections have obvious or plausible optical counterparts, which are usually blue, UV-bright late-type galaxies. One of the observed fields contains a massive Butcher-Oemler cluster, but none of the associated blue galaxies has been detected in HI. The data suggest that the lower-luminosity galaxies at z=0.2 are more gas-rich than galaxies of similar luminosities at z=0, pending a careful analysis of the completeness near the detection limit. Optical counterparts of the HI-detected galaxies are mostly located in the 'blue cloud' of the galaxy population, although several galaxies on the 'red sequence' are also detected in HI. These results hold great promise for future deep 21cm surveys of neutral hydrogen with MeerKAT, APERTIF, ASKAP, and ultimately the Square Kilometre Array.
    Comment: 10 pages, 9 figures, Proceedings of ISKAF2010 Science Meeting: A New Golden Age for Radio Astronomy, June 10-14 2010, Assen, the Netherlands. Edited by J. van Leeuwen. Movies of rendered rotating data cubes are available at http://www.astro.rug.nl/~verheyen/BUDHIES/index.htm
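
    For context, HI masses like those quoted are conventionally derived from the integrated 21cm line flux. A minimal sketch using the standard relation (the example flux and distance are illustrative numbers, not survey values):

    ```python
    def hi_mass_msun(flux_jy_kms, d_lum_mpc, z=0.0):
        """Standard HI mass estimate:
        M_HI = 2.356e5 * D_L**2 * S_int / (1 + z)  [Msun],
        with luminosity distance D_L in Mpc and integrated 21cm
        line flux S_int in Jy km/s."""
        return 2.356e5 * d_lum_mpc**2 * flux_jy_kms / (1.0 + z)

    # illustrative only: a 0.1 Jy km/s detection at z ~ 0.2 (D_L ~ 1000 Mpc)
    # corresponds to roughly 2e10 Msun of HI
    print(f"{hi_mass_msun(0.1, 1000.0, z=0.2):.2e}")
    ```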

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
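
    To ground the terminology, here is a minimal sketch of the explicit-state baseline that the symbolic methods are contrasted with: a plain breadth-first reachability computation (the toy counter model is illustrative, not from the paper):

    ```python
    from collections import deque

    def explore(initial, successors):
        """Explicit-state breadth-first exploration; every reachable state
        is stored individually, which is exactly what blows up on large
        models. `successors(s)` returns the one-step successors of s."""
        reachable = {initial}
        frontier = deque([initial])
        while frontier:
            s = frontier.popleft()
            for t in successors(s):
                if t not in reachable:
                    reachable.add(t)
                    frontier.append(t)
        return reachable

    # toy model: a counter N in [0, 3] that can go up or down;
    # "Can N become negative?" is answered by inspecting the set.
    succ = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 3]
    states = explore(0, succ)
    print(sorted(states), "negative reachable?", any(n < 0 for n in states))
    ```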

    Toward Rigorous Telecoupling Causal Attribution: A Systematic Review and Typology

    Telecoupled flows of people, organisms, goods, information, and energy are expanding across the globe. Causes are integral components of the telecoupling framework, yet the rigor with which they have been identified and evaluated to date is unknown. We address this knowledge gap by systematically reviewing causal attribution in the telecoupling literature (n = 89 studies) and developing a standardized causal terminology and typology for consistent use in telecoupling research. Causes are defined based on six criteria: sector (e.g., environmental, economic), system of origin (i.e., sending, receiving, spillover), agent, distance, response time (i.e., the time lapse between cause and effect), and direction (i.e., producing positive or negative effects). Using case studies from the telecoupling literature, we demonstrate the need to enhance the rigor of telecoupling causal attribution by combining qualitative and quantitative methods via process-tracing, counterfactual analysis, and related approaches. Rigorous qualitative-quantitative causal attribution is critical for accurately assessing the social-ecological causes and consequences of telecouplings and thereby identifying leverage points for informed management and governance of telecoupled systems.
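
    As one possible way to operationalise the typology, here is a hypothetical encoding of the six criteria as a data structure; the field names, enum members, and example values are illustrative, not the paper's controlled vocabulary:

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class System(Enum):
        SENDING = "sending"
        RECEIVING = "receiving"
        SPILLOVER = "spillover"

    class Direction(Enum):
        POSITIVE = "producing positive effects"
        NEGATIVE = "producing negative effects"

    @dataclass
    class Cause:
        sector: str              # e.g. "environmental", "economic"
        system_of_origin: System
        agent: str               # actor driving the cause
        distance_km: float       # distance between cause and effect
        response_time: str       # time lapse between cause and effect
        direction: Direction

    # hypothetical case-study entry
    soy_demand = Cause(
        sector="economic",
        system_of_origin=System.RECEIVING,
        agent="importing-country consumers",
        distance_km=17000.0,
        response_time="years",
        direction=Direction.NEGATIVE,
    )
    print(soy_demand)
    ```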

    Replica theory for learning curves for Gaussian processes on random graphs

    Statistical physics approaches can be used to derive accurate predictions for the performance of inference methods learning from potentially noisy data, as quantified by the learning curve, defined as the average error versus the number of training examples. We analyse a challenging problem in the area of non-parametric inference, where an effectively infinite number of parameters has to be learned: Gaussian process regression. When the inputs are vertices on a random graph and the outputs are noisy function values, we show that replica techniques can be used to obtain exact performance predictions in the limit of large graphs. The covariance of the Gaussian process prior is defined by a random walk kernel, the discrete analogue of the squared exponential kernel on continuous spaces. Conventionally, this kernel is normalised only globally, so that the prior variance can differ between vertices; as a more principled alternative we consider local normalisation, where the prior variance is uniform.
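
    A minimal numpy sketch of the random walk kernel and the two normalisation conventions contrasted above (assumptions: an Erdos-Renyi test graph, a = 2, p = 10; these parameter choices are illustrative):

    ```python
    import numpy as np

    def random_walk_kernel(A, a=2.0, p=10):
        """Random walk kernel K proportional to (I - L/a)^p on a graph with
        adjacency matrix A, where L = I - D^{-1/2} A D^{-1/2} is the
        normalised Laplacian; the discrete analogue of a squared
        exponential kernel, with p acting like a length scale."""
        n = len(A)
        Dinv = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
        L = np.eye(n) - Dinv @ A @ Dinv
        return np.linalg.matrix_power(np.eye(n) - L / a, p)

    def normalise(K, local=False):
        if local:
            # local normalisation: unit prior variance at every vertex
            s = np.sqrt(np.diag(K))
            return K / np.outer(s, s)
        # global normalisation: unit *average* prior variance only,
        # so the variance can still differ between vertices
        return K / np.mean(np.diag(K))

    rng = np.random.default_rng(0)
    A = (rng.random((50, 50)) < 0.1).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    A += np.diag((A.sum(axis=1) == 0).astype(float))  # avoid isolated vertices
    K = random_walk_kernel(A)
    # spread of prior variances: nonzero globally, ~0 locally
    print(np.diag(normalise(K)).std(), np.diag(normalise(K, local=True)).std())
    ```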