
    Compressive sensor networks : fundamental limits and algorithms

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 85-92). Compressed sensing is a non-adaptive compression method that takes advantage of natural sparsity at the input and is fast gaining relevance among both researchers and engineers for its universality and applicability. First developed by Candès et al., the subject has seen a surge of high-quality results in both theory and applications. This thesis extends compressed sensing ideas to sensor networks and other bandwidth-constrained communication systems. In particular, we explore the limits of performance of compressive sensor networks with respect to fundamental operations such as quantization and parameter estimation. Since compressed sensing was originally formulated as a real-valued problem, quantization of the measurements is a very natural extension. Although several researchers have proposed modified reconstruction methods that mitigate quantization noise for a fixed quantizer, the optimal design of such quantizers is still unknown. We propose to find the quantizer that minimizes quantization error using recent results in functional scalar quantization. The best quantizer in this case is not the optimal design for the measurements themselves but is instead reweighted by a factor we call the sensitivity. Numerical results demonstrate a constant-factor improvement in the fixed-rate case. Parameter estimation is an important goal of many sensing systems, since users often care about some function of the data rather than the data itself. Thus, it is of interest to see how efficiently nodes using compressed sensing can estimate a parameter, and whether the measurement scaling can be less restrictive than the bounds in the literature. We explore this problem for time difference and angle of arrival, two common methods for source geolocation. We first derive Cramér-Rao lower bounds for both parameters and show that a practical block-OMP estimator can be relatively efficient for signal reconstruction. However, there is a large gap between theory and practice for time difference or angle of arrival estimation, which demonstrates that the CRB is an optimistic lower bound for nonlinear estimation. We also find scaling laws for time difference estimation in the discrete case. This is strongly related to partial support recovery, and we derive new sufficient conditions showing that a very simple reconstruction algorithm can achieve substantially better scaling than full support recovery suggests is possible. By John Zheng Sun. S.M.
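    To make the setup concrete, the sketch below recovers a sparse signal from a small number of random linear measurements with plain orthogonal matching pursuit (OMP). It is a minimal, hypothetical illustration of the recovery problem the thesis builds on; the dimensions are arbitrary and it does not implement the thesis's sensitivity-reweighted quantizer or block-OMP estimator.

```python
# Toy compressed-sensing example: recover a k-sparse signal from m << n
# random measurements with plain orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity

# k-sparse signal and Gaussian measurement matrix
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                            # noiseless measurements

# OMP: greedily pick the column most correlated with the current residual
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(A.T @ residual))))
    sub = A[:, chosen]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = y - sub @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
print("support recovered:", set(chosen) == set(support))
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```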

    Optimal Hyper-Scalable Load Balancing with a Strict Queue Limit

    Load balancing plays a critical role in efficiently dispatching jobs in parallel-server systems such as cloud networks and data centers. A fundamental challenge in the design of load balancing algorithms is to achieve an optimal trade-off between delay performance and implementation overhead (e.g., communication or memory usage). So far, this trade-off has primarily been studied from the angle of the amount of overhead required to achieve asymptotically optimal performance, particularly vanishing delay in large-scale systems. In contrast, in the present paper we focus on an arbitrarily sparse communication budget, possibly well below the minimum requirement for vanishing delay, referred to as the hyper-scalable operating region. Furthermore, jobs may only be admitted when a specific limit on the queue position of the job can be guaranteed. The centerpiece of our analysis is a universal upper bound on the achievable throughput of any dispatcher-driven algorithm for a given communication budget and queue limit. We also propose a specific hyper-scalable scheme which can operate at any given message rate and enforce any given queue limit, while allowing the server states to be captured via a closed product-form network in which servers act as customers traversing various nodes. The product-form distribution is leveraged to prove that the bound is tight and that the proposed hyper-scalable scheme is throughput-optimal in a many-server regime given the communication and queue-limit constraints. Extensive simulation experiments are conducted to illustrate the results.
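    For intuition, here is a minimal discrete-event sketch of a dispatcher-driven policy under a sparse message budget and a strict queue limit: the dispatcher only tracks an upper bound on each queue (it sees its own dispatches but learns of departures only when it spends a message to poll a server), and admits a job only if that bound guarantees the queue limit. All parameter values are invented, and this simple polling rule is not the paper's hyper-scalable scheme or its product-form analysis.

```python
# Toy dispatcher with a limited message rate and a strict queue limit.
import heapq, random

random.seed(1)
N, LIMIT = 50, 2                  # servers, strict queue limit
POLL_PERIOD = 5.0                 # one status message per server per period
MU, LAM = 1.0, 40.0               # service rate per server, total arrival rate
T_END = 1000.0

bound = [0] * N                   # dispatcher's upper bound on each queue
queue = [0] * N                   # true queue lengths (unknown to dispatcher)
next_free = [0.0] * N             # when each server finishes its backlog
events = [(random.expovariate(LAM), "arrival", -1)]
for s in range(N):                # stagger the polling messages
    events.append(((s + 1) * POLL_PERIOD / N, "poll", s))
heapq.heapify(events)

t, admitted, blocked = 0.0, 0, 0
while events[0][0] < T_END:
    t, kind, s = heapq.heappop(events)
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(LAM), "arrival", -1))
        target = min(range(N), key=lambda i: bound[i])
        if bound[target] < LIMIT:          # queue position can be guaranteed
            bound[target] += 1
            queue[target] += 1
            admitted += 1
            depart = max(t, next_free[target]) + random.expovariate(MU)
            next_free[target] = depart
            heapq.heappush(events, (depart, "departure", target))
        else:
            blocked += 1                   # strict limit: job not admitted
    elif kind == "departure":
        queue[s] -= 1
    else:                                  # poll: one message refreshes the bound
        bound[s] = queue[s]
        heapq.heappush(events, (t + POLL_PERIOD, "poll", s))

print(f"admitted rate ~ {admitted / t:.2f} jobs per unit time, blocked {blocked}")
```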

    Incentives and Redistribution in Homogeneous Bike-Sharing Systems with Stations of Finite Capacity

    Bike-sharing systems are becoming important for urban transportation. In such systems, users arrive at a station, take a bike and use it for a while, then return it to another station of their choice. Each station has a finite capacity: it cannot host more bikes than its capacity. We propose a stochastic model of a homogeneous bike-sharing system and study the effect of users' random choices on the number of problematic stations, i.e., stations that, at a given time, have no bikes available or no free spots for bikes to be returned to. We quantify the influence of the station capacities, and we compute the fleet size that minimizes the proportion of problematic stations. Even in a homogeneous city, the system performs poorly: the minimal proportion of problematic stations is of the order of (but not lower than) the inverse of the capacity. We show that simple incentives, such as suggesting that users return their bike to the less loaded of two stations, improve the situation by an exponential factor. We also compute the rate at which bikes have to be redistributed by trucks to ensure a given quality of service; this rate is of the order of the inverse of the station capacity. In all cases considered, the fleet size that yields the best performance is half of the total number of spots plus a few more bikes, where the extra number can be computed in closed form as a function of the system parameters and corresponds to the average number of bikes in circulation.
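    The toy simulation below contrasts purely random returns with a two-choice incentive (return to the less loaded of two candidate stations) in a homogeneous system and reports the resulting fraction of problematic stations. The station count, capacity, fleet size, and the rule of sending a bike back when the chosen station is full are illustrative assumptions, not the paper's model, which also includes riding times and truck redistribution.

```python
# Toy bike-sharing simulation: random returns vs. two-choice incentive.
import random

random.seed(7)
M, K = 200, 10                         # stations, capacity per station
fleet = M * K // 2 + 2                 # roughly half the spots, plus a few bikes

def simulate(two_choice, steps=200_000):
    bikes = [fleet // M] * M
    for i in range(fleet % M):
        bikes[i] += 1
    for _ in range(steps):
        src = random.randrange(M)      # user tries to pick up a bike
        if bikes[src] == 0:
            continue                   # problematic: no bike available
        bikes[src] -= 1
        if two_choice:                 # incentive: emptier of two candidates
            a, b = random.randrange(M), random.randrange(M)
            dst = a if bikes[a] <= bikes[b] else b
        else:                          # baseline: uniformly random return
            dst = random.randrange(M)
        if bikes[dst] < K:
            bikes[dst] += 1
        else:
            bikes[src] += 1            # destination full: bike goes back
    return sum(b == 0 or b == K for b in bikes) / M

print("problematic fraction, random returns:    ", simulate(False))
print("problematic fraction, two-choice returns:", simulate(True))
```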

    Experimental multiparameter quantum metrology in adaptive regime

    Relevant metrological scenarios involve the simultaneous estimation of multiple parameters. The fundamental ingredient for achieving quantum-enhanced performance is the use of appropriately tailored quantum probes. However, reaching the ultimate resolution allowed by physical laws requires nontrivial estimation strategies from both a theoretical and a practical point of view. A crucial tool for this purpose is the application of adaptive learning techniques. Indeed, adaptive strategies provide a flexible approach for obtaining optimal parameter-independent performance and optimize convergence to the fundamental bounds with a limited amount of resources. Here, we combine on the same platform quantum-enhanced multiparameter estimation, attaining the corresponding quantum limit, with adaptive techniques. We demonstrate the simultaneous estimation of three optical phases in a programmable integrated photonic circuit in the limited-resource regime. The results show that these fundamental methodologies can be successfully combined, a step towards the transition to practical quantum sensing applications.
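    As a schematic of the adaptive idea only, the following sketch runs a Bayesian grid update for a single interferometric phase and re-chooses a control phase after every detection to keep the interferometer near its most sensitive working point. This is a hypothetical one-parameter classical simulation; it does not reproduce the paper's three-phase photonic experiment, its probe states, or its quantum-limited sensitivities.

```python
# Adaptive Bayesian estimation of one phase with a re-tuned control phase.
import numpy as np

rng = np.random.default_rng(3)
true_phase = 1.1
grid = np.linspace(0.0, np.pi, 1000)       # candidate phase values
posterior = np.ones_like(grid) / grid.size
control = 0.0

for shot in range(300):
    # detector clicks with probability (1 + cos(phase + control)) / 2
    p_click = 0.5 * (1 + np.cos(true_phase + control))
    click = rng.random() < p_click
    likelihood = 0.5 * (1 + np.cos(grid + control))
    posterior *= likelihood if click else (1 - likelihood)
    posterior /= posterior.sum()
    estimate = grid[np.argmax(posterior)]
    # adaptive rule: steer the interferometer to the steepest-slope point
    control = np.pi / 2 - estimate

print(f"true phase {true_phase:.3f}, estimate {estimate:.3f}")
```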

    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. For five decades, Moore's law and Dennard scaling worked in tandem to deliver commensurate exponential performance gains, first through single-core and later through multi-core designs. However, re-evaluating Dennard scaling at recent small technology nodes shows that continued multi-core growth demands exponentially increasing thermal design power to achieve only linear performance gains. This process hits a power wall that raises the amount of dark or dim silicon on future multi-/many-core chips more and more. Furthermore, as the number of transistors on a single chip grows, susceptibility to internal defects and aging phenomena, both exacerbated by high thermal density, makes monitoring and managing chip reliability before and after deployment a necessity. The approaches and experimental investigations proposed in this thesis follow two main tracks, 1) power awareness and 2) reliability awareness in the dark silicon era, which are later combined. In the first track, the main goal is to maximize returns on key chip design metrics, such as performance and throughput, while honoring the maximum power limit. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of proceeding along Moore's law can still be achieved in the dark silicon era, albeit to a lesser degree. In the reliability-awareness track, we show that dark silicon can be treated as an opportunity and exploited for several benefits, namely lifetime extension and online testing. We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a given target value and, furthermore, to apply low-cost, non-intrusive online testing to the cores. After demonstrating power and reliability awareness in the presence of dark silicon, two case-study approaches that combine them are discussed. The first demonstrates how chip reliability can be used as a supplementary metric for power-reliability management, while the second trades off workload performance against system reliability by simultaneously honoring the given power budget and target reliability.
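    To illustrate the kind of decision a power-aware runtime faces, the sketch below exhaustively picks how many cores of a hypothetical 64-core chip run at full frequency, how many run dim at reduced frequency, and how many stay dark, so that aggregate throughput is maximized under a fixed power budget. The core count, per-mode power and throughput figures, and budget are invented for illustration; this is not the thesis's power management approach.

```python
# Toy dark-silicon core-activation planner under a fixed power budget.
CORES = 64                       # hypothetical many-core chip
TDP_WATTS = 72.0                 # chip-level power budget
MODES = {                        # per-core power (W) and relative throughput
    "full": (3.0, 1.0),          # full frequency
    "dim":  (1.5, 0.6),          # reduced frequency / voltage
}

def best_plan(cores, budget):
    """Exhaustively choose (full, dim) core counts that maximize throughput
    under the power budget; the remaining cores stay dark."""
    best_tput, best = -1.0, None
    for full in range(cores + 1):
        for dim in range(cores - full + 1):
            power = full * MODES["full"][0] + dim * MODES["dim"][0]
            if power > budget:
                continue
            tput = full * MODES["full"][1] + dim * MODES["dim"][1]
            if tput > best_tput:
                best_tput = tput
                best = {"full": full, "dim": dim, "dark": cores - full - dim,
                        "power": power, "throughput": tput}
    return best

print(best_plan(CORES, TDP_WATTS))
```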