
    Optimizing on-demand resource deployment for peer-assisted content delivery

    Increasingly, content delivery solutions leverage client resources in exchange for services in a peer-to-peer (P2P) fashion. Such a peer-assisted service paradigm promises significant infrastructure cost reduction, but suffers from the unpredictability associated with client resources, which often manifests as an imbalance between the contribution and consumption of resources by clients. This imbalance hinders the ability to guarantee a minimum service fidelity to clients, especially for real-time applications where content cannot be cached. In this thesis, we propose a novel architectural service model that enables the establishment of higher-fidelity services through (1) coordinating the content delivery to efficiently utilize the available resources, and (2) leasing the least additional cloud resources, available through special nodes (angels) that join the service on demand, and only if needed, to complement the scarce resources available through clients. While the proposed service model can be deployed in many settings, this thesis focuses on peer-assisted content delivery applications, in which the scarce resource is typically the upstream capacity of clients. We target three applications that require the delivery of real-time as opposed to stale content. The first application is bulk-synchronous transfer, in which the goal of the system is to minimize the maximum distribution time - the time it takes to deliver the content to all clients in a group. The second application is live video streaming, in which the goal of the system is to maintain a given streaming quality. The third application is Tor, the anonymous onion routing network, in which the goal of the system is to boost performance (increase throughput and reduce latency) throughout the network, especially for clients running bandwidth-intensive applications. For each of the above applications, we develop analytical models that efficiently allocate both the already available resources and additional on-demand resources to achieve a given level of service. Our analytical models and efficient constructions depend on some simplifying, yet impractical, assumptions. Thus, inspired by our models and constructions, we develop practical techniques that we incorporate into prototypical peer-assisted angel-enabled cloud services. We evaluate these techniques through simulation and/or implementation.
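
    As background for the bulk-synchronous transfer objective mentioned above, the sketch below computes the classic fluid lower bound on the minimum distribution time for serving a file to N clients. This bound is standard in the peer-assisted content distribution literature and is offered only as an illustration, not as the thesis's model; the capacity values are hypothetical.

        def min_distribution_time(file_bits, server_up, client_ups, client_downs):
            """Fluid lower bound on the time to deliver a file to all clients.

            The bound is the maximum of three constraints:
              - the server must upload at least one full copy,
              - the slowest client must download the whole file,
              - the aggregate upload capacity must carry N copies.
            """
            n = len(client_downs)
            total_up = server_up + sum(client_ups)
            return max(file_bits / server_up,
                       file_bits / min(client_downs),
                       n * file_bits / total_up)

        # Hypothetical example: 800 Mb file, 100 Mb/s server, 10 clients
        # with 5 Mb/s uplinks and 20 Mb/s downlinks.
        t = min_distribution_time(800e6, 100e6, [5e6] * 10, [20e6] * 10)
        print(f"lower bound: {t:.1f} s")

    In this toy instance the aggregate-upload term dominates, which is exactly the regime where leasing extra upload capacity from on-demand angel nodes would relax the binding constraint.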

    Data-driven Protection of Transformers, Phase Angle Regulators, and Transmission Lines in Interconnected Power Systems

    This dissertation highlights the growing interest in and adoption of machine learning approaches for fault detection in modern electric power grids. Once a fault has occurred, it must be identified quickly and a variety of preventative steps must be taken to remove or insulate it. As a result, detecting, locating, and classifying faults early and accurately can improve safety and dependability while reducing downtime and hardware damage. With better system condition awareness and data availability, machine learning-based solutions and tools for effective data processing and analysis are becoming preeminent aids to power system operations and decision-making. Power transformers, Phase Shift Transformers or Phase Angle Regulators, and transmission lines are critical components in power systems, and ensuring their safety is a primary issue. Differential relays are commonly employed to protect transformers, whereas distance relays are utilized to protect transmission lines. Magnetizing inrush, overexcitation, and current transformer saturation make transformer protection a challenge. Furthermore, non-standard phase shift, series core saturation, and low turn-to-turn and turn-to-ground fault currents are non-traditional problems associated with Phase Angle Regulators. Faults during symmetrical power swings and unstable power swings may cause mal-operation of distance relays, as well as unintentional and uncontrolled islanding. The distance relays also mal-operate for transmission lines connected to type-3 wind farms. Conventional protection techniques are no longer adequate to address the above-mentioned challenges due to their limitations in handling and analyzing massive amounts of data, the limited generalizability of conventional models, and their inability to model non-linear systems. These limitations of conventional differential and distance protection methods motivate the use of machine learning techniques to address various protection challenges. The power transformers and Phase Angle Regulators are modeled to simulate and analyze the transients accurately. Appropriate time- and frequency-domain features are selected using different selection algorithms to train the machine learning algorithms. The boosting algorithms outperformed the other classifiers for fault detection, with balanced accuracies above 99% and a computational time of about one and a half cycles. The case studies on transmission lines show that the developed methods distinguish power swings from faults and determine the correct fault zone. The proposed data-driven protection algorithms can work together with conventional differential and distance relays and offer supervisory control over their operation, thus improving the dependability and security of protection systems.
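
    A minimal sketch of the kind of boosting-based fault classifier the dissertation describes, assuming scikit-learn's GradientBoostingClassifier and a synthetic stand-in for the feature matrix; the actual time- and frequency-domain features, simulated transients, and labels are not reproduced here.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import balanced_accuracy_score

        # Hypothetical data: rows are disturbance events, columns stand in
        # for time/frequency-domain features extracted from relay currents;
        # labels distinguish internal faults from inrush, overexcitation, etc.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 12))
        y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=1000) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1)
        clf.fit(X_tr, y_tr)
        print("balanced accuracy:",
              balanced_accuracy_score(y_te, clf.predict(X_te)))

    Balanced accuracy is the metric quoted in the abstract; it averages per-class recall, which matters because fault events are typically rare relative to benign disturbances.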

    Massive MIMO for Wireless Sensing with a Coherent Multiple Access Channel

    We consider the detection and estimation of a zero-mean Gaussian signal in a wireless sensor network with a coherent multiple access channel, when the fusion center (FC) is configured with a large number of antennas and the wireless channels between the sensor nodes and the FC experience Rayleigh fading. For the detection problem, we study the Neyman-Pearson (NP) detector and the energy detector (ED), and find optimal values for the sensor transmission gains. For the NP detector, which requires channel state information (CSI), we show that detection performance remains asymptotically constant with the number of FC antennas if the sensor transmit power decreases proportionally with the increase in the number of antennas. Performance bounds show that the benefit of multiple antennas at the FC disappears as the transmit power grows. The results for the NP detector are also generalized to the linear minimum mean squared error estimator. For the ED, which does not require CSI, we derive optimal gains that maximize the deflection coefficient of the detector, and we show that a constant deflection can be asymptotically achieved if the sensor transmit power scales as the inverse square root of the number of FC antennas. Unlike the NP detector, for high sensor power the multi-antenna ED is empirically observed to have significantly better performance than the single-antenna implementation. A number of simulation results are included to validate the analysis.
    Comment: 32 pages, 6 figures, accepted by IEEE Transactions on Signal Processing, Feb. 201
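
    The deflection coefficient maximized for the ED has a standard definition, D = (E[T|H1] - E[T|H0])^2 / Var(T|H0). The sketch below estimates it by Monte Carlo for a toy single-sensor energy detector; it illustrates the metric only and does not reproduce the paper's multi-antenna sensor and channel model.

        import numpy as np

        def deflection(stat_h0, stat_h1):
            """Deflection: (E[T|H1] - E[T|H0])^2 / Var(T|H0)."""
            return (stat_h1.mean() - stat_h0.mean()) ** 2 / stat_h0.var()

        rng = np.random.default_rng(1)
        n, trials = 64, 10000                 # samples per trial, trials
        noise = rng.normal(size=(trials, n))
        signal = rng.normal(scale=0.5, size=(trials, n))  # zero-mean Gaussian

        t_h0 = (noise ** 2).sum(axis=1)              # energy: noise only
        t_h1 = ((signal + noise) ** 2).sum(axis=1)   # energy: signal + noise
        print("deflection:", deflection(t_h0, t_h1))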

    Hypothesis Testing in Feedforward Networks with Broadcast Failures

    Consider a countably infinite set of nodes, which sequentially make decisions between two given hypotheses. Each node takes a measurement of the underlying truth, observes the decisions from some immediate predecessors, and makes a decision between the given hypotheses. We consider two classes of broadcast failures: 1) each node broadcasts a decision to the other nodes, subject to random erasure in the form of a binary erasure channel; 2) each node broadcasts a randomly flipped decision to the other nodes in the form of a binary symmetric channel. We are interested in whether there exists a decision strategy consisting of a sequence of likelihood ratio tests such that the node decisions converge in probability to the underlying truth. In both cases, we show that if each node only learns from a bounded number of immediate predecessors, then there does not exist a decision strategy such that the decisions converge in probability to the underlying truth. However, in case 1, we show that if each node learns from an unboundedly growing number of predecessors, then the decisions converge in probability to the underlying truth, even when the erasure probabilities converge to 1. We also derive the convergence rate of the error probability. In case 2, we show that if each node learns from all of its previous predecessors, then the decisions converge in probability to the underlying truth when the flipping probabilities of the binary symmetric channels are bounded away from 1/2. In the case where the flipping probabilities converge to 1/2, we derive a necessary condition on the convergence rate of the flipping probabilities such that the decisions still converge to the underlying truth. We also explicitly characterize the relationship between the convergence rate of the error probability and the convergence rate of the flipping probabilities.
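
    A toy Monte Carlo sketch of case 1 (erasures), not the paper's construction: each node fuses its own Gaussian measurement with whichever of its k immediate predecessors' decisions survive a binary erasure channel, using a simple threshold on a log-likelihood-ratio-style statistic. The fusion weight on predecessor decisions is heuristic, chosen only to make the illustration run.

        import numpy as np

        rng = np.random.default_rng(2)
        truth = 1                     # H1: measurements ~ N(+1,1); H0: N(-1,1)
        n_nodes, k, p_erase = 500, 5, 0.3

        decisions = []
        for i in range(n_nodes):
            x = rng.normal(loc=1.0 if truth else -1.0)
            llr = 2.0 * x             # LLR of x for N(+1,1) vs N(-1,1)
            # observe up to k immediate predecessors through a BEC
            for d in decisions[-k:]:
                if rng.random() > p_erase:          # decision not erased
                    llr += 0.5 if d == 1 else -0.5  # heuristic fusion weight
            decisions.append(1 if llr >= 0 else 0)

        print("fraction deciding H1:", np.mean(decisions))

    With k fixed, late nodes still err with constant probability, consistent with the paper's negative result for a bounded number of observed predecessors.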

    Enhancing reliability in passive anti-islanding protection schemes for distribution systems with distributed generation

    This thesis introduces a new approach to enhance the reliability of conventional passive anti-islanding protection schemes in distribution systems embedding distributed generation. This approach uses an Islanding-Dedicated System (IDS) per phase, which is logically combined with the conventional scheme in either blocking or permissive mode. Each per-phase IDS is designed based on data mining techniques; among these, Artificial Neural Networks (ANNs) achieve the highest accuracy and speed. The proposed scheme is trained and tested on a practical radial distribution system with six 1.67 MW Doubly-Fed Induction Generator (DFIG-DG) wind turbines. Various scenarios of DFIG-DG operating conditions with different types of disturbances for critical breakers are simulated. Conventional passive anti-islanding relays incorrectly detected 67.3% of non-islanding scenarios; in other words, the security is as low as 32.3%. The obtained results indicate that the proposed approach can theoretically increase the security to 100%. Therefore, the overall reliability of the system is substantially increased.
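
    A minimal sketch of how a per-phase IDS decision might be combined with a conventional relay trip signal. The blocking/permissive logic here is inferred from the standard meaning of those terms in protection schemes; the thesis's exact combination logic may differ.

        def combined_trip(conv_trip: bool, ids_islanding: bool, mode: str) -> bool:
            """Hypothetical combination of a conventional passive
            anti-islanding relay with an ANN-based IDS decision.

            blocking:   the IDS vetoes a conventional trip when it sees no
                        island (improves security against false trips).
            permissive: the IDS may also initiate a trip on its own
                        (improves dependability).
            """
            if mode == "blocking":
                return conv_trip and ids_islanding
            if mode == "permissive":
                return conv_trip or ids_islanding
            raise ValueError(mode)

        def vote(phase_a: bool, phase_b: bool, phase_c: bool) -> bool:
            """Majority vote over the three per-phase IDS outputs."""
            return (phase_a + phase_b + phase_c) >= 2

    In blocking mode a false conventional trip on a non-islanding disturbance is suppressed whenever the IDS correctly classifies the event, which is how security could, in principle, be driven toward 100%.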

    Beeping a Deterministic Time-Optimal Leader Election

    The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give new insight into how local constraints on the exchangeable messages can result in efficient algorithms when dealing with the beeping model. Using this deterministic leader election algorithm, we obtain a randomized leader election algorithm for anonymous networks with an asymptotically optimal round complexity of O(D + log n) w.h.p.; in previous works this complexity was obtained in expectation only. Moreover, using deterministic leader election, we obtain efficient algorithms for symmetry-breaking and communication procedures: O(log n)-time MIS and 5-coloring for tree networks (which is time-optimal), as well as k-source multi-broadcast for general graphs in O(min(k, log n) * D + k log((n M)/k)) rounds (for messages in {1, ..., M}). This latter result improves on previous solutions when the number of sources k is sublogarithmic (k = o(log n)).
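
    The beeping model's single communication primitive is easy to state in code. The sketch below simulates one round in which each silent node learns only whether at least one neighbor beeped; it illustrates the model itself, not the election algorithm, and the example graph is hypothetical.

        def beeping_round(adj, beepers):
            """One round of the beeping model on an undirected graph.

            adj:     dict mapping node -> set of neighbours
            beepers: set of nodes that beep this round
            Returns, for each silent node, whether it hears a beep, i.e.
            whether at least one neighbour beeped. Beeping nodes learn
            nothing (carrier sensing only, no message content).
            """
            return {v: bool(adj[v] & beepers) for v in adj if v not in beepers}

        # Hypothetical 4-node path 0-1-2-3; node 2 beeps.
        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
        print(beeping_round(adj, beepers={2}))   # {0: False, 1: True, 3: True}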

    Performance optimization of wireless sensor networks for remote monitoring

    Wireless sensor networks (WSNs) have gained worldwide attention in recent years because of their great potential for a variety of applications, such as hazardous environment exploration, military surveillance, habitat monitoring, and seismic sensing. In this thesis we study the use of WSNs for remote monitoring, where a wireless sensor network is deployed in a remote region for sensing phenomena of interest while its data monitoring center is located in a metropolitan area geographically distant from the monitored region. This scenario poses great challenges, since such monitoring is typically large scale and expected to be operational for a prolonged period without human involvement. Also, the long distance between the monitored region and the data monitoring center requires that the sensed data be transferred through a third-party communication service, which incurs service costs. Existing methodologies for performance optimization of WSNs assume that the sensor network and its data monitoring center are co-located, and are therefore no longer applicable to the remote monitoring scenario. Thus, new techniques and approaches for severely resource-constrained WSNs are urgently needed to maintain sustainable, unattended remote monitoring at low cost. Specifically, this thesis addresses the key issues and tackles problems in the deployment of WSNs for remote monitoring from the following aspects.

    To maximize the lifetime of large-scale monitoring, we deal with the energy consumption imbalance issue by exploring multiple sinks. We develop scalable algorithms that determine the optimal number of sinks needed and their locations, thereby dynamically identifying the energy bottlenecks and balancing the data relay workload throughout the network. We conduct experiments, and the experimental results demonstrate that the proposed algorithms significantly prolong the network lifetime.

    To eliminate the imbalance of energy consumption among sensor nodes, a complementary strategy is to introduce a mobile sink for data gathering. However, the limited communication time between the mobile sink and the nodes means that only part of the sensed data can be collected and the rest will be lost. To address this, we propose the concept of monitoring quality, exploiting the correlation of sensed data among nodes. We devise a heuristic for monitoring quality maximization, which schedules the sink to collect data from selected nodes and uses the collected data to recover the missing ones. We study the performance of the proposed heuristic and validate its effectiveness in improving the monitoring quality.

    To strike a fine trade-off between two performance metrics, throughput and cost, we investigate the novel problems of minimizing cost with guaranteed throughput and maximizing throughput with minimal cost. We develop approximation algorithms that find reliable data routing in the WSN and strategically balance workload on the sinks. We prove that the delivered solutions are within a provable fraction of the optimum. We finally conclude our work and discuss potential research topics deriving from the studies of this thesis.
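
    As a toy illustration of the multiple-sink idea (not the thesis's algorithm), the sketch below places k sinks by k-means clustering over sensor coordinates, a common stand-in for balancing relay workload; the node positions and k are hypothetical.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        sensors = rng.uniform(0, 1000, size=(200, 2))  # hypothetical positions (m)

        k = 3                                          # number of sinks to deploy
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sensors)
        sinks = km.cluster_centers_                    # candidate sink locations

        # Each sensor routes to its nearest sink; cluster sizes hint at
        # how evenly the relay workload is spread across sinks.
        loads = np.bincount(km.labels_, minlength=k)
        print("sink positions:\n", sinks.round(1))
        print("nodes per sink:", loads)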