FAULT LOCATION ALGORITHMS, OBSERVABILITY AND OPTIMALITY FOR POWER DISTRIBUTION SYSTEMS
Power outages usually lead to customer complaints and revenue losses. Consequently, fast and accurate fault location on electric lines is needed so that repair work can be carried out as fast as possible.
Chapter 2 describes novel fault location algorithms for radial and non-radial ungrounded power distribution systems. For both types of systems, fault location approaches using line-to-neutral or line-to-line measurements are presented. It is assumed that the network structure and parameters are known, so that the during-fault bus impedance matrix of the system can be derived. Functions of the bus impedance matrix and of the measurements available at the substation are formulated, from which the unknown fault location can be estimated. Evaluation studies of fault location accuracy, and of the robustness of the methods to load variations and measurement errors, have been performed.
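As a rough illustration of the idea (not the dissertation's actual algorithm), the sketch below scans candidate fault buses and picks the one whose predicted during-fault substation voltage best matches the measurement. All network data here are invented, and the fault resistance is assumed known for simplicity.

```python
# Hypothetical sketch: locate a fault by scanning candidate buses and
# comparing the predicted vs. measured during-fault substation voltage.
# The bus impedance matrix Z and pre-fault voltages are made up.

V_pre = [1.0 + 0j, 0.98 + 0j, 0.97 + 0j]   # per-unit voltages, bus 0 = substation
Z = [
    [0.10j, 0.08j, 0.06j],
    [0.08j, 0.20j, 0.10j],
    [0.06j, 0.10j, 0.30j],
]

def predicted_substation_voltage(k, Rf):
    """During-fault voltage at the substation for a fault at bus k with resistance Rf."""
    I_f = V_pre[k] / (Z[k][k] + Rf)        # fault current from Thevenin equivalent
    return V_pre[0] - Z[0][k] * I_f        # voltage sag seen at the substation

def locate_fault(V_meas, Rf_guess=0.0):
    """Return the candidate bus whose predicted voltage best matches V_meas."""
    return min(range(len(V_pre)),
               key=lambda k: abs(predicted_substation_voltage(k, Rf_guess) - V_meas))

# Simulate a bolted fault at bus 2, then recover its location from the measurement
V_meas = predicted_substation_voltage(2, 0.0)
print(locate_fault(V_meas))   # -> 2
```

The real algorithms treat both fault location and fault resistance as unknowns and handle radial and non-radial topologies; this toy scan only conveys the bus-impedance-matrix matching principle.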
Most existing fault location methods rely on measurements obtained from meters installed in power systems. To get the most from a limited number of available meters, optimal meter placement methods are needed. Chapter 3 presents a novel optimal meter placement algorithm to keep the system observable in terms of fault location determination. The observability of a fault location in power systems is defined first. Then, fault location observability analysis of the whole system is performed to determine the minimum number of meters needed, and their best locations, to achieve fault location observability. Case studies on fault location observability with limited meters are presented. Optimal meter placement results for the studied system, with equal and with varying monitoring costs per meter, are also given.
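Observability-driven placement of this kind can be approximated as a minimum set cover: each candidate meter observes some set of fault sections, and we want the fewest meters covering them all. The greedy sketch below uses invented coverage data; the chapter's exact formulation may differ.

```python
# Hypothetical sketch: greedy minimum-cover meter placement.
# "coverage" maps each candidate meter bus to the fault sections it can
# observe; the data are invented for illustration.
coverage = {
    "bus1": {"s1", "s2", "s3"},
    "bus2": {"s3", "s4"},
    "bus3": {"s4", "s5", "s6"},
    "bus4": {"s2", "s6"},
}

def greedy_placement(coverage):
    """Pick meters until every fault section is observable (greedy set cover)."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # take the meter that observes the most still-uncovered sections
        best = max(coverage, key=lambda m: len(coverage[m] & uncovered))
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

print(greedy_placement(coverage))   # -> ['bus1', 'bus3']
```

To account for varying monitoring costs, the greedy score can be changed to covered-sections-per-unit-cost, e.g. `len(coverage[m] & uncovered) / cost[m]`.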
To enhance fault location accuracy, an optimal fault location estimator for power distribution systems with distributed generation (DG) is described in Chapter 4. Voltages and currents at locations with power generation are adopted to give the best estimate of variables including measurements, fault location, and fault resistances. A chi-square test is employed to detect and identify bad measurements. Evaluation studies are carried out to validate the effectiveness of the optimal fault location estimator. A measurement set containing one bad measurement is used to test whether bad data can be identified successfully by the presented method.
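A toy illustration of the estimate-then-check idea: a scalar weighted least-squares fit followed by a chi-square consistency test on the residuals. The actual estimator works on voltages, currents, fault location, and fault resistance; the model and threshold below are stand-ins.

```python
# Hypothetical sketch: weighted least-squares estimation with a chi-square
# bad-data check. The toy model is z_i = h_i * x with known sigmas.

def wls(h, z, sigma):
    """Weighted least-squares estimate of scalar x from z_i = h_i * x."""
    w = [1.0 / s**2 for s in sigma]
    num = sum(wi * hi * zi for wi, hi, zi in zip(w, h, z))
    den = sum(wi * hi**2 for wi, hi in zip(w, h))
    return num / den

def chi_square_check(h, z, sigma, x, threshold):
    """Return (passes, J): J is the weighted sum of squared residuals."""
    J = sum(((zi - hi * x) / si)**2 for hi, zi, si in zip(h, z, sigma))
    return J <= threshold, J

h = [1.0, 2.0, 3.0]
sigma = [0.1, 0.1, 0.1]
z_good = [1.0, 2.0, 3.0]   # consistent with x = 1
z_bad = [1.0, 2.0, 4.0]    # last measurement is bad

# ~95% chi-square critical value for (m - n) = 2 degrees of freedom
THRESHOLD = 5.99

for z in (z_good, z_bad):
    x = wls(h, z, sigma)
    ok, J = chi_square_check(h, z, sigma, x, THRESHOLD)
    print(ok)   # prints True, then False
```

Once the test fails, the offending measurement is typically identified as the one with the largest normalized residual, then removed and the estimation repeated.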
A Robust Fault-Tolerant and Scalable Cluster-wide Deduplication for Shared-Nothing Storage Systems
Deduplication has been largely employed in distributed storage systems to
improve space efficiency. Traditional deduplication research ignores the design
specifications of shared-nothing distributed storage systems such as no central
metadata bottleneck, scalability, and storage rebalancing. Further,
deduplication introduces transactional changes, which are prone to errors in
the event of a system failure, resulting in inconsistencies in data and
deduplication metadata. In this paper, we propose a robust, fault-tolerant and
scalable cluster-wide deduplication that can eliminate duplicate copies across
the cluster. We design a distributed deduplication metadata shard which
guarantees performance scalability while preserving the design constraints of
shared-nothing storage systems. The placement of chunks and deduplication
metadata is made cluster-wide based on the content fingerprint of chunks. To
ensure transactional consistency and garbage identification, we employ a
flag-based asynchronous consistency mechanism. We implement the proposed
deduplication on Ceph. The evaluation shows high disk-space savings with
minimal performance degradation as well as high robustness in the event of
sudden server failure.
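The fingerprint-based placement idea can be sketched as follows. This is a simplified stand-in (the paper's implementation runs on Ceph, which uses its own placement machinery), with an assumed shard count and plain modulo assignment.

```python
# Hypothetical sketch: content-based placement of chunks and dedup metadata.
# Each chunk is routed to a shard determined solely by its fingerprint, so
# identical chunks always meet on the same shard and can be deduplicated
# without any central metadata lookup.
import hashlib

NUM_SHARDS = 4   # assumed cluster size for illustration

def fingerprint(chunk: bytes) -> str:
    """Content fingerprint of a chunk (SHA-256)."""
    return hashlib.sha256(chunk).hexdigest()

def shard_for(fp: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministic shard choice from the fingerprint."""
    return int(fp, 16) % num_shards

a = fingerprint(b"hello world")
b = fingerprint(b"hello world")
assert shard_for(a) == shard_for(b)   # duplicates collide on one shard
```

Because placement depends only on content, any node can compute a chunk's home shard locally, which is what preserves the no-central-bottleneck constraint of shared-nothing designs.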
Amorphous Placement and Retrieval of Sensory Data in Sparse Mobile Ad-Hoc Networks
Personal communication devices are increasingly being equipped with sensors that are able to passively collect information from their surroundings – information that could be stored in fairly small local caches. We envision a system in which users of such devices use their collective sensing, storage, and communication resources to query the state of (possibly remote) neighborhoods. The goal of such a system is to achieve the highest query success ratio using the least communication overhead (power). We show that the use of Data Centric Storage (DCS), or directed placement, is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose amorphous placement, in which sensory samples are cached locally and informed exchanges of cached samples are used to diffuse the sensory data throughout the whole network. In handling queries, the local cache is searched first for potential answers. If unsuccessful, the query is forwarded to one or more direct neighbors. This technique leverages node mobility and caching capabilities to avoid the multi-hop communication overhead of directed placement. Using a simplified mobility model, we provide analytical lower and upper bounds on the ability of amorphous placement to achieve uniform field coverage in one and two dimensions. We show that combining informed shuffling of cached samples upon an encounter between two nodes with the querying of direct neighbors can lead to significant performance improvements. For instance, under realistic mobility models, our simulation experiments show that amorphous placement achieves a 10% to 40% better query answering ratio at a 25% to 35% savings in consumed power over directed placement. National Science Foundation (CNS Cybertrust 0524477, CNS NeTS 0520166, CNS ITR 0205294, EIA RI 0202067).
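A minimal sketch of the two mechanisms described above, the encounter-time cache shuffle and the local-then-neighbor query, using invented region names and cache sizes:

```python
# Hypothetical sketch of amorphous placement: nodes cache sensory samples,
# exchange them on encounters, and answer queries locally before asking
# direct neighbors.

class Node:
    def __init__(self, cache_size):
        self.cache = {}            # region -> sensed sample
        self.cache_size = cache_size

    def sense(self, region, sample):
        self.cache[region] = sample

    def shuffle_with(self, other):
        """Informed exchange: give the peer samples for regions it lacks."""
        for region in list(self.cache):
            if region not in other.cache and len(other.cache) < other.cache_size:
                other.cache[region] = self.cache[region]

    def query(self, region, neighbors=()):
        """Answer from the local cache first, then ask direct neighbors."""
        if region in self.cache:
            return self.cache[region]
        for n in neighbors:
            if region in n.cache:
                return n.cache[region]
        return None

a, b = Node(4), Node(4)
a.sense("park", "temp=21C")
a.shuffle_with(b)          # on encounter, b learns about "park"
print(b.query("park"))     # -> temp=21C
```

A real shuffle policy would also decide which samples to *evict* so the union of caches approaches uniform field coverage; that informed eviction is the part the paper's bounds analyze.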
Exploring heterogeneity of unreliable machines for p2p backup
P2P architecture is a viable option for enterprise backup. In contrast to
dedicated backup servers (nowadays the standard solution), making backups
directly on an organization's workstations should be cheaper (as existing
hardware is used), more efficient (as there is no single bottleneck server),
and more reliable (as the machines are geographically dispersed).
We present the architecture of a p2p backup system that uses pairwise
replication contracts between a data owner and a replicator. In contrast to
standard p2p storage systems that use a DHT directly, the contracts allow our
system to optimize replicas' placement depending on a specific optimization
strategy, and so to take advantage of the heterogeneity of the machines and the
network. Such optimization is particularly appealing in the context of backup:
replicas can be geographically dispersed, the load sent over the network can be
minimized, or the optimization goal can be to minimize the backup/restore time.
However, managing the contracts, keeping them consistent, and adjusting them
in response to a dynamically changing environment is challenging.
We built a scientific prototype and ran experiments on 150 workstations in
the university's computer laboratories and, separately, on 50 PlanetLab
nodes. We found that the main factor affecting the quality of the system is
the availability of the machines. Yet, our main conclusion is that it is
possible to build an efficient and reliable backup system on highly
unreliable machines (our computers had just 13% average availability).
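One hypothetical contract-selection strategy, in the spirit of the placement optimization the paper describes, is to pick replicators that maximize the probability of a successful restore given each machine's availability. The availabilities below are invented (loosely echoing the 13% average observed), and availability-based selection is only one of the possible optimization goals mentioned above.

```python
# Hypothetical sketch: choose replicators under pairwise contracts so that
# the chance of at least one replica being online is maximized.
from itertools import combinations

# invented per-machine availabilities (fraction of time online)
availability = {"m1": 0.13, "m2": 0.40, "m3": 0.25, "m4": 0.13, "m5": 0.60}

def restore_probability(replicators):
    """Probability that at least one replica is reachable."""
    p_all_down = 1.0
    for m in replicators:
        p_all_down *= 1.0 - availability[m]
    return 1.0 - p_all_down

def best_contracts(owner, k):
    """Pick k replicators (excluding the owner) maximizing restore probability."""
    candidates = [m for m in availability if m != owner]
    return max(combinations(candidates, k), key=restore_probability)

print(best_contracts("m1", 2))   # -> ('m2', 'm5')
```

Even with every machine below 60% availability, two well-chosen replicas already give a 76% chance that a restore finds at least one copy online, which hints at why a backup system on unreliable machines can still work.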