34 research outputs found

    In-Network Redundancy Generation for Opportunistic Speedup of Backup

    Erasure coding is a storage-efficient alternative to replication for achieving reliable data backup in distributed storage systems. During the storage process, traditional erasure codes require a unique source node to create and upload all the redundant data to the different storage nodes. However, such a source node may have limited communication and computation capabilities, which constrain the storage process throughput. Moreover, the source node and the different storage nodes might not be able to send and receive data simultaneously -- e.g., nodes might be busy in a datacenter setting, or simply be offline in a peer-to-peer setting -- which can further threaten the efficacy of the overall storage process. In this paper we propose an "in-network" redundancy generation process which distributes the data insertion load among the source and storage nodes by allowing the storage nodes to generate new redundant data by exchanging partial information among themselves, improving the throughput of the storage process. The process is carried out asynchronously, utilizing spare bandwidth and computing resources from the storage nodes. The proposed approach leverages the local repairability property of newly proposed erasure codes tailor-made for the needs of distributed storage systems. We analytically show that the performance of this technique relies on an efficient usage of the spare node resources, and we derive a set of scheduling algorithms to maximize this usage. We experimentally show, using availability traces from real peer-to-peer applications as well as Google data center availability and workload traces, that our algorithms can, depending on the environment characteristics, increase the throughput of the storage process significantly (up to 90% in data centers, and 60% in peer-to-peer settings) with respect to the classical naive data insertion approach.
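    The core idea can be conveyed with a toy example. The sketch below is not the paper's code; the block layout, the XOR-based group parity, and all function names are illustrative assumptions. It shows a source uploading only the original blocks while the storage nodes of a local group generate an extra parity piece among themselves, which is the kind of in-network redundancy generation and local repair the abstract describes.

```python
# Minimal, illustrative sketch (not the paper's implementation): the source
# uploads only the k systematic blocks; the storage nodes of a local group
# then build a group parity among themselves by exchanging and XOR-combining
# their blocks, offloading redundancy generation from the source.
from functools import reduce

def xor_blocks(blocks):
    """Bitwise XOR of equally sized byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def source_upload(data_blocks, nodes):
    """The source only pushes the k original blocks, one per node."""
    for node, block in zip(nodes, data_blocks):
        node['stored'] = block

def in_network_parity(group):
    """Nodes of a local group build a parity block without involving the
    source. The abstract describes this exchange as asynchronous, using
    spare resources; here it is a single synchronous pass for clarity."""
    return xor_blocks([node['stored'] for node in group])

if __name__ == '__main__':
    k = 4
    data = [bytes([i + 1]) * 16 for i in range(k)]        # toy 16-byte blocks
    nodes = [{'id': i, 'stored': None} for i in range(k)]
    source_upload(data, nodes)
    parity = in_network_parity(nodes)                     # generated in-network
    # A single lost block in the group can later be repaired locally:
    rebuilt = xor_blocks([parity] + [n['stored'] for n in nodes[1:]])
    assert rebuilt == data[0]
    print('group parity generated by the storage nodes; local repair works')
```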

    Network Coding for Distributed Cloud, Fog and Data Center Storage


    Robust Sensor Fusion Algorithms: Calibration and Cost Minimization.

    A system reacting to its environment requires sensor input to model the environment. Unfortunately, sensors are electromechanical devices subject to physical limitations. It is challenging for a system to robustly evaluate sensor data of questionable accuracy and dependability. Sensor fusion addresses this problem by taking inputs from several sensors and merging the individual sensor readings into a single logical reading. The use of heterogeneous physical sensors allows a logical sensor to be less sensitive to the limitations of any single sensor technology, and the use of multiple identical sensors allows the system to tolerate failures of some of its component physical sensors. These are examples of fault masking, or N-modular redundancy. This research addresses two problems of fault masking systems: the automatic calibration of systems that return partially redundant image data, and the potentially prohibitive cost of installing redundant system components. Both are presented in mathematical terms as optimization problems. To combine inputs from multiple independent sensors, readings must be registered to a common coordinate system. This problem is complex when functions equating the readings are not known a priori. It is even more difficult in the case of sensor readings, where data contains noise and may have a sizable periodic component. A practical method must find a near-optimal answer in the presence of large amounts of noise. The first part of this research derives a computational scheme capable of registering partially overlapping noisy sensor readings. Another problem with redundant systems is the cost incurred by redundancy. The trade-off between reliability and system cost is most evident in fault-tolerant systems. Given several component types with known dependability statistics, it is possible to determine the combinations of components that fulfill dependability constraints by modeling the system using Markov chains. When unit costs are known, it is desirable to use low-cost combinations of components to fulfill the reliability constraints. The second part of this dissertation develops a methodology for designing sensor systems, with redundant components, that satisfy dependability constraints at near-minimal cost. Open problems are also listed.
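    As a rough illustration of the cost-minimization part, the sketch below searches for the cheapest N-modular-redundancy configuration that meets a reliability target. It uses a simple static k-of-n binomial reliability model rather than the dissertation's Markov-chain analysis, and the sensor types, costs, and reliability figures are invented for the example.

```python
# Illustrative sketch only: pick the cheapest majority-voted configuration
# that meets a reliability target, using a static k-of-n binomial model
# instead of the dissertation's Markov-chain analysis. All numbers are made up.
from math import comb

def k_of_n_reliability(p, n, k):
    """Probability that at least k of n independent components, each with
    per-mission reliability p, survive."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def cheapest_config(sensor_types, target, max_n=9):
    """Search over sensor types and odd redundancy levels for the lowest-cost
    majority-voted configuration whose reliability meets the target."""
    best = None
    for name, (p, cost) in sensor_types.items():
        for n in range(1, max_n + 1, 2):               # odd n for majority voting
            rel = k_of_n_reliability(p, n, n // 2 + 1)
            if rel >= target:
                candidate = (n * cost, name, n, rel)
                if best is None or candidate[0] < best[0]:
                    best = candidate
                break                                   # larger n only adds cost
    return best

if __name__ == '__main__':
    # hypothetical sensor catalogue: name -> (unit reliability, unit cost)
    sensors = {'sonar': (0.95, 40.0), 'lidar': (0.999, 900.0)}
    print(cheapest_config(sensors, target=0.9999))
```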

    Automated longwall guidance and control systems, phase 2, part 2: RCS, FAS, and MCS

    The prototype preliminary design of the face advancement system (FAS), consisting of the yaw alignment system (YAS) and the roll control system (RCS), together with the master control station (MCS), is outlined.

    Computer-aided investigation of interaction mediated by an AR-enabled wearable interface

    Dierker A. Computer-aided investigation of interaction mediated by an AR-enabled wearable interface. Bielefeld: Universitätsbibliothek Bielefeld; 2012. This thesis provides an approach to facilitating the analysis of nonverbal behaviour during human-human interaction. Much of the work that researchers do, from experiment control and data acquisition through tagging to the final analysis of the data, is thereby alleviated. For this, software and hardware techniques are used, such as sensor technology, machine learning, object tracking, data processing, visualisation and Augmented Reality. These are combined into an Augmented-Reality-enabled Interception Interface (ARbInI), a modular wearable interface for two users. The interface mediates the users’ interaction, thereby intercepting and influencing it. The ARbInI interface consists of two identical setups of sensors and displays, which are mutually coupled. Combining cameras and microphones with sensors, the system can record rich multimodal interaction cues in an efficient way. The recorded data can be analysed online and offline for interaction features (e. g. head gestures in head movements, objects in joint attention, speech times) using integrated machine-learning approaches. The classified features can be tagged in the data. For a detailed analysis, the recorded multimodal data is transferred automatically into file bundles loadable in a standard annotation tool, where the data can be further tagged by hand. For statistical analyses of the complete multimodal corpus, a toolbox for use in a standard statistics program allows the corpus to be imported directly and the analysis of multimodal and complex relationships between arbitrary data types to be automated. When the optional multimodal Augmented Reality techniques integrated into ARbInI are used, the camera records exactly what the participant can see and nothing more or less. The following additional advantages can be exploited during an experiment: (a) the experiment can be controlled by using the auditory or visual displays, thereby ensuring controlled experimental conditions; (b) the experiment can be disturbed, making it possible to investigate how problems in interaction are discovered and solved; and (c) the experiment can be enhanced by interactively incorporating the behaviour of the user, making it possible to investigate how users cope with novel interaction channels. This thesis introduces criteria for the design of scenarios in which interaction analysis can benefit from the experimentation interface and presents a set of such scenarios. These scenarios are applied in several empirical studies, thereby collecting multimodal corpora that particularly include head gestures. The capabilities of computer-aided interaction analysis for the investigation of speech, visual attention and head movements are illustrated on this empirical data. The effects of the head-mounted display (HMD) are evaluated thoroughly in two studies. The results show that HMD users need more head movements to achieve the same shift of gaze direction and perform fewer head gestures, with slower velocity and fewer repetitions, compared to non-HMD users. From this, a reduced willingness to perform head movements when they are not necessary can be concluded. Moreover, compensation strategies are established, such as leaning backwards to enlarge the field of view, and increasing the number of utterances or changing the reference to objects to compensate for the absence of mutual eye contact.
    Two studies investigate the interaction while actively inducing misunderstandings. The participants here use compensation strategies such as multiple verification questions and arbitrary gaze movements. Additionally, an enhancement method that highlights the visual attention of the interaction partner is evaluated in a search task. The results show a significantly shorter reaction time and fewer errors.
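    As a loose illustration of the kind of interaction feature extraction described above, the sketch below flags nod-like head gestures in a stream of head-tracker pitch angles with a simple rule-based detector. It is not part of ARbInI, which uses integrated machine-learning classifiers; the thresholds, window size, and function names are illustrative assumptions.

```python
# Illustrative sketch only (not part of ARbInI): a rule-based detector for
# nod-like head gestures in head-tracker pitch data, standing in for the
# integrated machine-learning classifiers described in the abstract.
import numpy as np

def detect_nods(pitch_deg, fs_hz, win_s=0.6, rms_thresh=30.0, min_reversals=2):
    """Flag nod-like windows: pitch velocity is energetic and changes sign
    at least `min_reversals` times (a down-up-down movement)."""
    vel = np.gradient(np.asarray(pitch_deg, dtype=float)) * fs_hz   # deg/s
    win = int(win_s * fs_hz)
    hits = []
    for start in range(0, len(vel) - win + 1, max(1, win // 2)):    # 50% overlap
        seg = vel[start:start + win]
        rms = np.sqrt(np.mean(seg ** 2))
        reversals = np.count_nonzero(np.diff(np.sign(seg)) != 0)
        if rms > rms_thresh and reversals >= min_reversals:
            hits.append((start / fs_hz, (start + win) / fs_hz))
    return hits

if __name__ == '__main__':
    fs = 100.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    # flat head pose except a 4 Hz pitch oscillation between 0.5 s and 1.0 s
    pitch = np.where((t > 0.5) & (t < 1.0),
                     15.0 * np.sin(2 * np.pi * 4 * (t - 0.5)), 0.0)
    print(detect_nods(pitch, fs))   # windows overlapping the 0.5-1.0 s burst
```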

    Redundantly grouped cross-object coding for repairable storage

    The problem of replenishing redundancy in erasure-code-based fault-tolerant storage has received a great deal of attention recently, leading to the design of several new coding techniques [3] that aim at better repairability. In this paper, we adopt a different point of view by proposing to code across different, already encoded objects to alleviate the repair problem. We show that the addition of parity pieces - the simplest form of coding - significantly boosts repairability without sacrificing fault tolerance for equivalent storage overhead. The simplicity of our approach, as well as its reliance on time-tested techniques, makes it readily deployable.
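    The flavour of the approach can be conveyed with a toy example. The sketch below is not the paper's construction; the XOR parity, the grouping of same-indexed pieces, and all names are illustrative. It groups the i-th pieces of several already encoded objects and adds one cross-object parity piece per group, so a single lost piece can be rebuilt from its group instead of by decoding its own object.

```python
# Illustrative sketch only: same-indexed pieces of several already-encoded
# objects are grouped and one cross-object parity piece is added per group,
# so a single lost piece is repaired from the parity plus one piece of each
# other object, without decoding any object from k of its own pieces.
def xor(*blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for j, byte in enumerate(b):
            out[j] ^= byte
    return bytes(out)

# pieces[obj][i] = i-th encoded piece of object `obj` (toy 8-byte pieces)
pieces = {
    'A': [bytes([1 + i]) * 8 for i in range(3)],
    'B': [bytes([11 + i]) * 8 for i in range(3)],
    'C': [bytes([21 + i]) * 8 for i in range(3)],
}

# one parity piece per "column" of same-indexed pieces across objects
parity = [xor(pieces['A'][i], pieces['B'][i], pieces['C'][i]) for i in range(3)]

# repair: piece 1 of object B is lost; rebuild it from its group
rebuilt = xor(parity[1], pieces['A'][1], pieces['C'][1])
assert rebuilt == pieces['B'][1]
print('lost piece rebuilt from cross-object parity group')
```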

    Human Factors Design Standard for Acquisition of Commercial-off-the-Shelf Subsystems, Non-Developmental Items, and Developmental Systems

    The Human Factors Design Standard (HFDS) provides reference information to assist in the selection, analysis, design, development, and evaluation of new and modified Federal Aviation Administration (FAA) systems and equipment. This document is based largely on the Human Factors Design Guide (HFDG) produced by the FAA in 1996. It converts the original guidelines document to a standard and incorporates updated information, including newly revised chapters on automation and the human-computer interface. The updated document includes an extensive reorganization of material based on user feedback on how the document has been used in the past. Additional information has also been added to help users better understand the trade-offs involved with specific design criteria. This standard covers a broad range of human factors topics pertaining to automation, maintenance, displays and printers, controls and visual indicators, alarms, alerts and voice output, input devices, workplace design, system security, safety, the environment, anthropometry, and documentation. This document also includes extensive human-computer interface information.

    Space station System Engineering and Integration (SE and I). Volume 2: Study results

    A summary of significant study results that are products of the Phase B conceptual design task is presented. Major elements are addressed, and study results applicable to each major element or area of design are summarized and included where appropriate. Areas addressed include: system engineering and integration; customer accommodations; test and program verification; product assurance; conceptual design; operations and planning; the technical and management information system (TMIS); and advanced development.

    Techniques for the realization of ultra-reliable spaceborne computer Final report

    A bibliography and new techniques for the use of error correction and redundancy to improve the reliability of spaceborne computers are presented.
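    As a small, generic illustration of redundancy used to improve reliability (not taken from the report), the sketch below shows triple modular redundancy: three computations of the same value are compared and a bitwise majority vote masks a single faulty result.

```python
# Illustrative sketch only: triple modular redundancy (TMR) with a bitwise
# majority voter, a classic way to mask the output of one faulty module.
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three equally sized integer words."""
    return (a & b) | (a & c) | (b & c)

if __name__ == '__main__':
    correct = 0b1011_0110
    faulty = correct ^ 0b0000_1000      # one module suffers a single bit flip
    assert majority_vote(correct, correct, faulty) == correct
    print('single faulty module masked by the voter')
```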