428 research outputs found

    Predicting soil moisture depletion beneath trembling aspen

    Get PDF

    Coded caching in a multi-server system with random topology

    Get PDF
    Cache-aided content delivery is studied in a multi-server system with P servers and K users, each equipped with a local cache memory. In the delivery phase, each user connects randomly to any ρ out of the P servers. Thanks to the availability of multiple servers, which model small-cell base stations (SBSs), demands can be satisfied with reduced storage capacity at each server and reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to the single-server scenario. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery latency is studied for both successive and parallel transmissions from the servers. It is shown that, with successive transmissions, the achievable average delivery latency is comparable to that achieved in the single-server scenario; the gap between the two depends on ρ, the available redundancy across the servers, and can be reduced by increasing the storage capacity at the SBSs. The optimality of the proposed scheme with uncoded cache placement and MDS-coded server storage is also proved for successive transmissions.
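
    A minimal illustration of the MDS-coded server storage idea mentioned above (assumed parameters, not the paper's full placement-and-delivery scheme): with a hypothetical (P, ρ) = (3, 2) code, each file segment is stored so that reaching any ρ = 2 of the P = 3 servers is enough to recover it.

```python
# Minimal illustration (assumed parameters, not the paper's full scheme):
# (P, rho) = (3, 2) MDS-coded server storage, so that any rho = 2 of the
# P = 3 servers suffice to recover a stored file segment.
import numpy as np

segment = np.frombuffer(b"some file segment!", dtype=np.uint8)
a, b = segment[:len(segment) // 2], segment[len(segment) // 2:]
servers = {0: a, 1: b, 2: a ^ b}           # two systematic halves plus an XOR parity

def recover(reached):
    """Rebuild the segment from the contents of any two reachable servers."""
    if {0, 1} <= reached:
        a_, b_ = servers[0], servers[1]
    elif {0, 2} <= reached:
        a_, b_ = servers[0], servers[0] ^ servers[2]
    else:                                   # servers {1, 2}
        a_, b_ = servers[1] ^ servers[2], servers[1]
    return np.concatenate([a_, b_]).tobytes()

assert recover({1, 2}) == segment.tobytes()
```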

    Secure distributed matrix computation with discrete Fourier transform

    Get PDF
    We consider the problem of secure distributed matrix computation (SDMC), where a user queries a function of data matrices generated at distributed source nodes. We assume the availability of N honest-but-curious computation servers, which are connected to the sources, the user, and each other through orthogonal and reliable communication links. Our goal is to minimize the amount of data that must be transmitted from the sources to the servers, called the upload cost, while guaranteeing that no T colluding servers can learn any information about the source matrices, and the user cannot learn any information beyond the computation result. We first focus on secure distributed matrix multiplication (SDMM), considering two matrices, and propose a novel polynomial coding scheme using the properties of the finite field discrete Fourier transform, which achieves an upload cost significantly lower than the existing results in the literature. We then generalize the proposed scheme to include straggler mitigation, and to the multiplication of multiple matrices while keeping the input matrices, the intermediate computation results, as well as the final result secure against any T colluding servers. We also consider a special case, called computation with own data, where the data matrices used for computation belong to the user. In this case, we drop the security requirement against the user, and show that the proposed scheme achieves the minimal upload cost. We then propose methods for performing other common matrix computations securely on distributed servers, including changing the parameters of secret sharing, matrix transpose, matrix exponentiation, solving a linear system, and matrix inversion, which are then used to show how arbitrary matrix polynomials can be computed securely on distributed servers using the proposed procedure.
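
    A hedged sketch of the polynomial-coding idea behind SDMM (this is the classic Shamir-style construction with T = 1 colluding server, not the paper's finite field DFT scheme; the field size and matrix shapes are illustrative): each source masks its matrix with a random polynomial, the servers multiply their shares locally, and the user interpolates the product at x = 0.

```python
# Hedged sketch: Shamir-style polynomial sharing for secure matrix
# multiplication with T = 1 colluding server and 3 servers over a small
# prime field; this is NOT the paper's finite field DFT construction.
import numpy as np

p = 65_537                               # illustrative prime field modulus
rng = np.random.default_rng(0)

def shares(M, xs):
    """Hide M behind a random mask R: share_i = M + R * x_i (mod p)."""
    R = rng.integers(0, p, M.shape)
    return [(M + R * x) % p for x in xs]

A = rng.integers(0, p, (2, 3))
B = rng.integers(0, p, (3, 2))
xs = [1, 2, 3]                           # one evaluation point per server

# Each server multiplies its two shares locally; the products lie on a
# degree-2 polynomial whose value at x = 0 is A @ B.
prods = [(Ai @ Bi) % p for Ai, Bi in zip(shares(A, xs), shares(B, xs))]

# The user recovers A @ B by Lagrange interpolation at x = 0.
result = np.zeros_like(prods[0])
for i, xi in enumerate(xs):
    li = 1
    for j, xj in enumerate(xs):
        if j != i:
            li = li * (-xj) % p * pow(xi - xj, -1, p) % p
    result = (result + li * prods[i]) % p

assert np.array_equal(result, (A @ B) % p)
```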

    Neural Distributed Image Compression Using Common Information

    Get PDF
    We present a novel deep neural network (DNN) architecture for compressing an image when a correlated image is available as side information only at the decoder, a special case of the well-known distributed source coding (DSC) problem in information theory. In particular, we consider a pair of stereo images, which generally have high correlation with each other due to overlapping fields of view, and assume that one image of the pair is to be compressed and transmitted, while the other image is available only at the decoder. In the proposed architecture, the encoder maps the input image to a latent space, quantizes the latent representation, and compresses it using entropy coding. The decoder is trained to extract the common information between the input image and the correlated image, using only the latter. The received latent representation and the locally generated common information are passed through a decoder network to obtain an enhanced reconstruction of the input image. The common information provides a succinct representation of the relevant information at the receiver. We train and demonstrate the effectiveness of the proposed approach on the KITTI and Cityscapes datasets of stereo image pairs. Our results show that the proposed architecture is capable of exploiting the decoder-only side information, and outperforms previous work on stereo image compression with decoder side information.
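
    A toy sketch of the pipeline described above (assumed layer sizes and a plain autoencoder with entropy coding omitted, not the paper's architecture): the encoder quantizes a latent with a straight-through estimator, and the decoder fuses it with features extracted from the correlated image that is available only at the decoder.

```python
# Toy sketch (assumed layer sizes, entropy coding omitted; not the paper's
# architecture): compress one image and fuse its quantized latent with
# "common information" extracted from the correlated image at the decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 5, stride=2, padding=2))
    def forward(self, x):
        y = self.net(x)
        return y + (torch.round(y) - y).detach()    # straight-through quantization

class Decoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.common = Encoder(ch)                   # extracts common info from the side image
        self.net = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 5, stride=2, padding=2, output_padding=1))
    def forward(self, y_hat, side_img):
        w = self.common(side_img)                   # decoder-only side information
        return self.net(torch.cat([y_hat, w], dim=1))

x = torch.rand(1, 3, 64, 64)      # image to be compressed and transmitted
side = torch.rand(1, 3, 64, 64)   # correlated image, available only at the decoder
x_hat = Decoder()(Encoder()(x), side)
print(x_hat.shape)                # torch.Size([1, 3, 64, 64])
```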

    Functional broadcast repair of multiple partial failures in wireless distributed storage systems

    Get PDF
    We consider a distributed storage system with n nodes, where a user can recover the stored file from any k nodes, and study the problem of repairing r partially failed nodes. We consider broadcast repair, that is, d surviving nodes transmit broadcast messages on an error-free wireless channel to the r nodes being repaired, which are then used, together with the surviving data in the local memories of the failed nodes, to recover the lost content. First, we derive the trade-off between the storage capacity and the repair bandwidth for partial repair of multiple failed nodes, based on the cut-set bound for information flow graphs. It is shown that utilizing the broadcast nature of the wireless medium and the surviving contents at the partially failed nodes reduces the repair bandwidth per node significantly. Then, we list a set of invariant conditions that are sufficient for a functional repair code to be feasible. We further propose a scheme for functional repair of multiple failed nodes that satisfies the invariant conditions with high probability, and its extension to the repair of partial failures. The performance of the proposed scheme meets the cut-set bound at all points on the trade-off curve for all admissible parameters when k is divisible by r, while employing linear subpacketization, which is an important practical consideration in the design of distributed storage codes. Unlike random linear codes, which are conventionally used for functional repair of failed nodes, the proposed repair scheme has lower overhead, lower input-output cost, and lower computational complexity during repair.
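
    For context, a worked instance of the classical regenerating-code cut-set bound for single-node repair (which the paper generalizes to broadcast repair of multiple partial failures); the parameters below are illustrative, not taken from the paper.

```python
# Classical single-failure regenerating-code cut-set bound (for context;
# the paper derives a generalized bound for broadcast repair of multiple
# partial failures). Parameters below are illustrative.
def max_file_size(k, d, alpha, beta):
    """Largest file size M satisfying M <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta)."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Minimum-storage point for k = 3, d = 4 and a file of size M = 6:
# alpha = M / k = 2 units per node, and repair needs beta = 1 unit from each
# of the d = 4 helpers, i.e. a repair bandwidth of d * beta = 4 per newcomer.
assert max_file_size(k=3, d=4, alpha=2, beta=1) >= 6
```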

    Neural distributed image compression with cross-attention feature alignment

    Get PDF
    We consider the problem of compressing an information source when a correlated one is available as side information only at the decoder side, which is a special case of the distributed source coding problem in information theory. In particular, we consider a pair of stereo images, which have overlapping fields of view, and are captured by a synchronized and calibrated pair of cameras as correlated image sources. In previously proposed methods, the encoder transforms the input image to a latent representation using a deep neural network, and compresses the quantized latent representation losslessly using entropy coding. The decoder decodes the entropy-coded quantized latent representation, and reconstructs the input image using this representation and the available side information. In the proposed method, the decoder employs a cross-attention module to align the feature maps obtained from the received latent representation of the input image and a latent representation of the side information. We argue that aligning the correlated patches in the feature maps allows better utilization of the side information. We empirically demonstrate the competitiveness of the proposed algorithm on the KITTI and Cityscapes datasets of stereo image pairs. Our experimental results show that the proposed architecture is able to exploit the decoder-only side information more efficiently than previous works.
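
    An illustrative sketch of decoder-side cross-attention alignment (hypothetical dimensions and a stock multi-head attention layer, not the paper's exact module): queries come from the received latent, keys and values from the side-information latent, so correlated but spatially misaligned patches can be matched.

```python
# Illustrative sketch (hypothetical dimensions, stock multi-head attention;
# not the paper's exact module): features of the received latent attend to
# the side-information latent to align correlated patches.
import torch
import torch.nn as nn

class CrossAttentionAlign(nn.Module):
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
    def forward(self, y_hat, w):
        # Flatten spatial grids to token sequences: (B, C, H, W) -> (B, H*W, C).
        b, c, h, wd = y_hat.shape
        q = y_hat.flatten(2).transpose(1, 2)
        kv = w.flatten(2).transpose(1, 2)
        aligned, _ = self.attn(q, kv, kv)      # queries from the input latent,
        out = q + aligned                      # keys/values from the side info
        return out.transpose(1, 2).reshape(b, c, h, wd)

y_hat = torch.rand(1, 64, 16, 16)   # decoded latent of the compressed image
w = torch.rand(1, 64, 16, 16)       # latent of the decoder-only side image
print(CrossAttentionAlign()(y_hat, w).shape)   # torch.Size([1, 64, 16, 16])
```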

    Measuring of roundness of WPC materials after turning

    Get PDF
    Natural fibers offer several advantages: they are renewable, inexpensive, provide sound insulation, and have low density. Their disadvantages are susceptibility to moisture, low fire resistance, and sensitivity to biodegradation. These disadvantages can be mitigated by embedding the fibers in thermosetting or thermoplastic matrices, yielding plastics filled with organic fillers. Nowadays, thermoplastic matrices are preferred (especially in WPC products).

    Advanced Manifolds for Improved Solid Oxide Electrolyzer Performance

    Get PDF
    An investigation was conducted to see whether additive manufacturing could be used to fabricate more efficient manifold designs, with improved flow, reduced stresses, and a decreased number of joints to be sealed, for a solid oxide electrolyzer used to convert carbon dioxide to oxygen. Computational flow and mechanical modeling were conducted on a NASA Glenn Research Center patented cell and stack design with the potential to achieve a 3-4 times mass reduction. Various manifold designs were modeled, and two were downselected to be fabricated and tested. Some benefit was seen in a baffled manifold design, which directed incoming flow more effectively into the flow channels than the original design, in which the flow spent more time within the manifold itself. Flow measurements indicated some non-uniformity of flow across the channels at higher flow rates, which was not predicted by the model. Some possible explanations for the differences are discussed.

    Auto-Classifier: A Robust Defect Detector Based on an AutoML Head

    Full text link
    The dominant approach for surface defect detection is the use of hand-crafted feature-based methods. However, such methods fall short when the conditions that affect the extracted images vary. So, in this paper, we sought to determine how well several state-of-the-art Convolutional Neural Networks perform in the task of surface defect detection. Moreover, we propose two methods: CNN-Fusion, which fuses the predictions of all the networks into a final one, and Auto-Classifier, a novel proposal that improves a Convolutional Neural Network by modifying its classification component using AutoML. We carried out experiments to evaluate the proposed methods in the task of surface defect detection using different datasets from DAGM2007. We show that the use of Convolutional Neural Networks achieves better results than traditional methods, and also that Auto-Classifier outperforms all other methods, achieving 100% accuracy and 100% AUC across all the datasets.
    Comment: 12 pages, 2 figures. Published in ICONIP 2020; proceedings published in Springer's Lecture Notes in Computer Science.
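
    A hedged sketch of fusing per-network predictions in the spirit of CNN-Fusion (the backbones and the simple softmax-averaging rule here are illustrative assumptions, not the paper's exact configuration): each network scores a batch of surface patches and the averaged probabilities give the final decision.

```python
# Illustrative fusion of per-network predictions (assumed backbones and a
# simple softmax average; the paper's CNN-Fusion rule may differ).
import torch
import torchvision.models as models

backbones = [models.resnet18(num_classes=2).eval(),
             models.mobilenet_v2(num_classes=2).eval()]
x = torch.rand(4, 3, 224, 224)                       # batch of surface-image patches

with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=1) for m in backbones]).mean(dim=0)
is_defect = probs.argmax(dim=1)                       # assumed labels: 0 = good, 1 = defect
print(is_defect)
```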