
    Encoding Specific 3D Polyhedral Complexes Using 3D Binary Images

    We build upon the work developed in [4], in which we presented a method to “locally repair” the cubical complex Q(I) associated to a 3D binary image I, obtaining a “well-composed” polyhedral complex P(I) that is homotopy equivalent to Q(I). There, we developed a new codification system for P(I), called ExtendedCubeMap (ECM) representation, that encodes: (1) the (geometric) information of the cells of P(I) (i.e., which cells are present and where), in the form of a 3D grayscale image g_P; (2) the boundary face relations between the cells of P(I), in the form of a set B_P of structuring elements. In this paper, we simplify ECM representations, proving that the geometric and topological information of the cells can be encoded using just a 3D binary image, without the need for colors or sets of structuring elements. We also outline a possible application in which well-composed polyhedral complexes can be useful.
    Funding: Junta de Andalucía FQM-369; Ministerio de Economía y Competitividad MTM2012-32706; Ministerio de Economía y Competitividad MTM2015-67072-
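
    The closing claim, that the cells' geometric and topological information fits in a single 3D binary image, has a classical analogue worth sketching: the doubled-resolution grid, in which the parity pattern of a position determines the dimension of the cell it represents. A minimal sketch in Python, with hypothetical function names; it illustrates the standard doubled-grid idea for the cubical complex Q(I), not the paper's encoding of P(I):

```python
import numpy as np

def encode_cubical_complex(img):
    """Encode the cubical complex Q(I) of a 3D binary image I in one
    binary grid of shape (2n+1) per axis: position (i, j, k) stands for
    a cell whose dimension equals the number of odd coordinates
    (all three odd -> 3-cell/voxel, two odd -> square face, one odd ->
    edge, none odd -> vertex). Hypothetical sketch, not the paper's ECM."""
    nx, ny, nz = img.shape
    grid = np.zeros((2 * nx + 1, 2 * ny + 1, 2 * nz + 1), dtype=bool)
    for x, y, z in np.argwhere(img):
        # Mark the voxel's 3-cell together with all faces in its closure.
        grid[2 * x:2 * x + 3, 2 * y:2 * y + 3, 2 * z:2 * z + 3] = True
    return grid

def cell_dim(i, j, k):
    # Dimension of the cell encoded at a grid position.
    return (i % 2) + (j % 2) + (k % 2)
```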

    Certifying and removing disparate impact

    What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender, religious practice) and an explicit description of the process. When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses. We make four contributions to this problem. First, we link the legal notion of disparate impact to a measure of classification accuracy that, while known, has received relatively little attention. Second, we propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
    Comment: Extended version of a paper accepted at the 2015 ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
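
    For context on the legal threshold, selection-rate disparity is commonly quantified via the EEOC's "four-fifths rule": a protected group's selection rate below 80% of the other group's is treated as evidence of disparate impact. A minimal sketch on toy data (the paper's own test, based on predicting the protected class from the remaining attributes, is not implemented here):

```python
import numpy as np

def disparate_impact_ratio(selected, protected):
    """Ratio of selection rates: protected group over the rest.
    Values below 0.8 flag potential disparate impact (four-fifths rule)."""
    selected = np.asarray(selected, dtype=bool)
    protected = np.asarray(protected, dtype=bool)
    return selected[protected].mean() / selected[~protected].mean()

# Hypothetical toy data: 100 applicants with group-dependent selection odds.
rng = np.random.default_rng(0)
protected = rng.random(100) < 0.5
selected = rng.random(100) < np.where(protected, 0.3, 0.6)
print(disparate_impact_ratio(selected, protected))  # well below 0.8 here
```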

    Reed-Solomon forward error correction (FEC) schemes, RFC 5510

    This document describes a Fully-Specified Forward Error Correction (FEC) Scheme for the Reed-Solomon FEC codes over GF(2^m), where m is in {2..16}, and its application to the reliable delivery of data objects on the packet erasure channel (i.e., a communication path where packets are either received without any corruption or discarded during transmission). This document also describes a Fully-Specified FEC Scheme for the special case of Reed-Solomon codes over GF(2^8) when there is no encoding symbol group. Finally, in the context of the Under-Specified Small Block Systematic FEC Scheme (FEC Encoding ID 129), this document assigns an FEC Instance ID to the special case of Reed-Solomon codes over GF(2^8). Reed-Solomon codes belong to the class of Maximum Distance Separable (MDS) codes, i.e., they enable a receiver to recover the k source symbols from any set of k received symbols. The schemes described here are compatible with the implementation from Luigi Rizzo.
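
    To make the field arithmetic concrete, here is a hedged sketch of GF(2^8) operations of the kind an RS codec is built on: addition is bitwise XOR, and multiplication is carry-less multiplication reduced by a degree-8 primitive polynomial. The polynomial 0x11D used below is one common choice and not necessarily the one this RFC mandates:

```python
def gf256_add(a, b):
    # In GF(2^8), addition and subtraction are both bitwise XOR.
    return a ^ b

def gf256_mul(a, b, poly=0x11D):
    """Russian-peasant multiplication reduced modulo a primitive
    polynomial; 0x11D = x^8 + x^4 + x^3 + x^2 + 1 (assumed here)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return result

# An (n, k) Reed-Solomon code over this field is MDS: any k of the n
# encoded symbols suffice to rebuild the k source symbols, which is
# exactly what the packet erasure channel calls for.
assert gf256_mul(2, 2) == 4        # x * x = x^2
assert gf256_mul(0x80, 2) == 0x1D  # x^8 reduced mod 0x11D
```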

    Bi-objective modeling approach for repairing multiple feature infrastructure systems

    A bi-objective decision aid model for planning long-term maintenance of infrastructure systems is presented, oriented to interventions on their constituent elements, with two upgrade levels possible for each element (partial/full repairs). The model aims at maximizing benefits and minimizing costs, and its novelty lies in taking into consideration, and combining, the system/element structure, volume discounts, and socioeconomic factors. The model is tested with field data from 229 sidewalks (systems) and compared to two simpler repair policies that allow only partial or only full repairs. Results show that the efficiency gains are greatest in the lower mid-range budget region. The proposed modeling approach is an innovative tool to optimize costs/benefits for the various repair options and to analyze the respective trade-offs.
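
    As a toy illustration of the bi-objective structure (two upgrade levels per element, benefits traded against costs), the sketch below brute-forces the Pareto frontier of a tiny hypothetical instance; the paper's volume discounts and socioeconomic factors are omitted:

```python
from itertools import product

# Hypothetical toy instance: three elements, each with repair options
# none / partial / full, given as (cost, benefit) pairs.
OPTIONS = [(0, 0.0), (4, 0.6), (9, 1.0)]
N_ELEMENTS = 3

def pareto_front():
    """Enumerate every repair plan and keep those whose (cost, benefit)
    no other plan dominates (cheaper or equal AND at least as beneficial)."""
    plans = []
    for choice in product(range(len(OPTIONS)), repeat=N_ELEMENTS):
        cost = sum(OPTIONS[c][0] for c in choice)
        benefit = sum(OPTIONS[c][1] for c in choice)
        plans.append((cost, benefit, choice))
    return sorted(p for p in plans
                  if not any(q[0] <= p[0] and q[1] >= p[1] and q[:2] != p[:2]
                             for q in plans))

for cost, benefit, choice in pareto_front():
    print(f"budget {cost:2d}  benefit {benefit:.1f}  plan {choice}")
```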

    Relating multi-sequence longitudinal intensity profiles and clinical covariates in new multiple sclerosis lesions

    Structural magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients. The formation of these lesions is a complex process involving inflammation, tissue damage, and tissue repair, all of which are visible on MRI. Here we characterize the lesion formation process on longitudinal, multi-sequence structural MRI from 34 MS patients and relate the longitudinal changes we observe within lesions to therapeutic interventions. In this article, we first outline a pipeline to extract voxel-level, multi-sequence longitudinal profiles from four MRI sequences within lesion tissue. We then propose two models to relate clinical covariates to the longitudinal profiles. The first model is a principal component analysis (PCA) regression model, which collapses the information from all four profiles into a scalar value. We find that the score on the first PC identifies areas of slow, long-term intensity change within the lesion at the voxel level, as validated by two experienced clinicians, a neuroradiologist and a neurologist. On a quality scale of 1 to 4 (4 being the highest), the neuroradiologist gave the score on the first PC a median rating of 4 (95% CI: [4,4]), and the neurologist gave it a median rating of 3 (95% CI: [3,3]). In the PCA regression model, we find that treatment with disease-modifying therapies (p-value < 0.01), treatment with steroids (p-value < 0.01), and being closer to the boundary of abnormal signal intensity (p-value < 0.01) are associated with a return of a voxel to intensity values closer to those of normal-appearing tissue. The second model is a function-on-scalar regression, which allows for assessment of the individual time points at which the covariates are associated with the profiles. In the function-on-scalar regression, both age and distance to the boundary were found to have a statistically significant association with the profiles.
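
    As a rough sketch of the first model's two steps (collapse each voxel's multi-sequence profile to its score on the first principal component, then regress that score on clinical covariates), using synthetic stand-in data; this is not the paper's pipeline, and the function-on-scalar model is not shown:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Stand-ins: 500 voxels, each with a 4-sequence x 10-time-point profile
# flattened to 40 values, plus two per-voxel covariates (all hypothetical).
profiles = rng.normal(size=(500, 40))
covariates = np.column_stack([
    rng.integers(0, 2, 500),   # on disease-modifying therapy?
    rng.random(500),           # distance to the lesion boundary
])

# Step 1: the PC1 score summarizes each voxel's longitudinal profile.
pc1_scores = PCA(n_components=1).fit_transform(profiles).ravel()

# Step 2: relate the scalar summary to the clinical covariates.
model = LinearRegression().fit(covariates, pc1_scores)
print(model.coef_)  # per-covariate association with the PC1 score
```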

    Self-repairing Homomorphic Codes for Distributed Storage Systems

    Erasure codes provide a storage-efficient alternative to replication-based redundancy in (networked) storage systems. They however entail high communication overhead for maintenance, when some of the encoded fragments are lost and need to be replenished. Such overheads arise from the fundamental need to first recreate (or keep separately) a copy of the whole object before any individual encoded fragment can be generated and replenished. There has recently been intense interest in exploring alternatives, the most prominent being regenerating codes (RGC) and hierarchical codes (HC). We propose as an alternative a new family of codes to improve the maintenance process, which we call self-repairing codes (SRC), with the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments without having to first reconstruct the original data, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, a number that depends only on how many encoded blocks are missing and is independent of which specific blocks are missing. These properties allow not only low communication overhead to recreate a missing fragment, but also independent reconstruction of different missing fragments in parallel, possibly in different parts of the network. We analyze the static resilience of SRCs with respect to traditional erasure codes, and observe that SRCs incur marginally larger storage overhead in order to achieve the aforementioned properties. The salient SRC properties naturally translate to low communication overheads for the reconstruction of lost fragments, and allow reconstruction with lower latency by facilitating repairs in parallel. These desirable properties make self-repairing codes a good and practical candidate for networked distributed storage systems.
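
    The "homomorphic" in the title names the algebraic property that enables direct repair: a linearized polynomial over GF(2^m), p(x) = Σ c_i · x^(2^i), is additive, so p(a ⊕ b) = p(a) ⊕ p(b), and the fragment stored at evaluation point a ⊕ b is simply the XOR of the fragments at a and b. A toy sketch over GF(16) with hypothetical coefficients (the actual SRC construction imposes further constraints not modeled here):

```python
def gf16_mul(a, b, poly=0x13):
    # GF(16) multiplication, reduced by x^4 + x + 1 (0x13).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= poly
    return r

def fragment(x, coeffs=(3, 5, 7)):
    """Evaluate the linearized polynomial c0*x + c1*x^2 + c2*x^4 over
    GF(16). Squaring is additive in characteristic 2, so the whole map
    satisfies fragment(a ^ b) == fragment(a) ^ fragment(b)."""
    x2 = gf16_mul(x, x)
    x4 = gf16_mul(x2, x2)
    c0, c1, c2 = coeffs
    return gf16_mul(c0, x) ^ gf16_mul(c1, x2) ^ gf16_mul(c2, x4)

# Repair demo: the fragment at point a ^ b is rebuilt from just two other
# fragments, with no need to first reconstruct the whole object.
for a in range(1, 16):
    for b in range(1, 16):
        assert fragment(a ^ b) == fragment(a) ^ fragment(b)
```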