
    On Density-Critical Matroids

    For a matroid $M$ having $m$ rank-one flats, the density $d(M)$ is $\tfrac{m}{r(M)}$ unless $m = 0$, in which case $d(M) = 0$. A matroid is density-critical if all of its proper minors of non-zero rank have lower density. By a 1965 theorem of Edmonds, a matroid that is minor-minimal among simple matroids that cannot be covered by $k$ independent sets is density-critical. It is straightforward to show that $U_{1,k+1}$ is the only minor-minimal loopless matroid with no covering by $k$ independent sets. We prove that there are exactly ten minor-minimal simple obstructions to a matroid being coverable by two independent sets. These ten matroids are precisely the density-critical matroids $M$ such that $d(M) > 2$ but $d(N) \le 2$ for all proper minors $N$ of $M$. All density-critical matroids of density less than $2$ are series-parallel networks. For $k \ge 2$, although finding all density-critical matroids of density at most $k$ does not seem straightforward, we solve this problem for $k = \tfrac{9}{4}$. (Comment: 16 pages)
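    As a concrete illustration (not taken from the paper), the density $d(M)$ can be computed from a rank oracle alone: the rank-one flats of $M$ are exactly the closures of its non-loop single elements. A minimal Python sketch, assuming the matroid is given as a ground set plus a rank function on subsets:

```python
def closure(ground, rank, X):
    """Closure of X: all elements e with r(X ∪ {e}) = r(X)."""
    rX = rank(X)
    return frozenset(e for e in ground if rank(X | {e}) == rX)

def density(ground, rank):
    """d(M) = m / r(M), where m is the number of rank-one flats;
    d(M) = 0 when m = 0, i.e. when every element is a loop."""
    non_loops = [e for e in ground if rank({e}) == 1]
    flats = {closure(ground, rank, {e}) for e in non_loops}
    m = len(flats)
    return 0 if m == 0 else m / rank(ground)

# Example: the uniform matroid U_{2,5} (rank 2, five elements, simple),
# whose density 5/2 exceeds the threshold 2 discussed above.
ground = frozenset(range(5))
rank = lambda X: min(len(X), 2)
print(density(ground, rank))  # 2.5
```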

    Critical sets of the total variance of state detect all SLOCC entanglement classes

    We present a general algorithm for finding all classes of pure multiparticle states equivalent under Stochastic Local Operations and Classical Communication (SLOCC). We parametrize all SLOCC classes by the critical sets of the total variance function. Our method works for arbitrary systems of distinguishable and indistinguishable particles. We also discuss the Morse indices of the critical points, which can be interpreted as the number of independent non-local perturbations that increase the variance, and hence the entanglement, of a state. We illustrate our method with two examples. (Comment: 4 pages)
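    To make the central object concrete, here is a small numerical sketch (not the paper's algorithm) of the total variance of a two-qubit pure state, assuming the standard choice of local generators $\sigma_i \otimes I$ and $I \otimes \sigma_i$:

```python
import numpy as np

# Pauli matrices: generators of local unitaries on one qubit
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def total_variance(psi):
    """Sum of <X^2> - <X>^2 over the local generators
    sigma_i (x) I and I (x) sigma_i, for a two-qubit pure state psi."""
    var = 0.0
    for s in paulis:
        for X in (np.kron(s, I2), np.kron(I2, s)):
            ex = np.vdot(psi, X @ psi).real       # <X>
            ex2 = np.vdot(psi, X @ X @ psi).real  # <X^2> (= 1 for Paulis)
            var += ex2 - ex ** 2
    return var

product = np.kron([1, 0], [1, 0]).astype(complex)           # |00>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
print(total_variance(product))  # 4.0 (product state)
print(total_variance(bell))     # 6.0 (maximally entangled)
```

    The entangled Bell state yields a strictly larger value than any product state for this generator choice, illustrating the link between the variance function and entanglement.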

    Apparent horizons in simplicial Brill wave initial data

    We construct initial data for a particular class of Brill wave metrics using Regge calculus, and compare the results to a corresponding continuum solution, finding excellent agreement. We then search for trapped surfaces in both sets of initial data, and provide an independent verification of the existence of an apparent horizon once a critical gravitational wave amplitude is passed. Our estimate of this critical value, using both the Regge and continuum solutions, supports other recent findings. (Comment: 7 pages, 6 EPS figures, LaTeX 2e. Submitted to Class. Quant. Grav.)
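    Locating the critical amplitude described above reduces to a one-parameter threshold search. A minimal sketch, assuming a hypothetical predicate has_apparent_horizon(a) that builds the initial data and runs a trapped-surface search at Brill wave amplitude a:

```python
def find_critical_amplitude(has_apparent_horizon, lo, hi, tol=1e-3):
    """Bisect on the wave amplitude for the threshold above which the
    initial data contains an apparent horizon.  Assumes no horizon at
    amplitude `lo` and a horizon at amplitude `hi`."""
    assert not has_apparent_horizon(lo) and has_apparent_horizon(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_apparent_horizon(mid):
            hi = mid   # horizon present: the critical amplitude is below mid
        else:
            lo = mid   # no horizon yet: the critical amplitude is above mid
    return 0.5 * (lo + hi)
```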

    Check-hybrid GLDPC Codes: Systematic Elimination of Trapping Sets and Guaranteed Error Correction Capability

    In this paper, we propose a new approach to constructing a class of check-hybrid generalized low-density parity-check (CH-GLDPC) codes that are free of small trapping sets. The approach is based on converting some selected check nodes involved in a trapping set into super checks corresponding to a 2-error-correcting component code. Specifically, we pursue two main goals in constructing the check-hybrid codes. First, based on the knowledge of the trapping sets of the global LDPC code, single parity checks are replaced by super checks to disable the trapping sets. We show that by converting specified single check nodes, called critical checks, to super checks in a trapping set, the parallel bit flipping (PBF) decoder corrects the errors on the trapping set and hence eliminates it. The second goal is to minimize the rate loss caused by introducing the super checks, by finding the minimum number of such critical checks. We also present an algorithm to find critical checks in a trapping set of a column-weight-3 LDPC code, and provide upper bounds on the minimum number of critical checks needed for the decoder to correct all error patterns on elementary trapping sets. Moreover, we provide a fixed set for a class of constructed check-hybrid codes. The guaranteed error correction capability of the CH-GLDPC codes is also studied. We show that a CH-GLDPC code in which each variable node is connected to 2 super checks corresponding to a 2-error-correcting component code corrects up to 5 errors. The results are also extended to column-weight-4 LDPC codes. Finally, we investigate the elimination of trapping sets of a column-weight-3 LDPC code under the Gallager B decoding algorithm, and generalize the results obtained for the PBF decoder to the Gallager B decoding algorithm.
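    For reference, the parallel bit flipping (PBF) decoder referred to above can be stated compactly. A minimal sketch over GF(2), assuming a 0/1 parity-check matrix and the usual flip rule (flip every variable with more unsatisfied than satisfied check neighbors):

```python
import numpy as np

def pbf_decode(H, y, max_iters=50):
    """Parallel bit flipping: in each iteration, every variable node
    whose unsatisfied-check count exceeds half its degree is flipped
    simultaneously.  H: (m, n) 0/1 parity-check matrix; y: received word."""
    x = y.copy() % 2
    deg = H.sum(axis=0)                       # variable-node degrees
    for _ in range(max_iters):
        syndrome = H @ x % 2                  # 1 marks an unsatisfied check
        if not syndrome.any():
            return x, True                    # valid codeword reached
        unsat = H.T @ syndrome                # per-variable unsatisfied count
        x = (x + (2 * unsat > deg)) % 2       # flip the "bad" variables
    return x, False

# Toy example: length-3 repetition code (cycle graph), one bit error.
H = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
y = np.array([1, 0, 0])          # all-zero codeword with bit 0 flipped
print(pbf_decode(H, y))          # -> (array([0, 0, 0]), True)
```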

    On Identifying Critical Nuggets Of Information During Classification Task

    In large databases, there may exist critical nuggets - small collections of records or instances that contain domain-specific important information. This information can be used for future decision making, such as labeling critical unlabeled data records and improving classification results by reducing false positive and false negative errors. In recent years, data mining efforts have focused on pattern and outlier detection methods; however, not much effort has been dedicated to finding critical nuggets within a data set. This work introduces the idea of critical nuggets, proposes an innovative domain-independent method to measure criticality, suggests a heuristic to reduce the search space for finding critical nuggets, and isolates and validates critical nuggets from some real-world data sets. It appears that only a few subsets qualify as critical nuggets, underscoring the importance of finding them; the proposed methodology can detect them. This work also identifies certain properties of critical nuggets and provides experimental validation of those properties. Critical nuggets were then applied to two important classification performance metrics: classification accuracy and misclassification cost. Experimental results helped validate that critical nuggets can assist in improving classification accuracy on real-world data sets when compared with standalone classification algorithms, and the improvements in accuracy were statistically significant. Extensive studies were also undertaken on real-world data sets that utilized critical nuggets to help minimize misclassification costs. In this case as well, the critical-nugget-based approach yielded statistically significantly lower misclassification costs than standalone classification methods.
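    The abstract does not spell out the criticality measure itself, so the following is only a hypothetical stand-in to make the idea concrete: score a candidate subset by how much cross-validated accuracy drops when it is removed from the data. A sketch using scikit-learn (the estimator choice and the measure are assumptions, not the paper's method):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def criticality_score(X, y, subset_idx, model=None, cv=5):
    """Illustrative (hypothetical) criticality measure: the drop in
    cross-validated accuracy when the candidate subset is removed.
    The paper's own domain-independent measure is not reproduced here."""
    model = model or DecisionTreeClassifier(random_state=0)
    base = cross_val_score(model, X, y, cv=cv).mean()
    mask = np.ones(len(X), dtype=bool)
    mask[subset_idx] = False                 # drop the candidate nugget
    reduced = cross_val_score(model, X[mask], y[mask], cv=cv).mean()
    return base - reduced    # a large positive drop => subset is "critical"
```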

    Real root finding for equivariant semi-algebraic systems

    Let $R$ be a real closed field. We consider basic semi-algebraic sets defined by $n$-variate equations/inequalities of $s$ symmetric polynomials and an equivariant family of polynomials, all of them of degree bounded by $2d < n$. Such a semi-algebraic set is invariant under the action of the symmetric group. We show that such a set is either empty or contains a point with at most $2d - 1$ distinct coordinates. Combining this geometric result with efficient algorithms for real root finding (based on the critical point method), one can decide the emptiness of basic semi-algebraic sets defined by $s$ polynomials of degree $d$ in time $(sn)^{O(d)}$. This improves on the state of the art, which is exponential in $n$. When the variables $x_1, \ldots, x_n$ are quantified and the coefficients of the input system depend on parameters $y_1, \ldots, y_t$, we also show that the corresponding one-block quantifier elimination problem can be solved in time $(sn)^{O(dt)}$.
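    The geometric result suggests a direct brute-force search that is usable at small scale: if a symmetric system has a real solution, it has one with at most $2d - 1$ distinct coordinates, so it suffices to identify variables block by block and solve each reduced system. A sympy sketch of this idea (not the paper's $(sn)^{O(d)}$ algorithm):

```python
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

def symmetric_system_has_real_root(eqs, xs, d):
    """If a symmetric system has a real solution, the theorem above
    guarantees one with at most 2d - 1 distinct coordinates, so only
    variable-identification patterns up to that size need checking."""
    for part in multiset_partitions(list(range(len(xs)))):
        if len(part) > 2 * d - 1:
            continue  # more distinct values than the theorem requires
        reps = sp.symbols(f"t0:{len(part)}", real=True)
        sub = {xs[i]: reps[b] for b, block in enumerate(part) for i in block}
        reduced = [e.subs(sub) for e in eqs]
        for sol in sp.solve(reduced, reps, dict=True):
            # is_real may be None for unresolved radicals; treated as unverified
            if all(v.is_real for v in sol.values()):
                return True
    return False

# x1 + x2 + x3 = 3 and x1*x2*x3 = 1 are symmetric; (1, 1, 1) is a
# solution with a single distinct coordinate (d = 3 here, so the
# 2d - 1 bound is not binding at this toy size but prunes for larger n).
xs = sp.symbols("x1 x2 x3", real=True)
eqs = [xs[0] + xs[1] + xs[2] - 3, xs[0] * xs[1] * xs[2] - 1]
print(symmetric_system_has_real_root(eqs, xs, d=3))  # True
```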