
    Super-simple (v,5,2) directed designs and their smallest defining sets, with an application to LDPC codes

    In this paper, we show that for all v ≡ 0, 1 (mod 5) with v ≥ 15 there exists a super-simple (v,5,2) directed design; moreover, for these parameters there exists a super-simple (v,5,2) directed design whose smallest defining sets contain at least half of its blocks. We also show that these designs are useful in constructing parity-check matrices of LDPC codes.
    Comment: arXiv admin note: substantial text overlap with arXiv:1508.0009
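The step from a design to an LDPC parity-check matrix is the point-block incidence matrix. A minimal sketch, with made-up toy blocks rather than an actual super-simple (v,5,2) directed design, and a function name of our choosing:

```python
def incidence_matrix(v, blocks):
    """Rows = points 0..v-1, columns = blocks; H[p][b] = 1 iff point p lies in block b."""
    H = [[0] * len(blocks) for _ in range(v)]
    for b, block in enumerate(blocks):
        for p in block:
            H[p][b] = 1
    return H

# Toy example with v = 6 and three 5-element blocks (illustrative only).
blocks = [(0, 1, 2, 3, 4), (1, 2, 3, 4, 5), (0, 2, 3, 4, 5)]
H = incidence_matrix(6, blocks)
# Every column has weight 5 (the block size), as in a column-regular LDPC code.
assert all(sum(col) == 5 for col in zip(*H))
```

The sparsity and regularity of such incidence matrices are what make design-based parity-check matrices attractive for LDPC decoding.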

    Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks

    Bayesian Networks (BNs) represent conditional probability relations among a set of random variables (nodes) in the form of a directed acyclic graph (DAG), and have found diverse applications in knowledge discovery. We study the problem of learning the sparse DAG structure of a BN from continuous observational data. The central problem can be modeled as a mixed-integer program with an objective function composed of a convex quadratic loss function and a regularization penalty subject to linear constraints. The optimal solution to this mathematical program is known to have desirable statistical properties under certain conditions. However, state-of-the-art optimization solvers are unable to obtain provably optimal solutions to the existing mathematical formulations for medium-size problems within reasonable computational times. To address this difficulty, we tackle the problem from both computational and statistical perspectives. On the one hand, we propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution to the mixed-integer program, and establish the consistency of this approximate solution. On the other hand, we improve the existing formulations by replacing the linear "big-M" constraints that represent the relationship between the continuous and binary indicator variables with second-order conic constraints. Our numerical results demonstrate the effectiveness of the proposed approaches.
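The continuous/binary linking the abstract mentions can be sketched as simple feasibility checks. The names beta, g and the bound M are our illustrative assumptions, and the paper's actual conic formulation is more involved than this logical skeleton:

```python
M = 10.0  # assumed known bound on the magnitude of each coefficient

def bigM_feasible(beta, g):
    """Linear big-M linking: -M*g <= beta <= M*g."""
    return -M * g <= beta <= M * g

def conic_feasible(beta, g):
    """Second-order conic linking: beta**2 <= M**2 * g."""
    return beta * beta <= M * M * g

# For binary g both constraints encode the same logic:
# g = 0 forces beta = 0, g = 1 allows any |beta| <= M.
assert not bigM_feasible(1.0, 0) and not conic_feasible(1.0, 0)
assert bigM_feasible(3.0, 1) and conic_feasible(3.0, 1)
```

Either way, the binary indicator switches the edge coefficient on or off; the formulations differ in how well modern solvers handle their continuous relaxations.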

    A bandwidth theorem for approximate decompositions

    We provide a degree condition on a regular n-vertex graph G which ensures the existence of a near-optimal packing of any family H of bounded-degree n-vertex k-chromatic separable graphs into G. In general, this degree condition is best possible. Here a graph is separable if it has a sublinear separator whose removal results in a set of components of sublinear size. Equivalently, the separability condition can be replaced by that of having small bandwidth. Thus our result can be viewed as a version of the bandwidth theorem of Böttcher, Schacht and Taraz in the setting of approximate decompositions. More precisely, let δ_k be the infimum over all δ ≥ 1/2 ensuring an approximate K_k-decomposition of any sufficiently large regular n-vertex graph G of degree at least δn. Now suppose that G is an n-vertex graph which is close to r-regular for some r ≥ (δ_k + o(1))n, and suppose that H_1, …, H_t is a sequence of bounded-degree n-vertex k-chromatic separable graphs with ∑_i e(H_i) ≤ (1 − o(1))e(G). We show that there is an edge-disjoint packing of H_1, …, H_t into G. If the H_i are bipartite, then r ≥ (1/2 + o(1))n is sufficient. In particular, this yields an approximate version of the tree packing conjecture in the setting of regular host graphs G of high degree. Similarly, our result implies approximate versions of the Oberwolfach problem, the Alspach problem and the existence of resolvable designs in the setting of regular host graphs of high degree.
    Comment: Final version, to appear in the Proceedings of the London Mathematical Society
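The bandwidth notion in the abstract is elementary to state in code: the bandwidth of a vertex ordering is the largest label gap across an edge, and "small bandwidth" means some ordering keeps this gap sublinear in n. A short sketch (function name ours):

```python
def bandwidth(edges, order):
    """Largest |position difference| across an edge, for a given vertex ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u, v in edges)

# A path on 6 vertices in its natural order has bandwidth 1 ...
path_edges = [(i, i + 1) for i in range(5)]
assert bandwidth(path_edges, list(range(6))) == 1
# ... while a poor ordering inflates it.
assert bandwidth(path_edges, [0, 5, 1, 4, 2, 3]) > 1
```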

    TOPOLOGY OPTIMIZATION USING A LEVEL SET PENALIZATION WITH CONSTRAINED TOPOLOGY FEATURES

    Topology optimization techniques have been applied to structural design problems in order to determine the best material distribution in a given domain. The topology optimization problem is ill-posed because optimal designs tend to have an infinite number of holes. In order to regularize this problem, a geometrical constraint, for instance the perimeter of the design (i.e., the measure of the boundary of the solid region: length in 2D problems or surface area in 3D problems), is usually imposed. In this thesis, a novel methodology to solve the topology optimization problem with a constraint on the number of holes is proposed. Case studies are performed and numerical tests evaluated as a way to establish the efficacy and reliability of the proposed method. In the proposed topology optimization process, the material/void distribution evolves towards the optimum in an iterative process in which discretization is performed by finite elements and the material densities in each element are taken as the design variables. In this process, the material/void distribution is updated by a two-step procedure. In the first step, a temporary density function, ϕ*(x), is updated through the steepest descent direction. In the subsequent step, the temporary density function ϕ*(x) is used to model the next material/void distribution, χ*(x), by means of the level set concept. With this procedure, holes are easily created and quantified, and material is conveniently added or removed. If the design space is reduced to the elements on the boundary, the topology optimization process turns into a shape optimization procedure in which the boundaries are allowed to move towards the optimal configuration. Thus, the methodology proposed in this work controls the number of holes in the optimal design by combining both topology and shape optimization.
    In order to evaluate the effectiveness of the proposed method, 2-D minimum compliance problems with volume constraints are solved and numerical tests performed. In addition, the method is capable of handling very general objective functions, and the sensitivities with respect to the design variables can be conveniently computed.
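The two-step update described above can be sketched in a few lines. This is a minimal illustration following the abstract's ϕ*(x)/χ*(x) notation; the step size, threshold, and toy sensitivity field are our assumptions, not values from the thesis:

```python
def update_distribution(phi, sensitivity, step=0.1, threshold=0.5):
    """One iteration of the two-step update over a 2-D grid of element densities."""
    # Step 1: steepest-descent move of the temporary density phi*(x).
    phi_star = [[p - step * s for p, s in zip(p_row, s_row)]
                for p_row, s_row in zip(phi, sensitivity)]
    # Step 2: level-set thresholding recovers a crisp material/void field chi*(x).
    chi_star = [[1.0 if p > threshold else 0.0 for p in row] for row in phi_star]
    return phi_star, chi_star

phi = [[0.6] * 4 for _ in range(4)]        # uniform intermediate densities
sens = [[0.0] * 4 for _ in range(4)]
for i in (1, 2):
    for j in (1, 2):
        sens[i][j] = 2.0                   # high sensitivity in the interior
phi_s, chi_s = update_distribution(phi, sens)
# A hole (void region) appears exactly where the sensitivity was high.
assert chi_s[0][0] == 1.0 and chi_s[1][1] == 0.0
```

Because χ*(x) is recovered by thresholding a single scalar field, holes correspond to connected void regions of that field and can be counted and constrained directly.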

    Larger Corner-Free Sets from Combinatorial Degenerations

    There is a large and important collection of Ramsey-type combinatorial problems, closely related to central problems in complexity theory, that can be formulated in terms of the asymptotic growth of the size of the maximum independent sets in powers of a fixed small (directed or undirected) hypergraph, also called the Shannon capacity. An important instance of this is the corner problem studied in the context of multiparty communication complexity in the Number On the Forehead (NOF) model. Versions of this problem and the NOF connection have seen much interest (and progress) in recent works of Linial, Pitassi and Shraibman (ITCS 2019) and Linial and Shraibman (CCC 2021). We introduce and study a general algebraic method for lower bounding the Shannon capacity of directed hypergraphs via combinatorial degenerations, a combinatorial kind of "approximation" of subgraphs that originates from the study of matrix multiplication in algebraic complexity theory (and which play an important role there) but which we use in a novel way. Using the combinatorial degeneration method, we make progress on the corner problem by explicitly constructing a corner-free subset in F_2^n × F_2^n of size Ω(3.39^n / poly(n)), which improves the previous lower bound Ω(2.82^n) of Linial, Pitassi and Shraibman (ITCS 2019) and which gets us closer to the best upper bound 4^(n − o(n)). Our new construction of corner-free sets implies an improved NOF protocol for the Eval problem. In the Eval problem over a group G, three players need to determine whether their inputs x_1, x_2, x_3 ∈ G sum to zero. We find that the NOF communication complexity of the Eval problem over F_2^n is at most 0.24n + O(log n), which improves the previous upper bound 0.5n + O(log n).
    Comment: A short version of this paper will appear in the proceedings of ITCS 2022. This paper improves results that appeared in arxiv:2104.01130v
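The corner-free property over F_2^n can be made concrete by brute force on small instances. A sketch of the definition (ours, not the paper's construction): a set S ⊆ F_2^n × F_2^n is corner-free if it contains no triple (x, y), (x ⊕ d, y), (x, y ⊕ d) with d ≠ 0, where ⊕ is addition in F_2^n, i.e. bitwise XOR:

```python
from itertools import product

def is_corner_free(S, n):
    """Brute-force check: no corner (x, y), (x ^ d, y), (x, y ^ d) with d != 0."""
    S = set(S)
    for (x, y), d in product(S, range(1, 2 ** n)):
        if (x ^ d, y) in S and (x, y ^ d) in S:
            return False
    return True

# In F_2^2 the diagonal {(x, x)} is corner-free ...
assert is_corner_free({(x, x) for x in range(4)}, 2)
# ... while these three points form a corner with d = 1.
assert not is_corner_free({(0, 0), (1, 0), (0, 1)}, 2)
```

The paper's contribution is, of course, an explicit exponentially large family of such sets; the brute-force check above is only useful for validating small examples.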

    Parallel Genetic Algorithms for the DAG Vertex Splitting Problem

    Directed Acyclic Graphs are often used to model circuits and networks. The path length in such Directed Acyclic Graphs represents circuit or network delays. In the vertex splitting problem, the objective is to determine a minimum number of vertices of the graph to split such that the resulting graph has no path of length greater than a given δ. The problem has been proven to be NP-hard. A Sequential Genetic Algorithm has been developed to solve the DAG Vertex Splitting Problem. Unlike a standard Genetic Algorithm, this approach uses a variable chromosome length to represent the vertices that split the graph, together with a dynamic population size. Two string-length reduction methods and two stepping methods for exploring the search space have been developed. Combinations of these four methods have been studied and conclusions are drawn. A parallel version of the sequential Genetic Algorithm has also been developed. It uses a fully distributed scheme to assign different string lengths to processors, and a ring exchange method to pass good individuals between processors. Almost linear speed-up, and two cases of super-linear speed-up, are reported.
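The quantity a candidate solution must control, namely the longest path remaining after a set of vertices is split, can be sketched as follows. The function name and the modeling of a split vertex as a path breaker (its incoming and outgoing edges no longer chain) are our assumptions:

```python
from collections import deque

def longest_path_after_split(n, edges, split=frozenset()):
    """Longest path length (in edges) in a DAG on vertices 0..n-1, where each
    split vertex is divided into an 'in' copy and an 'out' copy, so no path
    runs through it."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Kahn-style topological pass carrying the longest path ending at each vertex.
    q = deque(i for i in range(n) if indeg[i] == 0)
    dp = [0] * n
    best = 0
    while q:
        u = q.popleft()
        reach = 0 if u in split else dp[u]  # paths cannot continue through a split vertex
        for v in adj[u]:
            dp[v] = max(dp[v], reach + 1)
            best = max(best, dp[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return best

# The chain 0 -> 1 -> 2 -> 3 has a path of length 3; splitting vertex 1
# leaves no path longer than 2.
chain = [(0, 1), (1, 2), (2, 3)]
assert longest_path_after_split(4, chain) == 3
assert longest_path_after_split(4, chain, split={1}) == 2
```

A genetic algorithm for this problem would evaluate each chromosome (a candidate split set) with such a check against the delay bound δ.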