Super-simple directed designs and their smallest defining sets, with an application to LDPC codes
In this paper, we show that for all (mod 5) and , there exists a super-simple directed design; moreover, for these parameters there exists a super-simple directed design whose smallest defining sets contain at least half of its blocks. We also show that these designs are useful in constructing parity-check matrices of LDPC codes.
Comment: arXiv admin note: substantial text overlap with arXiv:1508.0009
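As a toy illustration of the design-to-LDPC connection (using the classical Fano plane, not the paper's super-simple directed designs), the incidence matrix of a block design can serve directly as a sparse, regular parity-check matrix:

```python
# Toy illustration (not the paper's construction): the incidence matrix of a
# small combinatorial design used as an LDPC parity-check matrix. We use the
# Fano plane, the (7,3,1) design: 7 points, 7 blocks, any two points lie in
# exactly one common block, so the matrix is sparse and regular.

# Blocks of the Fano plane over points 0..6.
BLOCKS = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5),
]

def incidence_matrix(blocks, n_points):
    """Rows = blocks, columns = points; H[i][j] = 1 iff point j lies in block i."""
    return [[1 if j in b else 0 for j in range(n_points)] for b in blocks]

H = incidence_matrix(BLOCKS, 7)

# Regularity check: every block contains 3 points and every point lies in
# exactly 3 blocks -- a (3,3)-regular LDPC parity-check matrix.
row_weights = [sum(r) for r in H]
col_weights = [sum(H[i][j] for i in range(7)) for j in range(7)]
print(row_weights, col_weights)
```

The super-simplicity condition (any two blocks sharing few points) is what keeps short cycles out of the Tanner graph of such a code.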
Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks
Bayesian Networks (BNs) represent conditional probability relations among a
set of random variables (nodes) in the form of a directed acyclic graph (DAG),
and have found diverse applications in knowledge discovery. We study the
problem of learning the sparse DAG structure of a BN from continuous
observational data. The central problem can be modeled as a mixed-integer
program with an objective function composed of a convex quadratic loss function
and a regularization penalty subject to linear constraints. The optimal
solution to this mathematical program is known to have desirable statistical
properties under certain conditions. However, the state-of-the-art optimization
solvers are not able to obtain provably optimal solutions to the existing
mathematical formulations for medium-size problems within reasonable
computational times. To address this difficulty, we tackle the problem from
both computational and statistical perspectives. On the one hand, we propose a
concrete early stopping criterion to terminate the branch-and-bound process in
order to obtain a near-optimal solution to the mixed-integer program, and
establish the consistency of this approximate solution. On the other hand, we
improve the existing formulations by replacing the linear "big-M" constraints
that represent the relationship between the continuous and binary indicator
variables with second-order conic constraints. Our numerical results
demonstrate the effectiveness of the proposed approaches.
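The constraint replacement described above can be sketched as follows (a generic illustration with assumed symbols: $\beta_{jk}$ for a continuous arc weight and $g_{jk}$ for its binary indicator; the paper's exact formulation may differ):

```latex
% Big-M linking constraint: forces \beta_{jk}=0 whenever g_{jk}=0,
% but its continuous relaxation is weak for large M.
\[
  -M g_{jk} \;\le\; \beta_{jk} \;\le\; M g_{jk},
  \qquad g_{jk} \in \{0,1\}.
\]
% One standard conic strengthening (a perspective reformulation):
% introduce s_{jk} \ge 0, replace \beta_{jk}^2 by s_{jk} in the
% quadratic loss, and impose the rotated second-order cone constraint
\[
  \beta_{jk}^2 \;\le\; s_{jk}\, g_{jk},
\]
% which again forces \beta_{jk}=0 when g_{jk}=0, while giving a
% tighter continuous relaxation than the big-M form.
```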
A bandwidth theorem for approximate decompositions
We provide a degree condition on a regular $n$-vertex graph $G$ which ensures
the existence of a near-optimal packing of any family of bounded
degree $n$-vertex $k$-chromatic separable graphs into $G$. In general, this
degree condition is best possible.
Here a graph is separable if it has a sublinear separator whose removal
results in a set of components of sublinear size. Equivalently, the
separability condition can be replaced by that of having small bandwidth. Thus
our result can be viewed as a version of the bandwidth theorem of B\"ottcher,
Schacht and Taraz in the setting of approximate decompositions.
More precisely, let $\delta^*_k$ be the infimum over all $\delta \ge 1/2$
ensuring an approximate $K_k$-decomposition of any sufficiently large regular
$n$-vertex graph of degree at least $\delta n$. Now suppose that $G$ is an
$n$-vertex graph which is close to $r$-regular for some $r \ge (\delta^*_k + o(1))n$ and suppose that $H_1, \dots, H_t$ is a sequence of bounded
degree $n$-vertex $k$-chromatic separable graphs with $\sum_i e(H_i) \le (1-o(1)) e(G)$. We show that there is an edge-disjoint packing of $H_1, \dots, H_t$
into $G$.
If the $H_i$ are bipartite, then $r \ge (1/2 + o(1))n$ is sufficient. In
particular, this yields an approximate version of the tree packing conjecture
in the setting of regular host graphs of high degree. Similarly, our result
implies approximate versions of the Oberwolfach problem, the Alspach problem
and the existence of resolvable designs in the setting of regular host graphs
of high degree.
Comment: Final version, to appear in the Proceedings of the London Mathematical Society
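As a minimal illustration of the *notion* of an edge-disjoint packing (not the theorem's proof technique), one can greedily embed a sequence of small trees into a host graph so that no host edge is used twice; the names below are invented for the sketch:

```python
# Greedy edge-disjoint packing of small trees into the complete graph K_n.
# Illustrative only: the paper's packing is obtained via far more delicate
# methods and applies to general near-regular host graphs.

from itertools import permutations

def pack_trees(n, trees):
    """Try to map each tree (a list of edges on vertices 0..k-1) into K_n,
    using every host edge at most once. Returns the embeddings, or None."""
    used = set()                 # host edges already consumed, as frozensets
    embeddings = []
    for edges in trees:
        k = 1 + max(max(e) for e in edges)
        for phi in permutations(range(n), k):    # candidate injective embedding
            imgs = [frozenset((phi[u], phi[v])) for u, v in edges]
            if not used & set(imgs):
                used |= set(imgs)
                embeddings.append(phi)
                break
        else:
            return None          # greedy packing failed
    return embeddings

# Pack three paths with 3 edges each (9 edges total) into K_5 (10 edges).
path = [(0, 1), (1, 2), (2, 3)]
result = pack_trees(5, [path, path, path])
print(result is not None)        # True
```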
Topology Optimization Using a Level Set Penalization with Constrained Topology Features
Topology optimization techniques have been applied to structural design problems in order to determine the best material distribution in a given domain. The topology optimization problem is ill-posed because optimal designs tend to have an infinite number of holes. In order to regularize this problem, a geometrical constraint, for instance the perimeter of the design (i.e., the measure of the boundary of the solid region: length in 2D problems or surface area in 3D problems), is usually imposed. In this thesis, a novel methodology to solve the topology optimization problem with a constraint on the number of holes is proposed. Case studies are performed and numerical tests are evaluated as a way to establish the efficacy and reliability of the proposed method.
In the proposed topology optimization process, the material/void distribution evolves towards the optimum in an iterative process in which discretization is performed by finite elements and the material densities in each element are considered as the design variables. In this process, the material/void distribution is updated by a two-step procedure. In the first step, a temporary density function, ϕ*(x), is updated through the steepest descent direction. In the subsequent step, the temporary density function ϕ*(x) is used to model the next material/void distribution, χ*(x), by means of the level set concept. With this procedure, holes are easily created and quantified, and material is conveniently added or removed.
If the design space is reduced to the elements on the boundary, the topology optimization process turns into a shape optimization procedure in which the boundaries are allowed to move towards the optimal configuration. Thus, the methodology proposed in this work controls the number of holes in the optimal design by combining both topology and shape optimization.
In order to evaluate the effectiveness of the proposed method, 2-D minimum compliance problems with volume constraints are solved and numerical tests are performed. In addition, the method is capable of handling very general objective functions, and the sensitivities with respect to the design variables can be conveniently computed.
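The two-step update described above can be sketched as follows (a schematic under assumptions: the compliance sensitivities are replaced by a placeholder gradient field, and the elements are flattened into a list):

```python
# Step 1: steepest descent on a temporary density function phi_star.
# Step 2: threshold phi_star with a level set to obtain the next 0/1
#         material/void distribution chi.
# The gradient values below are made up for illustration; in the thesis
# they would come from a finite-element compliance sensitivity analysis.

def two_step_update(phi, grad, step, level):
    """phi, grad: flat lists of element densities and sensitivities."""
    phi_star = [p - step * g for p, g in zip(phi, grad)]   # step 1: descent
    chi = [1 if p > level else 0 for p in phi_star]        # step 2: level set
    return phi_star, chi

phi  = [0.5, 0.5, 0.5, 0.5]
grad = [0.8, -0.2, 0.3, -0.9]          # placeholder sensitivities
phi_star, chi = two_step_update(phi, grad, step=0.5, level=0.5)
print(chi)                              # [0, 1, 0, 1]
```

Because χ is recovered by thresholding a single level set of ϕ*, holes appear and disappear naturally as ϕ* crosses the level, which is what makes them easy to count and constrain.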
Larger Corner-Free Sets from Combinatorial Degenerations
There is a large and important collection of Ramsey-type combinatorial
problems, closely related to central problems in complexity theory, that can be
formulated in terms of the asymptotic growth of the size of the maximum
independent sets in powers of a fixed small (directed or undirected)
hypergraph, also called the Shannon capacity. An important instance of this is
the corner problem studied in the context of multiparty communication
complexity in the Number On the Forehead (NOF) model. Versions of this problem
and the NOF connection have seen much interest (and progress) in recent works
of Linial, Pitassi and Shraibman (ITCS 2019) and Linial and Shraibman (CCC
2021).
We introduce and study a general algebraic method for lower bounding the
Shannon capacity of directed hypergraphs via combinatorial degenerations, a
combinatorial kind of "approximation" of subgraphs that originates from the
study of matrix multiplication in algebraic complexity theory (where it plays
an important role) but which we use in a novel way.
Using the combinatorial degeneration method, we make progress on the corner
problem by explicitly constructing a corner-free subset in
of size , which improves the previous lower bound
of Linial, Pitassi and Shraibman (ITCS 2019) and which gets us
closer to the best upper bound . Our new construction of
corner-free sets implies an improved NOF protocol for the Eval problem. In the
Eval problem over a group , three players need to determine whether their
inputs sum to zero. We find that the NOF communication
complexity of the Eval problem over is at most ,
which improves the previous upper bound .
Comment: A short version of this paper will appear in the proceedings of ITCS 2022. This paper improves results that appeared in arxiv:2104.01130v
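For small instances, corner-freeness can be checked by brute force (illustrative only; the paper's construction is algebraic and exponentially larger). A corner in $G \times G$, for an abelian group $G = \mathbb{Z}_n$ here, is a triple $(x,y), (x+d,y), (x,y+d)$ with $d \neq 0$:

```python
# Brute-force corner-freeness check over Z_n x Z_n.

from itertools import product

def is_corner_free(S, n):
    """S: collection of pairs over Z_n x Z_n; True iff S has no corner."""
    S = set(S)
    for (x, y), d in product(S, range(1, n)):
        if ((x + d) % n, y) in S and (x, (y + d) % n) in S:
            return False
    return True

print(is_corner_free([(0, 0), (1, 2), (2, 1)], 3))   # True: no corner
print(is_corner_free([(0, 0), (1, 0), (0, 1)], 3))   # False: corner with d=1
```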
Parallel Genetic Algorithms for the DAG Vertex Splitting Problem
Directed Acyclic Graphs are often used to model circuits and networks. The path length in such Directed Acyclic Graphs represents circuit or network delays. In the vertex splitting problem, the objective is to determine a minimum number of vertices from the graph to split such that the resulting graph has no path of length greater than a given δ. The problem has been proven to be NP-hard.
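The feasibility check behind this objective can be sketched as follows (an assumed formalization: splitting a vertex detaches its incoming edges from its outgoing ones, so no path runs *through* it, and path length counts edges):

```python
# Verify a candidate solution to the DAG vertex splitting problem: after
# splitting the given vertex set, no path may have more than delta edges.

from collections import deque

def longest_path_after_split(n, edges, split):
    """n vertices 0..n-1, edges = list of (u, v), split = set of vertices.
    Returns the maximum number of edges on any path after splitting."""
    # Model a split vertex v as v_in = v (keeps its incoming edges) and a
    # fresh v_out = n + v (takes over its outgoing edges); the two halves
    # are left unconnected, so no path passes through v.
    adj = {u: [] for u in range(2 * n)}
    indeg = {u: 0 for u in range(2 * n)}
    for u, v in edges:
        src = n + u if u in split else u
        adj[src].append(v)
        indeg[v] += 1
    # Longest path by dynamic programming over a topological order (Kahn).
    dist = {u: 0 for u in range(2 * n)}
    q = deque(u for u in range(2 * n) if indeg[u] == 0)
    best = 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            dist[v] = max(dist[v], dist[u] + 1)
            best = max(best, dist[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return best

# The chain 0 -> 1 -> 2 -> 3 has a path of 3 edges; splitting shortens it.
edges = [(0, 1), (1, 2), (2, 3)]
print(longest_path_after_split(4, edges, set()))   # 3
print(longest_path_after_split(4, edges, {2}))     # 2
print(longest_path_after_split(4, edges, {1, 2}))  # 1
```

A GA fitness function for this problem would call such a check and penalize candidate split sets whose longest remaining path exceeds δ.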
A Sequential Genetic Algorithm has been developed to solve the DAG Vertex Splitting Problem. Unlike a standard Genetic Algorithm, this approach uses a variable chromosome length to represent the vertices that split the graph, together with a dynamic population size. Two String Length Reduction Methods to shorten the chromosome and two Stepping Methods to explore the search space have been developed. Combinations of these four methods have been studied and conclusions are drawn.
A parallel version of the sequential Genetic Algorithm has been developed. It uses a fully distributed scheme to assign different string lengths to processors. A ring exchange method is used in order to exchange good individuals between processors. Almost linear speed-up, and two cases of super-linear speed-up, are reported.
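The ring exchange step can be sketched as follows (simulated with a list of "processor" subpopulations instead of real message passing; the identity fitness function is a placeholder):

```python
# One migration round of a ring exchange: each processor sends a copy of its
# best individual to its right-hand neighbour on the ring, which replaces its
# own worst individual with the arrival.

def ring_exchange(populations, fitness):
    """populations: one list of individuals per simulated processor."""
    best = [max(pop, key=fitness) for pop in populations]  # snapshot first
    for i, pop in enumerate(populations):
        incoming = best[(i - 1) % len(populations)]        # from left neighbour
        worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
        pop[worst] = incoming
    return populations

pops = [[1, 2], [3, 4], [5, 6]]
fitness = lambda x: x            # placeholder: an individual *is* its fitness
ring_exchange(pops, fitness)
print(pops)                      # [[6, 2], [2, 4], [4, 6]]
```

Taking the snapshot of the best individuals before any replacement mirrors a synchronous exchange, where all processors send and receive in the same round.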