    Matchings with lower quotas: Algorithms and complexity

    We study a natural generalization of the maximum weight many-to-one matching problem. We are given an undirected bipartite graph G=(A∪̇P, E) with weights on the edges in E, and with lower and upper quotas on the vertices in P. We seek a maximum weight many-to-one matching satisfying two sets of constraints: vertices in A are incident to at most one matching edge, while vertices in P are either unmatched or incident to a number of matching edges between their lower and upper quota. This problem, which we call maximum weight many-to-one matching with lower and upper quotas (WMLQ), has applications to the assignment of students to projects within university courses, where there are constraints on the minimum and maximum numbers of students that must be assigned to each project. In this paper, we provide a comprehensive analysis of the complexity of WMLQ from the viewpoints of classical polynomial-time algorithms, fixed-parameter tractability, and approximability. We draw the line between NP-hard and polynomially tractable instances in terms of degree and quota constraints and provide efficient algorithms to solve the tractable ones. We further show that the problem can be solved in polynomial time for instances with bounded treewidth; however, the corresponding runtime is exponential in the treewidth with the maximum upper quota u_max as base, and we prove that this dependence is necessary unless FPT = W[1]. The approximability of WMLQ is also discussed: we present an approximation algorithm for the general case with performance guarantee u_max + 1, which is asymptotically best possible unless P = NP. Finally, we elaborate on how most of our positive results carry over to matchings in arbitrary graphs with lower quotas.
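    As a concrete illustration of the constraints in this abstract, the following minimal Python sketch checks whether a candidate matching is feasible for a WMLQ instance and, if so, returns its weight. The instance encoding (dicts of edge weights and quota bounds) and all names are illustrative assumptions, not the paper's notation or algorithms.

```python
# Hedged sketch of the WMLQ feasibility conditions; instance names
# (students, projects, quotas) are hypothetical, not from the paper.

def wmlq_weight(edges, matching, lower, upper):
    """Return the total weight of `matching` if it satisfies the WMLQ
    constraints, else None.

    edges    -- dict mapping (a, p) pairs to edge weights
    matching -- set of (a, p) pairs chosen from `edges`
    lower, upper -- dicts mapping each project p to its quota bounds
    """
    # Each vertex in A is incident to at most one matching edge.
    applicants = [a for (a, _) in matching]
    if len(applicants) != len(set(applicants)):
        return None

    # Each vertex in P is either unmatched (degree 0) or has a degree
    # between its lower and upper quota.
    degree = {}
    for (_, p) in matching:
        degree[p] = degree.get(p, 0) + 1
    for p, d in degree.items():
        if not (lower[p] <= d <= upper[p]):
            return None

    return sum(edges[e] for e in matching)


# Example: one project with quotas [2, 2]; matching a single student
# violates the lower quota, matching both is feasible with weight 5.
edges = {("ann", "proj1"): 3, ("bob", "proj1"): 2}
print(wmlq_weight(edges, {("ann", "proj1")}, {"proj1": 2}, {"proj1": 2}))  # None
print(wmlq_weight(edges, set(edges), {"proj1": 2}, {"proj1": 2}))          # 5
```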

    Ensemble Analysis of Adaptive Compressed Genome Sequencing Strategies

    Acquiring genomes at single-cell resolution has many applications, such as the study of microbiota. However, deep sequencing and assembly of all of the millions of cells in a sample is prohibitively costly. A property that can come to the rescue is that deep sequencing of every cell should not be necessary to capture all distinct genomes, as the majority of cells are biological replicates. Biologically important samples are often sparse in that sense. In this paper, we propose an adaptive compressed method, also known as distilled sensing, to capture all distinct genomes in a sparse microbial community with reduced sequencing effort. As opposed to group testing, in which the number of distinct events is often constant and sparsity is equivalent to the rarity of an event, sparsity in our case means scarcity of distinct events in comparison to the data size. Previously, we introduced the problem and proposed a distilled sensing solution based on the breadth-first search strategy. We simulated the whole process, which constrained our ability to study the behavior of the algorithm for the entire ensemble due to its computational intensity. In this paper, we modify our previous breadth-first search strategy and introduce the depth-first search strategy. Instead of simulating the entire process, which is intractable for a large number of experiments, we provide a dynamic programming algorithm to analyze the behavior of the method for the entire ensemble. The ensemble analysis algorithm recursively calculates the probability of capturing every distinct genome, as well as the expected total number of sequenced nucleotides, for a given population profile. Our results suggest that the expected total number of sequenced nucleotides grows logarithmically with the number of cells and linearly with the number of distinct genomes.
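    The following toy Python sketch is loosely inspired by the pool-and-refine idea behind a depth-first adaptive strategy: shallow-sequence a pool of cells, prune the pool if it contains no uncaptured genome, and otherwise split it and recurse, paying for deep sequencing only on novel single cells. The binary-splitting scheme and the cost model are assumptions made purely for illustration; this is not the paper's algorithm or its ensemble analysis.

```python
# Toy pool-and-refine sketch (hypothetical scheme, not the paper's method):
# screens grow roughly like (#distinct genomes) * log2(#cells), plus one
# deep-sequencing run per distinct genome, mirroring the scaling reported
# in the abstract.

def capture_all(cells, captured, cost, shallow=1, deep=100):
    """cells: list of genome ids (many repeats); captured: set of ids
    found so far; cost: one-element list accumulating simulated effort."""
    cost[0] += shallow                 # cheap screen of the pooled cells
    if not set(cells) - captured:      # no uncaptured genome: prune branch
        return
    if len(cells) == 1:                # single novel cell: deep-sequence it
        cost[0] += deep
        captured.add(cells[0])
        return
    mid = len(cells) // 2              # otherwise split the pool and recurse
    capture_all(cells[:mid], captured, cost, shallow, deep)
    capture_all(cells[mid:], captured, cost, shallow, deep)


# Sparse community: 4 distinct genomes spread over 4096 cells.
population = [f"g{i % 4}" for i in range(4096)]
captured, cost = set(), [0]
capture_all(population, captured, cost)
print(sorted(captured), cost[0])  # all 4 genomes; effort far below 4096 deep runs
```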

    A New Multi-Resource cumulatives Constraint with Negative Heights

    This paper presents a new cumulatives constraint which generalizes the original cumulative constraint in several ways. The two most important aspects are that it permits multiple cumulative resources as well as negative heights for the resource consumption of the tasks. This makes it easy to model new scheduling and planning problems. The introduction of negative heights has forced us to come up with new propagation algorithms and to revisit existing ones. The first propagation algorithm is derived from an idea called sweep, which is extensively used in computational geometry; the second algorithm is based on a combination of sweep and constructive disjunction; and the last is a generalization of task intervals to this new context. This constraint was originally motivated by a real-life timetabling problem; it was implemented within the SICStus finite domain solver and evaluated against different problem patterns.
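    To make the sweep idea concrete, here is a minimal Python feasibility check for a single cumulative resource with possibly negative task heights: it scans task start/end events in time order while maintaining the cumulated height, the same event-sweep principle the abstract borrows from computational geometry. This is only an illustrative sketch restricted to one resource; it is not the cumulatives constraint's actual propagation algorithm.

```python
# Hedged sweep sketch for one resource; negative heights model tasks
# that produce capacity rather than consume it.

def sweep_feasible(tasks, limit):
    """tasks: list of (start, end, height) with half-open interval
    [start, end) and height possibly negative.
    Returns True iff the summed height stays <= limit at every instant."""
    events = []
    for start, end, height in tasks:
        events.append((start, height))   # task begins: add its height
        events.append((end, -height))    # task ends: remove its height
    events.sort()                        # by time; negative deltas first on ties
    current = 0
    for _, delta in events:
        current += delta
        if current > limit:              # cumulated profile exceeds capacity
            return False
    return True


# A producer task of height -2 makes room for a consumer that would
# otherwise overload a resource of capacity 3.
tasks = [(0, 10, 3), (2, 8, 2), (2, 8, -2)]
print(sweep_feasible(tasks, 3))  # True
```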