Finding Near-Optimal Independent Sets at Scale
The independent set problem is NP-hard and particularly difficult to solve in
large sparse graphs. In this work, we develop an advanced evolutionary
algorithm, which incorporates kernelization techniques to compute large
independent sets in huge sparse networks. A recent exact algorithm has shown
that large networks can be solved exactly by employing a branch-and-reduce
technique that recursively kernelizes the graph and performs branching.
However, one major drawback of their algorithm is that, for huge graphs,
branching still can take exponential time. To avoid this problem, we
recursively choose vertices that are likely to be in a large independent set
(using an evolutionary approach), then further kernelize the graph. We show
that identifying and removing vertices likely to be in large independent sets
opens up the reduction space---which not only speeds up the computation of
large independent sets drastically, but also enables us to compute high-quality
independent sets on much larger instances than previously reported in the
literature.
Comment: 17 pages, 1 figure, 8 tables. arXiv admin note: text overlap with
arXiv:1502.0168
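The kernelization idea described above can be illustrated with two classic independent-set reductions (a minimal sketch only, not the paper's evolutionary algorithm): an isolated vertex always belongs to some maximum independent set, and a degree-one vertex can always be taken ahead of its sole neighbor. Applying these rules exhaustively shrinks the graph before any branching or heuristic selection.

```python
def kernelize(adj):
    """adj: dict vertex -> set of neighbours (undirected graph).
    Returns (chosen, residual adjacency) after exhaustively applying
    the degree-0 and degree-1 reductions."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    chosen = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if len(adj[v]) == 0:        # degree-0: always in some max IS
                chosen.add(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:      # degree-1: taking v is never worse
                (u,) = adj[v]
                chosen.add(v)
                for w in (u, v):        # delete v and its neighbour u
                    for x in adj.get(w, ()):
                        adj[x].discard(w)
                    adj.pop(w, None)
                changed = True
    return chosen, adj

# A path a-b-c-d kernelizes completely: {a, c} is a maximum independent set.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
chosen, rest = kernelize(path)
```

On real instances these reductions are combined with many stronger rules; the point here is only that removing well-chosen vertices opens up further reductions, exactly the effect the abstract exploits.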
Advanced Multilevel Node Separator Algorithms
A node separator of a graph is a subset S of the nodes such that removing S
and its incident edges divides the graph into two disconnected components of
about equal size. In this work, we introduce novel algorithms to find small
node separators in large graphs. With focus on solution quality, we introduce
novel flow-based local search algorithms which are integrated in a multilevel
framework. In addition, we transfer techniques successfully used in the graph
partitioning field. This includes the usage of edge ratings tailored to our
problem to guide the graph coarsening algorithm as well as highly localized
local search and iterated multilevel cycles to improve solution quality even
further. Experiments indicate that flow-based local search algorithms on their
own in a multilevel framework are already highly competitive in terms of
separator quality. Adding additional local search algorithms further improves
solution quality. Our strongest configuration almost always outperforms
competing systems while on average computing 10% and 62% smaller separators
than Metis and Scotch, respectively.
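The flow-based approach mentioned above rests on a classical construction (sketched below for illustration; the paper's multilevel algorithm is far more involved): split every vertex v into v_in and v_out joined by a unit-capacity arc, give original edges a large capacity, and a minimum s-t cut then selects a smallest vertex separator, by Menger's theorem.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp on a capacity dict-of-dicts. Returns the flow value
    and the set of nodes reachable from s in the final residual graph."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                   # no path: parent = s-side of cut
            return flow, set(parent)
        path, v = [], t                       # recover the path, then augment
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug

def min_vertex_separator(adj, s, t):
    INF = 10 ** 9
    cap = defaultdict(dict)
    for v in adj:
        # unit capacity through every vertex except the terminals
        cap[(v, 'in')][(v, 'out')] = 1 if v not in (s, t) else INF
        for u in adj[v]:
            cap[(v, 'out')][(u, 'in')] = INF
    _, reach = max_flow(cap, (s, 'in'), (t, 'out'))
    # separator = vertices whose internal unit arc crosses the min cut
    return {v for v in adj
            if (v, 'in') in reach and (v, 'out') not in reach}

# Two disjoint s-t paths through a and b: the smallest separator is {a, b}.
diamond = {'s': {'a', 'b'}, 'a': {'s', 't'}, 'b': {'s', 't'}, 't': {'a', 'b'}}
sep = min_vertex_separator(diamond, 's', 't')
```

In the multilevel setting, such flow computations are run only on small regions around a current separator, which is what makes them usable as a local search on large graphs.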
HIV-1 tropism determination using a phenotypic Env recombinant viral assay highlights overestimation of CXCR4-usage by genotypic prediction algorithms for CRF01_AE and CRF02_AG
Background: Human Immunodeficiency virus type-1 (HIV-1) entry into target cells involves binding of the viral envelope (Env) to CD4 and a coreceptor, mainly CCR5 or CXCR4. The only currently licensed HIV entry inhibitor, maraviroc, targets CCR5, and the presence of CXCR4-using strains must be excluded prior to treatment. Co-receptor usage can be assessed by phenotypic assays or through genotypic prediction. Here we compared the performance of a phenotypic Env-Recombinant Viral Assay (RVA) to the two most widely used genotypic prediction algorithms, Geno2Pheno([coreceptor]) and webPSSM.
Methods: Co-receptor tropism of samples from 73 subtype B and 219 non-B infections was measured phenotypically using a luciferase-tagged, NL4-3-based, RVA targeting Env. In parallel, tropism was inferred genotypically from the corresponding V3-loop sequences using Geno2Pheno([coreceptor]) (5-20% FPR) and webPSSM-R5X4. For discordant samples, phenotypic outcome was retested using co-receptor antagonists or the validated Trofile (R) Enhanced-Sensitivity-Tropism-Assay.
Results: The lower detection limit of the RVA was 2.5% and 5% for X4 and R5 minority variants respectively. A phenotype/genotype result was obtained for 210 samples. Overall, concordance of phenotypic results with Geno2Pheno([coreceptor]) was 85.2% and concordance with webPSSM was 79.5%. For subtype B, concordance with Geno2Pheno([coreceptor]) was 94.4% and concordance with webPSSM was 79.6%. High concordance of genotypic tools with phenotypic outcome was seen for subtype C (90% for both tools). Main discordances involved CRF01_AE and CRF02_AG for both algorithms (CRF01_AE: 35.9% discordances with Geno2Pheno([coreceptor]) and 28.2% with webPSSM; CRF02_AG: 20.7% for both algorithms). Genotypic prediction overestimated CXCR4-usage for both CRFs. For webPSSM, 40% discordance was observed for subtype A.
Conclusions: Phenotypic assays remain the most accurate for most non-B subtypes. New subtype-specific rules should be developed for non-B subtypes, as research studies increasingly draw conclusions from genotypically inferred tropism, and to avoid unnecessarily precluding patients with limited treatment options from receiving maraviroc or other entry inhibitors.
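Mechanically, PSSM-based genotypic predictors like webPSSM score each position of the V3-loop sequence against a position-specific scoring matrix and call CXCR4 use when the summed score exceeds a threshold. The toy sketch below shows only this scoring scheme; the matrix, positions, and threshold are invented for illustration and are NOT the real webPSSM matrix, which covers all 35 V3 positions and 20 residues.

```python
# Hypothetical two-position matrix: positive scores mimic the effect of
# basic residues at X4-associated sites (e.g. the "11/25 rule").
TOY_PSSM = {
    0: {'R': 1.5, 'S': -0.5},
    1: {'K': 1.0, 'T': -1.0},
}
THRESHOLD = 0.0   # hypothetical decision cut-off

def pssm_score(seq, pssm, default=-0.2):
    """Sum per-position scores; residues absent from the matrix column
    receive a small default penalty."""
    return sum(pssm[i].get(aa, default)
               for i, aa in enumerate(seq) if i in pssm)

def predict_tropism(seq):
    return 'X4' if pssm_score(seq, TOY_PSSM) > THRESHOLD else 'R5'
```

Because such a matrix is trained on a particular set of sequences (historically dominated by subtype B), its scores can systematically misclassify other subtypes, which is one plausible reading of the CRF01_AE/CRF02_AG overestimation reported above.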
Optimization techniques in respiratory control system models
One of the most complex physiological systems whose modeling remains an open problem is the respiratory control system, for which different models have been proposed based on the criterion of minimizing the work of breathing (WOB). The aim of this study is twofold: to compare two known models of the respiratory control system, which set the breathing pattern by quantifying the respiratory work; and to assess the influence of using direct-search or evolutionary optimization algorithms on the adjustment of model parameters. This study was carried out using experimental data from a group of healthy volunteers under incremental CO2 inhalation, which were used to adjust the model parameters and to evaluate how closely the WOB equations follow a real breathing pattern. This breathing pattern was characterized by the following variables: tidal volume, inspiratory and expiratory durations, and total minute ventilation. Different optimization algorithms were considered to determine the most appropriate model from a physiological viewpoint. Algorithms were used for a double optimization: first, to minimize the WOB, and second, to adjust the model parameters. The performance of the optimization algorithms was also evaluated in terms of convergence rate, solution accuracy and precision. Results showed strong differences in the performance of the optimization algorithms according to the constraints and topological features of the function to be optimized. In breathing pattern optimization, sequential quadratic programming (SQP) showed the best performance and convergence speed when respiratory work was low. In addition, SQP made it easiest to implement multiple non-linear constraints through mathematical expressions. Regarding adjustment of the model parameters to experimental data, the covariance matrix adaptation evolution strategy (CMA-ES) provided the best-quality solutions, with fast convergence and the best accuracy and precision in both models.
CMA-ES reached the best adjustment because of its good performance on noisy and multi-peaked fitness functions. Although one of the studied models has been much more commonly used to simulate the respiratory response to CO2 inhalation, results showed that an alternative model has a cost function that is more appropriate for minimizing WOB from a physiological viewpoint according to experimental data.
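The parameter-adjustment step can be illustrated with a (1+1) evolution strategy on a toy model: a much simpler relative of CMA-ES (no covariance matrix adaptation), but it shows the same evaluate-mutate-select loop with elitist selection and step-size adaptation. The linear model and synthetic data below are invented for illustration; they are not the respiratory models of the study.

```python
import random

random.seed(0)

# Synthetic "experimental" data generated from y = 2x + 1.
xs = [0.1 * i for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]

def loss(params):
    """Sum of squared residuals of the toy model against the data."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

def one_plus_one_es(start, sigma=0.5, iters=2000):
    """(1+1)-ES: mutate the incumbent with Gaussian noise, keep the
    better of parent and child, and adapt the step size sigma."""
    best, best_f = start, loss(start)
    for _ in range(iters):
        cand = [p + random.gauss(0, sigma) for p in best]
        f = loss(cand)
        if f <= best_f:          # elitist selection
            best, best_f = cand, f
            sigma *= 1.1         # 1/5th-success-rule-style adaptation
        else:
            sigma *= 0.98
    return best, best_f

params, err = one_plus_one_es([0.0, 0.0])
```

CMA-ES improves on this by additionally learning a full covariance matrix of the mutation distribution, which is what gives it the robustness on noisy, multi-peaked fitness landscapes noted in the abstract.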
Incremental Sampling-based Algorithms for Optimal Motion Planning
During the last decade, incremental sampling-based motion planning
algorithms, such as Rapidly-exploring Random Trees (RRTs), have been shown
to work well in practice and to possess theoretical guarantees such as
probabilistic completeness. However, no theoretical bounds on the quality of
the solution obtained by these algorithms have been established so far. The
first contribution of this paper is a negative result: it is proven that, under
mild technical conditions, the cost of the best path in the RRT converges
almost surely to a non-optimal value. Second, a new algorithm is considered,
called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost
of the best path in the RRG converges to the optimum almost surely. Third, a
tree version of the RRG is introduced, called the RRT* algorithm, which
preserves the asymptotic optimality of RRG while maintaining a tree structure
like RRT. The analysis of the new algorithms hinges on novel connections
between sampling-based motion planning algorithms and the theory of random
geometric graphs. In terms of computational complexity, it is shown that the
number of simple operations required by both the RRG and RRT* algorithms is
asymptotically within a constant factor of that required by the RRT.
Comment: 20 pages, 10 figures, this manuscript is submitted to the
International Journal of Robotics Research, a short version is to appear at
the 2010 Robotics: Science and Systems Conference
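The baseline extend loop these algorithms share can be sketched as a minimal 2D RRT in an obstacle-free unit square (illustrative only; the paper's RRG and RRT* additionally connect or rewire through near neighbours at each step, which is what yields asymptotic optimality, and this plain RRT lacks that step).

```python
import math, random

random.seed(1)
STEP = 0.05   # maximum extension length per iteration

def rrt(start, goal, iters=5000, goal_tol=0.1):
    """Grow a tree from start; return a start-to-goal path as a list of
    points once a node lands within goal_tol of the goal, else None."""
    tree = {start: None}                        # node -> parent
    for _ in range(iters):
        sample = (random.random(), random.random())
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(1.0, STEP / d)                  # steer toward the sample
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        tree[new] = near
        if math.dist(new, goal) < goal_tol:     # reached: walk parents back
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

path = rrt((0.1, 0.1), (0.9, 0.9))
```

The negative result of the paper concerns exactly this structure: each node keeps a single parent chosen greedily at insertion time, so the best path in the tree does not converge to the optimum, whereas RRG's extra near-neighbour connections (and RRT*'s rewiring) repair this.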