78 research outputs found

    Experimental evaluation of algorithms for solving problems with combinatorial explosion

    Solving problems with combinatorial explosion plays an important role in decision-making, since feasible or optimal decisions often depend on a non-trivial combination of various factors. Generally, an effective strategy for solving such problems is merging the different viewpoints adopted in different communities that try to solve similar problems, such that algorithms developed in one research area are applicable to other problems, or can be hybridised with techniques from other areas. This is one of the aims of the RCRA (Ragionamento Automatico e Rappresentazione della Conoscenza) group, the interest group of the Italian Association for Artificial Intelligence (AI*IA) on knowledge representation and automated reasoning, which has organised its annual meetings since 1994.

    Solving Linux Upgradeability Problems Using Boolean Optimization

    Managing the software complexity of package-based systems can be regarded as one of the main challenges in software architectures. Upgrades are required frequently, and systems are expected to be reliable and consistent afterwards. For each package in the system, a set of dependencies and a set of conflicts have to be taken into account. Although this problem is computationally hard to solve, efficient tools are required. In the best scenario, the solutions provided should also be optimal in order to better fulfil users' requirements and expectations. This paper describes two different tools, both based on Boolean satisfiability (SAT), for solving Linux upgradeability problems. The problem instances used in the evaluation of these tools were mainly obtained from real environments, and are subject to two different lexicographic optimization criteria. The developed tools can provide optimal solutions for many of the instances, but a few challenges remain. Moreover, it is our understanding that this problem has many similarities with other configuration problems, and therefore the same techniques can be used in other domains. Comment: In Proceedings LoCoCo 2010, arXiv:1007.083
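The core of the problem the abstract describes can be sketched as a Boolean model: one variable per package, clauses for dependencies and conflicts, and a lexicographic objective over the resulting models. The toy sketch below uses exhaustive enumeration instead of a SAT solver, and the package names, dependencies, and objective are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: a tiny package-upgrade problem encoded as Boolean
# constraints and solved by exhaustive enumeration. A real tool would hand
# equivalent clauses to a SAT/MaxSAT solver. All names here are made up.
from itertools import product

packages = ["kernel", "libc", "editor", "editor-old"]
# Each dependency is (package, alternatives): if the package is installed,
# at least one alternative must be installed too.
dependencies = [("editor", ["libc"]), ("kernel", ["libc"])]
# Conflicts: the two packages cannot be installed together.
conflicts = [("editor", "editor-old")]
installed_now = {"kernel", "libc", "editor-old"}
requested = {"editor"}  # the user asks to install the new editor

def consistent(install):
    """Check that an install set satisfies all dependencies and conflicts."""
    for pkg, alternatives in dependencies:
        if pkg in install and not any(a in install for a in alternatives):
            return False
    return all(not (a in install and b in install) for a, b in conflicts)

best = None
for bits in product([False, True], repeat=len(packages)):
    install = {p for p, b in zip(packages, bits) if b}
    if requested <= install and consistent(install):
        # Lexicographic criterion (an illustrative choice): first minimise
        # removed packages, then minimise newly installed ones.
        cost = (len(installed_now - install), len(install - installed_now))
        if best is None or cost < best[0]:
            best = (cost, install)

print(sorted(best[1]))  # → ['editor', 'kernel', 'libc']
```

The lexicographic comparison falls out of Python's tuple ordering: removals are compared first, and additions only break ties, mirroring the two-level optimization criteria mentioned in the abstract.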

    AutoFolio: An Automatically Configured Algorithm Selector (Extended Abstract)

    Article in monograph or in proceedings, Leiden Inst Advanced Computer Science

    A hybrid algorithm combining path scanning and biased random sampling for the Arc Routing Problem

    The Arc Routing Problem is a kind of NP-hard routing problem in which the demand is located on some of the arcs connecting nodes and must be served completely while fulfilling certain constraints. This paper presents a hybrid algorithm which combines a classical heuristic with biased random sampling to solve the Capacitated Arc Routing Problem (CARP). The new algorithm is compared with the classical Path Scanning heuristic and outperforms it. As discussed in the paper, the methodology presented is flexible, can be easily parallelised, and does not require any complex fine-tuning process. Preliminary tests show the potential of the proposed approach as well as its limitations.
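Biased random sampling of the kind the abstract combines with Path Scanning is commonly implemented by ranking candidates greedily and then drawing an index from a skewed distribution, so better-ranked candidates are picked more often but not always. The sketch below uses a geometric bias; the arc names and costs are made up for illustration and are not from the paper:

```python
# Illustrative sketch of biased random sampling over a cost-sorted candidate
# list: a geometric distribution (truncated by modulo) favours the greedy
# choice while still allowing diversification across runs.
import math
import random

def biased_index(n, beta=0.3, rng=random):
    """Return an index in [0, n); smaller indices are more likely."""
    u = rng.random() or 1e-12  # guard against log(0)
    return int(math.log(u) / math.log(1.0 - beta)) % n

# Candidate arcs sorted by increasing traversal cost (greedy order).
arcs = [("a", 5), ("b", 8), ("c", 12), ("d", 20)]
rng = random.Random(42)

route = []
remaining = list(arcs)
while remaining:
    arc, _cost = remaining.pop(biased_index(len(remaining), rng=rng))
    route.append(arc)

print(route)
```

Because the only tunable parameter is the bias `beta`, this style of randomisation needs little fine-tuning, and independent seeded runs can be launched in parallel, which matches the flexibility and parallelisability claimed in the abstract.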

    Predicting Good Configurations for GitHub and Stack Overflow Topic Models

    Software repositories contain large amounts of textual data, ranging from source code comments and issue descriptions to questions, answers, and comments on Stack Overflow. To make sense of this textual data, topic modelling is frequently used as a text-mining tool for the discovery of hidden semantic structures in text bodies. Latent Dirichlet allocation (LDA) is a commonly used topic model that aims to explain the structure of a corpus by grouping texts. LDA requires multiple parameters to work well, and there are only rough and sometimes conflicting guidelines available on how these parameters should be set. In this paper, we contribute (i) a broad study of parameters to arrive at good local optima for GitHub and Stack Overflow text corpora, (ii) an a-posteriori characterisation of text corpora related to eight programming languages, and (iii) an analysis of corpus feature importance via per-corpus LDA configuration. We find that (1) popular rules of thumb for topic modelling parameter configuration are not applicable to the corpora used in our experiments, (2) corpora sampled from GitHub and Stack Overflow have different characteristics and require different configurations to achieve good model fit, and (3) we can predict good configurations for unseen corpora reliably. These findings support researchers and practitioners in efficiently determining suitable configurations for topic modelling when analysing textual data contained in software repositories. Comment: to appear as full paper at MSR 2019, the 16th International Conference on Mining Software Repositories
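The parameter study the abstract describes amounts to searching over LDA configurations, typically the number of topics and the Dirichlet priors, and keeping the best-scoring one per corpus. The skeleton below shows that search shape only: `score_config` is a stand-in, and in practice it would train an LDA model with a library and return a fit measure such as perplexity or coherence. Grid values are illustrative assumptions, not the paper's:

```python
# Hypothetical sketch of a per-corpus LDA configuration search over the
# number of topics k, document-topic prior alpha, and topic-word prior beta.
from itertools import product

def score_config(k, alpha, beta):
    # Placeholder objective for illustration only; in a real study this
    # would train LDA on the corpus and score the fitted model.
    return -abs(k - 40) - 10 * abs(alpha - 0.1) - 10 * abs(beta - 0.01)

grid = {
    "k": [10, 20, 40, 80],
    "alpha": [0.01, 0.1, 1.0],
    "beta": [0.01, 0.1],
}

best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda cfg: score_config(**cfg),
)
print(best)  # → {'k': 40, 'alpha': 0.1, 'beta': 0.01}
```

Running this search separately per corpus is what makes a per-corpus characterisation possible: the winning configuration becomes a feature of the corpus itself, which is what the paper's prediction for unseen corpora builds on.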