    Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm

    The MAXimum propositional SATisfiability problem (MAXSAT) is a well-known NP-hard optimization problem with many theoretical and practical applications in artificial intelligence and mathematical logic. Heuristic local search algorithms are widely recognized as among the most effective approaches for solving it. However, their performance depends both on their complexity and on their tuning parameters, which are set experimentally, a task that remains difficult. Extremal Optimization (EO) is one of the simplest heuristic methods, with only one free parameter, and has proved competitive with more elaborate general-purpose methods on graph partitioning and coloring. It is inspired by the dynamics of physical systems with emergent complexity and their ability to self-organize toward an optimally adapted state. In this paper, we propose an extremal optimization procedure for MAXSAT and assess its effectiveness through computational experiments on a benchmark of random instances. Comparative tests show that this procedure significantly improves on previous results obtained on the same benchmark with other modern local search methods such as WSAT, simulated annealing and Tabu Search (TS).
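    The abstract does not detail the EO procedure itself; the following is a minimal sketch of the standard τ-EO recipe applied to MAXSAT, assuming variable fitness is the fraction of a variable's clauses currently satisfied and using power-law rank selection P(k) ∝ k^(−τ). The fitness definition and parameter values are assumptions, not the paper's exact method.

```python
import random

def tau_eo_maxsat(clauses, n_vars, tau=1.4, max_steps=100_000):
    """Sketch of tau-Extremal Optimization for MAXSAT.

    clauses: list of clauses, each a list of non-zero ints
             (DIMACS style: +v is variable v, -v is its negation).
    """
    assign = [random.choice([False, True]) for _ in range(n_vars + 1)]

    def lit_true(lit):
        return assign[abs(lit)] == (lit > 0)

    def num_sat():
        return sum(any(lit_true(l) for l in c) for c in clauses)

    best, best_assign = num_sat(), assign[:]
    for _ in range(max_steps):
        # Assumed fitness: satisfied fraction of the clauses a variable occurs in.
        occ_sat = [0] * (n_vars + 1)
        occ_all = [0] * (n_vars + 1)
        for c in clauses:
            sat = any(lit_true(l) for l in c)
            for l in c:
                occ_all[abs(l)] += 1
                occ_sat[abs(l)] += sat
        ranked = sorted((occ_sat[v] / occ_all[v] if occ_all[v] else 1.0, v)
                        for v in range(1, n_vars + 1))  # worst variable first
        # Power-law rank selection: P(rank k) ~ k^(-tau).
        weights = [(k + 1) ** -tau for k in range(n_vars)]
        k = random.choices(range(n_vars), weights=weights)[0]
        assign[ranked[k][1]] = not assign[ranked[k][1]]  # flip unconditionally
        s = num_sat()
        if s > best:
            best, best_assign = s, assign[:]
    return best, best_assign
```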

    A Swarm Random Walk Based Method for the Standard Cell Placement Problem

    The standard cell placement (SCP) problem is a well-studied placement problem and an important step in the VLSI design process. In SCP, cells are placed on a chip so as to optimize objectives such as wirelength or area. The SCP problem has traditionally been solved with four basic methods: simulated annealing, quadratic placement, min-cut placement, and force-directed placement. These methods are adequate for small chips; modern chips, however, are very large, so hybrid methods are employed instead of the original methods on their own. This paper presents a new hybrid method for the SCP problem that runs a swarm intelligence-based (SI) method, called SwarmRW (swarm random walk), on top of a min-cut based partitioner. The resulting placer, called sPL (swarm placer), was tested on the PEKU benchmark suite and compared with several related placers. The obtained results demonstrate the effectiveness of the proposed approach and show that sPL can achieve competitive performance.
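    The abstract does not spell out SwarmRW's update rules. As a rough illustration only, a swarm of greedy random walkers minimizing a placement cost might look like the sketch below; `cost`, `neighbor`, and `init` are assumed user-supplied callables (e.g. wirelength, a random cell swap, and a random initial placement), and this is not the paper's exact algorithm.

```python
import random

def swarm_random_walk(cost, neighbor, init, n_agents=10, n_steps=1000):
    """Generic swarm-of-random-walkers sketch (illustrative, not sPL)."""
    swarm = [init() for _ in range(n_agents)]
    best = min(swarm, key=cost)
    for _ in range(n_steps):
        for i, s in enumerate(swarm):
            cand = neighbor(s)
            if cost(cand) <= cost(s):       # greedy random-walk step
                swarm[i] = cand
        leader = min(swarm, key=cost)
        if cost(leader) < cost(best):
            best = leader
        # Restart the worst agent near the current best solution.
        worst = max(range(n_agents), key=lambda i: cost(swarm[i]))
        swarm[worst] = neighbor(best)
    return best
```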

    Naïve Bayes classifiers for authorship attribution of Arabic texts

    Authorship attribution is the process of assigning an author to an anonymous text based on writing characteristics. Several authorship attribution methods have been developed for natural languages such as English, Chinese and Dutch, but related work on Arabic is limited. Naïve Bayes classifiers have been widely used for various natural language processing tasks; however, the event model used is generally not stated, even though it can have a considerable impact on classifier performance. To the best of our knowledge, naïve Bayes classifiers have not yet been considered for authorship attribution in Arabic. We therefore study their use for this problem, taking into account different event models, namely simple naïve Bayes (NB), multinomial naïve Bayes (MNB), multi-variate Bernoulli naïve Bayes (MBNB) and multi-variate Poisson naïve Bayes (MPNB). We evaluate the performance of these models on a large Arabic dataset extracted from books by 10 different authors and compare them with other existing methods. The experimental results show that MBNB provides the best results and can attribute the author of a text with an accuracy of 97.43%. Comparison with related methods indicates that MBNB and MNB are appropriate for authorship attribution.
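    The multinomial and Bernoulli event models map directly onto standard tooling, which makes the distinction concrete: MNB models word counts, while MBNB models word presence/absence. A brief scikit-learn illustration on toy data (not the paper's corpus or implementation; the simple NB and Poisson NB models have no off-the-shelf scikit-learn counterpart):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus; the paper uses books by 10 Arabic authors.
texts = ["sample text by the first author", "sample text by the second author"]
authors = ["author_1", "author_2"]

# MNB: features are word counts. MBNB: features are binary word occurrences.
mnb = make_pipeline(CountVectorizer(), MultinomialNB())
mbnb = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
mnb.fit(texts, authors)
mbnb.fit(texts, authors)

print(mbnb.predict(["an anonymous text to attribute"]))
```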

    Solving Multi-Document Summarization as an Orienteering Problem

    With advances in information technology, people face tremendous amounts of information and need ways to save time and effort by extracting the most important and relevant content. Automatic text summarization has thus become necessary to reduce information overload. This article proposes a novel extractive graph-based approach to the multi-document summarization (MDS) problem. To optimize the coverage of information in the output summary, the problem is formulated as an orienteering problem and solved heuristically by an ant colony system algorithm. The performance of the implemented system (MDS-OP) was evaluated on the DUC 2004 (Task 2) and MultiLing 2015 (MMS task) benchmark corpora using several ROUGE metrics as well as other evaluation methods. Comparison with 26 systems shows that MDS-OP achieved the best F-measure scores on both tasks in terms of ROUGE-1 and ROUGE-L (DUC 2004), and in terms of ROUGE-SU4 and three other evaluation methods (MultiLing 2015). Overall, MDS-OP ranked among the top three systems.
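    The orienteering formulation suggests nodes with profits (sentence relevance scores) visited under a budget (the summary length limit). Below is a compact ant colony system sketch for such a problem; the profit and cost definitions, the fixed start node, and all parameter values are illustrative assumptions, not the paper's exact model.

```python
import random

def acs_orienteering(profit, cost, budget, n_ants=20, n_iter=100,
                     alpha=1.0, beta=2.0, rho=0.1):
    """Sketch of an ant colony system for an orienteering problem.

    profit[i]: reward for visiting node i (e.g. sentence relevance).
    cost[i][j]: cost of moving from i to j; budget caps the total cost.
    """
    n = len(profit)
    tau = [[1.0] * n for _ in range(n)]     # pheromone trails
    best_path, best_score = [], 0.0

    for _ in range(n_iter):
        for _ in range(n_ants):
            path, spent, score = [0], 0.0, profit[0]   # assumed start node 0
            visited = {0}
            while True:
                cur = path[-1]
                cand = [j for j in range(n)
                        if j not in visited and spent + cost[cur][j] <= budget]
                if not cand:
                    break
                # Pheromone x heuristic desirability (profit per unit cost).
                w = [tau[cur][j] ** alpha *
                     ((profit[j] + 1e-9) / (cost[cur][j] + 1e-9)) ** beta
                     for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                spent += cost[cur][nxt]
                score += profit[nxt]
                visited.add(nxt)
                path.append(nxt)
            if score > best_score:
                best_score, best_path = score, path
        # Global pheromone update along the best path found so far.
        for a, b in zip(best_path, best_path[1:]):
            tau[a][b] = (1 - rho) * tau[a][b] + rho * best_score
    return best_path, best_score
```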

    Boosting the Performance of CDCL-Based SAT Solvers by Exploiting Backbones and Backdoors

    Boolean structural measures were introduced to explain the high performance of conflict-driven clause-learning (CDCL) SAT solvers on industrial SAT instances. Those considered in this study relate to backbones and backdoors: backbone size, backbone frequency, and backdoor size. A key research direction is to improve the performance of CDCL SAT solvers by exploiting these measures. To guide a CDCL SAT solver toward branching on backbone and backdoor variables, this study proposes low-overhead heuristics for computing these variables. Building on these heuristics, a set of modifications to the Variable State Independent Decaying Sum (VSIDS) decision heuristic is suggested to exploit backbones and backdoors and potentially improve the performance of CDCL SAT solvers. In total, fifteen variants of two competitive base solvers, MapleLCMDistChronoBT-DL-v3 and LSTech, were developed. Empirical evaluation was conducted on 32 industrial families from the 2002–2021 SAT competitions. According to the results, modifying the VSIDS heuristic in the base solvers to exploit backbones and backdoors improves their performance. In particular, our new CDCL SAT solver, LSTech_BBsfcr_v1, solved more industrial SAT instances than the winning CDCL SAT solvers of the 2020 and 2021 SAT competitions.
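    To make the idea concrete, here is a toy Python sketch of VSIDS bookkeeping with an added bonus for variables that a cheap heuristic flags as likely backbone or backdoor members. The `backbone_bonus` input and both function names are assumptions for illustration; the actual fifteen variants modify VSIDS inside the C/C++ base solvers, not like this.

```python
def on_conflict(activity, conflict_vars, bump=1.0, decay=0.95):
    """Classic VSIDS bookkeeping: bump the variables seen in a conflict,
    then decay every score (the 'decaying sum')."""
    for v in conflict_vars:
        activity[v] = activity.get(v, 0.0) + bump
    for v in activity:
        activity[v] *= decay

def pick_branch_var(activity, unassigned, backbone_bonus):
    """Modified decision step: prefer likely backbone/backdoor variables
    by adding a bonus (assumed to come from low-overhead heuristics)
    to their VSIDS activity before taking the maximum."""
    return max(unassigned,
               key=lambda v: activity.get(v, 0.0) + backbone_bonus.get(v, 0.0))
```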

    Exploring Evaluation Methods for Interpretable Machine Learning: A Survey

    In recent times, progress in machine learning has enabled decision support systems whose predictive accuracy surpasses human capabilities in certain scenarios. This improvement, however, has come at the cost of increased model complexity, yielding black-box models that hide their internal logic from users. These black boxes are designed primarily to optimize predictive accuracy, which limits their applicability in critical domains such as medicine, law, and finance, where both accuracy and interpretability are crucial for model acceptance. Despite the growing body of research on interpretability, evaluation methods for the proposed approaches remain scarce. This survey sheds light on the evaluation methods employed for interpreting models. Two primary procedures are prevalent in the literature: qualitative and quantitative evaluation. Qualitative evaluation relies on human assessment, while quantitative evaluation uses computational metrics. Human evaluation commonly takes the form of either researcher intuition or well-designed experiments; however, it is susceptible to human bias and fatigue and cannot adequately compare two models. Consequently, the use of human evaluation has recently declined, with computational metrics gaining prominence as a more rigorous way to compare and assess different approaches. These metrics are designed to serve specific goals, such as fidelity, comprehensibility, or stability, yet existing metrics often face challenges when scaled or applied to different types of model outputs and alternative approaches. A further concern is that the results of evaluating interpretability methods may not always be entirely accurate; for instance, relying on the drop in predicted probability to assess fidelity can be problematic when the perturbed inputs fall out of distribution. Finally, a fundamental challenge in the interpretability domain is the lack of consensus regarding its definition and requirements; this issue is compounded in the evaluation process and becomes particularly apparent when assessing comprehensibility.
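    As a concrete illustration of the "drop in probability" fidelity check discussed above, the sketch below perturbs the features an explanation marked as important and measures how far the predicted class's probability falls. The function name, the zero baseline, and the feature representation are illustrative assumptions, not a specific library's API.

```python
import numpy as np

def probability_drop(predict_proba, x, important_idx, baseline=0.0):
    """Deletion-style fidelity sketch: replace the 'important' features
    with a baseline value and measure the drop in the originally
    predicted class's probability. A large drop suggests a faithful
    explanation, but the perturbed input may be out-of-distribution,
    which is exactly the pitfall noted above."""
    probs = predict_proba(x)
    cls = int(np.argmax(probs))             # originally predicted class
    x_pert = np.array(x, dtype=float)
    x_pert[important_idx] = baseline        # remove the flagged evidence
    return float(probs[cls] - predict_proba(x_pert)[cls])
```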