    Circuit Testing Based on Fuzzy Sampling with BDD Bases

    Fuzzy testing of integrated circuits is an established technique. Current approaches generate an approximately uniform random sample from a translation of the circuit to Boolean logic. These approaches have serious scalability issues, which become more pressing with the ever-increasing size of circuits. We propose using a base of binary decision diagrams to sample the translations as a soft-computing approach. Uniformity is guaranteed by design, and scalability is greatly improved. We test our approach against five other state-of-the-art tools and find that ours outperforms all of them in both performance and scalability.
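
    As a hedged illustration of the workflow the abstract describes, the sketch below enumerates the satisfying assignments of a toy Boolean translation of a circuit fragment and draws stimuli uniformly from them; `circuit_constraints` is a hypothetical stand-in formula, and the brute-force enumeration is exactly what the BDD base is meant to replace at scale.

```python
import itertools
import random

def circuit_constraints(a, b, c):
    """Toy stand-in for the Boolean translation of a circuit fragment:
    a stimulus is legal when c == (a AND b) and at least one input is set."""
    return (c == (a and b)) and (a or b)

# Brute-force enumeration is only feasible for tiny input spaces; the paper's
# point is to obtain the same uniform distribution scalably from a BDD base.
solutions = [bits for bits in itertools.product([False, True], repeat=3)
             if circuit_constraints(*bits)]

stimuli = [random.choice(solutions) for _ in range(5)]  # uniform by construction
print(stimuli)
```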

    A Rule-Learning Approach for Detecting Faults in Highly Configurable Software Systems from Uniform Random Samples

    Software systems tend to become more and more configurable to satisfy the demands of their increasingly varied customers. Exhaustively testing the correctness of highly configurable software is infeasible in most cases because the space of possible configurations is typically colossal. This paper proposes addressing this challenge by (i) working with a representative sample of the configurations, i.e., a "uniform" random sample, and (ii) processing the results of testing the sample with a rule induction system that extracts the faults that cause the tests to fail. The paper (i) gives a concrete implementation of the approach, (ii) compares the performance of the rule-learning algorithms AQ, CN2, LEM2, PART, and RIPPER, and (iii) provides empirical evidence supporting our procedure.
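
    The sketch below illustrates the two-step idea with stand-ins: a uniform random sample of configurations is tested by a hypothetical oracle `run_test_suite`, and a scikit-learn decision tree plays the role of the rule learner (the paper itself evaluates AQ, CN2, LEM2, PART, and RIPPER, not this substitute).

```python
import random
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["ssl", "cache", "logging", "compression"]

def run_test_suite(config):
    """Hypothetical oracle: the system fails whenever ssl is on and cache is off."""
    return "fail" if config["ssl"] and not config["cache"] else "pass"

# (i) uniform random sample of the configuration space
sample = [{f: random.random() < 0.5 for f in FEATURES} for _ in range(200)]
X = [[int(c[f]) for f in FEATURES] for c in sample]
y = [run_test_suite(c) for c in sample]

# (ii) induce human-readable rules that characterize the failing configurations
learner = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(learner, feature_names=FEATURES))
```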

    Group Decision-Making Based on Artificial Intelligence: A Bibliometric Analysis

    Decisions concerning crucial and complicated problems are seldom made by a single person. Instead, they require the cooperation of a group of experts in which each participant has their own individual opinions, motivations, background, and interests regarding the existing alternatives. In the last 30 years, much research has been undertaken to provide automated assistance to reach a consensual solution supported by most of the group members. Artificial intelligence techniques are commonly applied to tackle critical group decision-making difficulties. For instance, experts' preferences are often vague and imprecise; hence, their opinions are combined using fuzzy linguistic approaches. This paper reports a bibliometric analysis of the ample literature published in this regard. In particular, our analysis: (i) shows the impact of this topic and its upward publication trend; (ii) identifies the most productive authors, institutions, and countries; (iii) discusses authors' and journals' productivity patterns; and (iv) recognizes the most relevant research topics and how the interest in them has evolved over the years.
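
    One of the bibliometric steps listed above (the publication trend) reduces to a simple aggregation over the exported records; the sketch below assumes a hypothetical CSV export with a "Year" column and is only meant to make that step concrete.

```python
import pandas as pd

# Assumed export of the bibliographic records (file name and column are hypothetical).
records = pd.read_csv("gdm_ai_records.csv")
trend = records.groupby("Year").size()   # papers published per year
print(trend)
print(trend.pct_change().mean())         # rough average yearly growth rate
```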

    Rough Sets: a Bibliometric Analysis from 2014 to 2018

    Over almost forty years, considerable research has been undertaken on rough set theory to deal with vague information. Rough sets have proven to be extremely helpful for a diversity of computer-science problems (e.g., knowledge discovery, computational logic, machine learning, etc.) and numerous application domains (e.g., business economics, telecommunications, neurosciences, etc.). Accordingly, the literature on rough sets has grown without ceasing, and nowadays it is immense. This paper provides a comprehensive overview of the research published over the last five years. To do so, it analyzes 4,038 records retrieved from the Clarivate Web of Science database, identifying (i) the most prolific authors and their collaboration networks, (ii) the countries and organizations that are leading research on rough sets, (iii) the journals that are publishing the most papers, (iv) the topics that are being most researched, and (v) the principal application domains.
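
    As an illustration of the collaboration-network step, the sketch below builds a co-authorship graph from the exported records; the file name and the semicolon-separated "Authors" column are assumptions about the Web of Science export, not details taken from the paper.

```python
import itertools
import pandas as pd
import networkx as nx

records = pd.read_csv("rough_sets_records.csv")          # assumed export file
g = nx.Graph()
for authors in records["Authors"].dropna():
    names = [a.strip() for a in authors.split(";")]
    g.add_edges_from(itertools.combinations(names, 2))   # co-authorship edges

# authors ranked by number of distinct collaborators
print(sorted(g.degree, key=lambda d: d[1], reverse=True)[:10])
```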

    Uniform and scalable SAT-sampling for configurable systems

    Several relevant analyses on configurable software systems remain intractable because they require examining vast and highly-constrained configuration spaces. Those analyses could be addressed through statistical inference, i.e., working with a much more tractable sample that later supports generalizing the results obtained to the entire configuration space. To make this possible, the laws of statistical inference impose an indispensable requirement: each member of the population must be equally likely to be included in the sample, i.e., the sampling process needs to be "uniform". Various SAT-samplers have been developed for generating uniform random samples at a reasonable computational cost. Unfortunately, there is a lack of experimental validation over large configuration models to show whether the samplers indeed produce genuine uniform samples or not. This paper (i) presents a new statistical test to verify to what extent samplers accomplish uniformity and (ii) reports the evaluation of four state-of-the-art samplers: Spur, QuickSampler, Unigen2, and Smarch. According to our experimental results, only Spur satisfies both scalability and uniformity.
    Funding: Ministerio de Ciencia, Innovación y Universidades (VITAL-3D DPI2016-77677-P; OPHELIA RTI2018-101204-B-C22); Comunidad Autónoma de Madrid (CAM RoboCity2030 S2013/MIT-2748); Agencia Estatal de Investigación (TIN2017-90644-RED).
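
    The abstract does not spell out the new statistical test, so the sketch below only illustrates the underlying question with a classical chi-square goodness-of-fit check; it requires enumerating every solution and is therefore limited to tiny spaces, unlike the test the paper proposes.

```python
from collections import Counter
from scipy.stats import chisquare

def check_uniformity(samples, all_solutions):
    """Compare observed sample frequencies against a perfectly uniform draw."""
    counts = Counter(samples)
    observed = [counts.get(s, 0) for s in all_solutions]
    expected = [len(samples) / len(all_solutions)] * len(all_solutions)
    return chisquare(observed, expected)   # a small p-value rejects uniformity

# toy usage: three solutions, sampler output given as tuples
samples = [(0, 1), (1, 0), (1, 1), (0, 1), (1, 1), (1, 0)] * 50
print(check_uniformity(samples, [(0, 1), (1, 0), (1, 1)]))
```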

    Uniform and scalable sampling of highly configurable systems

    Many analyses on configurable software systems are intractable when confronted with colossal and highly-constrained configuration spaces. These analyses could instead use statistical inference, where a tractable sample accurately predicts results for the entire space. To do so, the laws of statistical inference require each member of the population to be equally likely to be included in the sample, i.e., the sampling process needs to be “uniform”. SAT-samplers have been developed to generate uniform random samples at a reasonable computational cost. However, there is a lack of experimental validation over colossal spaces to show whether the samplers indeed produce uniform samples or not. This paper (i) proposes a new sampler named BDDSampler, (ii) presents a new statistical test to verify sampler uniformity, and (iii) reports the evaluation of BDDSampler and five other state-of-the-art samplers: KUS, QuickSampler, Smarch, Spur, and Unigen2. Our experimental results show that only BDDSampler satisfies both scalability and uniformity.
    Funding: Universidad Nacional de Educación a Distancia (UNED) (OPTIVAC 096-034091, 2021V/PUNED/008); Ministerio de Ciencia, Innovación y Universidades (RTI2018-101204-B-C22, OPHELIA); Comunidad Autónoma de Madrid (ROBOCITY2030-DIH-CM S2018/NMT-4331); Agencia Estatal de Investigación (TIN2017-90644-RED).
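
    The sketch below shows the count-and-descend idea that makes uniform sampling from a BDD cheap: model counts are computed recursively, and a sample walks down the diagram choosing each branch with probability proportional to its count. It is a minimal reconstruction of that general technique, not BDDSampler's actual code, and the `Node` representation is a simplification.

```python
import random
from dataclasses import dataclass
from functools import lru_cache
from typing import Optional

@dataclass(frozen=True)
class Node:
    var: Optional[int]             # None marks a terminal node
    low: Optional["Node"] = None   # child for var = False
    high: Optional["Node"] = None  # child for var = True
    value: bool = False            # truth value of a terminal

TRUE, FALSE = Node(None, value=True), Node(None, value=False)

@lru_cache(maxsize=None)
def count(node: Node, n_vars: int, level: int = 0) -> int:
    """Number of satisfying assignments below `node` over variables level..n_vars-1."""
    if node.var is None:
        return 2 ** (n_vars - level) if node.value else 0
    skipped = node.var - level     # variables the edge jumps over are free
    return 2 ** skipped * (count(node.low, n_vars, node.var + 1)
                           + count(node.high, n_vars, node.var + 1))

def sample(node: Node, n_vars: int) -> list:
    """Draw one satisfying assignment uniformly at random."""
    assignment, level = [], 0
    while True:
        if node.var is None:       # terminal: remaining variables are free
            return assignment + [random.random() < 0.5
                                  for _ in range(n_vars - level)]
        assignment += [random.random() < 0.5
                       for _ in range(node.var - level)]    # skipped variables
        lo = count(node.low, n_vars, node.var + 1)
        hi = count(node.high, n_vars, node.var + 1)
        take_high = random.random() < hi / (lo + hi)        # proportional choice
        assignment.append(take_high)
        node, level = (node.high if take_high else node.low), node.var + 1

# toy BDD for x0 AND x1 over three variables (x2 is unconstrained)
root = Node(0, low=FALSE, high=Node(1, low=FALSE, high=TRUE))
print(count(root, 3), sample(root, 3))
```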

    Monte Carlo Tree Search for Feature Model Analyses: a General Framework for Decision-Making

    The colossal solution spaces of most configurable systems make their exhaustive exploration intractable. Accordingly, relevant analyses remain open research problems. Analysis alternatives such as SAT solving or constraint programming exist; however, none of them has explored simulation-based methods. Monte Carlo-based decision making is a simulation-based method for dealing with colossal solution spaces using randomness. This paper proposes a conceptual framework that tackles several of those analyses using Monte Carlo methods, which have proven to succeed in vast search spaces (e.g., game theory). Our general framework is described formally, and its flexibility to cope with a diversity of analysis problems is discussed (e.g., finding defective configurations, feature model reverse engineering, or getting optimal-performance configurations). Additionally, we present a Python implementation of the framework that shows the feasibility of our proposal. With this contribution, we envision that different problems can be addressed using Monte Carlo simulations and that our framework can be used to advance the state of the art.
    Funding: Ministerio de Economía y Competitividad (RTI2018-101204-B-C22, OPHELIA).
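
    Since the abstract only names the framework, the sketch below is a generic Monte Carlo tree search loop over partial configurations rather than the paper's implementation; `FEATURES`, `is_valid`, and `reward` are hypothetical stand-ins for a real feature model and analysis objective.

```python
import math
import random

FEATURES = ["a", "b", "c", "d"]

def is_valid(config):            # stand-in constraint: feature a excludes feature b
    return not (config.get("a") and config.get("b"))

def reward(config):              # stand-in objective: maximize selected features
    return sum(config.values())

class TreeNode:
    def __init__(self, config, parent=None):
        self.config, self.parent = config, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def expand(self):            # one child per valid value of the next feature
        nxt = FEATURES[len(self.config)]
        for val in (True, False):
            child = dict(self.config, **{nxt: val})
            if is_valid(child):
                self.children.append(TreeNode(child, self))

    def best_child(self, c=1.4): # UCT: exploitation plus exploration bonus
        return max(self.children,
                   key=lambda n: n.value / (n.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)))

def rollout(config):             # simulation: complete the configuration at random
    for f in FEATURES[len(config):]:
        config = dict(config, **{f: random.random() < 0.5})
    return reward(config) if is_valid(config) else 0.0

def mcts(iterations=500):
    root = TreeNode({})
    for _ in range(iterations):
        node = root
        while node.children:                      # selection
            node = node.best_child()
        if len(node.config) < len(FEATURES):      # expansion
            node.expand()
            if node.children:
                node = random.choice(node.children)
        score = rollout(dict(node.config))        # simulation
        while node:                               # backpropagation
            node.visits += 1
            node.value += score
            node = node.parent
    return max(root.children, key=lambda n: n.visits).config  # best first decision

print(mcts())
```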

    Methods for identifying biomedical translation: a systematic review

    Translational medicine is an important area of biomedicine and has significantly facilitated the development of biomedical research. Despite its relevance, there is no consensus on how to evaluate its progress and impact. A systematic review was carried out to identify all the methods used to evaluate translational research. Seven methods meeting the established criteria were found, and their characteristics, advantages, and limitations were analyzed. They allow this type of evaluation to be performed in different ways. No relevant advantages were found among them; each presented its own specific limitations that need to be considered. Nevertheless, the Triangle of Biomedicine could be considered the most relevant method, considering the time since its publication and its usefulness. In conclusion, there is still no gold-standard method for evaluating biomedical translational research.
    Funding: This work has been supported by the Spanish State Research Agency through project PID2019-105381GA-I00/AEI/10.13039/501100011033 (iScience), grant CTS-115 (Tissue Engineering Research Group, University of Granada) from the Junta de Andalucia, Spain, a postdoctoral grant (RH-0145-2020) from the Andalusia Health System, and the EU FEDER ITI Grant for Cadiz Province PI-0032-2017. The present work is part of the Ph.D. thesis dissertation of Javier Padilla-Cabello.