How does predicate invention affect human comprehensibility?
During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility not so, later definitions in the 1990s, such as that of Mitchell, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present the results of experiments testing human comprehensibility of logic programs learned with and without predicate invention. Results indicate that comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols.
Stability of Diethyl Carbonate in the Presence of Acidic and Basic Solvents
Reducing carbon dioxide (CO2) emissions is an unavoidable measure in fighting anthropogenic climate change. Carbon Capture and Utilization (CCU) technologies are gaining increasing attention as an additional contributor to reaching the Paris Agreement goals. Giving CO2 value as a feedstock that can be refined into chemicals for industrial use is a crucial aspect of making these technologies attractive to broad industrial sectors. The synthesis of diethyl carbonate (DEC) is recognized as a promising prospect for the successful implementation of CCU. DEC is considered a fully biodegradable, low-toxicity solvent that can be synthesized from CO2 and ethanol in the presence of a catalyst, and it may be a non-toxic alternative to other solvents such as toluene or methyl isobutyl ketone (MIBK).
The optimization of DEC synthesis is one aspect under investigation today. Exploring DEC's applicability benefits from an extensive amount of data. Many applications of solvents involve the presence of acids and bases, so interest in DEC's behaviour in such environments is reasonable. The decomposition of DEC after contact with water and with different acids and bases, at room temperature and at the boiling point, was determined experimentally to characterize its chemical stability. Further, the influence of sodium chloride and of a cerium-based catalyst used in DEC synthesis was investigated.
A neural cognitive model of argumentation with application to legal inference and decision making
Formal models of argumentation have been investigated in several areas, from multi-agent systems and artificial intelligence (AI) to decision making, philosophy and law. In artificial intelligence, logic-based models have been the standard for the representation of argumentative reasoning. More recently, the standard logic-based models have been shown equivalent to standard connectionist models. This has created a new line of research where (i) neural networks can be used as a parallel computational model for argumentation and (ii) neural networks can be used to combine argumentation, quantitative reasoning and statistical learning. At the same time, non-standard logic models of argumentation started to emerge. In this paper, we propose a connectionist cognitive model of argumentation that accounts for both standard and non-standard forms of argumentation. The model is shown to be an adequate framework for dealing with standard and non-standard argumentation, including joint attacks, argument support, ordered attacks, disjunctive attacks, meta-level attacks, self-defeating attacks, argument accrual and uncertainty. We show that the neural cognitive approach offers an adequate way of modelling all of these different aspects of argumentation. We have applied the framework to the modelling of a public prosecution charging decision as part of a real legal decision making case study containing many of the above aspects of argumentation. The results show that the model can be a useful tool in the analysis of legal decision making, including the analysis of what-if questions and the analysis of alternative conclusions. The approach opens up two new perspectives in the short term: the use of neural networks for computing prevailing arguments efficiently through the propagation in parallel of neuronal activations, and the use of the same networks to evolve the structure of the argumentation network through learning (e.g. to learn the strength of arguments from data).
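A minimal sketch of the core computational idea (not the authors' model or code): arguments are treated as binary threshold units, attacks as inhibitory links, and prevailing arguments are computed by propagating activations in parallel until a fixpoint. The argument names and the attack relation below are hypothetical, and the sketch only covers plain attacks on an acyclic graph, not the support, accrual or uncertainty features described in the abstract.

arguments = ["charge", "alibi", "witness_unreliable"]            # hypothetical legal arguments
attacks = {("alibi", "charge"), ("witness_unreliable", "alibi")}  # (attacker, target) pairs

def prevailing(arguments, attacks, steps=50):
    # Each argument is a binary threshold unit; an attack is an inhibitory link.
    active = {a: 1 for a in arguments}          # start with every unit switched on
    for _ in range(steps):
        updated = {a: 0 if any(active[b] for (b, t) in attacks if t == a) else 1
                   for a in arguments}          # a unit fires unless an active attacker inhibits it
        if updated == active:                   # fixpoint: activations have stabilised
            break
        active = updated
    return [a for a, v in active.items() if v]

print(prevailing(arguments, attacks))           # ['charge', 'witness_unreliable']

In this hypothetical chain, "witness_unreliable" defeats "alibi", which in turn reinstates "charge"; the parallel update of all units per step is what makes the propagation-based computation of prevailing arguments efficient.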
Ultra-Strong Machine Learning: comprehensibility of programs learned with ILP
During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility not so, later definitions in the 1990s, such as Mitchell’s, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present two sets of experiments testing human comprehensibility of logic programs. In the first experiment we test human comprehensibility with and without predicate invention. Results indicate comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols. In the second experiment we directly test whether any state-of-the-art ILP systems are ultra-strong learners in Michie’s sense, and select the Metagol system for use in human trials. Results show participants were not able to learn the relational concept on their own from a set of examples but they were able to apply the relational definition provided by the ILP system correctly. This implies the existence of a class of relational concepts which are hard to acquire for humans, though easy to understand given an abstract explanation. We believe improved understanding of this class could have potential relevance to contexts involving human learning, teaching and verbal interaction.
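To make the role of anonymous invented symbols concrete, here is a hypothetical illustration rendered in Python rather than in the logic-program syntax shown to participants: a learned definition of a grandparent relation in which the invented helper carries an uninformative name (s1), the kind of symbol the experiments suggest hinders comprehension until it is renamed to something meaningful such as parent. The family facts are invented.

# Hypothetical rendering of a learned program with one invented predicate.
father = {("jake", "alice"), ("bill", "ted")}      # father(X, Y): X is the father of Y
mother = {("alice", "ted"), ("matilda", "alice")}  # mother(X, Y): X is the mother of Y

def s1(x, y):
    # Invented helper with an anonymous name; renaming it 'parent' aids comprehension.
    return (x, y) in father or (x, y) in mother

def grandparent(x, y):
    # Target relation defined in terms of the invented helper.
    people = {p for pair in father | mother for p in pair}
    return any(s1(x, z) and s1(z, y) for z in people)

print(grandparent("jake", "ted"))   # True: jake -> alice -> ted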
Computational Models for Prediction of Yeast Strain Potential for Winemaking from Phenotypic Profiles
Saccharomyces cerevisiae strains from diverse natural habitats harbour a vast amount of phenotypic diversity, driven by interactions between yeast and the respective environment. In grape juice fermentations, strains are exposed to a wide array of biotic and abiotic stressors, which may lead to strain selection and generate naturally arising strain diversity. Certain phenotypes are of particular interest for the winemaking industry and could be identified by screening a large number of different strains. The objective of the present work was to use data mining approaches to identify those phenotypic tests that are most useful to predict a strain's potential for winemaking. We constituted an S. cerevisiae collection comprising 172 strains of worldwide geographical origins or technological applications. Their phenotype was screened by considering 30 physiological traits that are important from an oenological point of view. Growth in the presence of potassium bisulphite, growth at 40 degrees C, and resistance to ethanol contributed most to strain variability, as shown by principal component analysis. In the hierarchical clustering of phenotypic profiles, the strains isolated from the same wines and vineyards were scattered throughout all clusters, whereas commercial winemaking strains tended to co-cluster. The Mann-Whitney test revealed significant associations between phenotypic results and a strain's technological application or origin. A naive Bayesian classifier identified 3 of the 30 phenotypic tests, growth in iprodione (0.05 mg/mL), cycloheximide (0.1 µg/mL) and potassium bisulphite (150 mg/mL), as providing the most information for the assignment of a strain to the group of commercial strains. The probability of a strain being assigned to this group was 27% using the entire phenotypic profile and increased to 95% when only results from the three tests were considered. Results show the usefulness of computational approaches to simplify strain selection procedures.
Ines Mendes and Ricardo Franco-Duarte are recipients of fellowships from the Portuguese Science Foundation, FCT (SFRH/BD/74798/2010 and SFRH/BD/48591/2008, respectively), and Joao Drumonde-Neves is a recipient of a fellowship from the Azores government (M3.1.2/F/006/2008 (DRCT)). Financial support was obtained from FEDER funds through the program COMPETE and by national funds through FCT via the projects FCOMP-01-0124-008775 (PTDC/AGR-ALI/103392/2008) and PTDC/AGR-ALI/121062/2010. Lan Umek and Blaz Zupan acknowledge financial support from the Slovene Research Agency (P2-0209). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
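A minimal sketch of the kind of classifier described (assuming scikit-learn; the paper's own naive Bayes implementation and data are not reproduced here): binary outcomes of the three most informative tests are used to assign a strain to the commercial winemaking group. All phenotype values and labels below are invented placeholders.

# Sketch: naive Bayes assignment of strains to the commercial group from the
# three most informative phenotypic tests. Data are invented placeholders.
from sklearn.naive_bayes import BernoulliNB

# Columns: growth in iprodione, cycloheximide, potassium bisulphite (1 = growth)
X = [[1, 0, 1],
     [1, 0, 1],
     [0, 1, 0],
     [0, 1, 1],
     [1, 0, 0]]
y = [1, 1, 0, 0, 1]   # 1 = commercial winemaking strain, 0 = other origin/application

clf = BernoulliNB().fit(X, y)
new_strain = [[1, 0, 1]]
print(clf.predict(new_strain))         # predicted group for an unseen strain
print(clf.predict_proba(new_strain))   # posterior probability of each group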
Unsupervised assessment of microarray data quality using a Gaussian mixture model
Background: Quality assessment of microarray data is an important and often challenging aspect of gene expression analysis. This task frequently involves the examination of a variety of summary statistics and diagnostic plots. The interpretation of these diagnostics is often subjective, and generally requires careful expert scrutiny.
Results: We show how an unsupervised classification technique based on the Expectation-Maximization (EM) algorithm and the naïve Bayes model can be used to automate microarray quality assessment. The method is flexible and can be easily adapted to accommodate alternate quality statistics and platforms. We evaluate our approach using Affymetrix 3' gene expression and exon arrays and compare the performance of this method to a similar supervised approach.
Conclusion: This research illustrates the efficacy of an unsupervised classification approach for the purpose of automated microarray data quality assessment. Since our approach requires only unannotated training data, it is easy to customize and to keep up-to-date as technology evolves. In contrast to other "black box" classification systems, this method also allows for intuitive explanations.
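As a rough illustration of the approach (a sketch assuming scikit-learn, not the authors' code): per-array quality summary statistics are modelled with a two-component Gaussian mixture fitted by EM, using diagonal covariances to mirror the naive Bayes independence assumption, and arrays falling in the lower-quality component are flagged. The statistics and values below are invented placeholders.

# Sketch: unsupervised flagging of suspect arrays with a two-component
# Gaussian mixture fitted by EM. Feature values are invented placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

# One row per array, e.g. [percent present calls, 3'/5' ratio, average background]
stats = np.array([[52.1, 1.1, 60.0],
                  [48.7, 1.3, 72.0],
                  [30.2, 3.5, 190.0],   # a plausibly poor-quality array
                  [55.4, 1.0, 58.0],
                  [50.9, 1.2, 65.0]])

# Diagonal covariances correspond to the naive Bayes (feature independence) assumption.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(stats)
labels = gmm.predict(stats)

# Treat the component with the higher mean background as the poor-quality class.
poor = int(np.argmax(gmm.means_[:, 2]))
print(np.where(labels == poor)[0])      # indices of arrays flagged for expert review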