
    Analogical Proportions and Multiple-Valued Logics

    Recently, a propositional logic modeling of analogical proportions, i.e., statements of the form “A is to B as C is to D”, has been proposed, and has subsequently led to the introduction of new related proportions in a general setting. This framework is well suited to analogical reasoning and classification tasks about situations described by means of Boolean properties. There is a clear need to extend this approach to deal with the cases where i) properties are gradual; ii) properties may not apply to some situations; iii) the truth status of a property is unknown. The paper investigates the appropriate extension in each of these three cases.
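
    The propositional modeling in question can be made concrete in a few lines of Python. This is a minimal sketch (the function name analogy is ours): it encodes the usual logical definition a : b :: c : d ⇔ (a ∧ ¬b ≡ c ∧ ¬d) ∧ (¬a ∧ b ≡ ¬c ∧ d) and checks exhaustively that exactly 6 of the 16 Boolean 4-tuples satisfy it.

        from itertools import product

        def analogy(a, b, c, d):
            # "a is to b as c is to d", propositional reading:
            # (a ∧ ¬b ≡ c ∧ ¬d) ∧ (¬a ∧ b ≡ ¬c ∧ d)
            return ((a and not b) == (c and not d)) and \
                   ((not a and b) == (not c and d))

        valid = [t for t in product([0, 1], repeat=4) if analogy(*t)]
        print(valid)  # the 6 valid patterns: 0000, 0011, 0101, 1010, 1100, 1111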

    From Analogical Proportion to Logical Proportions

    Given a 4-tuple of Boolean variables (a, b, c, d), logical proportions are modeled by a pair of equivalences relating similarity indicators (a∧b and ¬a∧¬b), or dissimilarity indicators (a∧¬b and ¬a∧b), pertaining to the pair (a, b) to the ones associated with the pair (c, d). There are 120 semantically distinct logical proportions. One of them models the analogical proportion, which corresponds to a statement of the form “a is to b as c is to d”. The paper inventories the whole set of logical proportions by dividing it into five subfamilies according to what they express, and then identifies the proportions that satisfy noticeable properties such as full identity (the pair of equivalences defining the proportion holds true for the 4-tuple (a, a, a, a)), symmetry (if the proportion holds for (a, b, c, d), it also holds for (c, d, a, b)), and code independency (if the proportion holds for (a, b, c, d), it also holds for the negations (¬a, ¬b, ¬c, ¬d)). It appears that only four proportions (including the analogical proportion) are homogeneous in the sense that they use only one type of indicator (either similarity or dissimilarity) in their definition. Due to their specific patterns, they have a particular cognitive appeal, and as such are studied in greater detail. Finally, the paper provides a discussion of the other existing works on analogical proportions.
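
    The count of 120 can be checked mechanically. The sketch below (our code, not the paper's) enumerates the four indicators on each pair, forms all C(16, 2) = 120 unordered pairs of equivalences, and collects their truth tables over the 16 Boolean valuations of (a, b, c, d) to confirm that they are pairwise distinct.

        from itertools import product, combinations

        # the four indicators of a pair (x, y):
        # similarity: x ∧ y, ¬x ∧ ¬y; dissimilarity: x ∧ ¬y, ¬x ∧ y
        INDICATORS = [
            lambda x, y: x and y,
            lambda x, y: not x and not y,
            lambda x, y: x and not y,
            lambda x, y: not x and y,
        ]

        def truth_table(eq1, eq2):
            # truth table over (a, b, c, d) of the conjunction of two
            # equivalences I(a, b) ≡ J(c, d)
            (i, j), (k, l) = eq1, eq2
            return tuple(
                (INDICATORS[i](a, b) == INDICATORS[j](c, d))
                and (INDICATORS[k](a, b) == INDICATORS[l](c, d))
                for a, b, c, d in product([False, True], repeat=4)
            )

        equivalences = list(product(range(4), repeat=2))  # 4 x 4 = 16
        tables = {truth_table(e1, e2)
                  for e1, e2 in combinations(equivalences, 2)}
        print(len(tables))  # 120: all C(16, 2) pairs are semantically distinct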

    Genotype at the P554L Variant of the Hexose-6 Phosphate Dehydrogenase Gene Is Associated with Carotid Intima-Medial Thickness

    Objective: The combined thickness of the intima and media of the carotid artery (carotid intima-medial thickness, CIMT) is associated with cardiovascular disease and stroke. Previous studies indicate that CIMT is a significantly heritable phenotype, but the responsible genes are largely unknown. Hexose-6 phosphate dehydrogenase (H6PDH) is a microsomal enzyme whose activity regulates corticosteroid metabolism in the liver and adipose tissue; variability in measures of corticosteroid metabolism within the normal range has been associated with risk factors for cardiovascular disease. We performed a genetic association study in 854 members of 224 families to assess the relationship between polymorphisms in the gene coding for hexose-6 phosphate dehydrogenase (H6PD) and CIMT. Methods: Families were ascertained via a hypertensive proband. CIMT was measured using B-mode ultrasound. Single nucleotide polymorphisms (SNPs) tagging common variation in the H6PD gene were genotyped. Association was assessed following adjustment for significant covariates, including "classical" cardiovascular risk factors. Functional studies to determine the effect of particular SNPs on H6PDH were performed. Results: There was evidence of association between the single nucleotide polymorphism rs17368528 in exon five of the H6PD gene, which encodes an amino-acid change from proline to leucine in the H6PDH protein, and mean CIMT (p = 0.00065). Genotype was associated with a 5% (or 0.04 mm) higher mean CIMT measurement per allele, and accounted for 2% of the population variability in the phenotype. Conclusions: Our results suggest a novel role for the H6PD gene in atherosclerosis susceptibility.
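
    For readers unfamiliar with the "per allele" phrasing, the reported effect corresponds to an additive genotype coding (0, 1 or 2 copies of the variant) in a regression adjusted for covariates. The Python sketch below illustrates that coding on simulated data only; it is not the authors' analysis pipeline, which must also account for the family structure of the sample (the plain least squares used here for brevity ignores relatedness).

        import numpy as np

        rng = np.random.default_rng(0)
        n = 854                               # sample size from the abstract
        g = rng.integers(0, 3, size=n)        # 0, 1 or 2 copies of the allele
        age = rng.normal(50.0, 10.0, size=n)  # stand-in covariate (illustrative)
        # simulated phenotype with a 0.04 mm per-allele effect
        cimt = 0.75 + 0.04 * g + 0.002 * age + rng.normal(0.0, 0.1, size=n)

        X = np.column_stack([np.ones(n), g, age])  # intercept, genotype, covariate
        beta, *_ = np.linalg.lstsq(X, cimt, rcond=None)
        print(round(beta[1], 3))              # recovers ≈ 0.04 on this simulation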

    Analogical Classification: A Rule-Based View

    Analogical proportion-based classification methods were introduced a few years ago. They look in the training set for suitable triples of examples that are in analogical proportion with the item to be classified on a maximal set of attributes. This can be viewed as a lazy classification technique since, like k-NN algorithms, no static model is built from the set of examples. The striking results (at least in terms of accuracy) that have been obtained with such techniques are not easy to justify from a theoretical viewpoint. In this paper, we show that there exists an alternative way to build analogical proportion-based learners, by statically building a set of inference rules during a preliminary training step. This gives rise to a new classification algorithm that deals with pairs rather than triples of examples. Experiments on classical benchmarks from the UC Irvine repository are reported, showing that we obtain comparable results.
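
    For concreteness, here is a minimal sketch of the triple-based lazy technique the paper takes as its starting point (our illustrative code, not the paper's rule-based algorithm): every training triple whose label equation is solvable votes with the number of attributes on which it is in analogical proportion with the query, and the best-scoring triple determines the predicted class.

        from itertools import product

        def analogy(a, b, c, d):
            # Boolean analogical proportion a : b :: c : d
            return ((a and not b) == (c and not d)) and \
                   ((not a and b) == (not c and d))

        def solve(a, b, c):
            # the unique d (if any) such that a : b :: c : d holds
            sols = [d for d in (False, True) if analogy(a, b, c, d)]
            return sols[0] if sols else None

        def classify(train, query):
            # train: list of (tuple of Boolean attributes, Boolean label);
            # lazy brute force over ordered triples, as in the original scheme
            best_score, best_label = -1, None
            for (xa, la), (xb, lb), (xc, lc) in product(train, repeat=3):
                label = solve(la, lb, lc)           # label equation solvable?
                if label is None:
                    continue
                score = sum(analogy(p, q, r, s)     # attributes in proportion
                            for p, q, r, s in zip(xa, xb, xc, query))
                if score > best_score:
                    best_score, best_label = score, label
            return best_label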

    Logical definition of analogical proportion and its fuzzy extensions

    An analogical proportion is a statement of the form "A is to B as C is to D". In a logical setting, the items A, B, C and D are Boolean vectors. This notion is at the core of analogical reasoning. This paper proposes a sound definition of the analogical proportion, based on a logical expression that holds true for each vector component if and only if the analogical proportion holds true. The analogical equation, where D is unknown, is also discussed. The logical expression of the analogical proportion has several equivalent forms, which may lead to distinct extensions when the vector components take their values in the unit interval, depending on the choice of the multiple-valued connectives. Applications to case-based and approximate reasoning, and to learning, are outlined. ©2008 IEEE
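
    As an illustration of how the choice of connectives shapes the extension, the sketch below (our code) instantiates the logical expression with Łukasiewicz connectives: ¬x = 1 − x, x ∧ y = max(0, x + y − 1), x ≡ y = 1 − |x − y|, with the outer conjunction taken as min. This is one possible graded reading among those such papers compare; it reduces to the Boolean definition on {0, 1}.

        def luk_and(x, y):
            # Łukasiewicz t-norm
            return max(0.0, x + y - 1.0)

        def luk_equiv(x, y):
            # Łukasiewicz biconditional
            return 1.0 - abs(x - y)

        def graded_analogy(a, b, c, d):
            # graded reading of (a ∧ ¬b ≡ c ∧ ¬d) ∧ (¬a ∧ b ≡ ¬c ∧ d),
            # with ¬x = 1 - x and the outer conjunction taken as min
            return min(
                luk_equiv(luk_and(a, 1.0 - b), luk_and(c, 1.0 - d)),
                luk_equiv(luk_and(1.0 - a, b), luk_and(1.0 - c, d)),
            )

        print(graded_analogy(1.0, 0.0, 1.0, 0.0))  # 1.0, as in the Boolean case
        print(graded_analogy(0.9, 0.2, 0.4, 0.1))  # ≈ 0.6, a graded degree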

    What is the Search Space of the Regular Inference?

    This paper revisits the theory of regular inference, in particular by extending the definition of structural completeness of a positive sample and by proving two basic theorems. This framework makes it possible to state the regular inference problem as a search through a Boolean lattice built from the positive sample. Several properties of the search space are studied and generalization criteria are discussed. In this framework, the concept of a border set is introduced, that is, the set of the most general solutions excluding a negative sample. Finally, the complexity of regular language identification is discussed from both a theoretical and a practical point of view. 1 Introduction Regular inference is the process of learning a regular language from a set of examples, consisting of a positive sample, i.e. a finite subset of a regular language. A negative sample, i.e. a finite set of strings not belonging to this language, may also be available. This problem has been studied as early as th..
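
    To make the search space concrete: in regular inference the lattice is standardly built on top of the prefix tree acceptor (PTA) of the positive sample, whose quotients under state merging are the candidate generalizations. A minimal sketch of the PTA construction follows (our code; the dictionary-based automaton representation is an arbitrary choice).

        def build_pta(positive_sample):
            # prefix tree acceptor: one state per prefix of the sample;
            # the candidate automata of the lattice are its quotients
            # under state merging
            states, delta, finals = {""}, {}, set()
            for word in positive_sample:
                for i, symbol in enumerate(word):
                    src, dst = word[:i], word[:i + 1]
                    states.add(dst)
                    delta[(src, symbol)] = dst
                finals.add(word)
            return states, delta, finals

        states, delta, finals = build_pta(["a", "ab", "abb"])
        print(sorted(states))  # ['', 'a', 'ab', 'abb']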

