974 research outputs found

    Categorization of interestingness measures for knowledge extraction

    Finding interesting association rules is an important and active research field in data mining. The algorithms of the Apriori family are based on two rule extraction measures, support and confidence. Although these two measures have the virtue of being algorithmically fast, they generate a prohibitive number of rules, most of which are redundant and irrelevant. It is therefore necessary to use further measures that filter out uninteresting rules. Several survey studies have since been carried out on interestingness measures from various points of view. Previous work has identified "good" properties of rule extraction measures and assessed these properties on 61 measures. The purpose of this paper is twofold: first, to extend the number of measures and properties studied, in addition to formalizing the properties proposed in the literature; second, in the light of this formal study, to categorize the studied measures. The paper thus identifies categories of measures in order to help users efficiently select one or more appropriate measures during the knowledge extraction process. Evaluating the properties on the 61 measures enabled us to identify 7 classes of measures, obtained using two different clustering techniques. Comment: 34 pages, 4 figures
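    The two Apriori measures named in the abstract can be sketched directly. This is a minimal illustration on made-up transaction data, not code from the paper:

```python
# Support and confidence, the two rule extraction measures of the
# Apriori family. Transactions and the example rule are illustrative.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimate of P(consequent | antecedent) from the transactions."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

# Rule {bread} -> {milk}
print(support({"bread", "milk"}, transactions))        # 0.5
print(confidence({"bread"}, {"milk"}, transactions))   # 2/3
```

    Every candidate rule passes these two thresholds in Apriori, which is exactly why so many redundant rules survive and further filtering measures are needed.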

    A foundation for machine learning in design

    This paper presents a formalism for considering the issues of learning in design. A foundation for machine learning in design (MLinD) is defined so as to provide answers to basic questions on learning in design, such as "What types of knowledge can be learnt?", "How does learning occur?", and "When does learning occur?". Five main elements of MLinD are presented: the input knowledge, knowledge transformers, output knowledge, goals/reasons for learning, and learning triggers. Using this foundation, published systems in MLinD were reviewed; the systematic review provides a basis for validating the presented foundation. The paper concludes that considerable work remains to be carried out in order to fully formalize the foundation of MLinD.

    Adaptive content mapping for internet navigation

    The Internet, the biggest human library ever assembled, keeps on growing. Although all kinds of information carriers (e.g. audio/video/hybrid file formats) are available, text-based documents dominate. It is estimated that about 80% of all information stored electronically worldwide exists in (or can be converted into) text form. More and more, documents of all kinds are generated by means of a text processing system and are therefore available electronically. Nowadays, many printed journals are also published online and may eventually cease to appear in print at all. This development has many convincing advantages: the documents are available faster (cf. prepress services) and cheaper, they can be searched more easily, their physical storage needs only a fraction of the space previously necessary, and the medium does not age. For most people, fast and easy access is the most interesting feature of the new age; computer-aided search for specific documents or Web pages is becoming the basic tool for information-oriented work. But this tool has problems. The current keyword-based search engines available on the Internet are not really appropriate for such a task: either far too many documents matching the specified keywords are presented, or none at all. The problem lies in the fact that it is often very difficult to choose appropriate terms describing the desired topic in the first place. This contribution discusses the current state-of-the-art techniques in content-based searching (along with common visualization/browsing approaches) and proposes a particular adaptive solution for intuitive Internet document navigation, which not only enables users to provide full texts instead of manually selected keywords (if available), but also allows them to explore the whole database.
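    One common content-based alternative to plain keyword matching, in the spirit of the full-text querying described above, is TF-IDF weighting with cosine similarity. This is a generic sketch, not the paper's adaptive mapping method; the documents and query are invented:

```python
import math
from collections import Counter

# Toy corpus and a full-text query (illustrative only).
docs = [
    "neural networks for document classification",
    "keyword based search engines on the internet",
    "adaptive navigation of internet documents",
]
query = "internet document navigation"

def tf_idf_vectors(texts):
    """One sparse TF-IDF vector (dict word -> weight) per text."""
    tokenized = [t.split() for t in texts]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(texts)
    return [{w: c * math.log(n / df[w]) for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Vectorize query and corpus together so they share one IDF table.
vecs = tf_idf_vectors(docs + [query])
qv, dvs = vecs[-1], vecs[:-1]
best = max(range(len(docs)), key=lambda i: cosine(qv, dvs[i]))
print(docs[best])
```

    Ranking by similarity to a whole text, rather than requiring exact keyword hits, is what lets a user supply a full document as the query.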

    Comparative study of human age estimation based on hand-crafted and deep face features

    In the past few years, human facial age estimation has drawn a lot of attention in the computer vision and pattern recognition communities because of its important applications in age-based image retrieval, security control and surveillance, biometrics, human-computer interaction (HCI) and social robotics. In connection with these investigations, estimating the age of a person from the numerical analysis of his/her face image is a relatively new topic. Deep neural networks have also produced the best results in several areas, including problems such as image classification and age estimation. In this work we use three hand-crafted features as well as five deep features obtained from pre-trained deep convolutional neural networks, and present a comparative study of the age estimation results obtained with these features.

    Conceptual roles of data in program: analyses and applications

    Program comprehension is the prerequisite for many software evolution and maintenance tasks. Current research falls short in addressing how to build tools that can use domain-specific knowledge to extract valuable information for facilitating program comprehension. Such capabilities are critical for working with large and complex programs, where comprehension is often not possible without the help of domain-specific knowledge. Our research advances the state of the art in program analysis techniques based on domain-specific knowledge. Program artifacts, including variables and methods, are carriers of domain concepts that provide the key to understanding programs. Our program analysis is directed by domain knowledge stored as domain-specific rules; it is iterative and interactive, based on flexible inference rules and interchangeable, extensible information storage. We designed and developed a comprehensive software environment, SeeCORE, based on our knowledge-centric analysis methodology. The SeeCORE tool provides multiple views and abstractions to assist in understanding complex programs. Case studies demonstrate the effectiveness of our method, and we demonstrate the flexibility of our approach by analyzing two legacy programs in distinct domains.

    Use of supporting software tool for decision-making during low-probability severe accident management at nuclear power plants

    In the project NARSIS (New Approach to Reactor Safety ImprovementS), possible advances in the safety assessment of nuclear power plants (NPPs) were considered, including possible improvements in the management of low-probability accident scenarios. As part of this work, a supporting software tool for decision-making during severe accident management was developed. The tool, named Severa, is a prototype demonstration-level decision support system intended for use by the technical support center (TSC) while managing a severe accident, or for training purposes. Severa interprets, stores and monitors key physical measurements during accident sequence progression. It assesses the current state of the physical barriers: core, reactor coolant system, reactor pressure vessel and containment. The tool predicts accident progression in the case that no action is taken by the TSC, and provides a list of possible recovery strategies and courses of action, addressing their applicability and feasibility in the given situation. For each course of action, Severa assesses the consequences in terms of the probability of containment failure and the estimated time window for failure. Finally, Severa evaluates and ranks the feasible actions, providing recommendations for the TSC. The verification and validation of Severa performed within the project is also described in this paper. Although largely simplified in its current state, Severa successfully demonstrated its potential for supporting accident management and pointed toward the next steps needed for further advancements in this field.

    Sensitivity Analysis Method to Address User Disparities in the Analytic Hierarchy Process

    Decision makers often face complex problems that can seldom be addressed well without structured analytical models. Mathematical models have been developed to streamline and facilitate decision-making activities, and among these, the Analytic Hierarchy Process (AHP) is one of the most widely used multi-criteria decision analysis methods. While AHP has been thoroughly researched and applied, the method still shows limitations in addressing user profile disparities. A novel sensitivity analysis method based on local partial derivatives is presented here to address these limitations. This new methodology informs AHP users of which pairwise comparisons most impact the derived weights and the ranking of alternatives. The method can also be applied to decision processes that require aggregating results from several users, as it highlights which individuals most critically impact the aggregated group results while also enabling a focus on the inputs that drive the final ordering of alternatives. An aerospace design and engineering example that requires group decision making is presented to demonstrate and validate the proposed methodology.
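    The AHP machinery the abstract builds on can be sketched briefly: priority weights come from the principal eigenvector of a reciprocal pairwise comparison matrix, and the sensitivity of a weight to one comparison can be approximated by a finite difference. The 3x3 matrix is illustrative, and the finite-difference step stands in for the paper's exact partial-derivative formulas, which are not reproduced here:

```python
# AHP priority weights via power iteration on a positive reciprocal
# pairwise comparison matrix, plus a finite-difference approximation
# of the local sensitivity of the weights to one comparison entry.

def priority_weights(m, iters=100):
    """Normalized principal eigenvector of a positive matrix."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

def perturb(m, i, j, eps):
    """Copy of m with entry (i, j) nudged and (j, i) kept reciprocal."""
    p = [row[:] for row in m]
    p[i][j] += eps
    p[j][i] = 1.0 / p[i][j]
    return p

# Illustrative comparisons for three criteria (Saaty-style scale).
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]

w = priority_weights(A)
eps = 1e-6
dw = [(b - a) / eps
      for a, b in zip(w, priority_weights(perturb(A, 0, 1, eps)))]
print([round(x, 3) for x in w])   # weights sum to 1
print([round(x, 3) for x in dw])  # sensitivity of each weight to A[0][1]
```

    Comparing the magnitudes of such derivatives across all pairwise entries is what identifies the comparisons (or, in a group setting, the users) that most influence the final ranking.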