
    Automatically Discovering Hidden Transformation Chaining Constraints

    Model transformations operate on models conforming to precisely defined metamodels. Consequently, it often seems relatively easy to chain them: the output of one transformation may be given as input to a second if their metamodels match. However, this simple rule has obvious limitations; for instance, a transformation may only use a subset of a metamodel. Chaining transformations appropriately therefore requires more information. We present here an approach that automatically discovers more detailed information about actual chaining constraints by statically analyzing transformations. The objective is to provide developers who decide to chain transformations with more data on which to base their choices. This approach has been successfully applied to a library of endogenous transformations, which all have the same source and target metamodel but have some hidden chaining constraints. In such a case, the simple metamodel-matching rule given above does not provide any useful information.
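The core idea, that metamodel matching is too coarse and the elements a transformation actually reads and writes must be compared, can be illustrated with a minimal sketch. All names and footprints below are hypothetical, standing in for what static analysis would discover:

```python
# Hypothetical sketch: static analysis gives each transformation an
# input footprint (metamodel elements it reads) and an output
# footprint (elements it writes), which are finer than the metamodel.

def can_chain(t1_writes, t2_reads):
    """t1's output can feed t2 only if every element t2 reads is
    actually produced by t1, not merely present in the metamodel."""
    return t2_reads <= t1_writes

# Two endogenous transformations over the same (invented) metamodel:
flatten = {"reads": {"Class", "Attribute", "Inheritance"},
           "writes": {"Class", "Attribute"}}
rename  = {"reads": {"Class"},
           "writes": {"Class", "Attribute"}}

print(can_chain(flatten["writes"], rename["reads"]))   # flatten -> rename is safe
print(can_chain(rename["writes"], flatten["reads"]))   # rename -> flatten is not
```

Both transformations share source and target metamodels, yet only one chaining order is valid, which is exactly the kind of hidden constraint the abstract describes.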

    Compiling knowledge-based systems from KEE to Ada

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. We describe here the first efforts to develop a system for compiling KBSs developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and mechanisms for translating knowledge structures, rules, and Lisp code into Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in the early work, and inspiration for future system development.

    Learning sequences of rules using classifier systems with tags

    IEEE International Conference on Systems, Man, and Cybernetics, Tokyo, 12-15 October 1999. The objective of this paper was to obtain an encoding structure that allows the genetic evolution of rules in such a manner that the number of rules and the relationships between them in a classifier system (CS) are learnt during the evolution process. For this purpose, an area that allows the definition of rule groups has been added to the condition and message parts of the encoded rules. This area is called the internal tag. The term was coined because the system has some similarities with natural processes in certain animal species, where the existence of tags allows individuals to communicate and recognize each other. Such a CS is called a tag classifier system (TCS). The TCS has been tested on the game of draughts and compared with the classical CS. The results show an improvement in CS performance.
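The tag mechanism can be sketched in miniature. The encoding below is invented for illustration and is not the paper's actual format: rules carry a tag prefix in their condition and message parts, so only rules sharing a tag exchange messages, letting rule groups emerge through evolution:

```python
# Illustrative sketch (encoding invented): classifier rules over the
# ternary alphabet {0, 1, #}, where the first bits of each condition
# and message act as an internal tag that groups related rules.

def matches(condition, message):
    """Standard CS matching: '#' is a wildcard, other bits must agree."""
    return all(c == "#" or c == m for c, m in zip(condition, message))

# condition/message format: 2 tag bits followed by 3 payload bits
rules = [
    {"tag": "01", "condition": "01##1", "message": "01110"},
    {"tag": "10", "condition": "10#01", "message": "10001"},
]

def fire(rules, message):
    """Post the message of every rule whose condition (tag included)
    matches the incoming message; tags confine firing to one group."""
    return [r["message"] for r in rules if matches(r["condition"], message)]

print(fire(rules, "01011"))  # only the tag-01 rule responds
```

Because a matching condition must agree on the tag bits, messages from one rule group cannot trigger rules of another, which is how the tag area induces learnable rule sequences.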

    Qualifying chains of transformation with coverage based evaluation criteria

    Abstract. In Model-Driven Engineering (MDE), the development of complex and large transformations can benefit from the reuse of smaller ones that can be composed according to user requirements. Composing transformations is a complex problem: typically, smaller transformations are discovered and selected by developers from different, heterogeneous sources, and the identified transformations are then chained by means of manual and error-prone composition processes. When our approach proposes one or more transformation chains to the user, it is difficult for the user to choose one path over another without considering the semantic properties of the transformations. In this paper we propose an approach to classify the suitable chains, proposed to the user according to his requirements, with respect to their coverage of the metamodels involved in the transformations. Based on this coverage value, we are able to qualify the transformation chains with an evaluation criterion that gives an indication of how much information one transformation chain covers relative to another.
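A coverage-based ranking of this kind can be sketched as follows. The metric details are assumed, not taken from the paper: each transformation is reduced to the set of metamodel elements it touches, and a chain is scored by the fraction of the metamodel covered:

```python
# Hedged sketch (metric details assumed): score each candidate chain
# by the fraction of metamodel elements its transformations touch.

def coverage(chain, metamodel):
    """Fraction of metamodel elements used by at least one
    transformation in the chain."""
    used = set().union(*chain)
    return len(used & metamodel) / len(metamodel)

metamodel = {"Package", "Class", "Attribute", "Reference", "Operation"}
chain_a = [{"Class", "Attribute"}, {"Class", "Reference"}]   # two steps
chain_b = [{"Class"}, {"Class", "Attribute"}]                # two steps

ranked = sorted([("A", chain_a), ("B", chain_b)],
                key=lambda nc: coverage(nc[1], metamodel), reverse=True)
print([name for name, _ in ranked])  # chain A covers more of the metamodel
```

With these invented footprints, chain A covers 3 of 5 elements (0.6) and chain B only 2 of 5 (0.4), so A ranks first; this is the kind of quantitative signal the evaluation criterion would give the user.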

    Flavor and Collider Signatures of Asymmetric Dark Matter

    We consider flavor constraints on, and collider signatures of, Asymmetric Dark Matter (ADM) via higher dimension operators. In the supersymmetric models we consider, R-parity violating (RPV) operators carrying B-L interact with n dark matter (DM) particles X through an interaction of the form W = X^n O_{B-L}, where O_{B-L} = q l d^c, u^c d^c d^c, l l e^c. This interaction ensures that the lightest ordinary supersymmetric particle (LOSP) is unstable to decay into the X sector, leading to a higher multiplicity of final state particles and reduced missing energy at a collider. Flavor-violating processes place constraints on the scale of the higher dimension operator, impacting whether the LOSP decays promptly. While the strongest limitations on RPV from n-\bar{n} oscillations and proton decay do not apply to ADM, we analyze the constraints from meson mixing, mu-e conversion, mu -> 3 e and b -> s l^+ l^-. We show that these flavor constraints, even in the absence of flavor symmetries, allow parameter space for prompt decay to the X sector, with additional jets and leptons in exotic flavor combinations. We study the constraints from existing 8 TeV LHC SUSY searches with (i) 2-6 jets plus missing energy, and (ii) 1-2 leptons, 3-6 jets plus missing energy, comparing the constraints on ADM-extended supersymmetry with the usual supersymmetric simplified models. Comment: 63 pages, 26 figures, 10 tables, revtex

    CBCV: A CAD-based vision system

    The CBCV system has been developed to provide the capability of automatically synthesizing executable vision modules for various functions such as object recognition, pose determination, and quality inspection. A wide range of tools exists for both 2D and 3D vision, including not only software capabilities for various vision algorithms, but also a high-level frame-based system for describing knowledge about applications and the techniques for solving particular problems.

    Deploying artificial intelligence techniques in loan application processing

    The granting of loans by a financial institution is one of the important decisions that require substantial care. The institution usually employs loan officers to make credit decisions or recommendations on its behalf. These officers are given some hard rules for evaluating the worthiness of each application. Some researchers recognize that the capability of humans to judge the worthiness of a loan is rather poor. Since business data warehouses store historical data from previous applications, it is likely that knowledge hidden in this data may be useful in decision making. Unfortunately, the task of discovering hidden information and useful relationships in data is difficult for humans. This is due to the fact that the data to be examined is very large and the nature of the relationships within the data is not obvious. To this end, Artificial Intelligence (AI) techniques can assist the decision maker in making decisions regarding loan applications. AI provides a variety of useful tools for discovering non-obvious relationships in historical data, while ensuring that the relationships discovered will generalize to future data. This knowledge is important and can be used by the loan officer in determining whether to accept or reject an application. This study suggests a loan application processing system that integrates two components of computer-based information systems, namely an office automation system and an AI system comprising an intelligent decision support system and a knowledge-based system. In essence, the use of such a system can help promote an organization as an efficient and effective one with a competitive advantage.
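The knowledge-discovery step described above can be illustrated with a toy sketch. The data, attributes, and threshold rule are entirely invented; a real system would apply far richer AI techniques to warehouse-scale data:

```python
# Toy illustration (data and rule form invented): mine historical
# applications for a simple accept/reject rule, in the spirit of
# discovering non-obvious relationships in past loan outcomes.

history = [  # (income, debt_ratio, repaid)
    (52000, 0.20, True),  (48000, 0.55, False),
    (61000, 0.30, True),  (30000, 0.60, False),
    (45000, 0.25, True),  (38000, 0.50, False),
]

def learn_threshold(history):
    """Pick the debt-ratio cutoff that best separates repaid from
    defaulted applications in the historical data."""
    candidates = sorted({d for _, d, _ in history})
    def accuracy(t):
        # predict "will repay" when debt_ratio <= t; count correct cases
        return sum((d <= t) == repaid for _, d, repaid in history)
    return max(candidates, key=accuracy)

cutoff = learn_threshold(history)
print(cutoff)  # the best-separating debt ratio found in the toy data
```

On this invented data the learnt cutoff classifies every historical application correctly; the point is that such regularities are hidden in volume and only become usable once extracted automatically.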

    A Comparative Study of Text Summarization on E-mail Data Using Unsupervised Learning Approaches

    Over the last few years, email has met with enormous popularity. People send and receive many messages every day, connect with colleagues and friends, and share files and information. Unfortunately, email overload has become a personal problem for users as well as a financial concern for businesses. Accessing an ever-increasing number of lengthy emails has become a major concern for many users, and email text summarization is a promising approach to this challenge. Email messages are general-domain text, unstructured and not always well developed syntactically. Such elements introduce challenges for text processing, especially for the task of summarization. This research employs a quantitative and inductive methodology to implement unsupervised learning models that address the summarization task, to efficiently generate more precise summaries, and to determine which approach to implementing unsupervised clustering models performs best. The precision score from the ROUGE-N metric is used for evaluation. This research evaluates the precision of four different approaches to text summarization, using various combinations of feature-embedding techniques (Word2Vec or BERT) with hybrid or conventional clustering algorithms. The results reveal that both approaches using Word2Vec and BERT feature embeddings along with the hybrid PHA-ClusteringGain k-means algorithm achieved an increase in precision compared with the conventional k-means clustering model. Among the hybrid approaches, the one using Word2Vec as the feature-embedding method attained the maximum precision value of 55.73%.
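The extractive pipeline the study describes can be sketched in miniature. Note the substitutions: the study used Word2Vec/BERT embeddings and a hybrid PHA-ClusteringGain k-means, while this sketch uses plain bag-of-words vectors and a tiny Lloyd's k-means, keeping one representative sentence per cluster and omitting the ROUGE-N evaluation step:

```python
# Minimal extractive-summarization sketch: embed sentences, cluster
# them, and keep the sentence nearest each cluster centroid.
import math
from collections import Counter

def embed(sentences):
    """Bag-of-words vectors (stand-in for Word2Vec/BERT embeddings)."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    return [[Counter(s.lower().split())[w] for w in vocab] for s in sentences]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=10):
    """Plain Lloyd's k-means (stand-in for the hybrid variant)."""
    centroids = vectors[:k]  # deterministic seeding for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda i: dist(v, centroids[i]))].append(v)
        centroids = [[sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def summarize(sentences, k=2):
    """Extract up to k sentences, one nearest each centroid."""
    vectors = embed(sentences)
    summary = []
    for c in kmeans(vectors, k):
        best = min(range(len(sentences)), key=lambda i: dist(vectors[i], c))
        if sentences[best] not in summary:
            summary.append(sentences[best])
    return summary

emails = ["meeting moved to friday", "please confirm the friday meeting",
          "attached is the quarterly report", "report numbers look good"]
print(summarize(emails))
```

A real evaluation would then score each extracted summary against a reference summary with ROUGE-N precision, which is the metric the study reports.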