
    Applying SMT Solvers to the Test Template Framework

    The Test Template Framework (TTF) is a model-based testing method for the Z notation. In the TTF, test cases are generated from test specifications, which are predicates written in Z. The Z notation, in turn, is based on first-order logic with equality and Zermelo-Fraenkel set theory, so a test case is a witness satisfying a formula in that theory. Satisfiability Modulo Theories (SMT) solvers are software tools that decide the satisfiability of arbitrary formulas over a large number of built-in logical theories and their combinations. In this paper, we present the first results of applying two SMT solvers, Yices and CVC3, as the engines to find test cases from TTF test specifications. In doing so, shallow embeddings of a significant portion of the Z notation into the input languages of Yices and CVC3 are provided, given that these solvers do not directly support Zermelo-Fraenkel set theory as defined in Z. Finally, the results of applying these embeddings to a number of test specifications from eight case studies are analysed. Comment: In Proceedings MBT 2012, arXiv:1202.582
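    A minimal sketch of the underlying idea, shown here with the Z3 SMT solver's Python API rather than the Yices and CVC3 embeddings used in the paper: a test specification is posed as a predicate, and any model the solver returns is a test case. The predicate below is a made-up stand-in for a TTF test specification, not one from the paper's case studies.

```python
# Illustration only: using Z3 (not the paper's Yices/CVC3 embeddings) to find a
# witness, i.e. a test case, satisfying a hypothetical test specification.
from z3 import Solver, Ints, sat

x, y = Ints("x y")
spec = Solver()
spec.add(x >= 0, y >= 0, x + y == 10, x < y)   # made-up test specification

if spec.check() == sat:
    model = spec.model()                        # the model is the test case
    print({d.name(): model[d] for d in model.decls()})
else:
    print("unsatisfiable: no test case exists for this specification")
```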

    A Survey of Paraphrasing and Textual Entailment Methods

    Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources. Comment: Technical Report, Natural Language Processing Group, Department of Informatics, Athens University of Economics and Business, Greece, 201
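    The view of paraphrasing as bidirectional textual entailment lends itself to a very small sketch. The `entails` function below is a hypothetical placeholder for any entailment recognizer (it uses naive word overlap purely for illustration) and is not a method described in the survey.

```python
# Sketch: paraphrase recognition as textual entailment holding in both directions.
def entails(text: str, hypothesis: str) -> bool:
    """Hypothetical entailment check: naive word-overlap stand-in for a real recognizer."""
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / max(len(h), 1) >= 0.8

def is_paraphrase(a: str, b: str) -> bool:
    # Paraphrasing viewed as entailment in both directions.
    return entails(a, b) and entails(b, a)

print(is_paraphrase("the cat sat on the mat", "on the mat the cat sat"))  # True
```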

    Generic Architecture for Predictive Computational Modelling with Application to Financial Data Analysis: Integration of Semantic Approach and Machine Learning

    The PhD thesis introduces a Generic Architecture for Predictive Computational Modelling capable of automating analytical conclusions regarding quantitative data structured as a data frame. The model involves heterogeneous data mining based on a semantic approach, graph-based methods (ontology, knowledge graphs, graph databases) and advanced machine learning methods. The main focus of my research is data pre-processing aimed at a more efficient selection of input features to the computational model. Since the model I propose is generic, it can be applied to data mining of any quantitative dataset (containing two-dimensional, size-mutable, heterogeneous tabular data); however, it is best suited to highly interconnected data. To adapt this generic model to a specific use case, an ontology as the formal conceptual representation of the relevant domain knowledge is needed. I chose financial and market data for my use cases. In the course of practical experiments, the effectiveness of applying the PCM model to financial risk analysis of UK companies and to forecasting the FTSE100 market index was evaluated. The tests confirmed that the PCM model produces more accurate outcomes than stand-alone traditional machine learning methods. By critically evaluating this architecture, I proved its validity and suggested directions for future research.
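    A minimal sketch of the idea under simplified, assumed names (not the thesis's actual PCM implementation): domain knowledge encoded as a graph is used to pre-select input features from a data frame before fitting an ordinary machine learning model. The graph, column names and labels below are all hypothetical.

```python
# Assumed illustration: knowledge-graph-guided feature selection feeding a standard model.
import pandas as pd
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Hypothetical knowledge graph linking a domain concept to data-frame columns.
kg = nx.Graph()
kg.add_edges_from([
    ("financial_risk", "debt_ratio"),
    ("financial_risk", "current_ratio"),
    ("market_index", "close_price"),
])

def select_features(df: pd.DataFrame, concept: str) -> list:
    """Keep only the columns the knowledge graph connects to the concept of interest."""
    related = set(kg.neighbors(concept)) if concept in kg else set()
    return [c for c in df.columns if c in related]

df = pd.DataFrame({
    "debt_ratio":    [0.4, 0.9, 0.3, 0.8],
    "current_ratio": [1.8, 0.7, 2.1, 0.6],
    "close_price":   [7400, 7380, 7420, 7360],
    "label":         [0, 1, 0, 1],
})
features = select_features(df, "financial_risk")          # drops 'close_price'
model = RandomForestClassifier(n_estimators=10).fit(df[features], df["label"])
```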

    A survey of the application of soft computing to investment and financial trading


    Arquitectura para la generación de consultas SQL usando lógica de conjuntos (Architecture for generating SQL queries using set logic)

    This article shows the architecture implemented in the development of an SQL code-generating tool (query code) spanning several tables, where the desired operations are related to JOIN operations between tables. The tool exploits the use of LEFT JOIN, RIGHT JOIN, INNER JOIN and FULL OUTER JOIN. The proposed architecture works for a large number of tables; however, for reasons of computational efficiency, this version of the tool allows up to three tables. The metaphor followed by the SQL code generator is the one imposed by Venn diagrams, which is useful in multiple computational applications; in this article, it was used to build a code generator in which the user selects a concrete Venn diagram. At the end of the article, we show the modifications to the architecture that are necessary to add other functionalities.
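    A minimal sketch of the Venn-diagram metaphor for two tables (an assumed illustration, not the tool's actual code): the user selects which regions of the diagram to keep, and the generator maps that selection onto the corresponding JOIN.

```python
# Assumed illustration: map a two-table Venn-region selection onto a SQL JOIN clause.
def generate_join(left: str, right: str, key: str,
                  keep_left_only: bool, keep_right_only: bool) -> str:
    if keep_left_only and keep_right_only:
        join = "FULL OUTER JOIN"      # both outer regions plus the intersection
    elif keep_left_only:
        join = "LEFT JOIN"            # left outer region plus the intersection
    elif keep_right_only:
        join = "RIGHT JOIN"           # right outer region plus the intersection
    else:
        join = "INNER JOIN"           # intersection only
    return f"SELECT * FROM {left} {join} {right} ON {left}.{key} = {right}.{key};"

print(generate_join("orders", "customers", "customer_id", True, False))
# SELECT * FROM orders LEFT JOIN customers ON orders.customer_id = customers.customer_id;
```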

    A domain-specific language based approach to component composition, error-detection, and fault prediction

    Current methods of software production are resource-intensive and often require a number of highly skilled professionals. To develop a well-designed and effectively implemented system requires a large investment of resources, often amounting to millions of pounds. The time required may also prove to be prohibitive. However, many parts of the new systems being developed already exist, either as whole existing systems or as parts of them. It is therefore attractive to reuse existing code when developing new software, in order to reduce the time and resources required. This thesis proposes the application of a domain-specific language (DSL) to automatic component composition, testing and fault prediction. The DSL is inherently based on a domain model, which should aid users of the system in knowing how the system is structured and what responsibilities the system fulfils. The DSL structure proposed in this thesis uses a type system and grammar, hence enabling the early detection of syntactically incorrect system usage. Each DSL construct's behaviour can also be defined in a testing DSL, described here as DSL-test. This can take the form of input and output parameters, which should suffice for specifying stateless components, or may necessitate the use of a special method call, described here as a White-Box Test (WBT), which allows the external observer to view the abstract state of a component. Each DSL construct can be mapped to its implementing components, i.e. the component, or amalgamation of components, that implement(s) the behaviour prescribed by the DSL construct. User requirements are described using the DSL, and appropriate implementing components (if sufficient exist) are automatically located and integrated. That is to say, given a requirement described in terms of the DSL and sufficient components, the architecture (which was named Hydra) will be able to generate an executable which should behave as desired. The DSL-construct behaviour description language (DSL-test) is designed in such a way that it can be translated into a computer programming language, so code can be inserted by the system automatically to verify that the implementing component is acting in a way consistent with the model of its expected behaviour. Upon detection of an error, the system examines available data (i.e. where the error occurred, what sort of error it was, and what the structure of the executable was) to attempt to predict the location of the fault and, where possible, take remedial action. A number of case studies have been investigated and it was found that, if applied to the appropriate problem domain, the approach proposed in this thesis shows promise in terms of full automation and integration of black-box or grey-box software. However, further work is required before it can be claimed that this approach should be used in real-scale systems.
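    A minimal sketch of the composition-and-checking idea, under assumed names (the abstract does not show Hydra's actual interfaces): a DSL construct carries a DSL-test describing expected behaviour, a candidate component is looked up in a registry, and it is verified against that behaviour before being used.

```python
# Assumed illustration: selecting an implementing component for a DSL construct and
# checking it against the construct's DSL-test (input/output pairs).
from dataclasses import dataclass
from typing import Callable

@dataclass
class DslTest:
    cases: list                      # list of (input-arguments tuple, expected output)

@dataclass
class Construct:
    name: str
    test: DslTest

# Hypothetical component registry: candidate implementations keyed by construct name.
registry = {"sort_ascending": sorted}

def compose(construct: Construct) -> Callable:
    """Locate a component for the construct and verify it against its DSL-test."""
    component = registry[construct.name]
    for args, expected in construct.test.cases:
        if component(*args) != expected:
            raise RuntimeError(f"{construct.name}: component failed its DSL-test")
    return component

sort = compose(Construct("sort_ascending", DslTest([(([3, 1, 2],), [1, 2, 3])])))
print(sort([5, 4, 6]))   # [4, 5, 6]
```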

    Advances in Character Recognition

    This book presents advances in character recognition. It consists of 12 chapters that cover a wide range of topics on different aspects of character recognition. Hopefully, this book will serve as a reference source for academic research, for professionals working in the character recognition field, and for all who are interested in the subject.

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    Knowledge Expansion of a Statistical Machine Translation System using Morphological Resources

    The translation capability of a Phrase-Based Statistical Machine Translation (PBSMT) system mostly depends on its parallel data; phrases that are not present in the training data are not correctly translated. This paper describes a method that efficiently expands the existing knowledge of a PBSMT system, not by adding more parallel data but by using external morphological resources. A set of new phrase associations is added to the translation and reordering models; each of them corresponds to a morphological variation of the source phrase, the target phrase, or both phrases of an existing association. New associations are generated using a string similarity score based on morphosyntactic information. We tested our approach on En-Fr and Fr-En translations, and results showed improved performance in terms of automatic scores (BLEU and Meteor) and a reduction of out-of-vocabulary (OOV) words. We believe that our knowledge expansion framework is generic and could be used to add different types of information to the model. JRC.G.2-Global security and crisis management
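    A minimal sketch of the expansion step, with an assumed toy lexicon and phrase table (the paper's actual resources and scoring are not reproduced here): for each existing phrase pair, morphological variants of its words are generated from an external lexicon and added as new associations, with the copied score weighted by a simple string-similarity measure.

```python
# Assumed illustration: expanding a phrase table with morphological variants.
from difflib import SequenceMatcher

# Hypothetical morphological lexicon: surface form -> other inflections of the same lemma.
lexicon = {"house": ["houses"], "maison": ["maisons"]}

def variants(phrase: str) -> list:
    """Generate phrase variants by swapping one word for an inflected form."""
    words = phrase.split()
    out = []
    for i, w in enumerate(words):
        for inflected in lexicon.get(w, []):
            out.append(" ".join(words[:i] + [inflected] + words[i + 1:]))
    return out

def expand(phrase_table):
    """phrase_table: list of (source phrase, target phrase, score) entries."""
    new_entries = []
    for src, tgt, score in phrase_table:
        for new_src in variants(src):
            for new_tgt in variants(tgt):
                # Weight the copied score by string similarity to the original pair.
                sim = SequenceMatcher(None, src + tgt, new_src + new_tgt).ratio()
                new_entries.append((new_src, new_tgt, score * sim))
    return phrase_table + new_entries

print(expand([("the house", "la maison", 0.9)]))
```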