
    Applying a Fuzzy Approach to Relaxing Cardinality Constraints

    9 pages, 3 figures. Contributed to: 15th International Conference on Database and Expert Systems Applications (DEXA 2004, Zaragoza, Spain, Aug 30 - Sep 3, 2004). In database applications, the verification of cardinality constraints is a serious and complex problem that arises when modification operations trigger a large cascade. Many efforts have been devoted to solving this problem, but some solutions lead to other problems, such as a complex execution model or an impact on database performance. This paper proposes a method to reduce and simplify the complex verification of cardinality constraints by relaxing these constraints using fuzzy concepts.
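    As a loose illustration of the general idea (not the method from the paper), the sketch below replaces a hard cardinality check with a fuzzy satisfaction degree; the trapezoidal membership shape and the `slack` parameter are assumptions made for this example.

```python
def satisfaction_degree(count, lo, hi, slack=2):
    """Fuzzy degree in [0, 1] to which `count` satisfies the cardinality
    constraint [lo, hi], relaxed by `slack` units on each side
    (a trapezoidal membership function)."""
    if lo <= count <= hi:
        return 1.0                                   # fully satisfied
    if count < lo:
        return max(0.0, 1.0 - (lo - count) / slack)  # below the lower bound
    return max(0.0, 1.0 - (count - hi) / slack)      # above the upper bound

# A modification could then be accepted while the degree stays above a
# threshold, instead of triggering a cascade of hard constraint checks.
for n in (3, 4, 6, 8):
    print(n, satisfaction_degree(n, lo=4, hi=6))
```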

    Classifying Web Exploits with Topic Modeling

    This short empirical paper investigates how well topic modeling and database metadata characteristics can classify web and other proof-of-concept (PoC) exploits for publicly disclosed software vulnerabilities. Using a dataset of more than 36,000 PoC exploits, an accuracy of nearly 0.9 is obtained in the empirical experiment. Text mining and topic modeling contribute significantly to this classification performance. In addition to these empirical results, the paper contributes to the research tradition of enhancing software vulnerability information with text mining, and offers a few scholarly observations about the potential for semi-automatic classification of exploits in existing tracking infrastructures. Comment: Proceedings of the 2017 28th International Workshop on Database and Expert Systems Applications (DEXA). http://ieeexplore.ieee.org/abstract/document/8049693
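    A minimal sketch of the general pipeline the abstract describes, topic-model features feeding a classifier; the toy corpus, labels, and model choices below are invented for illustration and do not reproduce the paper's dataset or setup.

```python
# Topic-model features feeding a classifier (toy data, not the paper's).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "sql injection in login form of cms",
    "cross site scripting in search parameter",
    "stack buffer overflow in media player",
    "heap overflow in image parsing library",
    "xss stored in comment field of blog",
    "union based sql injection in id parameter",
]
labels = ["web", "web", "memory", "memory", "web", "web"]  # exploit class

counts = CountVectorizer().fit_transform(docs)             # term counts
topics = LatentDirichletAllocation(                        # doc-topic mixtures
    n_components=2, random_state=0).fit_transform(counts)

clf = LogisticRegression().fit(topics, labels)
print(clf.predict(topics))  # in practice: held-out evaluation, not training fit
```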

    A deliberative model for self-adaptation middleware using architectural dependency

    A crucial prerequisite to externalized adaptation is an understanding of how components are interconnected, or more particularly how and why they depend on one another. Such dependencies can be used to provide an architectural model, which in turn provides a reference point for externalized adaptation. This paper describes how dependencies are used as a basis for a system's self-understanding and subsequent architectural reconfigurations. The approach is based on the combination of instrumentation services, a dependency meta-model and a system controller. In particular, the controller uses self-healing repair rules (or conflict-resolution strategies), based on an extensible beliefs, desires and intentions (EBDI) model, to reflect reconfiguration changes back to a target application under examination.
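    The following toy sketch illustrates only the dependency-model idea: a graph of component dependencies plus a single repair rule that rebinds the dependents of a failed component. The component names and the rebinding strategy are invented; the paper's EBDI-based controller and instrumentation services are far richer.

```python
# component -> components it depends on (a toy architectural model)
depends_on = {
    "ui": ["auth", "catalog"],
    "catalog": ["db_primary"],
    "auth": ["db_primary"],
}
alternatives = {"db_primary": "db_replica"}  # assumed repair knowledge

def dependents_of(component):
    """Walk the model to find everything transitively affected."""
    affected, stack = set(), [component]
    while stack:
        c = stack.pop()
        for user, deps in depends_on.items():
            if c in deps and user not in affected:
                affected.add(user)
                stack.append(user)
    return affected

def repair(failed):
    """Repair rule: rebind dependents of a failed component to an alternative."""
    if failed not in alternatives:
        return f"no strategy for {failed}"
    new = alternatives[failed]
    for user in list(depends_on):
        depends_on[user] = [new if d == failed else d for d in depends_on[user]]
    return f"rebound {sorted(dependents_of(new))} to {new}"

print(repair("db_primary"))
```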

    A framework for selecting workflow tools in the context of composite information systems

    When an organization faces the need to integrate workflow-related activities in its information system, it becomes necessary to have at hand a well-defined informational model to be used as a framework for determining the selection criteria onto which the requirements of the organization can be mapped. Some proposals provide such a framework, notably the WfMC reference model, but they are designed to be applicable when workflow tools are selected independently from other software, departing from a set of well-known requirements. Often this is not the case: workflow facilities are needed as part of the procurement of a larger, composite information system, and therefore the general goals of the system have to be analyzed, assigned to its individual components and further detailed. In this paper we propose the MULTSEC method, which analyzes the initial goals of the system, determines the types of components that form the system architecture, builds quality models for each type and then maps the goals into detailed requirements which can be measured using quality criteria. We develop in some detail the quality model (compliant with the ISO/IEC 9126-1 quality standard) for the workflow type of tools; we show how the quality model can be used to refine and clarify the requirements in order to guarantee a highly reliable selection result; and we use it to evaluate two particular workflow solutions available on the market (kept anonymous in the paper). We develop our proposal using a particular selection experience we have recently been involved in, namely the procurement of a document management subsystem to be integrated in an academic data management information system for our university.
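    To make the goal-to-criteria mapping concrete, here is a minimal weighted-scoring sketch over quality characteristics loosely named after ISO/IEC 9126-1; the weights, scores, and candidate names are invented, and this is an illustration of the scoring idea, not the MULTSEC method itself.

```python
# Toy scoring of two anonymous workflow tools against weighted quality
# criteria (characteristic names loosely follow ISO/IEC 9126-1).
weights = {"functionality": 0.35, "reliability": 0.25,
           "usability": 0.20, "efficiency": 0.20}

candidates = {
    "tool_A": {"functionality": 4, "reliability": 5, "usability": 3, "efficiency": 4},
    "tool_B": {"functionality": 5, "reliability": 3, "usability": 4, "efficiency": 3},
}

for name, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)  # weighted sum
    print(f"{name}: {total:.2f}")
```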

    Extracting partition statistics from semistructured data

    The effective grouping, or partitioning, of semistructured data is of fundamental importance when providing support for queries. Partitions allow items within the data set that share common structural properties to be identified efficiently. This allows queries that make use of these properties, such as branching path expressions, to be accelerated. Here, we evaluate the effectiveness of several partitioning techniques by establishing the number of partitions that each scheme can identify over a given data set. In particular, we explore the use of parameterised indexes, based upon the notion of forward and backward bisimilarity, as a means of partitioning semistructured data; demonstrating that even restricted instances of such indexes can be used to identify the majority of relevant partitions in the data.
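    One restricted instance of such a partitioning can be sketched as grouping nodes by their incoming label path truncated to length k, in the spirit of an A(k)-style index approximating backward bisimilarity; the tiny document below is invented for illustration.

```python
# Partition tree nodes by the last k labels of their root-to-node path,
# a restricted approximation of parameterised backward bisimilarity.
edges = [  # (parent, child, edge label) of a tiny invented document
    (0, 1, "book"), (0, 2, "book"),
    (1, 3, "author"), (1, 4, "title"),
    (2, 5, "author"), (2, 6, "title"),
    (5, 7, "name"),
]

def partitions(edges, k):
    incoming = {child: (parent, label) for parent, child, label in edges}
    nodes = {c for _, c, _ in edges} | {p for p, _, _ in edges}
    groups = {}
    for node in nodes:
        path, cur = [], node
        while cur in incoming:            # walk labels back toward the root
            cur, label = incoming[cur]
            path.append(label)
        key = tuple(path[:k])             # keep only the k nearest labels
        groups.setdefault(key, []).append(node)
    return groups

for key, members in sorted(partitions(edges, k=2).items()):
    print(list(key), sorted(members))
```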

    Towards an On-Line Analysis of Tweets Processing

    Tweets exchanged over the Internet represent an important source of information, even if their characteristics make them difficult to analyze (a maximum of 140 characters, etc.). In this paper, we define a data warehouse model to analyze large volumes of tweets by proposing measures relevant in the context of knowledge discovery. The use of data warehouses as a tool for the storage and analysis of textual documents is not new, but current measures are not well suited to the specificities of the manipulated data. We also propose a new way of extracting the context of a concept in a hierarchy. Experiments carried out on real data underline the relevance of our proposal.
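    As a rough illustration of the warehouse idea (the schema and measure below are invented, not the paper's model), tweets can be aggregated into cells keyed by dimensions such as day and hashtag, with term counts as a simple measure:

```python
# Toy cube over tweets: term counts per (day, hashtag) cell.
from collections import Counter, defaultdict

tweets = [  # (day, hashtag, text) -- invented sample rows
    ("2023-05-01", "#health", "new study on sleep and memory"),
    ("2023-05-01", "#health", "sleep quality linked to diet"),
    ("2023-05-02", "#sport", "final match tonight"),
]

cells = defaultdict(Counter)
for day, tag, text in tweets:
    cells[(day, tag)].update(text.split())   # aggregate terms into the cell

for (day, tag), terms in cells.items():
    print(day, tag, "n_terms:", sum(terms.values()),
          "top:", terms.most_common(2))
```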

    Application of Semantics to Solve Problems in Life Sciences

    Thesis defense date: 10 December 2018. The amount of information generated on the Web has increased in recent years. Most of this information is accessible as text, and humans are the main users of the Web. However, despite all the advances in natural language processing, computers struggle to process this textual information. In this context, there are application domains, such as the Life Sciences, in which large amounts of information are being published as structured data. Analyzing these data is vitally important not only for the advancement of science but also for progress in healthcare. However, the data are located in different repositories and stored in different formats, which makes their integration difficult. Here the Linked Data paradigm has emerged as a technology that applies standards proposed by the W3C community, such as HTTP URIs and the RDF and OWL standards. Building on this technology, this doctoral thesis pursues the following main objectives: 1) to promote the use of Linked Data by the Life Sciences user community; 2) to facilitate the design of SPARQL queries by discovering the model underlying RDF repositories; 3) to create a collaborative environment that makes it easier for end users to consume Linked Data; 4) to develop an algorithm that automatically discovers the OWL semantic model of an RDF repository; and 5) to develop an OWL representation of ICD-10-CM, called Dione, that provides an automatic methodology for classifying patients' diseases and validating the classification using an OWL reasoner.
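    Objective 2, discovering the model underlying an RDF repository to ease SPARQL query design, can be sketched minimally by querying the classes and properties actually used in a graph; the tiny Turtle snippet below is invented, and the thesis tools of course target real repositories rather than an in-memory toy.

```python
# Query the distinct classes and properties used in a small RDF graph.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:p1 a ex:Patient ; ex:diagnosedWith ex:d1 .
ex:d1 a ex:Disease ; ex:code "E11.9" .
""", format="turtle")

classes = g.query("SELECT DISTINCT ?c WHERE { ?s a ?c }")
props   = g.query("SELECT DISTINCT ?p WHERE { ?s ?p ?o }")

print("classes:", [str(row.c) for row in classes])
print("properties:", [str(row.p) for row in props])
```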