
    Annotating Search Results from Web Databases


    Learning Expressive Linkage Rules for Entity Matching using Genetic Programming

A central problem in data integration and data cleansing is to identify pairs of entities in data sets that describe the same real-world object. Many existing methods for matching entities rely on explicit linkage rules, which specify how two entities are compared for equivalence. Unfortunately, writing accurate linkage rules by hand is a non-trivial problem that requires detailed knowledge of the involved data sets. Another important issue is the efficient execution of linkage rules. In this thesis, we propose a set of novel methods that cover the complete entity matching workflow, from the generation of linkage rules using genetic programming algorithms to their efficient execution on distributed systems. First, we propose a supervised learning algorithm that is capable of generating linkage rules from a gold standard consisting of a set of entity pairs that have been labeled as duplicates or non-duplicates. We show that the introduced algorithm outperforms previously proposed entity matching approaches, including the state-of-the-art genetic programming approach by de Carvalho et al., and is capable of learning linkage rules that achieve an accuracy similar to that of the human-written rule for the same problem. In order to also cover use cases for which no gold standard is available, we propose a complementary active learning algorithm that generates a gold standard interactively by asking the user to confirm or decline the equivalence of a small number of entity pairs. In the experimental evaluation, labeling at most 50 link candidates was necessary in order to match the performance that is achieved by the supervised GenLink algorithm on the entire gold standard. Finally, we propose an efficient execution workflow that can be run on a cluster of multiple machines. The execution workflow employs a novel multidimensional indexing method that allows the efficient execution of learned linkage rules by significantly reducing the number of required comparisons.
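The linkage rules described here are trees that combine per-attribute similarity comparisons with aggregation functions, and a genetic programming learner such as GenLink searches the space of such trees using the labeled gold standard as its fitness signal. Below is a minimal, hand-written sketch of what one such rule might look like, using Python's standard difflib for string similarity; the record schema, weights and threshold are illustrative assumptions, not taken from the thesis.

```python
# Sketch of a linkage rule: compare two records attribute by attribute and
# aggregate the similarity scores into a match decision. The record schema,
# weights and threshold are illustrative assumptions.
from difflib import SequenceMatcher

def string_similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def linkage_rule(e1: dict, e2: dict, threshold: float = 0.8) -> bool:
    """Weighted average of attribute similarities, compared against a threshold."""
    comparisons = [
        (0.6, string_similarity(e1["name"], e2["name"])),
        (0.3, string_similarity(e1["city"], e2["city"])),
        (0.1, 1.0 if e1["birth_year"] == e2["birth_year"] else 0.0),
    ]
    score = sum(w * s for w, s in comparisons) / sum(w for w, _ in comparisons)
    return score >= threshold

a = {"name": "John A. Smith", "city": "Berlin", "birth_year": 1980}
b = {"name": "Jon Smith", "city": "Berlin", "birth_year": 1980}
print(linkage_rule(a, b))  # sufficiently similar records are declared a match
```

A genetic programming algorithm would evolve the structure of such trees, that is, which attributes to compare and with which similarity measures, aggregations and thresholds, rather than relying on a human to write them.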

    Management and Visualisation of Non-linear History of Polygonal 3D Models

The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes in remote locations. However, existing web-based technologies do not yet fully exploit modern design patterns for access to and management of alternative shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it manifests the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies that suggest they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
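Since each revision of a scene graph is stored as a set of key-value documents, a node-level 3D diff can be computed by comparing the documents of two revisions. The sketch below illustrates this idea; the document fields are illustrative placeholders, not the actual 3D Repo schema.

```python
# Sketch: node-level diff between two revisions of a scene graph, where each
# revision is a mapping from node id to a "document" of key-value pairs.
# The field names below are illustrative, not the actual 3D Repo schema.

def diff_revisions(rev_a: dict, rev_b: dict) -> dict:
    """Classify scene-graph nodes as added, deleted or modified between revisions."""
    added = [nid for nid in rev_b if nid not in rev_a]
    deleted = [nid for nid in rev_a if nid not in rev_b]
    modified = [nid for nid in rev_a
                if nid in rev_b and rev_a[nid] != rev_b[nid]]
    return {"added": added, "deleted": deleted, "modified": modified}

rev1 = {
    "n1": {"type": "mesh", "vertices": 1024, "material": "steel"},
    "n2": {"type": "transform", "matrix": "identity"},
}
rev2 = {
    "n1": {"type": "mesh", "vertices": 1024, "material": "aluminium"},
    "n3": {"type": "camera", "fov": 60},
}
print(diff_revisions(rev1, rev2))
# {'added': ['n3'], 'deleted': ['n2'], 'modified': ['n1']}
```

In an interactive tool such as 3D Diff, these added, deleted and modified sets would then drive the visualisation and the conflict resolution at the node level.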

    Mining Meaning from Wikipedia

Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and treating it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. Comment: An extensive survey of re-using information in Wikipedia in natural language processing, information retrieval and extraction, and ontology building. Accepted for publication in the International Journal of Human-Computer Studies.

    Sampling Algorithms for Evolving Datasets

Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
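For the insertion-only special case, the classic reservoir sampling technique already maintains a bounded, provably uniform sample without ever revisiting the base data; the thesis extends this setting to updates, deletions, sliding windows, resizing, and merging of distributed samples. Below is a minimal sketch of plain reservoir sampling for orientation only, not one of the thesis algorithms.

```python
# Sketch: classic reservoir sampling (Algorithm R). Maintains a uniform random
# sample of size k over an insertion-only stream without storing the stream.
import random

def reservoir_sample(stream, k):
    """Return a uniform random sample of k items from the stream."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = random.randint(0, i)     # item i+1 is kept with probability k/(i+1)
            if j < k:
                sample[j] = item         # evict a uniformly chosen resident
    return sample

print(reservoir_sample(range(1_000_000), 10))
```

Handling deletions and updates while preserving uniformity, and resizing or merging such samples, requires the additional bookkeeping that the thesis develops.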

    Machine learning techniques for identification using mobile and social media data

Networked access and mobile devices provide near-constant data generation and collection. Users, environments, and applications each generate different types of data; from data voluntarily posted in social networks to data collected by sensors on mobile devices, it is becoming trivial to access big data caches. Processing sufficiently large amounts of data results in inferences that can be characterized as privacy invasive. In order to address privacy risks, we must understand the limits of the data by exploring relationships between variables and how the user is reflected in them. In this dissertation we look at data collected from social networks and sensors to identify some aspect of the user or their surroundings. In particular, we find that social media metadata allows us to identify individual user accounts, and that magnetic field readings allow us to identify both the (unique) cellphone device owned by the user and their coarse-grained location. In each project we collect real-world datasets and apply supervised learning techniques, particularly multi-class classification algorithms, to test our hypotheses. We use both leave-one-out and k-fold cross validation to reduce any bias in the results. Throughout the dissertation we find that unprotected data reveals sensitive information about users. Each chapter also contains a discussion of possible obfuscation techniques or countermeasures and their effectiveness with regard to the conclusions we present. Overall, our results show that deriving information about users is attainable and, with each of these results, users would have limited if any indication that any type of analysis was taking place.
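The evaluation methodology described above, multi-class classification assessed with cross validation, can be sketched as follows with scikit-learn; the synthetic feature matrix and the choice of classifier are placeholders, not the dissertation's actual pipelines or datasets.

```python
# Sketch: multi-class classification with k-fold cross validation, as used to
# test identification hypotheses. Synthetic data stands in for the real
# social-media metadata / magnetometer features collected in the dissertation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)   # one accuracy score per fold
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Leave-one-out cross validation would simply replace the k-fold splitter with one fold per sample, which is the other evaluation scheme mentioned in the abstract.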

    Study of stochastic and machine learning techniques for anomaly-based Web attack detection

Web applications are exposed to different threats, and it is necessary to protect them. Intrusion Detection Systems (IDSs) are a solution external to the web application that does not require modifying the application's code in order to protect it. These systems are located in the network, monitoring events and searching for signs of anomalies or threats that can compromise the security of information systems. IDSs have been applied to traffic analysis of different protocols, such as TCP, FTP or HTTP. Web Application Firewalls (WAFs) are special cases of IDSs that are specialized in analyzing HTTP traffic with the aim of safeguarding web applications. The increase in the amount of data traveling through the Internet and the growing sophistication of attacks make protection mechanisms that are both effective and efficient necessary. This thesis proposes three anomaly-based WAFs that are high-speed, reach high detection results and have a simple design. The anomaly-based approach defines the normal behavior of the web application; actions that deviate from it are considered anomalous. The proposed WAFs work at the application layer, analyzing the payload of HTTP requests. These systems are designed with different detection algorithms in order to compare their results and performance. Two of the proposed systems are based on stochastic techniques: one of them on statistical techniques and the other on Markov chains. The third WAF presented in this thesis is ML-based. Machine Learning (ML) deals with constructing computer programs that automatically learn from experience and can be very helpful in dealing with large amounts of data. Concretely, this third WAF is based on decision trees, given their proven effectiveness in intrusion detection. In particular, four algorithms are employed: C4.5, CART, Random Tree and Random Forest. Typically, two phases are distinguished in IDSs: preprocessing and processing. In the case of the stochastic systems, preprocessing includes feature extraction. The processing phase consists in training the system in order to learn the normal behavior and later testing how well it classifies incoming requests as either normal or anomalous. The detection models of the systems are implemented either with statistical techniques or with Markov chains, depending on the system considered. For the system based on decision trees, the preprocessing phase comprises feature extraction as well as feature selection. These two phases are optimized. On the one hand, new feature extraction methods are proposed. They combine features extracted by means of expert knowledge and n-grams, and have the capacity to improve the detection results of both techniques separately. For feature selection, the Generic Feature Selection (GeFS) measure has been used, which has proven to be very effective in reducing the number of redundant and irrelevant features. Additionally, for the three systems, a study has been performed to establish the minimum number of requests required to train them in order to achieve a certain detection result. Reducing the number of training requests can greatly help to optimize the resource consumption of WAFs as well as the data gathering process. Besides designing and implementing the systems, evaluating them is an essential step. For that purpose, a dataset is necessary. Unfortunately, finding labeled and adequate datasets is not an easy task.
In fact, the study of the most popular datasets in the intrusion detection field reveals that most of them do not satisfy the requirements for evaluating WAFs. In order to tackle this situation, this thesis proposes the new CSIC dataset, which satisfies the conditions necessary to evaluate WAFs satisfactorily. The proposed systems have been evaluated experimentally using the proposed CSIC dataset and the existing ECML/PKDD dataset. The three presented systems have been compared in terms of their detection results, processing time and number of training requests; for this comparison, the CSIC dataset has been used. In summary, this thesis proposes three WAFs based on stochastic and ML techniques. Additionally, the systems are compared, which allows determining which system is the most appropriate for each scenario.
This work was carried out within the framework of the predoctoral fellowships of the Junta de Ampliación de Estudios (JAE) of the Agencia Estatal Consejo Superior de Investigaciones Científicas (CSIC). Official Doctoral Programme in Computer Science and Technology. Thesis committee: President: Luis Hernández Encinas; Secretary: Juan Manuel Estévez Tapiador; Member: Georg Carl
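As a rough illustration of the decision-tree pipeline, character n-grams can be extracted from HTTP request payloads and fed to a tree classifier. The sketch below uses scikit-learn and a handful of toy payloads; the real systems combine expert-knowledge features with n-grams, apply GeFS feature selection, and are evaluated on the CSIC and ECML/PKDD datasets.

```python
# Sketch: classification of HTTP request payloads using character n-gram
# features and a decision tree. Toy payloads and labels; not the thesis's
# actual feature sets or training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

payloads = [
    "id=42&name=alice",                    # normal
    "page=home&lang=en",                   # normal
    "id=42' OR '1'='1",                    # SQL injection attempt
    "q=<script>alert(1)</script>",         # XSS attempt
]
labels = ["normal", "normal", "anomalous", "anomalous"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),  # character n-grams
    DecisionTreeClassifier(random_state=0),
)
model.fit(payloads, labels)
print(model.predict(["user=bob&theme=dark", "name=x' OR '1'='1"]))
```

A Markov-chain or statistical variant would replace the tree with a model of the normal payload distribution and flag requests whose likelihood falls below a threshold.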

    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs and increases scalability, interoperability and openness. SOA endorses reusing existing services and aggregating them into new service layers for future recycling. At the same time, the complexity of large service-oriented systems is reflected negatively in their behavior in terms of the exhibited Quality of Service. To address this problem, this thesis focuses on using request scheduling for meeting Service Level Agreements (SLAs). Special focus is given to composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign the response-time requirements for component service invocations; the local scheduling policies are responsible for scheduling requests so as to meet these requirements. The proposed scheduling approach can be deployed without altering the code of the scheduled services, does not require a central point of control and is platform independent. Experiments conducted using a simulation were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements. The validity of the simulation was confirmed by comparing its results to the results obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
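One way to picture a local scheduling policy is as an earliest-deadline-first queue, in which the deadline of each request is derived from the response-time budget that the global policy assigned to the corresponding component invocation. The sketch below illustrates this under that assumption; the class, budgets and service names are hypothetical, not the thesis's actual policies.

```python
# Sketch: a local earliest-deadline-first (EDF) request queue. Deadlines are
# assumed to come from per-invocation response-time budgets assigned by a
# global policy; the budgets and service names below are illustrative.
import heapq
import itertools

class EDFQueue:
    """Serve the pending request whose deadline expires first."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # tie-breaker for equal deadlines

    def submit(self, request_id: str, arrival: float, budget: float) -> None:
        deadline = arrival + budget
        heapq.heappush(self._heap, (deadline, next(self._counter), request_id))

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = EDFQueue()
q.submit("invoice-service call", arrival=0.0, budget=0.50)
q.submit("payment-service call", arrival=0.1, budget=0.20)
print(q.next_request())   # 'payment-service call' (deadline 0.30 < 0.50)
```

Such a local scheduler can run next to each component service, which is consistent with the abstract's claim that no central point of control and no changes to the scheduled services are required.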

    A Development Method for the Conceptual Design of Multi-View Modeling Tools with an Emphasis on Consistency Requirements

The main objective of this thesis is to bridge the gap between modeling method experts on the one side and tool developers on the other. More precisely, the focus is on the specification of requirements for multi-view modeling tools. In this regard, the thesis introduces a methodological approach that supports the specification of conceptual designs for multi-view modeling tools in a stepwise manner: the MuVieMoT approach. MuVieMoT utilizes generic multi-view modeling concepts and the model-driven engineering paradigm to establish an overarching specification of multi-view modeling tools with an emphasis on consistency requirements. The approach builds on and extends the theoretical foundations of metamodeling and multi-view modeling: generic multi-view modeling concepts, integrated multi-view modeling approaches, and possibilities for formalized modeling method specifications. The applicability and utility of MuVieMoT are evaluated using an illustrative scenario, thereby specifying a conceptual design for a multi-view modeling tool for the Semantic Object Model enterprise modeling method. The thesis moreover introduces the MuVieMoT modeling environment, which enables the efficient application of the approach as well as the model-driven development of initial multi-view modeling tools based on the conceptual models created with MuVieMoT. Consequently, the approach fosters an intersubjective and unambiguous understanding of tool requirements between method experts and tool developers.
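A consistency requirement between views can be read as a constraint over the model elements the views share. The sketch below checks one such hypothetical rule, namely that every process referenced in a behaviour view must be declared in the structural view; the views, element names and the rule itself are illustrative and not part of MuVieMoT or the Semantic Object Model.

```python
# Sketch: checking a cross-view consistency rule over two views of the same
# model. Views, element names and the rule are illustrative assumptions.

structural_view = {"processes": {"CheckOrder", "ShipOrder", "BillCustomer"}}
behaviour_view = {"process_refs": ["CheckOrder", "ShipOrder", "ApproveCredit"]}

def check_process_references(structural: dict, behaviour: dict) -> list:
    """Return behaviour-view references that have no structural declaration."""
    declared = structural["processes"]
    return [ref for ref in behaviour["process_refs"] if ref not in declared]

violations = check_process_references(structural_view, behaviour_view)
if violations:
    print("Inconsistent views, undeclared processes:", violations)
else:
    print("Views are consistent.")
```

In a model-driven setting, such rules would be specified once over the shared metamodel and generated into the multi-view modeling tool rather than hand-coded per view pair.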