
    Dynamic adaptation of user profiles in recommender systems

    In a period in which the content available through the Internet grows exponentially and becomes more easily accessible every day, techniques that aid the selection and extraction of important, personalised information are of vital importance. Recommender Systems (RS) are a tool that helps the user in a decision-making process by evaluating a set of objects or alternatives and assisting the user in choosing which of them best suits his/her interests or preferences. Those preferences need to be accurate enough to produce adequate recommendations and should be updated if the user's tastes change or if they are incorrect or incomplete. In this work, a model for managing user preferences in a multi-attribute (numerical and categorical) environment is presented to support recommendations in such contexts. The evaluation process of the designed recommender system is supported by a new aggregation operator (Unbalanced LOWA) that combines the information defining an alternative into a single value, which is then used to rank the whole set of alternatives. After a recommendation has been made, learning processes evaluate the user's interaction with the system to find out, in a dynamic and unsupervised way, whether the user profile on which the recommendation process relies needs to be updated with new preferences. The work detailed in this document also includes extensive evaluation and testing of all the elements that take part in the recommendation and learning processes.
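
    The Unbalanced LOWA operator itself is defined in the thesis and works on linguistic labels; as a rough illustration of the aggregate-then-rank step it supports, the sketch below uses a plain ordered weighted averaging (OWA) over normalised numeric attribute scores. The alternatives, weights, and scores are made-up assumptions, not the thesis's operator or data.

```python
# Illustrative sketch: rank alternatives by aggregating per-attribute scores
# with an ordered weighted averaging (OWA) operator. This numeric OWA is only
# a stand-in for the thesis's linguistic Unbalanced LOWA operator.

def owa(scores, weights):
    """Aggregate [0, 1] scores with OWA: sort descending, then weight."""
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

def rank_alternatives(alternatives, weights):
    """Return alternatives sorted by aggregated score, best first."""
    scored = [(name, owa(scores, weights)) for name, scores in alternatives.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical alternatives, each described by three normalised attribute scores.
    alternatives = {
        "hotel_a": [0.9, 0.4, 0.7],
        "hotel_b": [0.6, 0.8, 0.5],
        "hotel_c": [0.3, 0.9, 0.9],
    }
    weights = [0.5, 0.3, 0.2]  # illustrative OWA weights, summing to 1
    for name, score in rank_alternatives(alternatives, weights):
        print(f"{name}: {score:.2f}")
```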

    Semantics-based approach for generating partial views from linked life-cycle highway project data

    The purpose of this dissertation is to develop methods that can assist data integration and extraction from heterogeneous sources generated throughout the life-cycle of a highway project. In the era of computerized technologies, project data is largely available in digital format. Due to the fragmented nature of the civil infrastructure sector, digital data are created and managed separately by different project actors in proprietary data warehouses. Differences in data structure and semantics greatly hinder the exchange and full reuse of digital project data. To address these issues, this dissertation carries out the following three studies. The first study develops a framework for interconnecting heterogeneous life-cycle project data into a unified, linked data space. This ontology-based framework consists of two phases: (1) translating proprietary datasets into homogeneous RDF data graphs; and (2) connecting the separate data networks to each other. Three domain ontologies for the design, construction, and asset condition survey phases are developed to support data transformation. A merged ontology that integrates the domain ontologies is constructed to provide guidance on how to connect data nodes from the domain graphs. The second study deals with terminology inconsistency between data sources. An automated method is developed that employs Natural Language Processing (NLP) and machine learning techniques to support constructing a domain-specific lexicon from design manuals. The method uses pattern rules to extract technical terms from texts and learns their representation vectors using a neural-network-based word embedding approach. The study also develops an integrated method combining minimally supervised machine learning, clustering analysis, and word vectors to compute term semantics and classify the relations between terms in the target lexicon. In the last study, a data retrieval technique for extracting subsets of an XML civil data schema is designed and tested. The algorithm takes a keyword input from the end user and returns a ranked list of the most relevant XML branches. This study utilizes a lexicon of the highway domain generated in the second study to analyze the semantics of the end user's keywords. A context-based similarity measure is introduced to evaluate the relevance between a given branch in the source schema and the user query. The methods and algorithms resulting from this research were tested using case studies and empirical experiments. The results indicate that the study successfully addresses the heterogeneity in the structure and terminology of the data and enables fast extraction of data sub-models. The study is expected to enhance the efficiency of reusing digital data generated throughout the project life-cycle and to contribute to the transition from paper-based to digital project delivery for civil infrastructure projects.
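
    The dissertation's domain ontologies are not reproduced in the abstract; as a minimal sketch of the first phase (mapping a proprietary record into an RDF graph), the example below uses the rdflib library with a placeholder namespace, class, and properties. All of these names, and the choice of rdflib itself, are assumptions for illustration only.

```python
# Illustrative sketch of phase (1): translating a proprietary tabular record
# into RDF triples. The namespace, class, and property names are placeholders,
# not the ontologies developed in the dissertation.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/highway#")

def record_to_rdf(record):
    """Map a dict describing a bridge element to a small RDF graph."""
    g = Graph()
    g.bind("ex", EX)
    subject = EX[record["id"]]
    g.add((subject, RDF.type, EX.BridgeElement))
    g.add((subject, EX.material, Literal(record["material"])))
    g.add((subject, EX.conditionRating, Literal(record["condition"])))
    return g

if __name__ == "__main__":
    design_record = {"id": "girder_17", "material": "steel", "condition": 7}
    print(record_to_rdf(design_record).serialize(format="turtle"))
```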

    Security information management with frame-based attack presentation and first-order reasoning

    The Internet has grown by several orders of magnitude in recent years, and this growth has escalated the importance of computer security. Intrusion Detection Systems (IDS) are used to protect computer networks; however, the overwhelming flow of log data generated by an IDS hampers security administrators from uncovering new insights and hidden attack scenarios. Security Information Management (SIM) is a growing area of interest for intrusion detection. The research work in this dissertation explores the semantics of attack behaviors and designs Frame-based Attack Representation and First-order logic Automatic Reasoning (FAR-FAR) using linguistics and First-order Logic (FOL) based approaches. Techniques based on linguistics can provide efficient solutions for acquiring semantic information from alert contexts, while FOL can tackle a wide variety of problems in attack scenario reasoning and querying. In FAR-FAR, the modified case grammar PCTCG is used to convert raw alerts into frame-structured alert streams, and the alert semantic network 2-AASN is used to generate the attack scenarios, which can then inform the security administrator. Based on the alert contexts and attack ontology, the Space Vector Model (SVM) is applied to categorize the intrusion stages. Furthermore, a robust Variant Packet Sending-interval Link Padding (VPSLP) algorithm is proposed to protect the links between the IDS sensors and the FAR-FAR agents from traffic analysis attacks. Recent measurements and studies have demonstrated that real network traffic exhibits statistical self-similarity over several time scales. A bursty traffic anomaly detection method, Multi-Time scaling Detection (MTD), is proposed to statistically analyze the network traffic's Histogram Feature Vector and detect traffic anomalies.
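
    The PCTCG case grammar itself is not described in the abstract; as a minimal sketch of what "converting raw alerts into frame-structured alert streams" can look like, the example below parses a toy alert line into fixed case-like slots. The alert format, slot names, and regular expression are assumptions, not the dissertation's representation.

```python
# Illustrative sketch of frame-structured alerts: a raw IDS alert line is
# parsed into case-like slots (agent, object, action, time) as a stand-in
# for the PCTCG-based frame representation used in the dissertation.
import re

ALERT_PATTERN = re.compile(
    r"(?P<time>\S+)\s+(?P<src>\S+)\s+->\s+(?P<dst>\S+)\s+(?P<signature>.+)"
)

def alert_to_frame(raw_alert):
    """Parse one raw alert line into a frame of case-like slots."""
    match = ALERT_PATTERN.match(raw_alert)
    if match is None:
        return None
    return {
        "AGENT": match.group("src"),         # attacking host
        "OBJECT": match.group("dst"),        # targeted host
        "ACTION": match.group("signature"),  # reported attack signature
        "TIME": match.group("time"),
    }

if __name__ == "__main__":
    raw = "2024-01-05T10:02:11 10.0.0.5 -> 192.168.1.20 SCAN nmap TCP"
    print(alert_to_frame(raw))
```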

    Curvature-based sparse rule base generation for fuzzy rule interpolation

    Fuzzy logic has been widely and successfully utilised in many real-world applications. The most common application of fuzzy logic is the rule-based fuzzy inference system, which is mainly composed of two parts: an inference engine and a fuzzy rule base. Conventional fuzzy inference systems always require a rule base that fully covers the entire problem domain (i.e., a dense rule base). Fuzzy rule interpolation (FRI) makes inference possible with sparse rule bases, which may not cover parts of the problem domain. In addition to extending the applicability of fuzzy inference systems, fuzzy interpolation can also be used to reduce system complexity for over-complex fuzzy inference systems. There are typically two ways to generate fuzzy rule bases, i.e., knowledge-driven and data-driven approaches. Almost all of these approaches target only dense rule bases for conventional fuzzy inference systems. Knowledge-driven methods may be negatively affected by the limited availability of expert knowledge, and expert knowledge may be subjective, whilst redundancy often exists in fuzzy rule-based models that are acquired from numerical data. Note that various rule base reduction approaches have been proposed, but they are all based on certain similarity measures and are likely to cause performance deterioration along with the size reduction. This project, for the first time, applies curvature values to distinguish important features and instances in a dataset, in order to support the construction of a neat and concise sparse rule base for fuzzy rule interpolation. In addition to working in a three-dimensional problem space, the work also extends the natural three-dimensional curvature calculation to problems with higher dimensions, which greatly broadens the applicability of the proposed approach. As a result, the proposed approach alleviates the ‘curse of dimensionality’ and helps to reduce the computational cost of fuzzy inference systems. The proposed approach has been validated and evaluated on three real-world applications. The experimental results demonstrate that it is able to generate sparse rule bases with fewer rules yet better performance, which confirms the power of the proposed system. Beyond fuzzy rule interpolation, the proposed curvature-based approach can also be readily used as a general feature selection tool with other machine learning approaches, such as classifiers.
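
    The thesis generalises the three-dimensional curvature calculation to higher dimensions; as a simplified sketch of the three-dimensional case only, the example below estimates the mean curvature of a gridded response surface z = f(x, y) by finite differences and keeps the high-curvature points as candidate rule locations. The Monge-patch curvature formula, the toy surface, and the threshold are simplifying assumptions, not the thesis's exact procedure.

```python
# Illustrative sketch: estimate surface curvature on a gridded 3D problem
# (two inputs, one output) and keep only grid points whose curvature exceeds
# a threshold, as candidates for sparse rules.
import numpy as np

def mean_curvature(z, spacing=1.0):
    """Mean curvature of the surface z = f(x, y) sampled on a regular grid."""
    zy, zx = np.gradient(z, spacing)      # first derivatives
    zxy, zxx = np.gradient(zx, spacing)   # second derivatives of zx
    zyy, _ = np.gradient(zy, spacing)     # second derivative of zy
    num = (1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
    z = np.exp(-(x**2 + y**2))                     # a toy response surface
    h = np.abs(mean_curvature(z, spacing=x[0, 1] - x[0, 0]))
    important = h > 0.3                            # arbitrary curvature threshold
    print(f"selected {important.sum()} of {important.size} grid points")
```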

    Learning predictive models from massive, semantically disparate data

    Machine learning approaches offer some of the most successful techniques for constructing predictive models from data. However, applying such techniques in practice requires overcoming several challenges: the infeasibility of centralized access to the data because some data sets exceed the memory available to the learner, the distributed nature of data, access restrictions, data fragmentation, semantic disparities between the data sources, and data sources that evolve spatially or temporally (e.g., data streams and genomic data sources to which new data is continuously submitted). Learning using statistical queries and semantic correspondences that present a unified view of disparate data sources to the learner offers a powerful general framework for addressing some of these challenges. Against this background, this thesis describes (1) approaches for dealing with missing values in statistical-query-based algorithms for building predictors (Naive Bayes and decision trees), together with techniques to minimize the number of required queries in such a setting; (2) sufficient-statistics-based algorithms for constructing and updating sequence classifiers; (3) reductions of several aspects of learning from semantically disparate data sources (such as (a) how errors in mappings affect the accuracy of the learned model and (b) how to choose an optimal mapping from among a set of alternative expert-supplied or automatically generated mappings) to the well-studied problems of domain adaptation and learning in the presence of noise; and (4) software for learning predictive models from semantically disparate data.
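
    As a minimal sketch of the statistical-query idea mentioned above, the example below trains a Naive Bayes classifier purely from count queries, so the learner never touches the raw records directly. The answer_count_query function and its signature are stand-ins for whatever query interface a data source would actually expose; they are assumptions for illustration, not the thesis's implementation.

```python
# Illustrative sketch: Naive Bayes learned from count ("statistical") queries
# plus Laplace smoothing, rather than from direct access to the raw records.
from collections import Counter
import math

def answer_count_query(dataset, feature_index, feature_value, label):
    """Toy query answerer: how many records have this value and this label?"""
    return sum(1 for x, y in dataset
               if y == label and x[feature_index] == feature_value)

def train_naive_bayes(dataset, n_features, feature_values, labels):
    """Build class priors and per-feature likelihoods from counts alone."""
    label_counts = Counter(y for _, y in dataset)
    total = sum(label_counts.values())
    priors = {y: label_counts[y] / total for y in labels}
    likelihoods = {}
    for y in labels:
        for i in range(n_features):
            for v in feature_values[i]:
                count = answer_count_query(dataset, i, v, y)
                # Laplace smoothing avoids zero probabilities
                likelihoods[(i, v, y)] = (count + 1) / (label_counts[y] + len(feature_values[i]))
    return priors, likelihoods

def predict(x, priors, likelihoods, labels):
    scores = {y: math.log(priors[y]) +
                 sum(math.log(likelihoods[(i, v, y)]) for i, v in enumerate(x))
              for y in labels}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    data = [(("sunny", "hot"), "no"), (("rain", "mild"), "yes"),
            (("sunny", "mild"), "yes"), (("rain", "hot"), "no")]
    values = {0: ["sunny", "rain"], 1: ["hot", "mild"]}
    priors, like = train_naive_bayes(data, 2, values, ["yes", "no"])
    print(predict(("sunny", "hot"), priors, like, ["yes", "no"]))
```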

    Cognition-based approaches for high-precision text mining

    This research improves the precision of information extraction from free-form text via the use of cognition-based approaches to natural language processing (NLP). Cognition-based approaches are an important, and relatively new, area of research in NLP and search, as well as linguistics, and they enable significant improvements in both the breadth and depth of knowledge extracted from text. This research contributes a cognitive approach to automated concept recognition. Cognitive approaches to search, also called concept-based search, have been shown to improve search precision. Given the tremendous amount of electronic text generated in our digital and connected world, cognitive approaches enable substantial opportunities in knowledge discovery. The generation and storage of electronic text is ubiquitous, hence opportunities for improved knowledge discovery span virtually all knowledge domains. While cognition-based search offers superior approaches, challenges exist due to the need to mimic, even in the most rudimentary way, the extraordinary powers of human cognition. This research addresses these challenges in the key area of a cognition-based approach to automated concept recognition, and it also resulted in a semantic processing system framework for use in applications in any knowledge domain. Confabulation theory was applied to the problem of automated concept recognition. This is a relatively new theory of cognition that uses a non-Bayesian measure, called cogency, to predict the results of human cognition. An innovative distance measure derived from cogent confabulation, called inverse cogency, is introduced to rank-order candidate concepts during the recognition process. When used with a multilayer perceptron, it improved the precision of concept recognition by 5% over published benchmarks, and additional precision improvements are anticipated. These research steps build a foundation for cognition-based, high-precision text mining. In the long term, this foundation is expected to enable a cognition-based approach to automated ontology learning, which will mimic human language cognition and, in turn, enable the practical use of cognition-based approaches in virtually any knowledge domain.
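
    The exact definition of inverse cogency is given in the dissertation, not in this abstract. In cogent confabulation, the cogency of a candidate concept given context symbols is driven by the product of the conditional probabilities of those symbols given the candidate; the sketch below uses the negative log of that product as an assumed distance-like stand-in and ranks candidates with it. The probability table and concept names are invented for illustration.

```python
# Illustrative sketch of cogency-style candidate ranking: smaller "distance"
# means the candidate concept better explains the observed context symbols.
import math

def inverse_cogency(candidate, context, cond_prob, floor=1e-6):
    """Assumed distance-like score: -sum of log p(context_symbol | candidate)."""
    return -sum(math.log(cond_prob.get((a, candidate), floor)) for a in context)

def rank_candidates(candidates, context, cond_prob):
    return sorted(candidates, key=lambda c: inverse_cogency(c, context, cond_prob))

if __name__ == "__main__":
    # Hypothetical p(context_word | concept) table, as if learned from a corpus.
    cond_prob = {
        ("engine", "car"): 0.20, ("wheel", "car"): 0.15,
        ("engine", "boat"): 0.05, ("wheel", "boat"): 0.01,
    }
    print(rank_candidates(["car", "boat"], ["engine", "wheel"], cond_prob))
```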

    Learning ontology aware classifiers

    Many applications of data-driven knowledge discovery processes call for the exploration of data from multiple points of view that reflect different ontological commitments on the part of the learner. Of particular interest in this context are algorithms for learning classifiers from ontologies and data. Against this background, my dissertation research is aimed at the design and analysis of algorithms for the construction of robust, compact, accurate, and ontology-aware classifiers. We have precisely formulated the problem of learning pattern classifiers from attribute value taxonomies (AVT) and partially specified data, and we have designed and implemented efficient and theoretically well-founded AVT-based classifier learners. Based on a general strategy of hypothesis refinement to search a generalized hypothesis space, our AVT-guided learning algorithm adopts a general learning framework that takes into account the tradeoff between the complexity and the accuracy of the predictive models, which enables us to learn a classifier that is both compact and accurate. We have also extended our approach to learning compact and accurate classifiers from semantically heterogeneous data sources. We present a principled way to reduce the problem of learning from semantically heterogeneous data to the problem of learning from distributed, partially specified data by reconciling semantic heterogeneity using AVT mappings, and we describe a sufficient-statistics-based solution.
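
    As a minimal sketch of the core operation an AVT-guided learner relies on, the example below represents one attribute's value taxonomy as child-to-parent links and rolls a concrete value up to a chosen level of abstraction (a "cut" through the taxonomy). The taxonomy contents and the cut are made-up examples, not the dissertation's datasets or refinement algorithm.

```python
# Illustrative sketch of an attribute value taxonomy (AVT) and of abstracting
# a concrete attribute value up to a chosen cut in the taxonomy.

# Child -> parent links for one attribute's taxonomy (hypothetical).
AVT = {
    "golden_retriever": "dog", "poodle": "dog",
    "siamese": "cat", "persian": "cat",
    "dog": "mammal", "cat": "mammal",
    "mammal": "animal",
}

def abstract_value(value, cut):
    """Walk up the taxonomy until reaching a value in the chosen cut."""
    while value not in cut and value in AVT:
        value = AVT[value]
    return value

if __name__ == "__main__":
    cut = {"dog", "cat"}                             # current abstraction level
    print(abstract_value("golden_retriever", cut))   # -> dog
    print(abstract_value("persian", cut))            # -> cat
```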

    Prediction markets supporting technology assessment

    In this thesis, we study the use of prediction markets for technology assessment. We focus in particular on their ability to assess complex issues, the design constraints required for such applications, and their efficacy compared to traditional techniques. To achieve this, we followed a design science research paradigm, iteratively developing, instantiating, evaluating, and refining the design of our artifacts. This allowed us to make multiple contributions, both practical and theoretical. We first showed that prediction markets are adequate for properly assessing complex issues. We also developed a typology of design factors and design propositions for using these markets in a technology assessment context. Then, we showed that they are able to solve some issues related to the R&D portfolio management process, and we proposed a roadmap for their implementation. Finally, by comparing the instantiation and the results of a multi-criteria decision method and a prediction market, we showed that the latter is more efficient while offering similar results. We also proposed a framework for comparing forecasting methods, to identify the constraints based on contingency factors. In conclusion, our research opens a new field of application for prediction markets and should help hasten their adoption by enterprises.
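
    The abstract does not state which market mechanism the thesis's prototypes used; as a generic sketch of how a prediction market turns trades into an aggregated probability estimate, the example below prices a two-outcome market with Hanson's logarithmic market scoring rule (LMSR). The liquidity parameter and trade sizes are arbitrary illustrative values.

```python
# Illustrative sketch: a two-outcome prediction market priced with the
# logarithmic market scoring rule (LMSR). Buying shares in an outcome moves
# its price, which can be read as the market's probability estimate.
import math

def cost(quantities, b=100.0):
    """LMSR cost function over outstanding shares per outcome."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def prices(quantities, b=100.0):
    """Current market probability estimate for each outcome."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def buy(quantities, outcome, shares, b=100.0):
    """Price a trade as the cost difference before and after buying shares."""
    new_q = list(quantities)
    new_q[outcome] += shares
    return new_q, cost(new_q, b) - cost(quantities, b)

if __name__ == "__main__":
    q = [0.0, 0.0]              # outcomes: technology succeeds / fails
    q, paid = buy(q, 0, 30)     # a trader backs "succeeds" with 30 shares
    print(f"paid {paid:.2f}, market now says {prices(q)[0]:.2%} chance of success")
```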