
    Recovering Tech's Humanity


    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which lies at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and a limited ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, to detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise.
Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
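    The rule-based event correlation the abstract identifies as central can be illustrated with a toy sketch. Everything below (the event format, the "repeated login failure" rule, the threshold and the window) is an invented example for illustration, not a system from the report:

```python
from collections import deque

def correlate(events, threshold=3, window=60):
    """Flag a misuse when `threshold` failures from one source fall
    within a `window`-second correlation window.

    events: iterable of (timestamp, source, kind) tuples."""
    recent = {}                      # source -> deque of failure timestamps
    alerts = []
    for ts, source, kind in events:
        if kind != "login_failure":  # the rule only watches one event type
            continue
        q = recent.setdefault(source, deque())
        q.append(ts)
        while q and ts - q[0] > window:   # drop events outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((source, ts))
            q.clear()                # one burst raises one alert
    return alerts

stream = [(0, "10.0.0.5", "login_failure"),
          (10, "10.0.0.5", "login_failure"),
          (20, "10.0.0.5", "login_failure"),
          (500, "10.0.0.9", "login_failure")]
print(correlate(stream))  # [('10.0.0.5', 20)]
```

    Even this toy rule shows both drawbacks the survey notes: the threshold and window must be hand-maintained, and a misuse pattern the rule was not written for slips through entirely (a false negative).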

    Artificial Immune System based Firefly Approach for Web Page Classification

    The World Wide Web is now a popular medium through which people around the world spread and gather information of all kinds. However, dynamically generated web pages on many sites also contain undesired information, called noisy or irrelevant content. Web publishing techniques create numerous information sources published as HTML pages, in which elements such as navigation panels, tables of contents, advertisements, copyright statements, service catalogues and privacy policies are considered irrelevant content. This paper discusses various methods for web page classification and proposes a new approach to content extraction, based on a firefly feature extraction method combined with danger theory, for web page classification.

    Replication issues in syntax-based aspect extraction for opinion mining

    Reproducing experiments is an important instrument for validating previous work and building upon existing approaches. It has been tackled numerous times in different areas of science. In this paper, we introduce an empirical replicability study of three well-known algorithms for syntax-centric aspect-based opinion mining. We show that reproducing results continues to be a difficult endeavour, mainly due to the lack of details regarding preprocessing and parameter settings, as well as the absence of available implementations that clarify these details. We consider these to be important threats to the validity of research in the field, especially when compared to other problems in NLP where public datasets and code availability are critical validity components. We conclude by encouraging code-based research, which we believe has a key role in helping researchers better understand the state of the art and generate continuous advances.
    Comment: Accepted in the EACL 2017 SR

    Information Fusion for Anomaly Detection with the Dendritic Cell Algorithm

    Dendritic cells are antigen-presenting cells that provide a vital link between the innate and adaptive immune systems, performing the initial detection of pathogenic invaders. Research into this family of cells has revealed that they perform information fusion which directs immune responses. We have derived a Dendritic Cell Algorithm based on the functionality of these cells, modelling the biological signals and differentiation pathways to build a control mechanism for an artificial immune system. We present algorithmic details in addition to experimental results from applying the algorithm to anomaly detection, specifically the detection of port scans. The results show that the Dendritic Cell Algorithm is successful at detecting port scans.
    Comment: 21 pages, 17 figures, Information Fusion
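    The signal-fusion step at the heart of the algorithm can be sketched minimally: each cell combines PAMP, danger and safe signals through a weight matrix into costimulation, mature and semi-mature outputs, and an antigen is labelled anomalous when the mature (inflammatory) context dominates. The weight values and inputs below are illustrative assumptions, not the published parameters:

```python
# Illustrative weights only; the published DCA uses empirically derived values.
WEIGHTS = {
    "csm":    {"pamp": 2.0, "danger": 1.0, "safe": 2.0},   # costimulation
    "mature": {"pamp": 2.0, "danger": 1.0, "safe": -2.0},  # inflammatory context
    "semi":   {"pamp": 0.0, "danger": 0.0, "safe": 1.0},   # normal context
}

def fuse(pamp, danger, safe):
    """Weighted-sum fusion of the three input signals."""
    signals = {"pamp": pamp, "danger": danger, "safe": safe}
    return {out: sum(w * signals[s] for s, w in ws.items())
            for out, ws in WEIGHTS.items()}

def classify(samples):
    """Label an antigen anomalous if, summed over every sample it was
    exposed to, the mature context outweighs the semi-mature one."""
    mature = semi = 0.0
    for pamp, danger, safe in samples:
        out = fuse(pamp, danger, safe)
        mature += out["mature"]
        semi += out["semi"]
    return "anomalous" if mature > semi else "normal"

print(classify([(0.9, 0.8, 0.1), (0.7, 0.6, 0.2)]))  # anomalous
print(classify([(0.0, 0.1, 0.9)]))                   # normal
```

    The safe signal's negative weight on the mature output is what lets evidence of normal behaviour suppress an anomaly verdict, which is the fusion behaviour the abstract describes.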

    Topic Modelling of Swedish Newspaper Articles about Coronavirus: a Case Study using Latent Dirichlet Allocation Method

    Topic Modelling (TM) is a research branch of natural language understanding (NLU) and natural language processing (NLP) that facilitates insightful analysis of large documents and datasets, such as summarisation of the main topics and their changes. This kind of discovery is becoming increasingly popular in real-life applications due to its impact on big data analytics. In this study, spanning the social-media and healthcare domains, we apply the popular Latent Dirichlet Allocation (LDA) method to model topic changes in Swedish newspaper articles about Coronavirus. We describe the corpus we created, comprising 6515 articles, the methods applied, and statistics on topic changes over a period of approximately one year and two months, from 17th January 2020 to 13th March 2021. We hope this work can serve as an asset for grounding applications of topic modelling and can inspire similar case studies in an era of pandemics, supporting socio-economic impact research as well as clinical and healthcare analytics. Our data and source code are openly available at https://github.com/poethan/Swed_Covid_TM
    Keywords: Latent Dirichlet Allocation (LDA); Topic Modelling; Coronavirus; Pandemics; Natural Language Understanding; BERT-topic
    Comment: 14 pages, 14 figures
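    The mechanics behind the LDA method the study applies can be conveyed with a minimal collapsed Gibbs sampler. The toy corpus, topic count and priors below are illustrative assumptions, not the paper's configuration:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA on tokenised documents."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})       # vocabulary size
    ndk = [[0] * K for _ in docs]               # topic counts per document
    nkw = [defaultdict(int) for _ in range(K)]  # word counts per topic
    nk = [0] * K                                # total tokens per topic
    z = []                                      # topic assignment per token
    for di, doc in enumerate(docs):             # random initialisation
        zs = []
        for w in doc:
            k = rng.randrange(K)
            zs.append(k)
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(iters):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                k = z[di][wi]                   # remove current assignment
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # full conditional p(z = k | all other assignments)
                weights = [(ndk[di][j] + alpha)
                           * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(K)]
                r = rng.random() * sum(weights)
                for j, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = j
                        break
                z[di][wi] = k                   # add the new assignment back
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

docs = [["virus", "vaccine", "virus"], ["election", "vote", "vote"],
        ["virus", "vaccine"], ["vote", "election"]]
ndk, nkw = lda_gibbs(docs)
# each row of ndk gives a document's topic mixture; on a corpus this
# clean, the two themes should separate into the two topics
```

    In practice a study like this would use a library implementation (gensim and scikit-learn both expose LDA); the sampler above just makes the counting machinery explicit.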

    Trademark Vigilance in the Twenty-First Century: An Update

    The trademark laws impose a duty upon brand owners to be vigilant in policing their marks, lest they be subject to the defense of laches, a reduced scope of protection, or even death by genericide. Before the millennium, it was relatively manageable for brand owners to police the retail marketplace for infringements and counterfeits. The Internet changed everything. In ways unforeseen, the Internet has unleashed a tremendously damaging cataclysm upon brands: online counterfeiting. It has created a virtual pipeline directly from factories in China to the American consumer shopping from home or work. The very online platforms that make Internet shopping so convenient, and that have enabled brands to expand their sales, have exposed buyers to unwittingly purchasing fake goods that can jeopardize their health and safety as well as brand reputation. This Article updates a 1999 panel discussion titled Trademark Vigilance in the Twenty-First Century, held at Fordham Law School, and explains the ways in which vigilance has changed since the Internet became an inescapable feature of everyday life. It provides trademark owners with a road map for monitoring brand abuse online and solutions for taking action against infringers, counterfeiters and others who threaten to undermine brand value.

    A new trend for knowledge-based decision support systems design

    Knowledge-based decision support systems (KBDSS) have evolved greatly over the last few decades. The key technologies underpinning the development of KBDSS can be classified into three categories: technologies for knowledge modelling and representation, technologies for reasoning and inference, and web-based technologies. In the meantime, service systems have emerged and become increasingly important to value-adding activities in the current knowledge economy. This paper provides a review of the recent advances in the three types of technologies, as well as the main application domains of KBDSS as service systems. Based on the examination of the literature, future research directions are recommended for the development of KBDSS in general, and in particular to support decision-making in the service industry.