15,868 research outputs found

    Vulnerability anti-patterns: a timeless way to capture poor software practices (Vulnerabilities)

    Get PDF
    There is a distinct communication gap between the software engineering and cybersecurity communities when it comes to addressing recurring security problems, known as vulnerabilities. Many vulnerabilities are caused by software errors introduced by software developers. Insecure software development practices are common due to a variety of factors, which include inefficiencies within existing knowledge transfer mechanisms based on vulnerability databases (VDBs), software developers perceiving security as an afterthought, and lack of consideration of security as part of the software development lifecycle (SDLC). The resulting communication gap also prevents developers and security experts from successfully sharing essential security knowledge. The cybersecurity community makes its expert knowledge available in forms including vulnerability databases such as CAPEC and CWE, and pattern catalogues such as Security Patterns, Attack Patterns, and Software Fault Patterns. However, these sources are not effective at providing software developers with an understanding of how malicious hackers can exploit vulnerabilities in the software systems they create. As developers are familiar with pattern-based approaches, this paper proposes the use of Vulnerability Anti-Patterns (VAPs) to transfer usable vulnerability knowledge to developers, bridging the communication gap between security experts and software developers. The contribution of this paper is twofold: (1) it proposes a new pattern template, the Vulnerability Anti-Pattern, which uses anti-patterns rather than patterns to capture and communicate knowledge of existing vulnerabilities, and (2) it proposes a catalogue of Vulnerability Anti-Patterns based on the most commonly occurring vulnerabilities, which software developers can use to learn how malicious hackers can exploit errors in software.
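
    To make the idea concrete, here is a minimal sketch of what one entry in such an anti-pattern catalogue might look like. The field names and the SQL-injection example are illustrative assumptions, not the template actually proposed in the paper.

        # A hypothetical Vulnerability Anti-Pattern entry; field names are
        # illustrative assumptions, not the paper's actual template.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class VulnerabilityAntiPattern:
            name: str                    # anti-pattern name
            poor_practice: str           # the developer error that creates it
            exploitation: str            # how an attacker abuses the error
            consequences: str            # impact when exploited
            refactored_solution: str     # the secure practice to adopt instead
            related_cwe: List[str] = field(default_factory=list)

        sql_injection = VulnerabilityAntiPattern(
            name="Unsanitised Query Construction",
            poor_practice="Building SQL statements by concatenating user input.",
            exploitation="Attacker injects SQL fragments that alter the query logic.",
            consequences="Data disclosure, tampering, or authentication bypass.",
            refactored_solution="Use parameterised queries / prepared statements.",
            related_cwe=["CWE-89"],
        )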

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    Get PDF
    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is considered, covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
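
    As a rough illustration of the hybrid direction the report favours, the sketch below pairs a programmed rule component (known misuses, prone to false negatives) with a learned profile of normal behaviour (unknown misuses, prone to false positives). The event fields, rules, and z-score threshold are invented for illustration.

        # A toy hybrid misuse detector: rules flag known misuse signatures,
        # while a learned profile of normal event volume flags deviations.
        # Event fields, rules, and the threshold are illustrative assumptions.
        from statistics import mean, stdev

        KNOWN_MISUSE_RULES = [
            lambda e: e["type"] == "call" and e["duration"] > 7200,   # marathon call
            lambda e: e["type"] == "login" and e["failures"] >= 5,    # brute force
        ]

        def rule_alerts(events):
            """Programmed component: catches misuses we know how to describe."""
            return [e for e in events if any(rule(e) for rule in KNOWN_MISUSE_RULES)]

        def anomaly_alerts(hourly_counts, baseline, threshold=3.0):
            """Learned component: flags hours whose event volume deviates
            from the normal profile (a simple z-score against past counts)."""
            mu, sigma = mean(baseline), stdev(baseline)
            return [h for h, c in enumerate(hourly_counts)
                    if sigma > 0 and abs(c - mu) / sigma > threshold]

        events = [{"type": "login", "failures": 6, "user": "alice"}]
        print(rule_alerts(events))                          # caught by a rule
        print(anomaly_alerts([12, 95], baseline=[10, 11, 13, 12, 9]))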

    Using Ontologies for the Design of Data Warehouses

    Get PDF
    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have identified a set of situations encountered in real-world projects in which we believe that the use of ontologies would improve several aspects of the design of data warehouses. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefit of using ontologies to overcome them. This work is a starting point for discussing the convenience of using ontologies in data warehouse design.
    Comment: 15 pages, 2 figures
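
    One hedged illustration of the kind of support the authors have in mind: a domain ontology can make a dimension's roll-up hierarchy explicit so that a proposed multidimensional design can be checked against it. The namespace and class names below are invented, and rdflib is used only as a convenient RDF toolkit; this is not the article's method.

        # A minimal sketch: encoding a time-dimension roll-up hierarchy as an
        # RDF/S ontology so design decisions can be validated against domain
        # knowledge. Namespace and names are illustrative assumptions.
        from rdflib import Graph, Namespace, RDF, RDFS

        DW = Namespace("http://example.org/dw#")
        g = Graph()
        g.bind("dw", DW)

        # Declare dimension levels and the roll-up relations between them.
        for level in ("Day", "Month", "Year"):
            g.add((DW[level], RDF.type, RDFS.Class))
        g.add((DW.Day, DW.rollsUpTo, DW.Month))
        g.add((DW.Month, DW.rollsUpTo, DW.Year))

        # A designer's proposed aggregation path can now be checked for support.
        proposed = (DW.Day, DW.rollsUpTo, DW.Year)
        print("directly supported" if proposed in g else "needs intermediate level")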

    Resist, comply or workaround? An examination of different facets of user engagement with information systems

    Get PDF
    This paper provides a summary of studies of user resistance to Information Technology (IT) and identifies workaround activity as an understudied and distinct, but related, phenomenon. Previous categorizations of resistance have largely failed to address the relationships between the motivations for divergences from procedure and the associated workaround activity. This paper develops a composite model of resistance/workaround derived from two case study sites. We find four key antecedent conditions, derived from both positive and negative resistance rationales; identify associations and links to various resultant workaround behaviours; and provide supporting Chains of Evidence from the two case studies.

    Factors Influencing Revenue Collection for Preventative Maintenance of Community Water Systems: A Fuzzy-Set Qualitative Comparative Analysis

    Get PDF
    This study analyzed combinations of conditions that influence regular payments for water service in resource-limited communities. To do so, the study investigated 16 communities participating in a new preventative maintenance program in the Kamuli District of Uganda under a public–private partnership framework. First, this study identified conditions posited as important for collective payment compliance from a literature review. Then, drawing from data included in a water source report and by conducting semi-structured interviews with households and water user committees (WUC), we identified communities that were compliant with, or suspended from, preventative maintenance service payments. Through qualitative analyses of these data and case knowledge, we identified and characterized conditions that appeared to contribute to these outcomes. Then, we employed fuzzy-set qualitative comparative analysis (fsQCA) to determine the combinations of conditions that led to payment compliance. Overall, the findings from this study reveal distinct pathways of conditions that impact payment compliance and reflect the multifaceted nature of water point sustainability. Practically, the findings identify the processes needed for successful payment compliance, which include a strong WUC with proper support and training, user perceptions that the water quality is high and available in adequate quantities, ongoing support, and a lack of nearby water sources. A comprehensive understanding of the combined factors that lead to payment compliance can improve future preventative maintenance programs, guide the design of water service arrangements, and ultimately increase water service sustainability.
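
    For readers unfamiliar with fsQCA, the sketch below shows the standard fuzzy-set consistency and coverage measures (after Ragin) that underlie this kind of analysis: a combination of conditions is their fuzzy intersection (element-wise minimum), and consistency measures how nearly that combination is a subset of the outcome. The condition names and membership scores are invented; they are not the study's data.

        # Fuzzy-set consistency and coverage, as used in fsQCA (Ragin).
        # Membership scores below are hypothetical, not the study's data.

        def fuzzy_and(*memberships):
            """Fuzzy intersection of conditions: element-wise minimum."""
            return [min(vals) for vals in zip(*memberships)]

        def consistency(condition, outcome):
            """Degree to which the condition is a subset of the outcome."""
            return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

        def coverage(condition, outcome):
            """Share of the outcome accounted for by the condition."""
            return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

        # Hypothetical calibrated memberships for four communities.
        strong_wuc   = [0.9, 0.8, 0.2, 0.6]   # strong water user committee
        good_quality = [0.8, 0.9, 0.4, 0.7]   # perceived water quality
        pays         = [0.9, 0.7, 0.1, 0.8]   # payment compliance (outcome)

        combo = fuzzy_and(strong_wuc, good_quality)
        print(consistency(combo, pays), coverage(combo, pays))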

    Research Data Management and the Canadian Academic Library: An Organizational Consideration of Data Management and Data Stewardship

    Get PDF
    Research data management (RDM) has become a professional imperative for Canada’s academic librarians. Recent policy considerations by our national research funding agencies, addressing the ability of Canadian universities to effectively manage the massive amounts of research data they now create, have helped library and university administrators recognize this gap in the research enterprise and identify RDM as a solution. RDM is not new to libraries, though. Rather, it draws on existing and evolving organizational functions in order to improve data collection, access, use, and preservation. A successful research data management service requires the skills and knowledge found in a library’s research liaisons, collections experts, policy analysts, IT experts, archivists, and preservationists. Like the library, research data management is not singular but multi-faceted. It requires collaboration, technology and policy analysis skills, and project management acumen. This paper examines research data management as a vital information, technical, and policy service in academic libraries today. It situates RDM not only as actions and services but also as a suite of responsibilities that require a high level of planning, collaboration, and judgment, thereby binding people to practice. It shows how RDM aligns with the skill sets and competencies of librarianship and illustrates how RDM spans the library’s organizational structure and intersects with campus stakeholders allied in the research enterprise.

    Improving knowledge about the risks of inappropriate uses of geospatial data by introducing a collaborative approach in the design of geospatial databases

    Get PDF
    Nowadays, the increased availability of geospatial information is a reality that many organizations, and even the general public, are trying to turn to financial advantage; the reusability of datasets is now a viable alternative that may help organizations achieve cost savings. The quality of these datasets may vary, and be debatable, depending on the usage context. The issue of geospatial data misuse becomes even more important because of the disparity between the different expertises of geospatial data end-users.
    Managing the risks of geospatial data misuse has been the subject of several studies over the past fifteen years. In this context, several approaches have been proposed to address these risks: some are preventive, while others are palliative and manage the risk after its consequences have occurred. However, these approaches are often based on ad hoc, non-systemic initiatives. Thus, during the design process of a geospatial database, risk analysis is not always carried out in accordance with the principles of requirements engineering or the guidelines and recommendations of ISO standards. In this thesis, we hypothesize that it is possible to define a new preventive approach for the identification and analysis of risks associated with inappropriate uses of geospatial data. We believe that the expertise and knowledge held by experts (i.e. geo-IT experts) and by professional users of geospatial data within their institutional roles (i.e. application domain experts) are key elements for assessing the risks of misuse of this data; hence the importance of enriching that knowledge. Thus, we review the design process of geospatial databases and propose a collaborative, user-centric approach to requirements analysis. Under this approach, expert and professional users are involved in a collaborative process that supports the a priori identification of cases of inappropriate use. Then, by reviewing research on risk analysis, we propose to systematically integrate risk analysis, via the Delphi technique, into the design process of geospatial databases. Finally, still within a collaborative approach, an ontological risk repository is proposed to enrich knowledge about the risks of data misuse and to disseminate this knowledge to designers, developers, and end-users. The approach is implemented on a web platform to put the concepts into practice and demonstrate their feasibility.
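
    As a hedged sketch of the consensus loop behind a Delphi round, applied here to rating misuse risks during geospatial database design: each round summarizes expert ratings per item by median and interquartile range (IQR), and low-IQR items are treated as consensus. The risk items, ratings, and IQR stopping rule are illustrative assumptions, not the thesis's actual protocol.

        # One round of a Delphi-style consensus process over risk ratings.
        # Items, scores, and the IQR threshold are illustrative assumptions.
        from statistics import median, quantiles

        def delphi_round(ratings):
            """Summarize one round: per-item median and interquartile range."""
            summary = {}
            for item, scores in ratings.items():
                q1, _, q3 = quantiles(scores, n=4)
                summary[item] = {"median": median(scores), "iqr": q3 - q1}
            return summary

        # Hypothetical 1-5 likelihood ratings from five experts for two risks.
        round1 = {
            "scale_mismatch": [4, 5, 3, 4, 4],
            "outdated_parcels": [2, 5, 1, 4, 3],
        }
        for item, stats in delphi_round(round1).items():
            # Low IQR signals consensus; high-IQR items go back for re-rating.
            verdict = "consensus" if stats["iqr"] <= 1 else "re-rate next round"
            print(item, stats, verdict)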

    08302 Abstracts Collection -- Countering Insider Threats

    Get PDF
    From July 20 to July 25, 2008, the Dagstuhl Seminar 08302 "Countering Insider Threats" was held at Schloss Dagstuhl, Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Full text link
    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.
    Comment: to appear in Paladyn. Journal of Behavioral Robotics; accepted on 27-10-201