
    Using Semantic Templates to Study Vulnerabilities Recorded in Large Software Repositories

    Software vulnerabilities allow an attacker to reduce a system's Confidentiality, Availability, and Integrity by exposing information, executing malicious code, and undermining the system functionalities that contribute to the overall system purpose and need. With new vulnerabilities discovered every day in a variety of applications and user environments, a systematic study of their characteristics is a subject of immediate need for the following reasons: the high rate at which information about past and new vulnerabilities accumulates makes it difficult to absorb and comprehend; rather than learning from past mistakes, similar types of vulnerabilities are observed repeatedly; and as the scale and complexity of current software grow, developers will need better mental models to sense where vulnerabilities may occur. While the software development community has put significant effort into capturing the artifacts related to a discovered vulnerability in organized repositories, much of this information is not amenable to meaningful analysis and requires deep, manual inspection. In the software assurance community, a body of knowledge that provides an enumeration of common weaknesses has been developed, but it is complicated and not readily usable for the study of vulnerabilities in specific projects and user environments. Likewise, the vulnerabilities discovered in different projects are collected in various databases with general metadata such as dates, people involved, and natural-language descriptions, but without links to other relevant knowledge they are hard to use for the purpose of understanding vulnerabilities. This research combines the information sources from these communities in a way that facilitates the study of vulnerabilities recorded in large software repositories. We introduce the notion of a Semantic Template to integrate the scattered information relevant to understanding and discovering vulnerabilities. We evaluate the use of semantic templates by applying them to analyze and annotate vulnerabilities recorded in software repositories from the Apache Web Server project. We refer to software repositories in a general sense that includes source code, version control data, bug reports, developer mailing lists, and project development websites. We derive semantic templates from community standards such as the Common Weakness Enumeration (CWE) and Common Vulnerabilities and Exposures (CVE); we rely on standards in order to facilitate the adoption, sharing, and interoperability of semantic templates. This research contributes a novel theory and corresponding mechanisms for the study of vulnerabilities in large software projects. To support these claims, we discuss our experiences and present our findings from the Apache Web Server project.
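
    As a rough illustrative sketch (not the authors' actual mechanism), a semantic template can be thought of as a small data structure that ties a recorded CVE entry to the CWE weakness types it instantiates and to the repository artifacts that document it. All field names and example values below are hypothetical.

        # Minimal sketch of a semantic template linking a vulnerability record
        # (CVE) to weakness types (CWE) and software repository artifacts.
        # Field names and example data are hypothetical illustrations.
        from dataclasses import dataclass, field


        @dataclass
        class SemanticTemplate:
            cve_id: str                    # entry from the CVE database
            cwe_ids: list[str]             # related Common Weakness Enumeration nodes
            artifacts: dict[str, str] = field(default_factory=dict)  # artifact -> role

            def annotate(self, artifact: str, role: str) -> None:
                """Record that a repository artifact (commit, bug report, file)
                is relevant to this vulnerability, and in what role."""
                self.artifacts[artifact] = role


        # Hypothetical usage: annotating an Apache httpd vulnerability record.
        tpl = SemanticTemplate(cve_id="CVE-XXXX-NNNN", cwe_ids=["CWE-119"])
        tpl.annotate("svn:r123456", "fix commit")
        tpl.annotate("bugzilla:4711", "bug report")
        print(tpl)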

    A Semantic Model for Enhancing Data-Driven Open Banking Services

    In current Open Banking services, the European Payment Services Directive (PSD2) allows the secure collection of bank customer information, on their behalf and with their consent, to analyze their financial status and needs. The PSD2 directive has led to a massive number of daily transactions between Fintech entities, which require automatic management of the data involved, generally coming from multiple heterogeneous sources and formats. In this context, one of the main challenges lies in defining and implementing common data integration schemes to easily merge the data into knowledge-base repositories, hence allowing data reconciliation and sophisticated analysis. In this sense, Semantic Web technologies constitute a suitable framework for the semantic integration of data, one that makes linking with external sources possible and enhances systematic querying. With this motivation, an ontology approach is proposed in this work to operate as a semantic data mediator in real-world open banking operations. Using semantic reconciliation mechanisms, the underpinning knowledge graph is populated with the data involved in PSD2 open banking transactions, which are aligned with information from invoices. A series of semantic rules is defined in this work to show how the financial solvency classification of client entities and transaction concept suggestions can be inferred from the proposed semantic model. This research has been partially funded by the Spanish Ministry of Science and Innovation via the Aether Project with grant number PID2020-112540RB-C41 (AEI/FEDER, UE), the Ministry of Industry, Commerce and Tourism via the Helix initiative with grant number AEI-010500-2020-34, and the Andalusian PAIDI program with grant number P18-RT-2799. Partial funding for open access charge: Universidad de Málaga.
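
    As a minimal sketch of the general approach (not the paper's actual ontology), the following populates a small rdflib knowledge graph with a PSD2-style transaction aligned to an invoice, then applies a toy solvency rule expressed as a SPARQL query. The namespace, properties, and the rule itself are hypothetical.

        # Minimal sketch of a knowledge graph for open banking transactions,
        # using rdflib. The namespace, properties, and the solvency rule below
        # are hypothetical illustrations, not the ontology defined in the paper.
        from rdflib import Graph, Literal, Namespace, RDF, XSD

        BANK = Namespace("http://example.org/openbanking#")
        g = Graph()
        g.bind("bank", BANK)

        # Populate the graph with a PSD2-style transaction aligned to an invoice.
        g.add((BANK.tx1, RDF.type, BANK.Transaction))
        g.add((BANK.tx1, BANK.payer, BANK.clientA))
        g.add((BANK.tx1, BANK.settlesInvoice, BANK.inv42))
        g.add((BANK.tx1, BANK.amount, Literal(1200.0, datatype=XSD.decimal)))
        g.add((BANK.inv42, BANK.dueAmount, Literal(1200.0, datatype=XSD.decimal)))

        # A toy "semantic rule": a client is classified as solvent if its
        # payment covers the due amount of its invoice (vastly simplified).
        q = """
        SELECT ?client WHERE {
            ?tx a bank:Transaction ;
                bank:payer ?client ;
                bank:settlesInvoice ?inv ;
                bank:amount ?paid .
            ?inv bank:dueAmount ?due .
            FILTER(?paid >= ?due)
        }
        """
        for row in g.query(q, initNs={"bank": BANK}):
            print(f"{row.client} classified as solvent")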

    An integrated participative spatial decision support system for smart energy urban scenarios: A financial and economic approach

    Decision-making about heating supply system options from an urban perspective is extremely challenging. Nowadays, this type of evaluation is not only a technical and economic issue, but also a political and environmental choice. Aware of this widening of the problem, recent approaches propose combining financial evaluations (DCF, CBA, ROI, energy budget costs) with Multicriteria Decision Analyses (MCDA), which are able to consider quantitative and qualitative aspects. However, another specific feature of the problem is rarely considered: the territorial dimension. In fact, few Spatial Decision Support Systems (SDSSs) have so far been developed in this realm. The paper presents a new method to support urban energy decisions in real-time processes, developed in the context of a European project (DIMMER). The method is composed of three parts: i) a new Web-based Spatial Decision Support System (SDSS), called "Dashboard"; ii) an Energy-Attribute Analysis (EEA) developed ad hoc to be integrated into the Dashboard; iii) an MCDA. Unlike other SDSSs, one of the main strengths of the Dashboard is its ability to acquire, store, and manage geo-referenced as well as non-geo-referenced data, performing real-time analyses of spatial problems that take a wide range of information into account. In this sense, the Dashboard can visualize and assess a very large number of attributes, since it is able to read and process large web databases. This makes the Dashboard an effective tool that can be used in real time during focus groups or workshops to understand how the criterion trade-offs evolve when one or several decision parameters change. The paper describes the main procedure of the new method and a test of the Dashboard on a district in Turin (Italy).
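
    As a minimal illustration of the MCDA component, the following sketch implements a common weighted-sum scheme over min-max-normalized criteria. The paper does not state which MCDA method the Dashboard uses, and the heating options, criteria, and weights below are hypothetical.

        # Minimal sketch of a weighted-sum MCDA over normalized criteria.
        # The criteria, weights, and alternatives are hypothetical.
        def mcda_scores(alternatives, weights, maximize):
            """alternatives: {name: {criterion: raw value}};
            weights: {criterion: weight, summing to 1};
            maximize: {criterion: True if higher raw values are better}."""
            scores = {}
            for crit in weights:
                vals = [alt[crit] for alt in alternatives.values()]
                lo, hi = min(vals), max(vals)
                for name, alt in alternatives.items():
                    norm = (alt[crit] - lo) / (hi - lo) if hi > lo else 1.0
                    if not maximize[crit]:
                        norm = 1.0 - norm          # lower raw value is better
                    scores[name] = scores.get(name, 0.0) + weights[crit] * norm
            return scores


        # Hypothetical heating options scored on cost, CO2, and comfort.
        options = {
            "district_heating": {"cost": 80, "co2": 30, "comfort": 7},
            "gas_boilers":      {"cost": 60, "co2": 70, "comfort": 6},
            "heat_pumps":       {"cost": 90, "co2": 10, "comfort": 8},
        }
        w = {"cost": 0.4, "co2": 0.4, "comfort": 0.2}
        better_high = {"cost": False, "co2": False, "comfort": True}
        ranked = sorted(mcda_scores(options, w, better_high).items(),
                        key=lambda kv: -kv[1])
        print(ranked)

    Because the scores are recomputed from scratch on every call, re-running the function with adjusted weights is cheap, which is what makes this kind of scheme usable interactively in a workshop setting.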

    Analysis, discovery and exploitation of open data for the creation of question-answering systems

    Open data are data that are freely accessible so that they can be reused and redistributed by any person or organization, without restrictions of any kind. In this sense, open data portals become vitally important as mechanisms to facilitate access to these data through the Web and to enhance their distribution and reuse. On the one hand, the opening of data makes it fully accessible in a simple and free way. On the other hand, in addition to the social impact derived from the exercise of transparency, the opening of data has a significant economic impact. In fact, in the words of the Vice-President of the European Commission responsible for the Digital Agenda, Neelie Kroes, "data is the fuel of the new economy, [...], the new oil of the digital age", since data and technology can be combined to generate value through applications, Web content, etc. However, at present we have a huge amount of open data that is underutilized because it is difficult to interpret. What is the use of having so much information if we do not know how to exploit it and transform it into knowledge? In this work we propose an automatic method, integrated into a chatbot, in which, starting from a user question, the set of open data most appropriate to that query is discovered, the existing data are analyzed, and an appropriate answer to the question is generated, together with a possible follow-up question to help or guide the user toward the type of information they want to obtain. Both the questions and the answers are generated automatically using Natural Language Processing and/or Natural Language Generation techniques.
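
    As a rough sketch of the dataset-discovery step (the work does not specify its retrieval technique), the following matches a user question against open-dataset metadata using TF-IDF similarity with scikit-learn; the datasets and question are invented for illustration.

        # Minimal sketch of dataset discovery: rank open-dataset descriptions
        # by TF-IDF cosine similarity to a user question. The datasets and
        # the question below are hypothetical.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        datasets = {
            "air-quality-2023": "hourly air quality measurements by station and pollutant",
            "municipal-budget": "yearly municipal budget broken down by department",
            "bike-stations":    "locations and capacity of public bike sharing stations",
        }
        question = "how polluted is the air near the city center?"

        vec = TfidfVectorizer()
        doc_matrix = vec.fit_transform(datasets.values())  # index the metadata
        q_vec = vec.transform([question])                  # project the question
        sims = cosine_similarity(q_vec, doc_matrix).ravel()

        best = max(zip(datasets, sims), key=lambda pair: pair[1])
        print(f"most relevant dataset: {best[0]} (similarity {best[1]:.2f})")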

    Real-time topic detection with bursty n-grams: RGU's submission to the 2014 SNOW challenge.

    Twitter is becoming an ever more popular platform for discovering and sharing information about current events, both personal and global. The scale and diversity of messages make the discovery and analysis of breaking news very challenging. Nonetheless, journalists and other news consumers are increasingly relying on tools to help them make sense of Twitter. Here, we describe a fully automated system capable of detecting trends related to breaking news in real time. It identifies words or phrases that 'burst' with suddenly increased frequencies, and groups these into topics. It identifies a diverse set of recent tweets related to these topics, and uses these to create a suitable human-readable headline. In addition, images coming from the diverse tweets are also added to the topic. Our system was evaluated using 24 hours of tweets as part of the Social News On the Web (SNOW) 2014 data challenge.
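
    As a minimal sketch of the burst-detection idea (not RGU's actual scoring function), the following flags terms whose frequency in the current time window far exceeds their historical average; the thresholds are arbitrary.

        # Minimal sketch of bursty term detection: flag tokens whose count in
        # the current window far exceeds their smoothed historical baseline.
        # This is a generic illustration; the thresholds are arbitrary.
        from collections import Counter

        def bursty_terms(current_window, history_windows, min_ratio=3.0, min_count=5):
            """current_window: list of tokens in the latest time window;
            history_windows: list of token lists from earlier windows."""
            now = Counter(current_window)
            past = Counter(tok for window in history_windows for tok in window)
            n_hist = max(len(history_windows), 1)
            bursts = {}
            for term, count in now.items():
                baseline = past[term] / n_hist + 1.0  # +1 smooths unseen terms
                ratio = count / baseline
                if count >= min_count and ratio >= min_ratio:
                    bursts[term] = ratio
            return sorted(bursts.items(), key=lambda kv: -kv[1])

        # Hypothetical usage with pre-tokenized tweet windows:
        # print(bursty_terms(tokens_now, [tokens_t1, tokens_t2, tokens_t3]))

    The same machinery extends from single words to n-grams by counting token pairs or triples instead of individual tokens.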

    RDF, the semantic web, Jordan, Jordan and Jordan

    This collection is addressed to archivists and library professionals, and so has a slight focus on the implications for them. This chapter is nonetheless intended to be a more-or-less generic introduction to the Semantic Web and RDF, and is not specific to that domain.

    Support for Process Oriented Requirements Engineering

    Supporting a Process Oriented Requirements Method

    Process modelling as a way to inform requirements has seen somewhat of a resurgence in recent years, particularly in those methods that utilise a Role Activity Diagram based approach. By using these models within the requirements phase, client needs can be captured effectively in a notation that makes sense to the business user, whilst also providing a rigorous description. However, the move to specification still has pitfalls, notably in ensuring that the understanding gained from process modelling can be transferred effectively to the specification so that alignment of business need and software system is maintained. This talk outlines issues and potential solutions in ensuring such alignment, and incorporates our recent experiences in attempting to provide tool support. In particular, we describe tool support for model-driven development (as part of the collaborative EC project VIDE) and the use of process mashups, including work undertaken at SAP Research. In both cases our focus has been on providing sets of notations and tools which are accessible to a variety of users, often including those stakeholders who are not IT experts. Mashups are a relatively new approach, and use Web 2.0 technologies to combine data from different sources to create valuable information, principally for data aggregation applications (a minimal sketch of this idea follows the list below). This utilises the potential of the internet and related technologies to allow users to process tasks collaboratively and to form communities among those with similar interests. We present currently available mashup platforms and demonstrate how situational enterprise applications can be built by combining social software, feeds, widgets, web services, open APIs, and so on. The session does not require any specific technical skills, though experienced process engineers will have the opportunity to share their views. Participants will learn:
    • How to use simple role-based process models
    • Issues in moving from process model to specification
    • How to use simple notational devices to ensure alignment
    • How development tools can help, with a particular focus on the use of mashups
    • Current process mashup approaches and future directions
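
    As a minimal sketch of a data-aggregation mashup (invented for illustration, not a specific platform described in the talk), the following merges items from two hypothetical sources into one time-ordered view after mapping them onto a common schema.

        # Minimal sketch of a data-aggregation mashup: merge items from two
        # hypothetical sources (e.g., a news feed and a task system) into one
        # time-ordered view. Source shapes and field names are invented; a
        # real mashup would pull these records from feeds or web APIs.
        from datetime import datetime

        feed_a = [  # e.g., items from an RSS feed
            {"title": "Release 2.1 announced", "when": "2010-05-03T09:00:00"},
            {"title": "Outage resolved", "when": "2010-05-04T17:30:00"},
        ]
        feed_b = [  # e.g., tasks from a workflow service
            {"title": "Review requirements spec", "when": "2010-05-04T08:15:00"},
        ]

        def normalise(item, source):
            # Map heterogeneous source records onto one common schema.
            return {
                "source": source,
                "title": item["title"],
                "when": datetime.fromisoformat(item["when"]),
            }

        merged = sorted(
            [normalise(i, "news") for i in feed_a]
            + [normalise(i, "tasks") for i in feed_b],
            key=lambda item: item["when"],
        )
        for item in merged:
            print(f"{item['when']:%Y-%m-%d %H:%M} [{item['source']}] {item['title']}")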