
    A Method to Screen, Assess, and Prepare Open Data for Use

    Open data's value-creating capabilities and innovation potential are widely recognized, resulting in a notable increase in the number of published open data sources. A crucial challenge for companies intending to leverage open data is to identify suitable open datasets that support specific business scenarios and to prepare these datasets for use. Researchers have developed several open data assessment techniques, but these are restricted in scope, do not consider the use context, and are not embedded in the complete set of activities required for open data consumption in enterprises. Therefore, our research aims to develop prescriptive knowledge in the form of a meaningful method to screen, assess, and prepare open data for use in an enterprise setting. Our findings complement existing open data assessment techniques by providing methodological guidance to prepare open data of uncertain quality for use in a value-adding and demand-oriented manner, enabled by knowledge graphs and linked data concepts. From an academic perspective, our research conceptualizes open data preparation as a purposeful and value-creating process.
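
    As a rough illustration of the "prepare" step, the sketch below lifts a screened CSV dataset into an RDF knowledge graph with rdflib, in the spirit of the knowledge-graph and linked-data enablement the abstract mentions. The file name, record URIs, and the example namespace are assumptions for illustration, not details taken from the paper.

        import csv
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/opendata/")  # hypothetical vocabulary

        def lift_csv_to_graph(path):
            """Lift a screened and cleaned CSV open dataset into an RDF graph."""
            g = Graph()
            g.bind("ex", EX)
            with open(path, newline="", encoding="utf-8") as f:
                for i, row in enumerate(csv.DictReader(f)):
                    record = EX[f"record{i}"]
                    g.add((record, RDF.type, EX.Record))
                    for column, value in row.items():
                        if value:  # cells found empty during assessment are skipped
                            g.add((record, EX[column.strip().replace(" ", "_")], Literal(value)))
            return g

        print(lift_csv_to_graph("screened_dataset.csv").serialize(format="turtle"))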

    Towards A Taxonomy of Emerging Topics in Open Government Data: A Bibliometric Mapping Approach

    The purpose of this paper is to capture the emerging research topics in Open Government Data (OGD) through a bibliometric mapping approach. Previous OGD research has covered the evolution of the discipline with the application of bibliometric mapping tools, but none of these studies has extended the bibliometric mapping approach to taxonomy building. Realizing this potential, we used a bibliometric tool to perform keyword analysis as a foundation for taxonomy construction. A set of keyword clusters was constructed, and qualitative analysis software was used for taxonomy creation. Emerging topics were identified in taxonomy form. This study contributes to the development of an OGD taxonomy and to the procedural realignment of a past study by incorporating taxonomy-building elements. These contributions are significant because taxonomy research in the OGD discipline is still insufficient. The taxonomy-building procedures extended in this study are applicable to other fields.
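
    The keyword-analysis step can be pictured with a small co-occurrence clustering sketch. The sample records and the choice of modularity communities (via networkx) are illustrative assumptions; the paper itself relies on a dedicated bibliometric mapping tool.

        from itertools import combinations
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Author keywords per publication (hypothetical records).
        records = [
            ["open government data", "transparency", "e-government"],
            ["open government data", "open data portal", "adoption"],
            ["transparency", "accountability", "e-government"],
            ["open data portal", "data quality", "adoption"],
        ]

        # Keywords become nodes; edge weights count joint appearances.
        G = nx.Graph()
        for keywords in records:
            for a, b in combinations(sorted(set(keywords)), 2):
                if G.has_edge(a, b):
                    G[a][b]["weight"] += 1
                else:
                    G.add_edge(a, b, weight=1)

        # Each community is a candidate branch of the emerging-topic taxonomy.
        for i, cluster in enumerate(greedy_modularity_communities(G, weight="weight")):
            print(f"cluster {i}: {sorted(cluster)}")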

    Framework for Security Transparency in Cloud Computing

    The migration of sensitive data and applications from the on-premise data centre to a cloud environment increases cyber risks to users, mainly because the cloud environment is managed and maintained by a third party. In particular, the partial surrender of sensitive data and applications to a cloud environment creates numerous concerns related to a lack of security transparency. Security transparency involves the disclosure of information by cloud service providers about the security measures being put in place to protect assets and meet the expectations of customers. It establishes trust in the service relationship between cloud service providers and customers; without evidence of continuous transparency, trust and confidence are affected and are likely to hinder extensive usage of cloud services. Insufficient security transparency is also considered an added level of risk, as it increases the difficulty of demonstrating conformance to customer requirements and of ensuring that cloud service providers adequately implement their security obligations.

    The research community has acknowledged the pressing need to address security transparency concerns, and although technical aspects of ensuring security and privacy have been researched widely, work focused on security transparency remains scarce. The relatively few existing studies mostly approach the issue from the cloud providers' perspective, while other works have contributed feasible techniques for the comparison and selection of cloud service providers using metrics such as transparency and trustworthiness. However, there is still a shortage of research that aims to improve security transparency from the cloud users' point of view. In particular, there is a gap in the literature that (i) dissects security transparency from the lens of conceptual knowledge up to implementation, from organizational and technical perspectives, and (ii) supports continuous transparency by enabling the vetting and probing of cloud service providers' conformity to specific customer requirements. The significant growth in moving business to the cloud, due to its scalability and perceived effectiveness, underlines the dire need for research in this area.

    This thesis presents a framework that comprises the core conceptual elements constituting security transparency in cloud computing. It contributes to this knowledge domain as follows. Firstly, the research analyses the basics of cloud security transparency by exploring the notion and foundational concepts that constitute it. Secondly, it proposes a framework that integrates various concepts from the requirements engineering domain, together with an accompanying process for implementing the framework. The framework and its process provide an essential set of conceptual ideas, activities, and steps that can be followed at an organizational level to attain security transparency, based on the principles of industry standards and best practices. Thirdly, to ensure continuous transparency, the thesis proposes a tool that supports the collection and assessment of evidence from cloud providers, including the establishment of remedial actions for redressing deficiencies in cloud provider practices. The tool serves as a supplementary component of the proposed framework that enables continuous inspection of how predefined customer requirements are being satisfied.

    The thesis also validates the proposed security transparency framework and tool in terms of validity, applicability, adaptability, and acceptability using two different case studies. Feedback is collected from stakeholders and analysed against criteria such as ease of use, relevance, and usability. The results of the analysis illustrate the validity and acceptability of both the framework and the tool in enhancing security transparency in a real-world environment.
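
    A minimal sketch of the continuous-inspection idea behind the proposed tool is given below: predefined customer requirements are compared against evidence collected from a cloud provider, and unmet requirements yield remedial actions. The data model and the evidence format are hypothetical assumptions; the thesis defines its own framework, process, and tool.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Requirement:
            id: str
            description: str
            evidence_key: str   # which piece of provider evidence addresses it
            expected: str       # value the evidence must report

        @dataclass
        class Finding:
            requirement_id: str
            satisfied: bool
            remedial_action: Optional[str]

        def assess(requirements, evidence):
            """Check provider-supplied evidence against customer requirements."""
            findings = []
            for r in requirements:
                ok = evidence.get(r.evidence_key) == r.expected
                findings.append(Finding(
                    requirement_id=r.id,
                    satisfied=ok,
                    remedial_action=None if ok else f"Redress {r.id}: {r.description}",
                ))
            return findings

        requirements = [
            Requirement("REQ-01", "Data at rest is encrypted", "encryption_at_rest", "AES-256"),
            Requirement("REQ-02", "Audit logs kept for 90 days", "log_retention_days", "90"),
        ]
        evidence = {"encryption_at_rest": "AES-256", "log_retention_days": "30"}

        for finding in assess(requirements, evidence):
            print(finding)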

    Map4Scrutiny – a linked open data solution for politicians interest registers

    Master's dissertation in Information Systems. The work developed in the scope of this dissertation describes the process of sourcing, uniformizing, and transforming text and tabular (CSV) open data into linked open data; specifically, data on Portuguese parliamentarians' interest registers and public procurement, linked by the organisations mentioned in both. The state of the art includes a background analysis of the concepts of corruption, transparency, open data, and linked open data, as well as an analysis of relevant open data and linked open data projects. The research was conducted using Hevner's three-cycle design science research approach, which led to the definition of the data scope concerning relevant dataset topics and the public's interest, the design of the proposed solution, and the selection of tools, methods, and processes.
    The implementation process starts with scraping the data from the sources with the aid of Python libraries and generating tabular (CSV) outputs. These are cleaned and uniformized in OpenRefine, which is also the tool used to map the data in the tables to triples and generate outputs in Turtle. The mapping was designed in an application profile that also served as a basis for writing the shapes (in ShExC) used to validate the exported Turtle files; this validation ensures that the data conforms to the application profile. To adequately describe the data in triples, new classes, properties, and values had to be created on top of the external vocabularies used. This process is thoroughly described, and the outputs are open to access and reuse. Finally, sample SPARQL queries were made to showcase the difference between the sourced data and the resulting dataset. The goal is to contribute to the fields of linked open data and of open data for transparency and public scrutiny. The main contributions to the first are a new data scheme and the description of every step in the transformation process; to the second, the contribution is a further implementation showcasing the scrutiny potential of data in improving transparency, by comparing the querying possibilities of the final dataset with those of the originals. Every step taken is documented, and the resulting outputs of the different stages are available for consultation.
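
    The kind of cross-dataset scrutiny the transformation enables can be sketched as a SPARQL query over the resulting Turtle files: deputies whose declared interests mention an organisation that also appears as a supplier in procurement contracts. The vocabulary terms and file name below are hypothetical stand-ins for the application profile actually defined in the dissertation.

        from rdflib import Graph

        g = Graph()
        g.parse("map4scrutiny.ttl", format="turtle")  # Turtle exported from OpenRefine

        query = """
        PREFIX ex: <http://example.org/map4scrutiny/>
        SELECT ?deputy ?organisation ?contract
        WHERE {
            ?deputy   a ex:Deputy ;
                      ex:declaresInterestIn ?organisation .
            ?contract a ex:Contract ;
                      ex:supplier ?organisation .
        }
        """

        # Each result row links an interest register entry to a public contract.
        for row in g.query(query):
            print(row.deputy, row.organisation, row.contract)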

    Extension of ImageNotion to Allow Privacy-Aware Image Sharing

    A growing number of users of Web 2.0-based social network sites and photo-sharing portals upload millions of images per day. In many cases, this leads to serious privacy threats: the images reveal not only the personal relationships and attitudes of the user who uploads them, but those of other persons displayed in the images as well. In this paper, we propose a system architecture for privacy-aware image sharing. Our approach is based on the ImageNotion application, which combines automated processes to create high-quality semantic image annotations.
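
    A minimal sketch of the privacy check such an architecture implies: before an image is shared with an audience, the semantic annotations identifying depicted persons are consulted, and every person must have permitted that audience. The data model and policy are illustrative assumptions, not ImageNotion's actual API.

        from dataclasses import dataclass, field

        @dataclass
        class PersonAnnotation:
            name: str
            allowed_audiences: set = field(default_factory=set)

        @dataclass
        class Image:
            filename: str
            annotations: list

        def may_share(image, audience):
            """Allow sharing only if every depicted person permits the audience."""
            return all(audience in p.allowed_audiences for p in image.annotations)

        photo = Image("party.jpg", [
            PersonAnnotation("uploader", {"friends", "public"}),
            PersonAnnotation("bystander", {"friends"}),
        ])

        print(may_share(photo, "friends"))  # True
        print(may_share(photo, "public"))   # False: the bystander has not consented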

    Fostering Comparability in Research Dissemination: A Research Portal-based Approach

    In this paper, we address the problem of lacking consistency and comparability in the dissemination of research information. We seek to solve this problem using research portals, which are community-based research information systems on the Internet. The idea of our solution is to customize research portals to better fit individual application scenarios. To this end, we propose a conceptual specification of a generic portal structure allowing for semantic standardization. For a given application scenario, this basis has to be customized regarding portal structure and the semantics of textual descriptions. We demonstrate such a customization for an exemplary research portal addressing design science research. Furthermore, we describe an exemplary research process using the customized portal definition. We conclude that our approach has the potential to increase the consistency and comparability of research dissemination with research portals. This goal is achieved with a) an individually customizable portal structure, which is able to reflect the nature of a specific application scenario better than generic structures, and b) a semantic standardization of textual descriptions, which enforces them to be precise, compact, and to apply the vocabulary of the domain.
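
    The idea of a generic, customizable portal structure can be pictured as a small data model: a scenario-specific portal is derived by selecting sections and extending a controlled vocabulary that standardizes textual descriptions. Section names and vocabulary terms are invented for illustration; the paper's specification is conceptual rather than tied to any particular data model.

        from dataclasses import dataclass, field

        @dataclass
        class Section:
            name: str
            required: bool = False

        @dataclass
        class PortalStructure:
            sections: list
            vocabulary: set = field(default_factory=set)  # terms allowed in descriptions

            def customize(self, keep, extra_terms):
                """Derive a scenario-specific portal from the generic structure."""
                kept = [s for s in self.sections if s.required or s.name in keep]
                return PortalStructure(kept, self.vocabulary | extra_terms)

        generic = PortalStructure(
            sections=[Section("research question", required=True),
                      Section("method"), Section("artifact"), Section("evaluation")],
            vocabulary={"design science", "artifact"},
        )

        dsr_portal = generic.customize(keep={"artifact", "evaluation"},
                                       extra_terms={"design cycle", "rigor cycle"})
        print([s.name for s in dsr_portal.sections])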

    Web2Touch 2016: Evolution and security of collaborative web knowledge

    This report introduces Web2Touch 2016, a track at the 25th IEEE WETICE Conference. The track gathers work from the collaborative web knowledge research community and related themes. Web2Touch 2016 explores the state of the art in users' practical experiences, as well as trends and research topics paving the way for future collaborative approaches to knowledge management. Papers come from areas such as computational analysis, management of contextual information, support for personalized information management, collaborative knowledge production, consistency, knowledge engineering, and security modelling for multiple knowledge sources. The overall focus is on determining how to route, organize, and present contextual and meaningful information and services to facilitate collaboration.

    An ontology framework for developing platform-independent knowledge-based engineering systems in the aerospace industry

    This paper presents the development of a novel knowledge-based engineering (KBE) framework for implementing platform-independent, knowledge-enabled product design systems within the aerospace industry. The aim of the KBE framework is to strengthen the structure, reuse, and portability of knowledge consumed within KBE systems, in view of supporting the cost-effective and long-term preservation of that knowledge. The proposed framework uses an ontology-based approach for semantic knowledge management and adopts a model-driven architecture style from the software engineering discipline. Its main phases are: (1) capturing the knowledge required for the KBE system; (2) constructing the ontology model of the KBE system; (3) selecting and implementing the platform-independent model (PIM) technology; and (4) integrating the PIM KBE knowledge with a computer-aided design system. A rigorous methodology comprising five qualitative phases is employed, namely: requirement analysis for the KBE framework; identification of software and ontological engineering elements; integration of both sets of elements; a proof-of-concept prototype demonstrator; and, finally, expert validation. A case study investigating four primitive three-dimensional geometry shapes is used to quantify the applicability of the KBE framework in the aerospace industry. Additionally, experts within the aerospace and software engineering sectors validated the strengths, benefits, and limitations of the framework. The major benefits of the developed approach are the reduction in man-hours required for developing KBE systems within the aerospace industry, and the maintainability and abstraction of the knowledge required for developing such systems. The approach strengthens knowledge reuse and eliminates platform-specific approaches to developing KBE systems, ensuring the long-term preservation of KBE knowledge.
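
    A minimal sketch of the ontology-based, platform-independent idea: primitive shape classes and their parameters captured as an OWL/RDFS model that a KBE system could consume regardless of the CAD platform. The URIs, class names, and properties are hypothetical illustrations, not the framework's actual ontology.

        from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS, XSD

        KBE = Namespace("http://example.org/kbe/")  # hypothetical namespace
        g = Graph()
        g.bind("kbe", KBE)

        # Class hierarchy for primitive three-dimensional geometry shapes.
        g.add((KBE.Shape, RDF.type, OWL.Class))
        for shape in ("Cuboid", "Cylinder", "Sphere", "Cone"):
            g.add((KBE[shape], RDF.type, OWL.Class))
            g.add((KBE[shape], RDFS.subClassOf, KBE.Shape))

        # A datatype property shared by several of the primitives.
        g.add((KBE.hasRadius, RDF.type, OWL.DatatypeProperty))
        g.add((KBE.hasRadius, RDFS.domain, KBE.Shape))
        g.add((KBE.hasRadius, RDFS.range, XSD.double))

        # A concrete design instance a KBE system could reason over.
        g.add((KBE.strut_section, RDF.type, KBE.Cylinder))
        g.add((KBE.strut_section, KBE.hasRadius, Literal(0.25, datatype=XSD.double)))

        print(g.serialize(format="turtle"))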