
    An Access Control Model for Linked Data

    Linked Open Data refers to a set of best practices for publishing and interlinking structured data on the Web in order to create a global, interconnected data space called the Web of Data. To ensure that the resources featured in a dataset are richly described and, at the same time, protected against malicious users, we need to specify the conditions under which a dataset is accessible. Being able to specify access terms should also encourage data providers to publish their data. We introduce a lightweight vocabulary, the Social Semantic SPARQL Security for Access Control Ontology (S4AC), which allows the definition of fine-grained access control policies formalized in SPARQL and enforced when querying Linked Data. In particular, we define an access control model that provides users with the means to define policies restricting access to specific RDF data, based on social tags and contextual information.
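
    The abstract does not include the policy syntax itself. As a rough, hypothetical illustration of the idea (fine-grained access conditions expressed as SPARQL ASK queries that gate access to named graphs, evaluated against the requester's social and contextual information), the following Python/rdflib sketch shows one way such enforcement could look; the graph names, tags and namespaces are invented and are not S4AC's own terms.

```python
# Hypothetical sketch of S4AC-style enforcement (not S4AC's own vocabulary):
# each named graph is guarded by an access condition expressed as a SPARQL ASK
# query, evaluated against the requesting user's social/contextual profile.
from rdflib import Dataset, Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")            # invented namespace
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

# Protected RDF data lives in a named graph.
ds = Dataset()
private = ds.graph(URIRef("http://example.org/graphs/private"))
private.add((EX.doc1, EX.title, Literal("Internal report")))

# Context graph: who is asking, and how they are socially tagged.
ctx = Graph()
ctx.add((EX.alice, FOAF.knows, EX.bob))
ctx.add((EX.alice, EX.taggedWith, Literal("colleague")))

# Access condition: grant access only to users tagged as colleagues.
ACCESS_CONDITION = """
ASK WHERE { ?user <http://example.org/taggedWith> "colleague" . }
"""

def accessible_graphs(dataset, context, user):
    """Yield the graphs the user may query if the access condition holds."""
    result = context.query(ACCESS_CONDITION, initBindings={"user": user})
    if result.askAnswer:
        yield from dataset.contexts()

for g in accessible_graphs(ds, ctx, EX.alice):
    for s, p, o in g:
        print(s, p, o)
```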

    Privacy-Preserving Reengineering of Model-View-Controller Application Architectures Using Linked Data

    When a legacy system’s software architecture cannot be redesigned, implementing additional privacy requirements is often complex, unreliable and costly to maintain. This paper presents a privacy-by-design approach to reengineering web applications as Linked Data-enabled systems that implement access control and privacy-preservation properties. The method is based on knowledge of the application architecture, which for the Web of Data is commonly designed around a model-view-controller pattern. Whereas the wrapping techniques commonly used to expose the data of web applications as Linked Data duplicate the security source code, the new approach allows for the controlled disclosure of an application’s data while preserving non-functional properties such as privacy preservation. The solution has been implemented and compared with existing Linked Data frameworks in terms of reliability, maintainability and complexity.
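
    The paper's concrete mechanism is not reproduced here. As a minimal, hypothetical sketch of the general idea (the controller acts as the single point that checks an access policy before model data is disclosed as Linked Data, rather than duplicating security code in a wrapper), one might write something like the following; all class names, properties and roles are invented.

```python
# Minimal, hypothetical MVC sketch: the controller is the single point of
# disclosure, so access control is not duplicated in a Linked Data wrapper.
from dataclasses import dataclass

@dataclass
class Patient:                        # Model (invented domain object)
    uri: str
    name: str
    diagnosis: str                    # sensitive attribute

class AccessPolicy:
    """Full access for clinicians, redacted access for everyone else."""
    def allowed_fields(self, role: str) -> set[str]:
        return {"uri", "name", "diagnosis"} if role == "clinician" else {"uri", "name"}

class PatientController:              # Controller mediating every disclosure
    def __init__(self, policy: AccessPolicy):
        self.policy = policy

    def as_json_ld(self, patient: Patient, role: str) -> dict:
        """Render the model as JSON-LD (the 'view'), honouring the policy."""
        fields = self.policy.allowed_fields(role)
        doc = {"@id": patient.uri, "@type": "http://example.org/Patient"}
        if "name" in fields:
            doc["http://xmlns.com/foaf/0.1/name"] = patient.name
        if "diagnosis" in fields:
            doc["http://example.org/diagnosis"] = patient.diagnosis
        return doc

controller = PatientController(AccessPolicy())
patient = Patient("http://example.org/patients/42", "Jane Doe", "asthma")
print(controller.as_json_ld(patient, role="researcher"))   # diagnosis withheld
```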

    Tracing where and who provenance in Linked Data: a calculus

    Linked Data provides some sensible guidelines for publishing and consuming data on the Web. Data published on the Web has no inherent truth, yet its quality can often be assessed based on its provenance. This work introduces a new approach to provenance for Linked Data. The simplest notion of provenance, viz. a named graph indicating where the data is now, is extended with a richer provenance format. The format reflects the behaviour of processes interacting with Linked Data, tracing where the data has been published and who published it. An executable model is presented, based on an abstract syntax and operational semantics, providing a proof of concept and the means to statically evaluate provenance-driven access control using a type system.
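
    The calculus itself is defined by the paper's abstract syntax, operational semantics and type system, none of which are reproduced here. The sketch below only illustrates the "where and who" intuition: each named graph carries a trace of (who, where) pairs that grows as the data is republished, and an access-control check can be driven by that trace. The names and the trace representation are hypothetical.

```python
# Hypothetical sketch: provenance as a trace of (who, where) pairs that grows
# each time a named graph is republished at a new location by a new publisher.
from dataclasses import dataclass, field

@dataclass
class ProvGraph:
    name: str                                    # named-graph IRI
    triples: set = field(default_factory=set)
    trace: list = field(default_factory=list)    # [(who, where), ...], oldest first

def publish(graph: ProvGraph, who: str, where: str) -> ProvGraph:
    """Copy a graph to a new location, extending its provenance trace."""
    return ProvGraph(
        name=f"{where}/{graph.name.rsplit('/', 1)[-1]}",
        triples=set(graph.triples),
        trace=graph.trace + [(who, where)],
    )

g0 = ProvGraph("http://origin.example/g1",
               {("ex:report", "ex:status", "draft")},
               [("alice", "http://origin.example")])
g1 = publish(g0, "bob", "http://mirror.example")

# Provenance-driven access control: only admit data whose entire trace
# consists of trusted publishers.
TRUSTED = {"alice", "bob"}
assert all(who in TRUSTED for who, _ in g1.trace)
print(g1.trace)
```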

    Linked Data in Libraries: A Case Study of Harvesting and Sharing Bibliographic Metadata with BIBFRAME

    By way of a case study, this paper illustrates and evaluates the Bibliographic Framework (BIBFRAME) as a means for harvesting and sharing bibliographic metadata over the Web for libraries. BIBFRAME is an emerging framework developed by the Library of Congress for bibliographic description based on Linked Data. Much like the Semantic Web, the goal of Linked Data is to make the Web “data aware” and transform the existing Web of documents into a Web of data. Linked Data leverages the existing Web infrastructure and allows linking and sharing of structured data for human and machine consumption. The BIBFRAME model attempts to contextualize the Linked Data technology for libraries. Library applications and systems contain high-quality structured metadata, but this data is generally static in its presentation and seldom integrated with other internal metadata sources or linked to external Web resources. With BIBFRAME, existing disparate library metadata sources such as catalogs and digital collections can be harvested and integrated over the Web. In addition, bibliographic data enriched with Linked Data could offer richer navigational control and access points for users. With Linked Data principles, metadata from libraries could also become harvestable by search engines, transforming dormant catalogs and digital collections into active knowledge repositories. Experimenting with Linked Data using existing bibliographic metadata thus holds the potential to empower libraries to harness the reach of commercial search engines to continuously discover, navigate, and obtain new domain-specific knowledge resources on the basis of their verified metadata. The initial part of the paper introduces BIBFRAME and discusses Linked Data in the context of libraries. The final part outlines a step-by-step process for implementing BIBFRAME with existing library metadata.
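
    The paper's own step-by-step implementation process is only summarized above. As a small, hedged illustration of what a record expressed in BIBFRAME can look like, the rdflib sketch below builds a bf:Work/bf:Instance pair in the Library of Congress BIBFRAME 2.0 namespace; the IRIs and record values are invented.

```python
# Illustrative only (invented IRIs and values): one bibliographic record
# expressed as BIBFRAME 2.0 Work/Instance resources with rdflib.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
g = Graph()
g.bind("bf", BF)

work = URIRef("http://example.org/works/1")
instance = URIRef("http://example.org/instances/1")
title = URIRef("http://example.org/titles/1")

g.add((work, RDF.type, BF.Work))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))
g.add((title, RDF.type, BF.Title))
g.add((title, BF.mainTitle, Literal("Linked Data in Libraries")))
g.add((instance, BF.title, title))

print(g.serialize(format="turtle"))
```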

    CageCoach: Sharing-Oriented Redaction-Capable Distributed Cryptographic File System

    The modern data economy is built on sharing data. However, sharing data can be an expensive and risky endeavour. Existing sharing systems such as Distributed File Systems (DFSs) provide full read, write, and execute Role-based Access Control (RBAC) for shared data, but can be expensive and difficult to scale. Likewise, such systems operate on a binary access model: either a user can read all of the data or none of it. This approach is ill-suited to a more read-only-oriented data landscape in which data contains many dimensions that represent a risk if overshared. To encourage users to share data and to smooth the process of accessing it, a new approach is needed, one that simplifies the RBAC of older DFS approaches to something more read-only and that integrates redaction to protect users. To accomplish this we present CageCoach, a simple sharing-oriented Distributed Cryptographic File System (DCFS). CageCoach leverages the simplicity and speed of basic HTTP, linked data concepts, and automatic redaction systems to facilitate safe and easy sharing of user data. The implementation of CageCoach is available at https://github.umn.edu/CARPE415/CageCoach.
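
    CageCoach's cryptography and distribution are not shown here. The sketch below only illustrates the redaction-before-sharing idea the abstract describes: a record is served read-only over plain HTTP with sensitive fields removed for requesters who lack the corresponding role. The field names, roles, header and port are all hypothetical.

```python
# Hypothetical sketch of field-level redaction before read-only sharing over
# HTTP; CageCoach itself adds cryptography and distribution on top of this idea.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RECORD = {"name": "Jane Doe", "email": "jane@example.org", "ssn": "123-45-6789"}
VISIBLE = {                 # which fields each (invented) role may read
    "friend": {"name", "email"},
    "public": {"name"},
}

class RedactingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        role = self.headers.get("X-Role", "public")          # invented header
        allowed = VISIBLE.get(role, VISIBLE["public"])
        body = json.dumps({k: v for k, v in RECORD.items() if k in allowed}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RedactingHandler).serve_forever()
```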

    Elevation and cholera: an epidemiological spatial analysis of the cholera epidemic in Harare, Zimbabwe, 2008-2009

    BACKGROUND: In highly populated African urban areas where access to clean water is a challenge, water source contamination is one of the most cited risk factors in a cholera epidemic. During the rainy season, where there is either no sewage disposal or no working sewer system, rainwater runoff follows the slopes into the lower parts of towns, where shallow wells can easily become contaminated by excreta. In cholera-endemic areas, spatial information about topographical elevation could help to guide preventive interventions. This study aims to analyze the association between topographic elevation and the distribution of cholera cases in Harare during the cholera epidemic of 2008 and 2009. METHODS: We developed an ecological study using secondary data. First, we described attack rates by suburb and then calculated rate ratios using the whole of Harare as the reference. We illustrated the average elevation and cholera cases by suburb using geographical information. Finally, we estimated a generalized linear mixed model (under the assumption of a Poisson distribution) with an empirical Bayesian approach to model the relation between cholera risk and elevation in meters in Harare. We used a random intercept to allow for spatial correlation of neighboring suburbs. RESULTS: This study identifies a spatial pattern in the distribution of cholera cases in the Harare epidemic, characterized by a lower cholera risk in the highest-elevation suburbs of Harare. The generalized linear mixed model showed that for each 100-meter increase in topographical elevation, the cholera risk was 30% lower, with a rate ratio of 0.70 (95% confidence interval 0.66-0.76). Sensitivity analysis confirmed the risk reduction, with overall estimates corresponding to a 20% to 40% lower risk. CONCLUSION: This study highlights the importance of considering topographical elevation as a geographical and environmental risk factor when planning cholera preventive activities linked with water and sanitation in endemic areas. Furthermore, elevation information, among other risk factors, could help to spatially orient cholera control interventions during an epidemic.
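
    As a rough sketch of the kind of model the abstract describes (a Poisson regression of suburb-level case counts on elevation, with population as an offset so that coefficients act on rates), the statsmodels snippet below shows how a rate ratio per 100 m would be derived from the elevation coefficient. It uses invented data and omits the spatial random intercept, so it is illustrative only; the figures quoted in the final comment are the paper's, not this snippet's output.

```python
# Illustrative only: invented suburb-level data, no spatial random intercept.
# Poisson regression of cholera counts on elevation with log-population offset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cases":       [120, 80, 45, 20, 10],
    "population":  [50000, 40000, 45000, 38000, 42000],
    "elevation_m": [1420, 1460, 1500, 1550, 1600],
})

model = smf.glm(
    "cases ~ elevation_m",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),
).fit()

beta = model.params["elevation_m"]
print("rate ratio per 100 m of elevation:", np.exp(100 * beta))
# The paper's mixed model reports RR = 0.70 (95% CI 0.66-0.76) per 100 m.
```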

    Bernstein's inequality for algebraic polynomials on circular arcs

    Today information is managed within increasingly complicated Web applications which often rely on similar information models. Finding a reusable and sufficiently generic information model for managing resources and their metadata would greatly simplify the development of Web applications. This article presents such an information model, namely the Resource and Metadata Management Model (ReM3). The information model builds upon Web architecture and standards, more specifically the Linked Data principles, when managing resources together with their metadata. It allows relations between metadata to be expressed and keeps track of provenance and access control. In addition to this information model, the architecture of the reference implementation is described along with a Web application that builds upon it. To show the approach in practice, several real-world examples are presented as showcases. The information model and its reference implementation have been evaluated from several perspectives, such as the suitability for resource annotation, a preliminary scalability analysis, and the adoption in a number of projects. This evaluation in various complementary dimensions shows that ReM3 has been successfully applied in practice and can be considered a serious alternative when developing Web applications where resources are managed along with their metadata.
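
    ReM3's actual vocabulary and API are not reproduced here. As a toy, hypothetical sketch of the core idea (a resource managed together with a metadata graph that also records provenance and an access rule), one might model it as follows; every class, property IRI and field is invented for illustration.

```python
# Hypothetical sketch of the ReM3 idea: a resource managed together with a
# metadata graph that also records provenance and a read access rule.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MetadataGraph:
    triples: set = field(default_factory=set)        # (s, p, o) statements about the resource
    provenance: list = field(default_factory=list)   # who changed the metadata, and when
    read_acl: set = field(default_factory=set)       # principals allowed to read it

@dataclass
class ManagedResource:
    uri: str
    content_url: str                                  # where the resource itself lives
    metadata: MetadataGraph = field(default_factory=MetadataGraph)

    def annotate(self, user: str, triple: tuple) -> None:
        """Add a metadata statement and record who added it."""
        self.metadata.triples.add(triple)
        self.metadata.provenance.append((user, datetime.now(timezone.utc).isoformat()))

resource = ManagedResource("http://example.org/resources/1",
                           "http://example.org/files/1.pdf")
resource.metadata.read_acl.add("alice")
resource.annotate("alice", (resource.uri, "http://purl.org/dc/terms/title", "Lecture notes"))
print(resource.metadata.provenance)
```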

    Personalising lung cancer screening with machine learning

    Personalised screening is based on a straightforward concept: repeated risk assessment linked to tailored management. However, delivering such programmes at scale is complex. In this work, I aimed to contribute to two areas: the simplification of risk assessment to facilitate the implementation of personalised screening for lung cancer, and the use of synthetic data to support privacy-preserving analytics in the absence of access to patient records. I first present parsimonious machine learning models for lung cancer screening, demonstrating an approach that couples the performance of model-based risk prediction with the simplicity of risk-factor-based criteria. I trained models to predict the five-year risk of developing or dying from lung cancer using UK Biobank and US National Lung Screening Trial participants, before external validation amongst temporally and geographically distinct ever-smokers in the US Prostate, Lung, Colorectal and Ovarian Screening trial. I found that three predictors (age, smoking duration, and pack-years) within an ensemble machine learning framework achieved or exceeded parity in discrimination, calibration, and net benefit with comparators. Furthermore, I show that these models are more sensitive than risk-factor-based criteria, such as those currently recommended by the US Preventive Services Task Force. For the implementation of more personalised healthcare, researchers and developers require ready access to high-quality datasets. As such data are sensitive, their use is subject to tight control, whilst the majority of data present in electronic records are not available for research use. Synthetic data are algorithmically generated but can maintain the statistical relationships present within an original dataset. In this work, I used explicitly privacy-preserving generators to create synthetic versions of the UK Biobank before performing exploratory data analysis and prognostic model development. Comparing results when using the synthetic against the real datasets, I show the potential for synthetic data in facilitating prognostic modelling.
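
    The thesis' actual models, data and results are not reproduced. The scikit-learn sketch below only illustrates the shape of the approach described: an ensemble classifier over the three named predictors (age, smoking duration, pack-years), wrapped in probability calibration, trained here on synthetic stand-in data rather than UK Biobank or trial records.

```python
# Illustrative sketch on synthetic stand-in data: an ensemble over age, smoking
# duration and pack-years, calibrated to output five-year risk probabilities.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(50, 80, n),    # age (years)
    rng.uniform(10, 60, n),    # smoking duration (years)
    rng.uniform(5, 100, n),    # pack-years
])
# Synthetic outcome loosely increasing with all three predictors.
logit = -8 + 0.04 * X[:, 0] + 0.03 * X[:, 1] + 0.02 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = CalibratedClassifierCV(GradientBoostingClassifier(random_state=0), cv=3)
model.fit(X_train, y_train)

# Predicted five-year risk for one hypothetical ever-smoker.
print(model.predict_proba([[65, 40, 30]])[0, 1])
```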

    Analytical and simulation performance modelling of indoor infrared wireless data communications protocols

    The infrared (IR) optical medium provides an alternative to radio frequencies (RF) for low-cost, low-power, short-range indoor wireless data communications. Low-cost optoelectronic components together with an unregulated IR spectrum provide the potential for very high-speed wireless communication with good security. However, IR links have a limited range and are susceptible to high noise levels from ambient light sources. The Infrared Data Association (IrDA) has produced a set of communication protocol standards (IrDA 1.x) for directed point-to-point IR wireless links using an HDLC (High-level Data Link Control) based data link layer, which have been widely adopted. To address the requirement for multi-point ad-hoc wireless connectivity, IrDA has produced a new standard (Advanced Infrared, AIr) to support multiple-device non-directed IR Wireless Local Area Networks (WLANs). AIr employs an enhanced physical layer and a CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) based MAC (Media Access Control) layer employing RTS/CTS (Request To Send / Clear To Send) media reservation. This thesis is concerned with the design of IrDA-based IR wireless links at the data link layer, media access sub-layer and physical layer, and presents protocol performance models with the aim of highlighting the critical factors affecting performance and providing recommendations to system designers for parameter settings and protocol enhancements to optimise performance. An analytical model of the IrDA 1.x data link layer (IrLAP, Infrared Link Access Protocol), using Markov analysis of the transmission window width and providing saturation-condition throughput in relation to the link bit error rate (BER), data rate and protocol parameter settings, is presented. Results are presented for simultaneous optimisation of the data packet size and transmission window size. A simulation model of the IrDA 1.x protocol, developed with OPNET Modeler, is used for validation of the analytical results and to produce non-saturation throughput and delay performance results. An analytical model of the AIr MAC protocol, providing saturation-condition utilisation and delay results in relation to the number of contending devices and MAC protocol parameters, is presented. Results indicate contention window size values for optimum utilisation. The effectiveness of the AIr contention window linear back-off process is examined through Markov analysis. An OPNET simulation model of the AIr protocol is used for validation of the analytical model results and provides non-reservation throughput and delay results. An analytical model of the IR link physical layer is presented, deriving expressions for signal-to-noise ratio (SNR) and BER in relation to link transmitter and receiver characteristics, link geometry, noise levels and line encoding schemes. The effect of third-user interference on BER and the resulting link asymmetry is also examined, indicating the minimum separation distance for adjacent links. Expressions for BER are linked to the data link layer analysis to provide optimum throughput results in relation to physical layer properties and link distance.
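
    The thesis derives its throughput models via Markov analysis of the transmission window, which is not reproduced here. The snippet below is only a simplified, illustrative calculation of how BER, packet size and window size interact on a saturated IrLAP-style link, under the crude assumption that the whole window is retransmitted if any frame in it is corrupted and that turnaround and acknowledgement overheads are negligible. Parameter values are examples, not results from the thesis.

```python
# Simplified, illustrative IrLAP-style saturation throughput estimate.
# Crude assumption: the whole window is resent whenever any frame is corrupted,
# and turnaround/acknowledgement overheads are ignored.
def saturation_throughput(bit_rate, ber, packet_bytes, window):
    frame_bits = 8 * packet_bytes
    p_frame_ok = (1.0 - ber) ** frame_bits        # probability a frame arrives intact
    p_window_ok = p_frame_ok ** window            # probability the whole window does
    expected_attempts = 1.0 / p_window_ok         # geometric number of window transmissions
    useful_bits = window * frame_bits             # payload delivered per successful window
    bits_on_air = useful_bits * expected_attempts
    return bit_rate * useful_bits / bits_on_air   # effective throughput in bit/s

for packet_bytes in (64, 512, 2048):
    thr = saturation_throughput(bit_rate=4_000_000, ber=1e-5,
                                packet_bytes=packet_bytes, window=7)
    print(f"{packet_bytes:5d} B packets: ~{thr / 1e6:.2f} Mb/s")
```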

    The Impact Of Technology Trust On The Acceptance Of Mobile Banking Technology Within Nigeria

    With advancement in the use of information technology seen as a key factor in economic development, developed countries are increasingly reviewing traditional systems in sectors such as education, health, transport and finance, and identifying how they may be improved or replaced with automated systems. In this study, the authors examine the role of technology trust in the acceptance of mobile banking in Nigeria as the country attempts to transition into a cashless economy. For Nigeria, like many other countries, economic growth is linked, at least in part, to improvement in information technology infrastructure, as well as to establishing secure, convenient and reliable payment systems. Utilising the Technology Acceptance Model, this study investigates causal relationships between technology trust and other factors influencing users’ intention to adopt technology, focusing on the impact of seven factors contributing to technology trust. Data from 1,725 respondents were analysed using confirmatory factor analysis, and the results showed that confidentiality, integrity, authentication, access control, best business practices and non-repudiation significantly influenced technology trust. Technology trust showed a direct significant influence on perceived ease of use and usefulness, a direct influence on intention to use, as well as an indirect influence on intention to use through its impact on perceived usefulness and perceived ease of use. Furthermore, perceived ease of use and perceived usefulness showed a significant influence on consumers’ intention to adopt the technology. With mobile banking being a key driver of Nigeria’s cashless-economy goals, this study provides quantitative knowledge regarding technology trust and adoption behaviour in Nigeria, as well as significant insight into areas where policy makers and mobile banking vendors can focus strategies engineered to improve trust in mobile banking and increase user adoption of their technology.
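
    The study's measurement instrument and estimates are not reproduced here. As a loose sketch of how a TAM-with-technology-trust path model of this kind could be specified in Python, the snippet below uses the semopy SEM package with lavaan-style syntax; the indicator names and the CSV of survey responses are entirely hypothetical, and only a subset of the trust factors is shown.

```python
# Hypothetical sketch (invented indicator names and data file) of a TAM path
# model with a technology-trust latent factor, using the semopy SEM package.
import pandas as pd
from semopy import Model

MODEL_DESC = """
TechnologyTrust =~ confidentiality + integrity + authentication + access_control
PerceivedUsefulness =~ pu1 + pu2 + pu3
PerceivedEaseOfUse =~ peou1 + peou2 + peou3
IntentionToUse =~ itu1 + itu2
PerceivedEaseOfUse ~ TechnologyTrust
PerceivedUsefulness ~ TechnologyTrust + PerceivedEaseOfUse
IntentionToUse ~ TechnologyTrust + PerceivedUsefulness + PerceivedEaseOfUse
"""

survey = pd.read_csv("responses.csv")   # hypothetical table of Likert-scale items
model = Model(MODEL_DESC)
model.fit(survey)
print(model.inspect())                  # path estimates and significance
```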