
    CamFlow: Managed Data-sharing for Cloud Services

    A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, whereas many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They also require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within `Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing. These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; this is a crucial issue once data has left its owner's control, flowing through cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...] Comment: 14 pages, 8 figures
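    The tag-based flow rule that underpins this style of IFC can be sketched in a few lines. The sketch below is illustrative only: the Entity class, the tag names, and the can_flow helper are assumptions made for exposition rather than CamFlow's actual kernel interface; it encodes the common rule that a flow is permitted only if the destination holds at least the source's secrecy tags and at most its integrity tags.

```python
# Minimal, illustrative sketch of tag-based IFC flow checking.
# The Entity class, tag names, and can_flow() helper are assumptions
# for exposition, not CamFlow's actual API.
from dataclasses import dataclass, field


@dataclass
class Entity:
    """A process, file, or socket carrying secrecy and integrity tags."""
    name: str
    secrecy: frozenset = field(default_factory=frozenset)
    integrity: frozenset = field(default_factory=frozenset)


def can_flow(src: Entity, dst: Entity) -> bool:
    """A flow src -> dst is safe if the destination holds at least the
    source's secrecy tags and at most its integrity tags."""
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity


# Example: data tagged 'medical' may flow to an analytics service that
# carries the same secrecy tag, but not to an untagged mailing service.
record = Entity("patient_record", secrecy=frozenset({"medical"}))
analytics = Entity("analytics_svc", secrecy=frozenset({"medical"}))
mailer = Entity("ad_mailer")

assert can_flow(record, analytics)   # permitted
assert not can_flow(record, mailer)  # blocked: the 'medical' tag would be lost
```

    Because every attempted flow passes through a check of this kind, logging each decision is what yields the kind of system-wide audit trail the abstract describes.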

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next-generation BI environments. In this paper we present a survey of today's research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that remain to be addressed and showing how current solutions can be applied to address them. Peer reviewed. Postprint (author's final draft).
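    As a concrete illustration of the batched extract-transform-load pattern that the survey contrasts with runtime data integration, the minimal sketch below pulls rows from two hypothetical sources, conforms them to a single warehouse schema, and loads them. The source data, column names, and the in-memory SQLite "warehouse" are assumptions made for this example, not material from the survey.

```python
# A minimal, illustrative batch ETL flow: extract rows from heterogeneous
# sources, conform them to one warehouse schema, and load them.
# The source data, column names, and in-memory SQLite warehouse are
# assumptions for this sketch.
import sqlite3


def extract():
    # Stand-in for reading from operational sources (files, APIs, databases).
    web_shop = [{"id": "A-1", "amount": "19.90"}, {"id": "A-2", "amount": "5.00"}]
    point_of_sale = [{"order": "P-7", "total": "12.50"}]
    return web_shop, point_of_sale


def transform(web_shop, point_of_sale):
    # Conform both source schemas to the target schema (source, order_id, amount).
    rows = [("web_shop", r["id"], float(r["amount"])) for r in web_shop]
    rows += [("pos", r["order"], float(r["total"])) for r in point_of_sale]
    return rows


def load(rows, conn):
    conn.executemany(
        "INSERT INTO sales (source, order_id, amount) VALUES (?, ?, ?)", rows
    )
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (source TEXT, order_id TEXT, amount REAL)")
load(transform(*extract()), conn)
print(conn.execute(
    "SELECT source, COUNT(*), SUM(amount) FROM sales GROUP BY source"
).fetchall())
```

    A more operational, near-real-time flow would run the same conform-and-load logic per incoming event rather than in periodic batches, which is the design tension the survey's dimensions are meant to capture.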

    Rural sustainable drainage systems: a practical design and build guide for Scotland's farmers and landowners

    Soil cultivation, manure/fertiliser applications and chemical spraying can all contribute to diffuse pollution from agricultural land. Rainfall runoff from farm roads, tracks, yards and dusty roofs is also a potential source of diffuse pollution. Whilst many changes in farming practice have dealt with these sources of pollution, there still remain instances where small amounts escape from a farmyard into a nearby ditch, or where sediment-laden overland field flows make their way into a ditch or burn, river or natural wetland and finally the sea. This not only has cost implications for the farmer; across a catchment, these incidents also have a huge impact on our water environment. Rural Sustainable Drainage Systems (Rural SuDS) will reduce the impacts of agricultural diffuse pollution, as they are physical barriers that treat rainfall runoff. They are low-cost, above-ground drainage structures that capture soil particles, organic matter, nutrients and pesticides before they enter our water environment. Rural SuDS for steadings prevent blockages in drains and ditches. They contribute to good environmental practice and farm assurance schemes. In fields they can be used for returning fertile soil to farmland and will help your business become more resilient to the impacts of climate change. Trapping soils, organic matter and nutrients means that valuable assets can be reclaimed – recent studies indicate savings of £88 per hectare per year! This Design and Build guide can be used by farmers and land managers to reduce diffuse pollution.

    Automatic vs Manual Provenance Abstractions: Mind the Gap

    In recent years the need to simplify or to hide sensitive information in provenance has given rise to research on provenance abstraction. In the context of scientific workflows, existing research provides techniques to semi-automatically create abstractions of a given workflow description, which are in turn used as filters over the workflow's provenance traces. An alternative approach commonly adopted by scientists is to build workflows with abstractions embedded into the workflow's design, such as using sub-workflows. This paper reports on a comparison of manual versus semi-automated approaches in a context where result abstractions are used to filter report-worthy results of computational scientific analyses. Specifically, we take a real-world workflow containing user-created design abstractions and compare these with abstractions created by the ZOOM UserViews and Workflow Summaries systems. Our comparison shows that semi-automatic and manual approaches largely overlap from a process perspective; meanwhile, there is a dramatic mismatch in terms of the data artefacts retained in an abstracted account of derivation. We discuss reasons and suggest future research directions. Comment: Preprint accepted to the 2016 workshop on the Theory and Applications of Provenance, TAPP 2016
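    The effect being measured, namely how abstraction changes which data artefacts survive in a provenance account, can be illustrated with a toy sketch. The trace, task names, and grouping below are invented for illustration; they are not taken from the paper or from the ZOOM UserViews or Workflow Summaries tools.

```python
# Illustrative sketch: abstracting a provenance trace by collapsing a group
# of activities into one abstract node, as a user-created sub-workflow would.
# The trace, task names, and grouping are invented for this example.

# Provenance edges: (producing activity, data artefact, consuming activity)
trace = [
    ("align_reads", "aligned.bam", "call_variants"),
    ("call_variants", "variants.vcf", "annotate"),
    ("annotate", "report.html", "publish"),
]

# Manual-style abstraction: the first two activities form one sub-workflow.
grouping = {
    "align_reads": "variant_pipeline",
    "call_variants": "variant_pipeline",
}


def abstract(trace, grouping):
    """Rewrite each activity to its group; drop edges internal to a group."""
    abstracted = []
    for producer, artefact, consumer in trace:
        p = grouping.get(producer, producer)
        c = grouping.get(consumer, consumer)
        if p != c:  # artefacts that stay inside a group disappear
            abstracted.append((p, artefact, c))
    return abstracted


print(abstract(trace, grouping))
# [('variant_pipeline', 'variants.vcf', 'annotate'),
#  ('annotate', 'report.html', 'publish')]
```

    Note that aligned.bam no longer appears anywhere in the abstracted account: this is the kind of mismatch in retained data artefacts that the comparison reports, since different abstraction strategies draw the group boundaries, and hence hide the intermediate data, differently.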

    Exploiting rules and processes for increasing flexibility in service composition

    Recent trends in the use of service-oriented architecture for designing, developing, managing, and using distributed applications have resulted in an increasing number of independently developed and physically distributed services. These services can be discovered, selected and composed to develop new applications and to meet emerging user requirements. Service composition is generally defined on the basis of business processes, in which the underlying composition logic is guided by specifying control and data flows through Web service interfaces. User demands as well as the services themselves may change over time, which leads to replacing or adjusting the composition logic of previously defined processes. Coping with change is still one of the fundamental problems in current process-based composition approaches. In this paper, we exploit declarative and imperative design styles to achieve better flexibility in service composition.
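    One common way to combine the two styles is to keep the composition's control flow imperative while letting declarative rules decide, at runtime, which concrete service is bound to each step. The sketch below shows this idea with invented service names and rules; it illustrates the general pattern, not the specific approach defined in the paper.

```python
# Illustrative sketch: an imperative composition whose steps are bound to
# concrete services by declarative rules evaluated at runtime.
# Service names, rules, and order fields are invented for this example.

def cheap_shipping(order):
    return {"carrier": "economy", "cost": 4.0}


def express_shipping(order):
    return {"carrier": "express", "cost": 12.0}


# Declarative layer: ordered (condition, service) rules. Changing a binding
# means editing this table, not the process definition below.
shipping_rules = [
    (lambda order: order["priority"] == "high", express_shipping),
    (lambda order: True, cheap_shipping),  # default rule
]


def select(rules, order):
    return next(service for condition, service in rules if condition(order))


def fulfil(order):
    # Imperative layer: the control flow of the composition stays fixed.
    validated = {**order, "validated": True}
    shipment = select(shipping_rules, validated)(validated)
    return {"order": validated, "shipment": shipment}


print(fulfil({"id": 17, "priority": "high"}))   # binds express_shipping
print(fulfil({"id": 18, "priority": "low"}))    # falls back to cheap_shipping
```

    Keeping the rules outside the process definition is what lets the composition adapt to changing services or user demands without redefining the process itself.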

    Evaluating the Contextual Integrity of Privacy Regulation: Parents' IoT Toy Privacy Norms Versus COPPA

    Increased concern about data privacy has prompted new and updated data protection regulations worldwide. However, there has been no rigorous way to test whether the practices mandated by these regulations actually align with the privacy norms of affected populations. Here, we demonstrate that surveys based on the theory of contextual integrity provide a quantifiable and scalable method for measuring the conformity of specific regulatory provisions to privacy norms. We apply this method to the U.S. Children's Online Privacy Protection Act (COPPA), surveying 195 parents and providing the first data showing that COPPA's mandates generally align with parents' privacy expectations for Internet-connected "smart" children's toys. Nevertheless, variations in the acceptability of data collection across specific smart toys, information types, parent ages, and other conditions emphasize the importance of detailed contextual factors to privacy norms, which may not be adequately captured by COPPA. Comment: 18 pages, 1 table, 4 figures, 2 appendices
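    The survey method amounts to rating the acceptability of information flows described by contextual-integrity parameters and comparing those ratings with what a regulation permits. The sketch below shows the flavour of that scoring; the flows, ratings, and "permitted" set are fabricated placeholders for illustration and are not the paper's data or COPPA's actual provisions.

```python
# Illustrative sketch of scoring contextual-integrity survey responses.
# Each flow is a tuple of CI parameters; each response is an acceptability
# rating from -2 (completely unacceptable) to +2 (completely acceptable).
# Flows, ratings, and the 'permitted' set are invented for this example.
from statistics import mean

# (information type, recipient, transmission principle) -> ratings
responses = {
    ("audio of the child", "toy maker", "with parental consent"): [1, 2, 0, 1],
    ("audio of the child", "advertisers", "to serve targeted ads"): [-2, -2, -1, -2],
    ("device identifier", "toy maker", "to provide the service"): [0, 1, 1, 0],
}

# Flows the regulation is taken to permit (a placeholder assumption).
permitted = {
    ("audio of the child", "toy maker", "with parental consent"),
    ("device identifier", "toy maker", "to provide the service"),
}

for flow, ratings in responses.items():
    score = mean(ratings)
    aligned = (score >= 0) == (flow in permitted)
    verdict = "matches" if aligned else "conflicts with"
    print(f"{flow}: mean acceptability {score:+.2f}, {verdict} the permitted set")
```

    Aggregating such scores across many respondents and flow parameters is what makes the approach quantifiable and scalable; disagreement between average acceptability and the permitted set flags provisions that may not capture the relevant contextual factors.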