    Safeguarding against new privacy threats in inter-enterprise collaboration environments

    Inter-enterprise collaboration has become essential for the success of enterprises. As competition increasingly takes place between supply chains and networks of enterprises, there is a strategic business need to participate in multiple collaborations simultaneously. Collaborations based on an open market of autonomous actors place special requirements on the computing facilities that support the setup and management of these business networks. Currently, the safeguards against privacy threats in collaborations that cross organizational borders are both insufficient and incompatible with the open market. A broader understanding is needed of the architecture of defense structures, and privacy threats must be detected not only at the level of a private person or enterprise but at the community and ecosystem levels as well. Control measures must be automated wherever possible to keep the cost and effort of collaboration management reasonable. This article contributes to the understanding of the modern inter-enterprise collaboration environment and the privacy threats within it, and presents the automated control measures required to ensure that actors in inter-enterprise collaborations behave correctly and preserve privacy.

    Smittestopp − A Case Study on Digital Contact Tracing

    This open access book describes Smittestopp, the first Norwegian system for digital contact tracing of Covid-19 infections, which was developed in March and early April 2020. The system was deployed after five weeks of development and was active for a little more than two months, until a drop in infection levels in Norway and privacy concerns led to its shutdown. The intention of this book is twofold. First, it reports on the design choices made during the development phase. Second, because Smittestopp was one of the few systems in the world that collected data for an entire population into a central database, we can share experience on how those design choices affected the system's operation. By sharing the lessons learned and the challenges faced during the development and deployment of the technology, we hope this book can be a valuable guide for experts from different domains, such as big-data collection and analysis, application development, and deployment to a national population, as well as digital contact tracing.

    Newman v. Google

    3rd amended complaint.

    Ensuring compliance with data privacy and usage policies in online services

    Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with these policies is critical, and often legally binding, for service providers, but it is challenging because applications are prone to many disclosure threats. We present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively. Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access control, data linking, and aggregation policies. Conventional methods inline policy checks with application code. Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code. Pacer prevents network side-channel leaks in cloud applications. A tenant's secrets may leak via the shape of its network traffic, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in a tenant's traffic shape but allows variation based on non-secret information, enabling secure and efficient use of network resources in the cloud. Both systems require modest development effort and incur moderate performance overheads, demonstrating their usability.
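
    Qapla's key idea, as described above, is that policies live next to the database and are enforced in an adapter rather than inlined in application code. The sketch below illustrates that separation in miniature; the adapter class, policy table, and role keys are hypothetical illustrations of the idea, not Qapla's actual policy language or API.

```python
# Minimal sketch of policy enforcement in a database adapter, in the spirit
# of Qapla. All names (PolicyAdapter, POLICIES, role keys) are hypothetical.
import sqlite3

# Policies are declared next to the data, keyed by (table, role): each policy
# is a WHERE clause that the adapter appends to every query on that table.
POLICIES = {
    ("employees", "hr"):      "1 = 1",                  # HR may read all rows
    ("employees", "manager"): "dept = :viewer_dept",    # managers: own dept only
    ("employees", "staff"):   "emp_id = :viewer_id",    # staff: own row only
}

class PolicyAdapter:
    """Adapter that rewrites queries so compliance never lives in app code."""
    def __init__(self, conn):
        self.conn = conn

    def select(self, columns, table, role, ctx):
        policy = POLICIES[(table, role)]  # unknown (table, role) -> KeyError, i.e. deny
        sql = f"SELECT {columns} FROM {table} WHERE {policy}"
        return self.conn.execute(sql, ctx).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_id INTEGER, name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)",
                 [(1, "Ada", "eng", 90.0), (2, "Ben", "eng", 80.0), (3, "Eve", "ops", 85.0)])

adapter = PolicyAdapter(conn)
# The application issues plain queries; the adapter adds the policy check.
print(adapter.select("name, salary", "employees", "manager",
                     {"viewer_dept": "eng"}))   # [('Ada', 90.0), ('Ben', 80.0)]
print(adapter.select("name", "employees", "staff",
                     {"viewer_id": 3}))         # [('Eve',)]
```

    Because enforcement happens in the adapter, changing who may see what requires editing only the policy table, never the application.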

    Agile and Lean Systems Engineering: Kanban in Systems Engineering

    This is the second of two reports created for research on this topic funded through SERC. The first report, SERC-TR-032-1, dated March 13, 2012, constituted the 2011-2012 Annual Technical Report and the Final Technical Report of SERC Research Task RT-6: Software Intensive Systems Data Quality and Estimation Research in Support of Future Defense Cost Analysis. The overall objective of RT-6 was to use data submitted to DoD in the Software Resources Data Report (SRDR) forms to provide guidance for estimating software costs for future DoD projects. In analyzing the data, the project found variances in productivity data that made such SRDR-based estimates highly variable. The project then performed additional analyses that provided better bases of estimate, and it also identified ambiguities in the SRDR data definitions, which enabled the project to help the DoD DCARC organization develop better SRDR data definitions. In SERC-TR-2012-032-1, the resulting manual provided the guidance elements for software cost estimation performers and users. Several appendices provide further related information on acronyms, sizing, nomograms, work breakdown structures, and references. SERC-TR-2013-032-2 (the current report) includes the "Software Cost Estimation Metrics Manual" and constitutes the 2012-2013 Annual Technical Report and the Final Technical Report of SERC Research Task Order 0024, RT-6: Software Intensive Systems Cost and Schedule Estimation.

    Estimating the cost to develop a software application is different from almost any other manufacturing process. In other manufacturing disciplines, the product is developed once and replicated many times using physical processes. Replication improves physical-process productivity (duplicate machines produce more items faster), reduces learning-curve effects on people, and spreads unit cost over many items. A software application, by contrast, is a single production item: every application is unique. The only physical processes are the documentation of ideas, their translation into computer instructions, and their validation and verification. Productivity decreases, not increases, when more people are employed to develop a software application. Savings through replication are realized only in the development processes and in the learning-curve effects on the management and technical staff; unit cost is not reduced by creating the software application over and over again. This manual helps analysts and decision makers develop accurate, easy, and quick software cost estimates for different operating environments such as ground, shipboard, air, and space. It was developed by the Air Force Cost Analysis Agency (AFCAA) in conjunction with DoD Service Cost Agencies, assisted by the SERC through the involvement of the University of Southern California and the Naval Postgraduate School. The intent is to improve the quality and consistency of estimating methods across cost agencies and program offices through guidance, standardization, and knowledge sharing. The manual consists of chapters on metric definitions (e.g., what is meant by equivalent lines of code), examples of metric definitions from commercially available cost models, the data collection and repository form, guidelines for preparing the data for analysis, analysis results, cost estimating relationships found in the data, productivity benchmarks, future cost estimation challenges, and a very large appendix. Funding: U.S. Department of Defense, Systems Engineering Research Center (SERC), Contract H98230-08-D-0171.
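
    The manual's metric-definition chapter turns on questions such as what counts as "equivalent lines of code". As a hedged illustration of the kind of definition involved, the sketch below uses the COCOMO II adaptation-adjustment convention for converting reused code into equivalent new lines; the manual defines its own metrics, so the weights here illustrate the idea rather than reproduce the manual.

```python
# Hedged sketch: one common convention for "equivalent SLOC" (the COCOMO II
# adaptation adjustment factor). The 0.4/0.3/0.3 weights are COCOMO II's
# convention, used here only to illustrate the idea; the Metrics Manual
# defines its own metrics.
def equivalent_sloc(new_sloc: int, adapted_sloc: int,
                    dm: float, cm: float, im: float) -> float:
    """new_sloc: lines written from scratch.
    adapted_sloc: reused lines that required rework.
    dm, cm, im: fractions (0..1) of design modified, code modified,
    and integration/test effort required for the reused code."""
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im   # adaptation adjustment factor
    return new_sloc + adapted_sloc * aaf

# Example: 10,000 new lines plus 50,000 reused lines with light rework.
print(equivalent_sloc(10_000, 50_000, dm=0.10, cm=0.20, im=0.30))  # 19500.0
```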

    Pattern operators for grid

    The definition and programming of distributed applications has become a major research issue due to the increasing availability of (large-scale) distributed platforms and the requirements posed by economic globalization. However, such a task requires a huge effort due to the complexity of distributed environments: large numbers of users may communicate and share information across different authority domains; moreover, the "execution environment" or "computations" are dynamic, since the number of users and the computational infrastructure change over time. Grid environments, in particular, promise to be an answer to such complexity by providing high-performance execution support to large numbers of users and resource sharing across different organizations. Nevertheless, programming in Grid environments is still a difficult task: there is a lack of high-level programming paradigms and support tools that can guide the application developer and allow reuse of state-of-the-art solutions. The main goal of the work presented in this thesis is to contribute to simplifying the development cycle of applications for Grid environments by bringing structure and flexibility to three stages of that cycle through a common model. The stages are the design phase, the execution phase, and the reconfiguration phase. The common model is based on the manipulation of patterns through pattern operators, and on the division of both patterns and operators into two categories, structural and behavioural. Moreover, both structural and behavioural patterns are first-class entities at each of these stages. At the design phase, patterns can be manipulated like other first-class entities such as components, which allows a more structured way to build applications by reusing and composing state-of-the-art patterns. At the execution phase, patterns are units of execution control: it is possible, for example, to start, stop, and resume the execution of a pattern as a single entity. At the reconfiguration phase, patterns can also be manipulated as single entities, with the additional advantage that it is possible to perform a structural reconfiguration while keeping some of the behavioural constraints, and vice versa: for example, a behavioural pattern applied to some structural pattern can be replaced with another behavioural pattern. Besides proposing this methodology for distributed application development, the thesis defines a relevant set of pattern operators. The methodology and the expressivity of the pattern operators were assessed through the development of several representative distributed applications. To support this validation, a prototype was designed and implemented, encompassing some relevant patterns and a significant part of the defined pattern operators. The prototype was based on the Triana environment, which supports the development and deployment of distributed applications in the Grid through a dataflow-based programming model. Additionally, the thesis presents an analysis of a mapping of some execution-control operators onto the Distributed Resource Management Application API (DRMAA). This assessment confirmed the suitability of the proposed model, as well as the generality and flexibility of the defined pattern operators. Funding and support: Departamento de Informática and Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa; Centro de Informática e Tecnologias da Informação of the FCT/UNL; Reitoria da Universidade Nova de Lisboa; Distributed Collaborative Computing Group, Cardiff University, United Kingdom; Fundação para a Ciência e Tecnologia; Instituto de Cooperação Científica e Tecnológica Internacional; French Embassy in Portugal; European Union Commission through the Agentcities.NET and Coordina projects; and the European Science Foundation, EURESCO.
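
    To make the model concrete, the sketch below renders structural patterns, behavioural patterns, and a few execution-control operators as first-class values; the class and operator names are illustrative stand-ins, not the thesis's concrete operator set or Triana's API.

```python
# Illustrative sketch of patterns as first-class entities manipulated by
# operators. All names are hypothetical; the thesis defines its own operators.
class StructuralPattern:
    """A structural pattern groups components, e.g. the stages of a pipeline."""
    def __init__(self, name, components):
        self.name = name
        self.components = list(components)
        self.behaviour = None      # behavioural pattern currently applied
        self.running = False

# Behavioural operator: attach (or replace) a behavioural pattern while the
# structure stays untouched -- the reconfiguration property described above.
def apply_behaviour(pattern, behaviour):
    pattern.behaviour = behaviour
    return pattern

# Execution-control operators: the pattern is started and stopped as a single
# entity, regardless of how many components it contains.
def start(pattern):
    pattern.running = True

def stop(pattern):
    pattern.running = False

pipeline = StructuralPattern("render", ["stage-in", "compute", "stage-out"])
apply_behaviour(pipeline, "streaming")
start(pipeline)
apply_behaviour(pipeline, "master-worker")   # behavioural reconfiguration
stop(pipeline)                               # structure unchanged throughout
```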

    Constructing and restraining the societies of surveillance: Accountability, from the rise of intelligence services to the expansion of personal data networks in Spain and Brazil (1975-2020)

    The objective of this study is to examine the development of socio-technical accountability mechanisms in order to: a) preserve and increase the autonomy of individuals subjected to surveillance, and b) redress the asymmetry of power between those who watch and those who are watched. To do so, we address two surveillance realms: intelligence services and personal data networks. The cases studied are Spain and Brazil, from the beginning of the political transitions in the 1970s (in the realm of intelligence) and from the expansion of Internet digital networks in the 1990s (in the realm of personal data) to the present time. The examination of accountability thus comprises a holistic evolution of institutions, regulations, market strategies, and resistance tactics. The conclusion summarizes the accountability mechanisms and proposes universal principles to improve the legitimacy of authority in surveillance and in politics in a broad sense.

    Energieeffizienz in Büroumgebungen (Energy Efficiency in Office Environments)

    The increasing cost of energy and the worldwide desire to reduce CO2 emissions have raised concern about the energy efficiency of information and communication technology. While recent research has focused on data centres, this thesis identifies office computing environments as significant consumers of energy. Office computing environments offer great potential for energy savings: on the one hand, such environments consist of a large number of hosts; on the other hand, these hosts often remain turned on 24 hours a day while underutilized or even idle. This thesis analyzes the energy consumption within office computing environments and proposes an energy-efficient virtualized office environment. The office environment is virtualized to obtain flexible virtualized office resources that enable energy-based resource management. This resource management stops idle services and idle hosts from consuming resources within the office and consolidates utilized office services onto a subset of the office hosts. This increases the utilization of some hosts, while other hosts are turned off to save energy. The suggested architecture is based on a decentralized approach that can be applied to all kinds of office computing environments, even if no centralized data centre infrastructure is available. The thesis develops the architecture of the virtualized office environment together with an energy consumption model that can estimate the energy consumption of hosts and the network within office environments. The model enables the energy-related comparison of ordinary and virtualized office environments, considering the energy-efficient management of services. Furthermore, the thesis evaluates the energy efficiency and overhead of the suggested approach. First, it proves the energy efficiency of the virtualized office environment theoretically, with respect to the energy consumption model. Second, it uses Markov processes to evaluate the impact of user behaviour on the suggested architecture. Finally, the thesis develops a discrete-event simulation that enables the simulation and evaluation of office computing environments under varying virtualization approaches, resource management parameters, user behaviour, and office equipment. The evaluation shows that the virtualized office environment saves more than half of the energy consumed within office computing environments, depending on user behaviour and office equipment. In summary, the proposed energy-efficient architecture relies on the virtualization and consolidation of services without depending on centralized data-centre hardware or thin clients.
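
    As a back-of-the-envelope illustration of the comparison the thesis's energy model enables, the sketch below contrasts an ordinary office, where every host stays on around the clock, with a consolidated one; all power figures and the consolidation ratio are invented for illustration, not the thesis's measured values.

```python
# Toy energy comparison in the spirit of the thesis's model. Power draws,
# duty cycles, and the consolidation ratio are invented for illustration.
HOURS_PER_DAY = 24

def ordinary_office_kwh_per_day(hosts, active_watts=90, idle_watts=60,
                                active_hours=8):
    """Every host stays on 24 hours: active during work, idle the rest."""
    per_host_wh = (active_hours * active_watts
                   + (HOURS_PER_DAY - active_hours) * idle_watts)
    return hosts * per_host_wh / 1000.0

def virtualized_office_kwh_per_day(hosts, consolidation_ratio=8,
                                   active_watts=90):
    """Services are consolidated onto a fraction of the hosts, which run
    well utilized; the remaining hosts are switched off entirely."""
    hosts_on = max(1, hosts // consolidation_ratio)
    return hosts_on * HOURS_PER_DAY * active_watts / 1000.0

print(ordinary_office_kwh_per_day(100))     # 168.0 kWh/day, 100 always-on hosts
print(virtualized_office_kwh_per_day(100))  # 25.92 kWh/day, 12 hosts stay on
```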