12 research outputs found

    MixFlow: Assessing Mixnets Anonymity with Contrastive Architectures and Semantic Network Information

    Get PDF
    Traffic correlation attacks have illustrated the challenges of protecting communication meta-data, yet short flows, as in messaging applications like Signal, have so far been protected from such attacks by practical Mixnets such as Loopix. This paper introduces a novel traffic correlation attack against short-flow applications like Signal that are tunneled through practical Mixnets like Loopix. We propose the MixFlow model, an approach for analyzing the unlinkability of communications through Mix networks; as a prominent example, we base our analysis on Loopix. MixFlow is a contrastive model that looks for semantic relationships between entry and exit flows, even if the traffic is tunneled through Mixnets like Loopix that protect meta-data via Poisson mixing delay and cover traffic. We use the MixFlow model to evaluate the resistance of Loopix Mix networks against an adversary that observes only the inflow and outflow of the Mixnet and tries to correlate communication flows. Our experiments indicate that the MixFlow model is exceptionally proficient at linking end-to-end flows, even when the Poisson delay and cover traffic are increased. These findings challenge the conventional notion that adding Poisson mixing delay and cover traffic can obscure the metadata patterns and relationships between communicating parties. Despite the implementation of Poisson mixing countermeasures in Mixnets, MixFlow is still capable of effectively linking end-to-end flows, enabling the extraction of meta-information and correlation between inflows and outflows. Our findings have important implications for existing Poisson-mixing techniques and open up new opportunities for analyzing the anonymity and unlinkability of communication protocols.
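    The core correlation step — scoring semantic similarity between entry-flow and exit-flow embeddings and matching the closest pair — can be sketched as follows. This is a minimal illustration with hypothetical toy feature vectors, not the authors' implementation:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two flow feature vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def correlate(entry_flows, exit_flows):
        """For each entry flow, pick the exit flow with the highest
        embedding similarity -- the candidate correlated pair."""
        matches = {}
        for name, e in entry_flows.items():
            best = max(exit_flows, key=lambda x: cosine(e, exit_flows[x]))
            matches[name] = best
        return matches

    # Hypothetical embeddings as produced by a contrastive encoder.
    entries = {"A": [0.9, 0.1, 0.0], "B": [0.0, 0.8, 0.6]}
    exits = {"X": [0.85, 0.15, 0.05], "Y": [0.1, 0.7, 0.7]}
    print(correlate(entries, exits))  # -> {'A': 'X', 'B': 'Y'}
    ```

    In the paper's setting the encoder is trained contrastively so that an entry flow and its true exit flow land close together in embedding space even after Poisson delays and cover traffic perturb the observable timing.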

    MPC With Delayed Parties Over Star-Like Networks

    Get PDF
    While the efficiency of secure multi-party computation protocols has greatly increased in the last few years, these improvements and protocols are often based on rather unrealistic, idealised assumptions about how technology is deployed in the real world. In this work we examine multi-party computation protocols in the presence of two major constraints present in deployed systems. Firstly, we consider the situation where the parties are connected not by direct point-to-point connections, but by a star-like topology with a few central post-office style relays. Secondly, we consider MPC protocols with a strong honest majority (n ≫ t/2) in which we have stragglers (some parties are progressing slower than others). We model stragglers by allowing the adversary to delay messages to and from some parties for a given length of time. We first show that having only a single honest relay is enough to ensure consensus on the messages sent within a protocol. Secondly, we show that special care must be taken when describing multiplication protocols in the case of relays and stragglers, and that some well-known protocols do not guarantee privacy and correctness in this setting. Thirdly, we present an efficient honest-majority MPC protocol which can be run on top of the relays and which provides active security with abort in the case of a strong honest majority, even when run with stragglers. We back up our protocol presentation with both experimental evaluations and simulations of the effect of the relays and delays on our protocol.
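    Honest-majority protocols of this kind are typically built on Shamir secret sharing, where any t+1 shares reconstruct the secret but t or fewer reveal nothing. A minimal sketch over a small prime field (illustrative background only, not the paper's protocol; the field choice is a hypothetical parameter):

    ```python
    import random

    P = 2**61 - 1  # a Mersenne prime field; hypothetical parameter choice

    def share(secret, n, t):
        """Split `secret` into n Shamir shares with threshold t:
        shares are points on a random degree-t polynomial f with f(0)=secret."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation mod P
                acc = (acc * x + c) % P
            return acc
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0 over the field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(42, n=5, t=2)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 42
    ```

    The paper's contribution sits above this layer: routing such shares through untrusted relays and keeping multiplication secure when some parties straggle.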

    WF-Interop: Adaptive and reflective REST interfaces for interoperability between workflow engines

    No full text
    Software service providers are evolving towards a business process outsourcing (BPO) model to benefit from specialised services and facilities of external partners. Activation of external processes, as well as long-term and coarse-grained interaction with the outsourced processes, results in remote workflow interactions between heterogeneous and federated workflow systems. WF-Interop aims at addressing the interoperability issues by defining a set of REST interfaces that enable standardised communication between these workflow engines. The WF-Interop interface focuses on deployment, activation and progress monitoring of workflows. It intends to be an interface for new as well as existing workflow engines, in order to expose their functionalities in a RESTful architecture. Not all functionalities proposed by WF-Interop may be supported by every engine. As such, our standard API should adapt to the capabilities of each workflow engine and be reflective to consumers by describing the supported capabilities on demand. As a validation of the principles and architecture of WF-Interop, we created a proof-of-concept middleware and prototyped an accounting workflow with an outsourced billing workflow on top of it, using the jBPM and Ruote workflow engines.
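    The reflective aspect — letting consumers discover on demand which operations an engine supports — could look like the following sketch. The capability names and handler shape are hypothetical illustrations, not the WF-Interop specification:

    ```python
    import json

    # Hypothetical capability registry for two engines.
    ENGINE_CAPABILITIES = {
        "jBPM":  ["deploy", "activate", "monitor", "suspend"],
        "Ruote": ["deploy", "activate", "monitor"],
    }

    def describe_capabilities(engine):
        """Handler for a reflective GET /engines/<engine>/capabilities call:
        consumers learn at runtime which operations they may invoke,
        instead of failing on an unsupported one."""
        caps = ENGINE_CAPABILITIES.get(engine)
        if caps is None:
            return 404, json.dumps({"error": "unknown engine"})
        return 200, json.dumps({"engine": engine, "capabilities": caps})

    status, body = describe_capabilities("Ruote")
    print(status, body)
    ```

    A client can then degrade gracefully, e.g. skip `suspend` for an engine that does not advertise it.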

    Infracomposer: Policy-driven adaptive and reflective middleware for the cloudification of simulation & optimization workflows

    No full text
    The simulation and optimization of complex engineering designs in automotive or aerospace involves multiple mathematical tools, long-running workflows and resource-intensive computations on distributed infrastructures. Finding the optimal deployment in terms of task distribution, parallelization, collocation and resource assignment for each execution is a step-wise process involving both human input with domain-specific knowledge about the tools and the acquisition of new knowledge based on the actual execution history. In this paper, we present a policy-driven adaptive and reflective middleware that supports smart cloud-based deployment and execution of engineering workflows. This middleware supports deep inspection of the workflow task structure and execution, as well as of the very specific mathematical tools, their executions and used parameters. The reflective capabilities are based on multiple meta-models to reflect workflow structure, deployment, execution and resources. Adaptive deployment is driven both by human input in the form of meta-data annotations and by adaptation policies that reason over the actual execution history of the workflows. We validate and evaluate this middleware in real-life application cases and scenarios in the domain of aeronautics.
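    An adaptation policy that reasons over execution history might, in a much simplified sketch, look like the following. The record format, task names and threshold are hypothetical illustrations:

    ```python
    def choose_deployment(history, cpu_threshold=0.8):
        """Derive a deployment hint per task from past runs: if a task's
        average observed CPU utilisation exceeds the threshold, place it
        on a dedicated node; otherwise allow collocation with other tasks."""
        samples = {}
        for run in history:
            for task, cpu in run.items():
                samples.setdefault(task, []).append(cpu)
        return {
            task: "dedicated" if sum(v) / len(v) > cpu_threshold else "collocated"
            for task, v in samples.items()
        }

    # Two past runs of a hypothetical two-task engineering workflow.
    history = [
        {"mesher": 0.95, "solver": 0.40},
        {"mesher": 0.90, "solver": 0.35},
    ]
    print(choose_deployment(history))
    # -> {'mesher': 'dedicated', 'solver': 'collocated'}
    ```

    Real policies in such middleware would combine signals like this with human-provided annotations about the tools.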

    Adaptive and reflective middleware for the cloudification of simulation & optimization workflows

    No full text
    The simulation and optimization of complex engineering designs in automotive or aerospace involves multiple mathematical tools, long-running workflows and resource-intensive computations on distributed infrastructures. Finding the optimal deployment in terms of task distribution, parallelization, collocation and resource assignment for each execution is a step-wise process involving both human input with domain-specific knowledge about the tools and the acquisition of new knowledge based on the actual execution history. In this paper, we present motivating scenarios as well as an architecture for adaptive and reflective middleware that supports smart cloud-based deployment and execution of engineering workflows. This middleware supports deep inspection of the workflow task structure and execution, as well as of the very specific mathematical tools, their executions and used parameters. The reflective capabilities are based on multiple meta-models to reflect workflow structure, deployment, execution and resources. Adaptive deployment is driven both by human input in the form of meta-data annotations and by the actual execution history of the workflows.

    A framework for black-box SLO tuning of multi-tenant applications in Kubernetes

    No full text
    Resource management concepts of container orchestration platforms such as Kubernetes can be used to achieve multi-tenancy with quality of service differentiation between tenants. However, to support cost-effective enforcement of Service Level Objectives (SLOs) about response time or throughput, an automated resource optimization approach is needed for mapping custom SLOs of different tenants to cost-efficient resource allocation policies. We propose a versatile tool for cost-effective SLO tuning, named k8-resource-optimizer, that relies on black-box performance tuning algorithms. We illustrate and validate the tool by optimizing different resource configuration properties of a simple job processing application. Our experiments showed that k8-resource-optimizer can find near-optimal configurations for different multi-tenant deployment settings and different types of resource parameters. However, an open research challenge is that, when the number of parameters increases, the total tuning cost may also increase beyond what is acceptable for contemporary cloud-native applications. We briefly discuss three possible complementary solutions to tackle this challenge.
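    The black-box tuning loop at the heart of such a tool can be sketched as a search over candidate resource configurations, benchmarking each against the SLO and keeping the cheapest feasible one. The latency model and pricing below are toy stand-ins for real benchmark runs, not k8-resource-optimizer's actual algorithms:

    ```python
    def tune(configs, measure_latency, slo_ms, cost):
        """Return the cheapest configuration whose measured latency
        meets the SLO, or None if no candidate satisfies it."""
        feasible = [c for c in configs if measure_latency(c) <= slo_ms]
        return min(feasible, key=cost, default=None)

    # Toy stand-ins: latency falls as CPU millicores rise, cost rises.
    configs = [{"cpu_m": m} for m in (250, 500, 1000, 2000)]
    latency = lambda c: 200_000 / c["cpu_m"]   # hypothetical latency model
    cost = lambda c: c["cpu_m"] * 0.001        # hypothetical pricing

    print(tune(configs, latency, slo_ms=250, cost=cost))
    # -> {'cpu_m': 1000}: the cheapest config meeting the 250 ms SLO
    ```

    The challenge noted in the abstract shows up here directly: with k parameters the candidate grid grows exponentially, and each evaluation is a paid benchmark run.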

    Leveraging Kubernetes for adaptive and cost-efficient resource management

    No full text
    Software providers face the challenge of minimizing the amount of resources used while still meeting their customers' requirements. Several frameworks to manage resources and applications in a distributed environment are available, but their development is still ongoing and the state of the art is rapidly evolving, making it a challenge to use such frameworks and their features effectively in practice. The goal of this paper is to research how applications can be enhanced with adaptive performance management by relying on the capabilities of Kubernetes, a popular framework for container orchestration. In particular, horizontal as well as vertical scaling concepts of Kubernetes may prove useful to support adaptive resource allocation. Moreover, concepts for oversubscription, as a way to simulate vertical scaling without having to reschedule applications, are evaluated. Through a series of experiments involving multiple applications and workloads, the effects of different configurations and combinations of horizontal and vertical scaling in Kubernetes are explored. Both the resource utilization of the nodes and the applications' performance are taken into account. In brief, the resource management concepts of Kubernetes make it possible to simulate vertical scaling without a negative effect on performance. The effectiveness of the default horizontal autoscaler, however, depends on the type of application and the user workload at hand.
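    The default horizontal autoscaler mentioned above follows Kubernetes' documented scaling rule, desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to the configured bounds. A minimal reproduction of that rule (the clamping bounds are illustrative defaults):

    ```python
    import math

    def desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
        """Kubernetes HPA scaling rule:
        desired = ceil(current * currentMetric / targetMetric),
        clamped to the configured replica bounds."""
        desired = math.ceil(current_replicas * current_metric / target_metric)
        return max(min_replicas, min(max_replicas, desired))

    # 3 replicas averaging 90% CPU against a 60% target -> scale out to 5.
    print(desired_replicas(3, current_metric=90, target_metric=60))  # -> 5
    ```

    The abstract's observation that effectiveness depends on the workload follows from this formula: it reacts only to the averaged metric, so bursty or non-CPU-bound applications may scale too late or not at all.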

    DataBlinder: A distributed data protection middleware supporting search and computation on encrypted data

    No full text
    Business application owners want to outsource data storage, including sensitive data, to the public cloud for economic reasons. This is often challenging since these businesses are and remain responsible for regulatory compliance and data protection, even though cloud providers may do their best to offer (data) protection. Meanwhile, data protection techniques evolve and improve because of continuous research on advanced encryption. Numerous cryptographic tactics have been proposed, e.g., searchable symmetric encryption (SSE) and homomorphic encryption (HE), that support search and aggregation functions on encrypted data. Each of these tactics comes with a trade-off between security, performance and functionality, but there is no one-size-fits-all solution. For the application developer, the underpinning concepts of these tactics are complex to comprehend, complex to integrate in a distributed application, and prone to implementation mistakes. In this paper we present DataBlinder, a distributed data access middleware that provides crypto agility by means of configurable fine-grained data protection at the application level. DataBlinder supports adaptive runtime selection of data protection tactics, and offers a plugin architecture for such tactics based on a key abstraction model for protection level, performance and supported query functionality. We have developed this middleware in close collaboration with businesses that face these challenges and offer cloud-based applications in e-finance and e-health, by implementing and integrating state-of-the-art cryptographic schemes into DataBlinder. This paper illustrates the case of medical data protection with FHIR-compliant [30] medical data.
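    The plugin abstraction — selecting a protection tactic by required query functionality and protection level — can be sketched as follows. The tactic names come from the abstract, but the trade-off table and selection rule are simplified illustrations, not DataBlinder's abstraction model:

    ```python
    # Simplified trade-off table: each tactic advertises the query
    # functionality it supports and a coarse protection level.
    TACTICS = {
        "AES-GCM": {"queries": set(),      "protection": 3},  # no queries on ciphertext
        "SSE":     {"queries": {"search"}, "protection": 2},  # searchable encryption
        "HE":      {"queries": {"sum"},    "protection": 2},  # aggregation on ciphertext
    }

    def select_tactic(required_queries, min_protection):
        """Pick a tactic supporting every required query at the requested
        protection level; among candidates, prefer the strongest protection."""
        candidates = [
            (name, props) for name, props in TACTICS.items()
            if required_queries <= props["queries"]
            and props["protection"] >= min_protection
        ]
        if not candidates:
            return None  # no tactic satisfies this trade-off
        return max(candidates, key=lambda kv: kv[1]["protection"])[0]

    print(select_tactic({"search"}, min_protection=2))  # -> SSE
    ```

    The `None` case is the crux of the abstract's "no one-size-fits-all" point: demanding both maximal protection and rich query functionality may leave no viable tactic, which is why runtime, per-field selection matters.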