38 research outputs found

    Discriminating if a network flow could have been created from a given sequence of network packets

    This thesis aims to design a neural network (NN) that is capable of discriminating whether a network flow could have been created from a given sequence of packets, and that can be used as a discriminative network (DN) for a Generative Adversarial Network (GAN) in future work. To this end, we first determined which features of network flows and packets are relevant to this task. We then created a dataset by extracting the relevant features from well-known network traffic datasets from the field of network intrusion detection (NID) and by falsifying some of these data points to provide negative samples. We also provide a pipeline for creating such datasets. For our NN model we compared the available recurrent neural network (RNN) architectures: simple RNN (simpleRNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRUs). Furthermore, our model uses a special kind of RNN, the conditional RNN (condRNN), which has already provided good results for mixed conditional and sequential input in the field of image region classification. This is necessary because a flow is the conditional counterpart to a sequence of packets. We aim to test the effectiveness of the different RNN architectures on our problem and in the context of condRNNs.
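
    A minimal sketch of such a conditional RNN discriminator is given below. The feature counts, layer sizes, and the choice of initializing the recurrent state from the flow features are illustrative assumptions, not the architecture actually used in the thesis; the sketch only shows how a GRU over the packet sequence can be conditioned on the flow record. An LSTM or simple RNN could be substituted for the GRU (an LSTM would additionally need an initial cell state).

    # Illustrative sketch (assumed feature counts and layer sizes): a GRU over the
    # packet sequence whose initial hidden state is derived from the flow features,
    # producing a probability that the flow matches the packet sequence.
    import torch
    import torch.nn as nn

    class CondGRUDiscriminator(nn.Module):
        def __init__(self, n_packet_feat=8, n_flow_feat=12, hidden=64):
            super().__init__()
            self.cond = nn.Linear(n_flow_feat, hidden)        # flow features -> initial state
            self.rnn = nn.GRU(n_packet_feat, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, packets, flow):
            # packets: (batch, seq_len, n_packet_feat), flow: (batch, n_flow_feat)
            h0 = torch.tanh(self.cond(flow)).unsqueeze(0)     # (1, batch, hidden)
            _, h_n = self.rnn(packets, h0)
            return torch.sigmoid(self.out(h_n.squeeze(0)))    # match probability

    model = CondGRUDiscriminator()
    packets = torch.randn(2, 20, 8)    # two sequences of 20 packets (toy data)
    flow = torch.randn(2, 12)          # the corresponding flow records (toy data)
    print(model(packets, flow).shape)  # torch.Size([2, 1])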

    A Relaxation Model for the Non-Isothermal Navier-Stokes-Korteweg Equations in Confined Domains

    The Navier-Stokes-Korteweg (NSK) system is a classical diffuse interface model based on van der Waals' theory of capillarity. Diffuse interface methods have gained much interest for modelling two-phase flow in porous media. However, the numerical solution of the NSK equations faces two major challenges. First, an extended numerical stencil is required due to a third-order term in the linear momentum and total energy equations; in addition, the dispersive contribution in the linear momentum equations prevents the straightforward use of contact angle boundary conditions. Second, any real gas equation of state is based on a non-convex Helmholtz free energy potential, which may cause the eigenvalues of the Jacobian of the first-order fluxes to become imaginary inside the spinodal region. In this work, a thermodynamically consistent relaxation model is presented which approximates the NSK equations. The model is complemented by thermodynamically consistent non-equilibrium boundary conditions which take contact angle effects into account. Due to the relaxation approach, the contribution of the Korteweg tensor in the linear momentum and total energy equations can be reduced to second-order terms, which enables a straightforward implementation of contact angle boundary conditions in a numerical scheme. Moreover, the definition of a modified pressure function makes it possible to formulate first-order fluxes that remain strictly hyperbolic in the entire spinodal region. The present work is a generalization of a previously presented parabolic relaxation model for the isothermal NSK equations.
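
    For orientation, a standard form of the isothermal NSK system from the diffuse interface literature is sketched below in LaTeX; the notation is an assumption on our part, and the non-isothermal relaxation model of this work additionally carries a total energy equation and relaxation terms.

    \[
    \begin{aligned}
    \partial_t \rho + \nabla\cdot(\rho \mathbf{v}) &= 0,\\
    \partial_t(\rho \mathbf{v}) + \nabla\cdot\bigl(\rho \mathbf{v}\otimes\mathbf{v} + p(\rho)\,\mathbf{I}\bigr) &= \nabla\cdot\boldsymbol{\tau} + \nabla\cdot\mathbf{K},\\
    \mathbf{K} &= \kappa\Bigl(\rho\,\Delta\rho + \tfrac{1}{2}\lvert\nabla\rho\rvert^{2}\Bigr)\mathbf{I} - \kappa\,\nabla\rho\otimes\nabla\rho,
    \end{aligned}
    \]

    where \(\boldsymbol{\tau}\) is the viscous stress tensor, \(\kappa\) the capillarity coefficient, and \(p\) a non-convex (van der Waals type) pressure. Since \(\nabla\cdot\mathbf{K} = \kappa\,\rho\,\nabla\Delta\rho\), the Korteweg contribution is the third-order term responsible for the extended stencil mentioned above, and it is this term that the relaxation approach reduces to second order.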

    User Label Leakage from Gradients in Federated Learning

    Federated learning enables multiple users to build a joint model by sharing their model updates (gradients) while their raw data remain local on their devices. In contrast to the common belief that this provides privacy benefits, we add to the very recent results on the privacy risks of sharing gradients. Specifically, we propose Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users' training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potentially sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We demonstrate the validity of our attack empirically and mathematically under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to prevent our attack.
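
    The basic observation behind this kind of label leakage can be illustrated with a small sketch. For a classifier trained with softmax cross-entropy, the gradient of the final layer's bias for class c is the batch average of (p_c minus 1 if c is the true label, else p_c); classes present in the batch therefore tend to produce negative entries early in training, while absent classes produce positive ones. The toy model, data, and sign test below are illustrative assumptions, not the paper's exact LLG procedure, which also exploits gradient magnitudes.

    # Illustrative sketch (not the exact LLG algorithm): infer which labels occur in a
    # batch from the sign of the last-layer bias gradient of a cross-entropy classifier.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    num_classes, batch_size, dim = 10, 4, 32

    model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
    x = torch.randn(batch_size, dim)
    y = torch.tensor([1, 3, 3, 7])       # ground-truth labels, unknown to the attacker

    loss = nn.CrossEntropyLoss()(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))

    bias_grad = grads[-1]                # gradient w.r.t. the final layer's bias
    leaked = [c for c in range(num_classes) if bias_grad[c] < 0]
    print("classes inferred as present:", leaked)   # expected to include 1, 3 and 7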

    Effect of body deformability on microswimming

    In this work we consider the following question: given a mechanical microswimming mechanism, does increased deformability of the swimmer body hinder or promote the motility of the swimmer? To answer this, we study a microswimmer model composed of deformable beads connected by springs. We determine the velocity of the swimmer analytically, starting from the forces driving the motion and assuming that the oscillations in the effective radii of the beads are known and are much smaller than the radii themselves. We find that, to lowest order, only the driving frequency mode of the surface oscillations contributes to the swimming velocity, and that this velocity may either rise or fall with the deformability of the beads, depending on the spring constant. To test these results, we run immersed boundary lattice Boltzmann simulations of the swimmer and show that they reproduce both the velocity-promoting and velocity-hindering effects of bead deformability correctly in the predicted parameter ranges. Our results mean that, for a general swimmer, its elasticity determines whether passive deformations of the swimmer body, induced by the fluid flow, aid or oppose the motion.
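
    As a generic reference point for this class of bead-spring swimmer models (the specific driving forces and the treatment of the oscillating bead radii in this work may differ), the overdamped motion of bead i with instantaneous effective radius R_i and total force F_i is commonly written with Stokes drag and far-field (Oseen) hydrodynamic coupling:

    \[
    \dot{\mathbf{r}}_i = \frac{\mathbf{F}_i}{6\pi\eta R_i}
      + \sum_{j\neq i} \mathbf{H}(\mathbf{r}_i-\mathbf{r}_j)\,\mathbf{F}_j,
    \qquad
    \mathbf{H}(\mathbf{r}) = \frac{1}{8\pi\eta r}\bigl(\mathbf{I} + \hat{\mathbf{r}}\otimes\hat{\mathbf{r}}\bigr),
    \]

    with \(\eta\) the fluid viscosity. Averaging such equations over one driving period and expanding to lowest order in the small radius oscillations is the usual route to an analytic swimming velocity, while a fully resolved method such as immersed boundary lattice Boltzmann does not rely on the far-field approximation.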

    A Common Variant of PNPLA3 (p.I148M) Is Not Associated with Alcoholic Chronic Pancreatitis

    BACKGROUND: Chronic pancreatitis (CP) is an inflammatory disease that in some patients leads to exocrine and endocrine dysfunction. In industrialized countries the most common aetiology is chronic alcohol abuse. Descriptions of associated genetic alterations in alcoholic CP are rare. However, a common PNPLA3 variant (p.I148M) is associated with the development of alcoholic liver cirrhosis (ALC). Since alcoholic CP and ALC share the same aetiology, the PNPLA3 variant (p.I148M) may also influence the development of alcoholic CP. METHODS: Using melting curve analysis, we genotyped the variant in 1510 patients with pancreatitis or liver disease (961 German and Dutch alcoholic CP patients, 414 German patients with idiopathic or hereditary CP, and 135 patients with ALC). In addition, we included a total of 2781 healthy controls in the study. RESULTS: The previously published overrepresentation of the GG genotype was replicated in our ALC cohort (p-value <0.0001, OR 2.3, 95% CI 1.6-3.3). Distributions of genotype and allele frequencies of the p.I148M variant were comparable in patients with alcoholic CP, patients with idiopathic or hereditary CP, and healthy controls. CONCLUSIONS: The absence of an association between PNPLA3 p.I148M and alcoholic CP argues against a common pathway in the development of alcoholic CP and alcoholic liver cirrhosis.

    Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups

    Deep learning pervades heavily data-driven disciplines in research and development. The Internet of Things and sensor systems, which enable smart environments and services, are settings where deep learning can provide invaluable utility. However, the data in these systems are very often directly or indirectly related to people, which raises privacy concerns. Federated learning (FL) mitigates some of these concerns and empowers deep learning in sensor-driven environments by enabling multiple entities to collaboratively train a machine learning model without sharing their data. Nevertheless, a number of works in the literature propose attacks that can manipulate the model and disclose information about the training data in FL. As a result, there has been a growing belief that FL is highly vulnerable to severe attacks. Although these attacks do indeed highlight security and privacy risks in FL, some of them may not be as effective in production deployment because they are feasible only under special, and sometimes impractical, assumptions. In this paper, we investigate this issue by conducting a quantitative analysis of the attacks against FL and their evaluation settings in 48 papers. This analysis is the first of its kind to reveal several research gaps with regard to the types and architectures of target models. Additionally, the quantitative analysis allows us to highlight unrealistic assumptions in some attacks related to the hyper-parameters of the model and the data distribution. Furthermore, we identify fallacies in the evaluation of attacks that raise questions about the generalizability of the conclusions. As a remedy, we propose a set of recommendations to promote adequate evaluations.
