Secure multi-party computation for analytics deployed as a lightweight web application
We describe the definition, design, implementation, and deployment of a secure multi-party computation protocol and web application. The protocol and application allow groups of cooperating parties with minimal expertise and no specialized resources to compute basic statistical analytics on their collective data sets without revealing the contributions of individual participants. The application was developed specifically to support a Boston Women's Workforce Council (BWWC) study of wage disparities within employer organizations in the Greater Boston Area. The application has been deployed successfully to support two data collection sessions (in 2015 and in 2016) to obtain data pertaining to compensation levels across genders and demographics. Our experience provides insights into the particular security and usability requirements (and tradeoffs) a successful "MPC-as-a-service" platform design and implementation must negotiate.

We would like to acknowledge all the members of the Boston Women's Workforce Council, and to thank in particular MaryRose Mazzola, Christina M. Knowles, and Katie A. Johnston, who led the efforts to organize participants and deploy the protocol as part of the 100% Talent: The Boston Women's Compact [31], [32] data collections. We also thank the Boston University Initiative on Cities (IOC), and in particular Executive Director Katherine Lusk, who brought this potential application of secure multi-party computation to our attention. The BWWC, the IOC, and several sponsors contributed funding to complete this work. Support was also provided in part by Smart-city Cloud-based Open Platform and Ecosystem (SCOPE), an NSF Division of Industrial Innovation and Partnerships PFI:BIC project under award #1430145, and by Modular Approach to Cloud Security (MACS), an NSF CISE CNS SaTC Frontier project under award #1414119.
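The statistics-over-secret-data idea in the abstract above rests on secret sharing. The following is a minimal sketch of additive secret sharing, the basic building block behind "MPC for analytics" deployments of this kind; the modulus, party count, and salary figures are illustrative, not the paper's actual protocol.

```python
# Additive secret sharing sketch (illustrative, not the BWWC protocol):
# each party's value is split into random shares that sum to it mod a
# public prime, so no single share reveals anything about the value.
import random

PRIME = 2**31 - 1  # public modulus; all arithmetic is mod PRIME

def share(value, n_parties):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each contributor secret-shares its payroll figure; the aggregator only
# ever combines sums of shares, never an individual contribution.
salaries = [70_000, 85_000, 92_000]
all_shares = [share(s, 3) for s in salaries]
# Each compute party sums the column of shares it holds ...
partials = [sum(col) % PRIME for col in zip(*all_shares)]
# ... and only the partial sums are combined to reveal the aggregate.
total = reconstruct(partials)
assert total == sum(salaries)
```

Only the final sum is reconstructed; any proper subset of shares is uniformly random, which is what lets participants with no specialized resources contribute safely.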
Adaptive Traffic Fingerprinting for Darknet Threat Intelligence
Darknet technology such as Tor has been used by various threat actors for
organising illegal activities and data exfiltration. As such, there is a case
for organisations to block such traffic, or to try and identify when it is used
and for what purposes. However, anonymity in cyberspace has always been a
domain of conflicting interests. While it gives enough power to nefarious
actors to masquerade their illegal activities, it is also the cornerstone to
facilitate freedom of speech and privacy. We present a proof of concept for a
novel algorithm that could form the fundamental pillar of a darknet-capable
Cyber Threat Intelligence platform. The solution can reduce anonymity of users
of Tor, and considers the existing visibility of network traffic before
optionally initiating targeted or widespread BGP interception. In combination
with server HTTP response manipulation, the algorithm attempts to reduce the
candidate data set to eliminate client-side traffic that is most unlikely to be
responsible for server-side connections of interest. Our test results show that
MITM-manipulated server responses lead to the expected changes received by the Tor
client. Using simulation data generated by Shadow, we show that the detection
scheme is effective, with a false positive rate of 0.001, while the sensitivity
for detecting non-targets was 0.016 ± 0.127. Our algorithm could assist
collaborating organisations willing to share their threat intelligence or
cooperate during investigations.
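The candidate-elimination step described above can be sketched as a simple volume-correlation filter. Everything below is a hedged illustration of the general idea, not the paper's algorithm: the function name, byte counts, and tolerance are invented, and a real system would correlate timing and flow features rather than raw totals.

```python
# Illustrative candidate pruning: if a MITM'd server inflates its HTTP
# responses by a known byte delta, clients whose observed downstream
# volume does not shift by roughly that delta are unlikely to be talking
# to the server of interest and can be eliminated from the candidate set.
def prune_candidates(baseline, manipulated, delta, tolerance=0.1):
    """Keep clients whose byte-count change is consistent with `delta`."""
    survivors = []
    for client, before in baseline.items():
        after = manipulated.get(client, before)
        observed = after - before
        # Accept a change within tolerance*delta of the injected delta.
        if abs(observed - delta) <= tolerance * delta:
            survivors.append(client)
    return survivors

baseline    = {"c1": 10_000, "c2": 9_500, "c3": 10_200}
manipulated = {"c1": 12_050, "c2": 9_480, "c3": 10_250}
print(prune_candidates(baseline, manipulated, delta=2_000))  # ['c1']
```

Repeating the injection with different deltas would shrink the survivor set further, which is the "reduce the candidate data set" effect the abstract describes.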
Specification of initial connection handling in TCP using structured Petri nets
This paper uses structured Petri nets to specify how connection establishment is handled by the DoD Transmission Control Protocol. The purpose of this paper is to demonstrate an alternate specification technique by examining its application to a portion of a protocol of reasonable complexity.

Initially we briefly present the semantics of structured Petri nets. Following this, a terse discussion of the problems of establishing connections in a network takes place. This discussion centers on the use of the three-way handshake, which TCP employs as a solution to many of these problems. Finally, the specification of the three-way handshake used in TCP is presented in three sections: first, a general set of notes concerning the nature of this particular specification; second, the data definitions of the specification; and, third, the actual nets themselves.

This paper is condensed from a portion of the author's dissertation, which is still in preparation. In the interests of brevity, some components of the specification, such as retransmission handling, have been omitted. Interested readers should contact the author for a more detailed paper.
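The handshake-as-net idea can be illustrated with a toy interpreter: places hold tokens for endpoint states and in-flight segments, and transitions consume and produce tokens. This is a plain, unstructured net with invented place and transition names, meant only to convey the specification style; the paper's structured nets carry typing and hierarchy this sketch lacks.

```python
# Toy Petri net of the TCP three-way handshake. A marking is a multiset
# of places; a transition fires when all its input places hold a token,
# consuming the inputs and producing the outputs.
from collections import Counter

TRANSITIONS = {
    "send_syn":             (["client_closed"],
                             ["client_syn_sent", "syn_in_flight"]),
    "recv_syn_send_synack": (["server_listen", "syn_in_flight"],
                             ["server_syn_rcvd", "synack_in_flight"]),
    "recv_synack_send_ack": (["client_syn_sent", "synack_in_flight"],
                             ["client_established", "ack_in_flight"]),
    "recv_ack":             (["server_syn_rcvd", "ack_in_flight"],
                             ["server_established"]),
}

def run(initial_marking):
    """Fire enabled transitions until none remain; return trace and marking."""
    marking = Counter(initial_marking)
    fired = []
    progress = True
    while progress:
        progress = False
        for name, (inputs, outputs) in TRANSITIONS.items():
            if all(marking[p] >= 1 for p in inputs):  # no duplicate inputs here
                marking.subtract(inputs)
                marking.update(outputs)
                fired.append(name)
                progress = True
    return fired, +marking  # unary + drops zero-count places

fired, final = run(["client_closed", "server_listen"])
print(fired)  # send_syn, recv_syn_send_synack, recv_synack_send_ack, recv_ack
print(final)  # both endpoints established
```

The net reaches a marking with both `client_established` and `server_established` tokens, mirroring the successful three-way handshake; omitted behaviors such as retransmission would appear as additional transitions.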
Issues in development, evaluation, and use of the NASA Preflight Adaptation Trainer (PAT)
The Preflight Adaptation Trainer (PAT) is intended to reduce or alleviate space adaptation syndrome by providing opportunities for portions of that adaptation to occur under normal gravity conditions prior to space flight. Since the adaptation aspects of the PAT objectives involve modification not only of the behavior of the trainee, but also of the sensorimotor skills which underlie that behavior, the definition of the PAT training objectives relies on four mechanisms: familiarization, demonstration, training, and adaptation. These mechanisms serve as structural reference points for evaluation, drive the content and organization of the training procedures, and help to define the roles of the PAT instructors and operators. It was determined that three psychometric properties are most critical for PAT evaluation: reliability, sensitivity, and relevance. It is cause for concern that the number of measures available to examine PAT effects exceeds the number that can be properly studied with the available sample sizes; special attention will be required in selecting the candidate measure set. The issues in PAT use and application within a training system context are addressed by linking the three training-related mechanisms of familiarization, demonstration, and training to the fourth mechanism, adaptation.
Traffic Alert and Collision Avoidance System (TCAS): Cockpit Display of Traffic Information (CDTI) investigation. Phase 1: Feasibility study
The possibility of the Traffic Alert and Collision Avoidance System (TCAS) traffic sensor and display being used for meaningful Cockpit Display of Traffic Information (CDTI) applications has led the Federal Aviation Administration to initiate a project to establish the technical and operational requirements for realizing this potential. Phase 1 of the project is presented here. Phase 1 was organized to define specific CDTI applications for the terminal area, to determine what has already been learned about CDTI technology relevant to these applications, and to define the engineering required to supply the remaining TCAS-CDTI technology for capacity benefit realization. The CDTI applications examined have been limited to those appropriate to the final approach and departure phases of flight.
Reference models for network trace anonymization
Network security research can benefit greatly from testing environments that are capable of generating realistic, repeatable and configurable background traffic. In order to conduct network security experiments on systems such as Intrusion Detection Systems and Intrusion Prevention Systems, researchers require isolated testbeds capable of recreating actual network environments, complete with infrastructure and traffic details. Unfortunately, due to privacy and flexibility concerns, actual network traffic is rarely shared by organizations, as sensitive information such as IP addresses, device identity and behavioral information can be inferred from the traffic. Trace data anonymization is one solution to this problem. The research community has responded to this sanitization problem with anonymization tools that aim to remove sensitive information from network traces, and attacks on anonymized traces that aim to evaluate the efficacy of the anonymization schemes. However, there is a continued lack of a comprehensive model that distills all elements of the sanitization problem into a functional reference model.

In this thesis we offer such a comprehensive functional reference model that identifies and binds together all the entities required to formulate the problem of network data anonymization. We build a new information flow model that illustrates the overly optimistic nature of inference attacks on anonymized traces. We also provide a probabilistic interpretation of the information model and develop a privacy metric for anonymized traces. Finally, we develop the architecture for a highly configurable, multi-layer network trace collection and sanitization tool. In addition to addressing privacy and flexibility concerns, our architecture allows for uniformity of anonymization and ease of data aggregation.
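A core operation in the anonymization tools this abstract surveys is consistent, structure-preserving pseudonymization of IP addresses. Below is a hedged sketch in the spirit of prefix-preserving schemes such as Crypto-PAn: each output octet depends only on the input prefix up to that octet, so addresses sharing a k-octet prefix share a k-octet pseudonym prefix. The key, function name, and one-byte-per-octet construction are illustrative, not a vetted design.

```python
# Prefix-preserving-style IP pseudonymization sketch (illustrative only).
# Output octet i is derived from a keyed hash of input octets 0..i, so
# subnet relationships survive while raw addresses do not appear.
import hashlib
import hmac

KEY = b"demo-secret-key"  # placeholder; real deployments need key management

def anonymize_ip(ip):
    octets = ip.split(".")
    out = []
    for i in range(4):
        prefix = ".".join(octets[: i + 1]).encode()
        digest = hmac.new(KEY, prefix, hashlib.sha256).digest()
        out.append(str(digest[0]))  # one pseudonym octet per input prefix
    return ".".join(out)

a = anonymize_ip("10.1.2.3")
b = anonymize_ip("10.1.2.99")
# The shared /24 prefix survives pseudonymization:
assert a.rsplit(".", 1)[0] == b.rsplit(".", 1)[0]
```

Because the mapping is keyed and deterministic, the same address always maps to the same pseudonym across a trace, which is what makes anonymized traces usable for experiments while still being the target of the inference attacks the thesis models.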
Assessing the statistical validity of momentum-deficit-based measurements in turbulent configurations
An application-agnostic procedure is outlined for checking the validity of momentum-deficit-based drag measurements performed under different turbulent conditions in a wind tunnel. The approach defines a two-step methodology: the first stage characterizes the turbulent flowfield generated downstream of a passive grid through a set of statistical parameters. Acceptable values for such parameters are determined by means of two criteria: compliance with the threshold value set by an analysis of the experimental uncertainties, and fulfilment of the isotropic condition for ensuring a well-established turbulent flowfield. Those two prerequisites define a set of turbulent configurations for which the momentum-deficit-based technique is applicable.

The second stage of the procedure is configuration-specific, and performs drag measurements on a NACA0021 airfoil subjected to a set of different turbulent configurations. It is shown that performing measurements under invalid turbulent conditions leads to inconsistent drag curves, which serves to define a validity map based on the testable cases.
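The momentum-deficit technique itself reduces to integrating the wake velocity profile: drag per unit span follows from C_d = (2/c) ∫ (u/U)(1 − u/U) dy. The sketch below evaluates that integral on a synthetic Gaussian wake; the freestream velocity, chord, and deficit shape are invented for illustration, and the paper's contribution is the turbulence-statistics validity checks that sit in front of this calculation.

```python
# Momentum-deficit drag coefficient from a wake traverse (synthetic data).
# C_d = (2/c) * integral of (u/U)(1 - u/U) over the wake traverse.
import math

U_inf = 20.0   # freestream velocity, m/s (assumed)
chord = 0.1    # airfoil chord, m (assumed)

N = 200
ys = [-0.1 + 0.2 * i / N for i in range(N + 1)]          # traverse positions, m
us = [U_inf * (1 - 0.3 * math.exp(-(y / 0.02) ** 2))      # Gaussian wake deficit
      for y in ys]

def deficit(u):
    """Integrand of the momentum-deficit integral."""
    return (u / U_inf) * (1 - u / U_inf)

dy = 0.2 / N
# Trapezoidal integration across the traverse.
integral = sum((deficit(us[i]) + deficit(us[i + 1])) / 2 * dy for i in range(N))
cd = 2.0 / chord * integral
print(f"C_d = {cd:.4f}")  # ~0.168 for this synthetic profile
```

Invalid turbulent conditions distort the measured profile `us`, which is exactly how inconsistent drag curves arise and why the validity map matters.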
Distributed, cooperating knowledge-based systems
Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.