
    PANDAcap: A framework for streamlining collection of full-system traces

    Full-system, deterministic record and replay has proven to be an invaluable tool for reverse engineering and systems analysis. However, acquiring a full-system recording typically involves significant planning and manual effort, a distraction from the actual goal of recording a trace, i.e. analyzing it. We present PANDAcap, a framework based on the PANDA full-system record and replay tool. PANDAcap combines off-the-shelf and custom-built components to streamline the process of recording PANDA traces. More importantly, in addition to easing the setup of one-off experiments, PANDAcap also streamlines systematic, repeatable experiments for creating PANDA trace datasets. As a demonstration, we have used PANDAcap to deploy an ssh honeypot aimed at studying the actions of brute-force ssh attacks.
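
    PANDAcap's contribution is the orchestration around the recording (provisioning, repeatable experiment setup), but the recording step it automates reduces to driving PANDA's begin_record/end_record monitor commands. A minimal sketch of that step using the pandare Python bindings; the guest image, snapshot name, and workload below are placeholders, not taken from the paper:

        from pandare import Panda

        # Boot a stock guest image; the arch, qcow, and memory size are illustrative.
        panda = Panda(arch="x86_64", qcow="guest.qcow2", mem="1G")

        @panda.queue_blocking
        def record_workload():
            panda.revert_sync("root")                   # revert to a booted snapshot
            panda.run_monitor_cmd("begin_record demo")  # start deterministic recording
            panda.run_serial_cmd("uname -a")            # the workload to be captured
            panda.run_monitor_cmd("end_record")         # emits demo-rr-snp + demo-rr-nondet.log
            panda.end_analysis()

        panda.run()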

    Effects of age variance on repeatability estimates of egg dimensions of Bovan Nera Black laying chickens

    The present research was designed to examine the effects of age variance on repeatability estimates of egg length, egg breadth, and egg shape index of Bovan Nera Black laying chickens at 25, 51, and 72 weeks, and across the combined ages of the birds. For this purpose, thirty birds were selected from the flock of layers at the Babcock University Teaching and Research Farm and individually housed in separate, labeled battery cages. A total of thirty (30) eggs were collected daily from the birds for five (5) consecutive days of egg production at each age of 25, 51, and 72 weeks, giving 150 eggs per age and 450 eggs over the three age periods. Data were collected on egg length, egg breadth, and egg shape index, and subjected to statistical analysis under a Completely Randomized Design. The general linear model procedure of the Statistical Analysis System (SAS) was used to obtain the variance components for the estimation of repeatability. Moderate repeatability estimates were obtained when the age variance was included in the computation, and low estimates were registered when it was excluded. The repeatability estimates across the egg quality traits ranged from low to high. Since most of the traits recorded low repeatability values, these traits can be improved by mass selection, culminating in egg production of optimal quality.
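
    For reference, the repeatability estimate in question is the intraclass correlation r = σ²_B / (σ²_B + σ²_W), with the between-bird and within-bird variance components taken from a one-way ANOVA over birds. A minimal sketch of that computation (the data array is hypothetical; the paper's actual analysis used SAS's GLM procedure):

        import numpy as np

        def repeatability(x):
            """x: measurements of shape (birds, records per bird)."""
            n, k = x.shape
            ms_between = k * np.var(x.mean(axis=1), ddof=1)                  # MS among birds
            ms_within = ((x - x.mean(axis=1, keepdims=True))**2).sum() / (n * (k - 1))
            var_b = (ms_between - ms_within) / k                             # sigma^2_B
            return var_b / (var_b + ms_within)                               # r

        # e.g. 30 birds x 5 daily egg-length records at one age (synthetic data)
        r = repeatability(np.random.normal(5.5, 0.2, size=(30, 5)))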

    Testing of hydrogen sensor based on organic materials

    This thesis focuses on safety hydrogen sensors. The basic principles and theory of hydrogen sensors are discussed in the first part. A methodology is then proposed for testing organic hydrogen sensors developed and fabricated at the Faculty of Chemistry of Brno University of Technology, and a set of tests is carried out on the most promising organic material. Finally, a temperature regulator for a ceramic sensor platform is designed.
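
    The abstract does not detail the regulator design; as a generic illustration of the control loop a heated ceramic sensor platform typically needs, here is a minimal PID sketch (the gains, sampling period, and the read_temp/set_heater callables are all hypothetical):

        import time

        def regulate(read_temp, set_heater, setpoint_c, kp=2.0, ki=0.1, kd=0.5, dt=0.1):
            """Hold the platform at setpoint_c; set_heater takes a 0..1 duty cycle."""
            integral, prev_err = 0.0, 0.0
            while True:
                err = setpoint_c - read_temp()
                integral += err * dt
                derivative = (err - prev_err) / dt
                # Clamp the PID output to a valid heater duty cycle.
                set_heater(min(max(kp*err + ki*integral + kd*derivative, 0.0), 1.0))
                prev_err = err
                time.sleep(dt)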

    CGC monitor: A vetting system for the DARPA cyber grand challenge

    The article of record as published may be found at https://doi.org/10.1016/j.diin.2018.04.016. The CGC Monitor is available at https://github.com/mfthomps/cgc-monitor, and analysis results from CFE, generated by the monitor, are at https://github.com/mfthomps/CGC-Analysis. The DARPA Cyber Grand Challenge (CGC) pitted autonomous machines against one another in a battle to discover, mitigate, and take advantage of software vulnerabilities. The competitors repeatedly formulated and submitted binary software for execution against opponents, and to mitigate attacks mounted by opponents. The US Government sought confidence that competitors legitimately won their rewards (a prize pool of up to $6.75 million USD), and competitors deserved evidence that all parties operated in accordance with the rules, which prohibited attempts to subvert the competition infrastructure. To support those goals, we developed an analysis system to vet competitor software submissions destined for execution on the competition infrastructure: the classic situation of running untrusted software. In this work, we describe the design and implementation of this vetting system, as well as results gathered in deployment of the system as part of the CGC competition. The analysis system is implemented upon a high-fidelity full-system simulator requiring no modifications to the monitored operating system. We used this system to vet software submitted during the CGC Qualifying Event and the CGC Final Event. The overwhelming majority of the vetting occurred in an automated fashion, with the system automatically monitoring the full x86-based system to detect corruption of operating system execution paths and data structures. However, the vetting system also facilitates investigation of any execution deemed suspicious by the automated process (or indeed any analysis required to answer queries related to the competition). An analyst may replay any software interaction using an IDA Pro plug-in, which utilizes the IDA debugger client to execute the session in reverse. In post-mortem analysis, we found no evidence of attempted infrastructure subversion, and we further conclude that of the 20 vulnerable software services exploited in the CGC Final Event, half were exploited in ways unintended by the service authors: six services were exploited due to vulnerabilities accidentally included by the authors, while an additional four were exploited via the author-intended vulnerability, but via an unanticipated path. This work was supported in part by the Defense Advanced Research Projects Agency under Air Force award number FA8750-12-D-0005. Approved for public release; distribution is unlimited.
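
    As a schematic of the automated portion of such vetting (not the authors' actual implementation), one can baseline security-critical kernel regions, e.g. the syscall table, at boot and flag any later divergence; read_phys below is a hypothetical stand-in for a simulator's physical-memory read API:

        import hashlib

        class IntegrityVetter:
            """Flag unexpected writes to fixed kernel regions during untrusted execution."""
            def __init__(self, read_phys, regions):
                self.read_phys = read_phys
                self.regions = regions                  # {name: (paddr, length)}
                # Record a boot-time hash of each region as the trusted baseline.
                self.baseline = {n: self._digest(*r) for n, r in regions.items()}

            def _digest(self, paddr, length):
                return hashlib.sha256(self.read_phys(paddr, length)).hexdigest()

            def check(self):
                """Return names of regions whose contents diverged from the baseline."""
                return [n for n, r in self.regions.items()
                        if self._digest(*r) != self.baseline[n]]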

    Divide-and-Conquer Distributed Learning: Privacy-Preserving Offloading of Neural Network Computations

    Machine learning has become a highly utilized technology for decision making on high-dimensional data. As dataset sizes have grown, so too have the neural networks needed to learn the complex patterns hidden within, to the degree that it may be infeasible to train a model on a single device due to computational or memory limitations of the underlying hardware. Purpose-built computing clusters for training large models are commonplace, yet access to networks of heterogeneous devices is typically easier to obtain. With the rise of 5G networks and edge computation, and inspired by the success of the Folding@home project in harnessing crowdsourced computation, we find the scenario of crowdsourcing the computation required to train a neural network particularly appealing. Distributed learning promises to bridge the widening gap between single-device performance and large-scale model computational requirements, but unfortunately, current distributed learning techniques do not maintain privacy of both the model and the input without an accuracy or computational tradeoff. In response, we present Divide-and-Conquer Learning (DCL), an approach that provides quantifiable privacy guarantees while offloading the computational burden of training to a network of devices. A user can divide the training computation of its neural network into neuron-sized computation tasks and distribute them to devices based on their available resources. The results are returned to the user and aggregated in an iterative process to obtain the final neural network model. To protect the privacy of the user's data and model, both the data and the neural network model are shuffled before the computation tasks are distributed to devices. Strict adherence to the order of operations allows a user to verify the correctness of performed computations by assigning a task to multiple devices and cross-validating their results, which protects against network churn and detects faulty or misbehaving devices.
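
    A toy sketch of the neuron-sized decomposition for one dense layer: every device receives the feature-shuffled batch plus one shuffled weight row, returns one output column, and only the user holds the permutations needed to reassemble the result. The permutation scheme here is illustrative, not the paper's exact construction; redundantly assigning the same row to several devices enables the cross-validation described above.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(32, 64))            # private input batch (32 samples, 64 features)
        W = rng.normal(size=(128, 64))           # private layer weights (128 neurons)

        # Shuffle feature columns and neuron rows before distribution.
        col_perm, row_perm = rng.permutation(64), rng.permutation(128)
        X_shuf, W_shuf = X[:, col_perm], W[row_perm][:, col_perm]

        # Each "device" computes one neuron-sized task: a single dot-product column.
        task = lambda w_row: X_shuf @ w_row      # all an untrusted device ever sees
        cols = [task(W_shuf[i]) for i in range(128)]

        # The user aggregates the columns and inverts the neuron permutation.
        Y = np.empty((32, 128))
        Y[:, row_perm] = np.column_stack(cols)
        assert np.allclose(Y, X @ W.T)           # matches the undistributed computation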

    The Development of a Collaborative Tool to Teach Debugging

    Debugging is rarely formally taught, despite being used by programmers every day. Research indicates that it is valuable to teach debugging, and suggests that teaching it collaboratively may be the most effective approach. The goal of this project is to create a collaborative debugger that serves as an ideal platform to teach and learn debugging. This paper briefly reviews relevant literature on teaching debugging and teaching programming collaboratively; most of the paper is devoted to the design of the collaborative debugger.

    Cutting Through the Complexity of Reverse Engineering Embedded Devices

    Performing security analysis of embedded devices is a challenging task. They present many difficulties not usually found when analyzing commodity systems: undocumented peripherals, esoteric instruction sets, and limited tool support. Thus, a significant amount of reverse engineering is almost always required to analyze such devices. In this paper, we present Incision, an architecture- and operating-system-agnostic reverse engineering framework. Incision tackles the problem of reducing the upfront effort to analyze complex end-user devices. It combines static and dynamic analyses in a feedback loop, enabling information from each to be used in tandem to improve our overall understanding of the firmware analyzed. We use Incision to analyze a variety of devices and firmware. Our evaluation spans firmware based on three RTOSes, an automotive ECU, and a 4G/LTE baseband. We demonstrate that Incision does not introduce significant complexity to the standard reverse engineering process and requires little manual effort to use. Moreover, its analyses produce correct results with high confidence and are robust across different OSes and ISAs.
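
    Schematically, the feedback loop Incision describes alternates a static pass seeded with everything learned so far and a dynamic pass over the refined model, repeating until a fixed point is reached. A minimal sketch; disassemble, emulate, and trace.facts() are hypothetical stand-ins, not Incision's API:

        def incision_style_loop(firmware, emulate, disassemble, max_rounds=10):
            """Alternate static and dynamic analysis until knowledge stops growing."""
            knowledge = set()                            # e.g. entry points, MMIO ranges
            for _ in range(max_rounds):
                model = disassemble(firmware, knowledge) # static pass, seeded by prior facts
                trace = emulate(firmware, model)         # dynamic pass over the refined model
                new_facts = trace.facts() - knowledge    # indirect targets, peripheral reads...
                if not new_facts:
                    break                                # fixed point: nothing new learned
                knowledge |= new_facts
            return model, knowledge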