188 research outputs found

    Network Traffic Analysis Using Stochastic Grammars

    Get PDF
    Network traffic analysis is widely used to infer information from Internet traffic. This is possible even if the traffic is encrypted. Previous work uses traffic characteristics, such as port numbers, packet sizes, and frequency, without looking for more subtle patterns in the network traffic. In this work, we use stochastic grammars, hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs), as pattern recognition tools for traffic analysis. HMMs are widely used for pattern recognition and detection. We use an HMM inference approach. With inferred HMMs, we use confidence intervals (CIs) to detect whether a data sequence matches the HMM. To compare HMMs, we define a normalized Markov metric. A statistical test is used to determine model equivalence. Our metric systematically removes the least likely events from both HMMs until the remaining models are statistically equivalent. This defines the distance between models. We extend the use of HMMs to PCFGs, which have more expressive power. We estimate PCFG production probabilities from data. A statistical test is used for detection. We present three applications of HMM and PCFG detection to network traffic analysis. First, we infer the presence of protocol tunneling through the Tor (The Onion Router) anonymization network. The Markov metric quantifies the similarity of network traffic HMMs in Tor to identify the protocol. It also measures communication noise in the Tor network. Second, we use HMMs to detect centralized botnet traffic. We infer HMMs from botnet traffic data and detect botnet infections. Experimental results show that HMMs can accurately detect Zeus botnet traffic. To hide their locations better, newer botnets have P2P control structures. Hierarchical P2P botnets contain recursive and hierarchical patterns. Third, we use PCFGs to detect P2P botnet traffic. Experimentation on real-world traffic data shows that PCFGs can accurately differentiate between P2P botnet traffic and normal Internet traffic.
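    The sketch below illustrates the general flavor of the CI-based matching step described above: a small discrete HMM with placeholder parameters, a forward-algorithm likelihood, and a hypothetical two-sided confidence interval on the per-symbol log-likelihood. It is a minimal sketch under these assumptions, not the exact procedure of the paper.

```python
import numpy as np

# Hypothetical discrete HMM: the abstract gives no parameters, so these
# are illustrative placeholders (e.g. observations are binned packet sizes).
A = np.array([[0.9, 0.1],          # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],     # emission probabilities per state
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])          # initial state distribution


def log_likelihood(obs, A, B, pi):
    """Forward algorithm with scaling; returns log P(obs | HMM)."""
    alpha = pi * B[:, obs[0]]
    log_like = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * B[:, obs[t]]
        scale = alpha.sum()
        log_like += np.log(scale)
        alpha /= scale
    return log_like


def matches(obs, A, B, pi, ci):
    """Flag a window as matching the model if its per-symbol
    log-likelihood falls inside the confidence interval."""
    lo, hi = ci
    return lo <= log_likelihood(obs, A, B, pi) / len(obs) <= hi


# Confidence interval estimated from training windows (placeholder data).
train_windows = [np.random.randint(0, 3, size=50) for _ in range(200)]
scores = np.array([log_likelihood(w, A, B, pi) / len(w) for w in train_windows])
ci = (scores.mean() - 2 * scores.std(), scores.mean() + 2 * scores.std())

test_window = np.random.randint(0, 3, size=50)
print("matches model:", matches(test_window, A, B, pi, ci))
```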

    Low Power Design Techniques for Digital Logic Circuits.

    Get PDF
    With the rapid increase in the density and size of chips and systems, area and power dissipation have become critical concerns in Very Large Scale Integrated (VLSI) circuit design. Low power design techniques are essential for today's VLSI industry. The history of symbolic logic and some typical techniques for finite state machine (FSM) logic synthesis are reviewed. State assignment is used to optimize area and power dissipation for FSMs. Two cost functions, targeting area and power, are presented. A Genetic Algorithm (GA) is used to search for a good state assignment that minimizes the cost functions. The algorithm has been implemented in C. On the MCNC benchmarks, the program produces better results, in both area and power, than NOVA, which is integrated into SIS from UC Berkeley, and than other published approaches. Flip-flops are the core components of FSMs, and reducing their power dissipation can significantly reduce the power of digital systems. Three new kinds of flip-flops are proposed: a differential CMOS single-edge-triggered flip-flop with clock gating, a double-edge-triggered flip-flop, and multiple-valued flip-flops employing multiple-valued clocks. All circuits are simulated using PSpice. Most researchers have focused on developing low-power techniques for AND/OR or NAND/NOR based circuits; low-power techniques for AND/XOR based circuits are still at an early stage of development. To implement a complex function involving many inputs, a form of decomposition into smaller subfunctions is required such that the subfunctions fit into the primitive elements used in the implementation. A best-polarity-based XOR gate decomposition technique has been developed that targets low power using the Huffman algorithm. Compared to published results, the proposed method shows considerable improvement in power dissipation. Further, Boolean functions can be expressed in Fixed Polarity Reed-Muller (FPRM) forms. Based on polarity transformation, an algorithm is developed and implemented in C that finds the best polarity for power and area optimization. Benchmark examples of up to 21 inputs, run on a personal computer, are given.
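    A minimal sketch of the genetic-algorithm search for a low-power state assignment, assuming a hypothetical four-state FSM with made-up transition frequencies. The cost function here is only a frequency-weighted Hamming-distance proxy for switching activity, not the area and power cost functions developed in the thesis, and the search uses elitist selection with swap mutation only.

```python
import itertools
import random

# Hypothetical 4-state FSM with transition frequencies (placeholders):
# frequent transitions between states with distant codes toggle more bits.
STATES = ["S0", "S1", "S2", "S3"]
TRANS_FREQ = {("S0", "S1"): 0.4, ("S1", "S2"): 0.3,
              ("S2", "S0"): 0.2, ("S2", "S3"): 0.1}
CODE_BITS = 2
CODES = ["".join(bits) for bits in itertools.product("01", repeat=CODE_BITS)]


def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))


def cost(assignment):
    """Proxy power cost: frequency-weighted bit toggles per transition."""
    return sum(f * hamming(assignment[s], assignment[t])
               for (s, t), f in TRANS_FREQ.items())


def random_individual():
    return dict(zip(STATES, random.sample(CODES, len(STATES))))


def mutate(ind):
    """Swap the codes assigned to two states."""
    a, b = random.sample(STATES, 2)
    ind = dict(ind)
    ind[a], ind[b] = ind[b], ind[a]
    return ind


def ga(generations=200, pop_size=20):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]              # elitist selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)


best = ga()
print(best, "cost =", round(cost(best), 3))
```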

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Full text link
    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things devices, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation. Thus, they are often found in functional features that are rarely activated. Complete functional verification, which can eliminate design bugs, is extremely time-consuming, thus impractical in modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes. Indeed, weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, thus they are infeasible for most cost-sensitive SoC designs. To tackle and overcome these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, by presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify whether a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework to detect buggy memory-ordering behaviors, which decomposes the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with small, distributed programmable logic, instead of including a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, there is a wide variety of interactions among them that must be verified to catch bugs. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that the decomposition of complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally we overlook rare patterns of multiple faults.
In this dissertation, we discuss the ideas and their trade-offs, and present future research directions.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd
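    One of the decomposition ideas above, checking a memory-ordering graph piecewise, can be sketched roughly as follows: split the graph into connected components and run cycle detection on each, since a cycle in the ordering graph signals an ordering violation. The graph and operation names below are hypothetical toys; the dissertation's incremental-difference decomposition and graph construction are more involved.

```python
from collections import defaultdict

# Toy memory-ordering graph: nodes are memory operations, directed edges
# are observed ordering constraints (placeholder names and edges).
edges = [("st_A1", "ld_B1"), ("ld_B1", "st_A2"),
         ("st_C1", "ld_D1"), ("ld_D1", "st_C1")]   # second component has a cycle


def components(edges):
    """Split the graph into weakly connected components so each one
    can be checked independently."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps


def has_cycle(nodes, edges):
    """DFS cycle detection restricted to one component."""
    graph = defaultdict(list)
    for u, v in edges:
        if u in nodes:
            graph[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)


for comp in components(edges):
    print(sorted(comp), "violation:", has_cycle(comp, edges))
```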

    Protocol engineering from Estelle specifications

    Get PDF
    Bibliography: leaves 129-132.
    The design of efficient, reliable communication protocols has long been an area of active research in computer science and engineering, and will remain so while the technology continues to evolve and information becomes increasingly distributed. This thesis examines the problem of predicting the performance of a multi-layered protocol system directly from formal specifications in the ISO specification language Estelle, a general-purpose Pascal-based language with support for concurrent processes in the form of communicating extended finite-state machines. The thesis begins with an overview of protocol engineering and discusses the areas of performance evaluation and protocol specification. Important parts of the mathematics of discrete-time semi-Markov processes are presented to assist in understanding the approaches to performance evaluation described later. Not much work has been done to date in the area of performance prediction from specifications. The idea was first mooted by Rudin, who illustrated it with a simple model based on the global state reachability graph of a set of synchronous communicating FSMs. At about the same time, Kritzinger proposed a closed multiclass queueing model. Both of these approaches are described, and their respective strengths and weaknesses pointed out. Two new methods are then presented. They have been implemented as part of an Estelle-based CASE tool, the Protocol Engineering Workbench (PEW). In the first approach, we show how discrete-time semi-Markov chain models can be derived from meta-executions of Estelle specifications, and consider ways of using these models predictively. The second approach uses a structure similar to a global-state graph. Many of the limitations of Rudin's approach are overcome, and our technique produces highly accurate performance predictions. The PEW is also described in some detail, and its use in performance evaluation is illustrated with some examples. The thesis concludes with a discussion of the strengths and weaknesses of the new methods, and possible ways of improving them.
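    To make the semi-Markov machinery concrete, the sketch below computes long-run state occupancy for a hypothetical three-state model: the stationary distribution of the embedded chain is weighted by mean holding times. The transition probabilities and holding times are placeholders, not values from the thesis, where the models are derived from meta-executions of Estelle specifications.

```python
import numpy as np

# Hypothetical embedded transition matrix and mean holding times (in time
# slots) for a three-state protocol model; the numbers are placeholders.
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])
hold = np.array([2.0, 5.0, 1.5])   # mean sojourn time in each state

# Stationary distribution of the embedded chain: solve pi P = pi, sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()

# Long-run fraction of time the semi-Markov process spends in each state.
time_frac = pi * hold / (pi * hold).sum()
print("embedded stationary distribution:", np.round(pi, 3))
print("time fractions:", np.round(time_frac, 3))
```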

    Identifying Network Correlates of Memory Consolidation

    Full text link
    Neuronal spiking activity carries information about our experiences in the waking world, but exactly how the brain can quickly and efficiently encode sensory information into a useful neural code and then subsequently consolidate that information into memory remains a mystery. While neuronal networks are known to play a vital role in these processes, disentangling the properties of network activity from the complex spiking dynamics observed is a formidable challenge, requiring collaborations across scientific disciplines. In this work, I outline my contributions in computational modeling and data analysis toward understanding how network dynamics facilitate memory consolidation. For experimental perspective, I investigate hippocampal recordings of mice that are subjected to contextual fear conditioning and subsequently undergo sleep-dependent fear memory consolidation. First, I outline the development of a functional connectivity algorithm which rapidly and robustly assesses network structure based on neuronal spike timing. I show that the relative stability of these functional networks can be used to identify global network dynamics, revealing that an increase in functional network stability correlates with successful fear memory consolidation in vivo. Using an attractor-based model to simulate memory encoding and consolidation, I go on to show that dynamics associated with a second-order phase transition, at a critical point in phase-space, are necessary for recruiting additional neurons into network dynamics associated with memory consolidation. I show that successful consolidation subsequently shifts dynamics away from a critical point and towards sub-critical dynamics. Investigations of in vivo spiking dynamics likewise revealed that hippocampal dynamics during non-rapid-eye-movement (NREM) sleep show features of being near a critical point and that fear memory consolidation leads to a shift in dynamics. Finally, I investigate the role of NREM sleep in facilitating memory consolidation using a conductance-based model of neuronal activity that can easily switch between modes of activity loosely representing waking and NREM sleep. Analysis of model simulations revealed that oscillations associated with NREM sleep promote a phase-based coding of information; neurons with high firing rates during periods of wake lead spiking activity during NREM oscillations. I show that when phase-coding is active, both in simulations and in vivo, synaptic plasticity selectively strengthens the input to neurons firing late in the oscillation while simultaneously reducing input to neurons firing early in the oscillation. The effect is a net homogenization of firing rates, observed in multiple other studies, which subsequently leads to recruitment of new neurons into a memory engram and information transfer from fast-firing neurons to slow-firing neurons. Taken together, my work outlines important, newly discovered features of neuronal network dynamics related to memory encoding and consolidation: networks near criticality promote recruitment of additional neurons into stable firing patterns through NREM-associated oscillations and subsequently consolidate information into memories through phase-based coding.
    PhD, Biophysics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162991/1/qmskill_1.pd
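    As a rough illustration of spike-timing-based functional connectivity, the sketch below thresholds the peak of pairwise cross-correlograms on toy spike trains to produce a functional adjacency matrix. The neuron count, bin size, lag window, and threshold are all made up, and this generic cross-correlation approach stands in for, rather than reproduces, the dissertation's rapid and robust network-estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike trains (binary, 1 ms bins) for a handful of neurons; placeholder data.
n_neurons, n_bins = 5, 10_000
spikes = (rng.random((n_neurons, n_bins)) < 0.02).astype(float)
spikes[1] = np.roll(spikes[0], 3)      # neuron 1 follows neuron 0 with a 3 ms lag


def xcorr_peak(a, b, max_lag=10):
    """Peak of the normalized cross-correlogram of two spike trains
    within +/- max_lag bins."""
    a = a - a.mean()
    b = b - b.mean()
    norm = np.sqrt((a * a).sum() * (b * b).sum())
    return max(np.dot(a, np.roll(b, lag)) / norm
               for lag in range(-max_lag, max_lag + 1))


# Build a functional adjacency matrix by thresholding the peak correlation.
threshold = 0.1
adj = np.zeros((n_neurons, n_neurons), dtype=bool)
for i in range(n_neurons):
    for j in range(i + 1, n_neurons):
        adj[i, j] = adj[j, i] = xcorr_peak(spikes[i], spikes[j]) > threshold

print(adj.astype(int))
```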

    Discrete Event Systems: Models and Applications; Proceedings of an IIASA Conference, Sopron, Hungary, August 3-7, 1987

    Get PDF
    Work in discrete event systems has just begun. There is a great deal of activity now, and much enthusiasm. There is considerable diversity reflecting differences in the intellectual formation of workers in the field and in the applications that guide their effort. This diversity is manifested in a proliferation of DEM formalisms. Some of the formalisms are essentially different. Some of the "new" formalisms are reinventions of existing formalisms presented in new terms. These "duplications" reveal both the new domains of intended application as well as the difficulty in keeping up with work that is published in journals on computer science, communications, signal processing, automatic control, and mathematical systems theory - to name the main disciplines with active research programs in discrete event systems. The first eight papers deal with models at the logical level, the next four are at the temporal level, and the last six are at the stochastic level. Of these eighteen papers, three focus on manufacturing, four on communication networks, and one on digital signal processing; the remaining ten papers address methodological issues ranging from simulation to the computational complexity of some synthesis problems. The authors have made good efforts to make their contributions self-contained and to provide a representative bibliography. The volume should therefore be both accessible and useful to those who are just getting interested in discrete event systems.