18 research outputs found

    Generalizing List Scheduling for Stochastic Soft Real-time Parallel Applications

    Advanced architecture processors provide features such as caches and branch prediction that result in improved, but variable, execution time of software. Hard real-time systems require tasks to complete within timing constraints. Consequently, hard real-time systems are typically designed conservatively through the use of tasks' worst-case execution times (WCET) in order to compute deterministic schedules that guarantee task execution within given time constraints. This use of pessimistic execution time assumptions provides real-time guarantees at the cost of decreased performance and resource utilization. In soft real-time systems, however, meeting deadlines is not an absolute requirement (i.e., missing a few deadlines does not severely degrade system performance or cause catastrophic failure). In such systems, a guaranteed minimum probability of completing by the deadline is sufficient. Therefore, there is considerable latitude in such systems for improving resource utilization and performance as compared with hard real-time systems, through the use of more realistic execution time assumptions. Given probability distribution functions (PDFs) representing tasks' execution time requirements, and tasks' communication and precedence requirements represented as a directed acyclic graph (DAG), this dissertation proposes and investigates algorithms for constructing non-preemptive stochastic schedules. New PDF manipulation operators developed in this dissertation are used to compute tasks' start and completion time PDFs during schedule construction. PDFs of the schedules' completion times are also computed and used to systematically trade the probability of meeting end-to-end deadlines for schedule length and jitter in task completion times. Because of the NP-hard nature of the non-preemptive DAG scheduling problem, the new stochastic scheduling algorithms extend traditional heuristic list scheduling and genetic list scheduling algorithms for DAGs by using PDFs instead of fixed time values for task execution requirements. The stochastic scheduling algorithms also account for delays caused by communication contention, which is typically ignored in prior DAG scheduling research. Extensive experimental results demonstrate the efficacy of the new algorithms in constructing stochastic schedules. Results also show that, through the use of the techniques developed in this dissertation, the probability of meeting deadlines can be usefully traded for performance and jitter in soft real-time systems.
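    The abstract does not spell out the PDF manipulation operators, but two are implied: adding a task's execution-time PDF to its start-time PDF, and joining several predecessors' completion-time PDFs at a precedence point. The sketch below models both under assumptions of my own choosing (time discretized into integer slots, independent task times, numpy arrays as PDFs); the dissertation's actual operators may differ.

        # Hypothetical sketch: discrete-time PDF operators for stochastic
        # DAG scheduling. Index i of each array holds P(time == slot i).
        # Assumes independence between task times.
        import numpy as np

        def add_pdf(a, b):
            """PDF of X + Y (start time + execution time): convolution."""
            return np.convolve(a, b)

        def max_pdf(a, b):
            """PDF of max(X, Y) (join of two precedence edges) via CDFs:
            P(max <= t) = P(X <= t) * P(Y <= t) under independence."""
            n = max(len(a), len(b))
            ca = np.cumsum(np.pad(a, (0, n - len(a))))
            cb = np.cumsum(np.pad(b, (0, n - len(b))))
            return np.diff(np.concatenate(([0.0], ca * cb)))

        # Task C starts when both A and B have finished, then runs for its
        # own stochastic execution time.
        finish_a = np.array([0.0, 0.2, 0.8])        # A finishes at slot 1 or 2
        finish_b = np.array([0.0, 0.0, 0.5, 0.5])   # B finishes at slot 2 or 3
        exec_c   = np.array([0.0, 0.0, 0.7, 0.3])   # C runs 2 or 3 slots
        finish_c = add_pdf(max_pdf(finish_a, finish_b), exec_c)
        print("P(C meets a deadline of 5 slots):", np.cumsum(finish_c)[5])

    Propagating completion-time PDFs through the DAG this way is what lets a scheduler report, rather than guarantee, the probability of meeting an end-to-end deadline.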

    Research Toward a Partially-Automated and Crime-Specific Digital Triage Process Model

    The digital forensic process as traditionally laid out begins with the collection, duplication, and authentication of every piece of digital media prior to examination. These first three phases of the digital forensic process are by far the most costly. However, complete forensic duplication is standard practice among digital forensic laboratories, and the time it takes to complete these stages is quickly becoming a serious problem. With current methodologies, digital forensic laboratories do not have the resources and time to keep up with the growing demand for digital forensic examinations. One solution to this problem is the use of pre-examination techniques commonly referred to as digital triage. Pre-examination techniques can provide the examiner with intelligence that can be used to prioritize and guide the examination process. This work discusses a proposed model for digital triage that is currently under development at Mississippi State University.

    The Proteogenomic Mapping Tool

    Background: High-throughput mass spectrometry (MS) proteomics data is increasingly being used to complement traditional structural genome annotation methods. To keep pace with the high speed of experimental data generation and to aid in structural genome annotation, experimentally observed peptides need to be mapped back to their source genome location quickly and exactly. Previously, the tools to do this have been limited to custom scripts designed by individual research groups to analyze their own data; these scripts are generally not widely available and do not scale well with large eukaryotic genomes.
    Results: The Proteogenomic Mapping Tool includes a Java implementation of the Aho-Corasick string searching algorithm, which takes standardized file types as input and rapidly searches experimentally observed peptides against a given genome translated in all six reading frames for exact matches. The Java implementation allows the application to scale well with larger eukaryotic genomes while providing cross-platform functionality.
    Conclusions: The Proteogenomic Mapping Tool provides a standalone application for mapping peptides back to their source genome on a number of operating system platforms with standard desktop computer hardware, and it executes very rapidly for a variety of datasets. Support for selecting different genetic codes for different organisms lets researchers easily customize the tool to their own research interests, and the tool is recommended for anyone working to structurally annotate genomes using MS-derived proteomics data.
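    As a rough illustration of the mapping step, the sketch below translates a DNA sequence in all six reading frames and scans each frame for exact peptide hits. It is a naive stand-in, not the tool itself: the real tool is written in Java and uses Aho-Corasick to search all peptides in a single pass, and this sketch assumes Biopython is available for translation.

        # Hypothetical sketch of six-frame translation plus exact peptide
        # matching; the actual tool uses a Java Aho-Corasick implementation.
        from Bio.Seq import Seq

        def six_frame_translations(dna):
            """Yield (frame_label, protein) for all six reading frames."""
            seq = Seq(dna)
            for strand, s in (("+", seq), ("-", seq.reverse_complement())):
                for offset in range(3):
                    end = len(s) - (len(s) - offset) % 3  # trim partial codon
                    # translate(table=...) would select another genetic code
                    yield f"{strand}{offset + 1}", str(s[offset:end].translate())

        def map_peptides(dna, peptides):
            """Report each exact occurrence of each peptide in each frame."""
            for frame, protein in six_frame_translations(dna):
                for pep in peptides:
                    start = protein.find(pep)
                    while start != -1:
                        yield pep, frame, start   # protein-relative position
                        start = protein.find(pep, start + 1)

        for hit in map_peptides("ATGGCTGCTAAATAA", ["MAA", "AAK"]):
            print(hit)   # ('MAA', '+1', 0) and ('AAK', '+1', 1)

    Converting a protein-relative hit back to genome coordinates takes only the frame offset and strand arithmetic, which is what lets observed peptides confirm or correct structural gene models.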

    Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research

    Background: This paper describes techniques for accelerating the performance of the string set matching problem, with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences).
    Results: This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM, as opposed to FPGA logic resources, for FSM implementation also enables rapid reconfiguration of the FPGA without the place-and-route delays associated with complex digital designs.
    Conclusion: Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation.
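    The bit-split construction can be modeled in software to show why intersecting partial match vectors yields exact matches. The sketch below is a hypothetical Python model in the spirit of the paper's FPGA engines, shrunk to a 2-bit DNA alphabet instead of the 5-bit amino-acid encoding: it builds the full Aho-Corasick automaton, derives one binary FSM per bit position by subset construction, and reports a match only where the AND of all partial match vectors is nonzero.

        # Hypothetical software model of bit-split Aho-Corasick matching.
        from collections import deque

        def ac_table(patterns, alphabet):
            """Dense Aho-Corasick transition table and per-state match masks."""
            trie, out = [{}], [0]
            for i, p in enumerate(patterns):
                s = 0
                for c in p:
                    if c not in trie[s]:
                        trie.append({}); out.append(0)
                        trie[s][c] = len(trie) - 1
                    s = trie[s][c]
                out[s] |= 1 << i
            delta = [[0] * len(alphabet) for _ in trie]
            fail = [0] * len(trie)
            q = deque()
            for j, c in enumerate(alphabet):
                delta[0][j] = trie[0].get(c, 0)
                if delta[0][j]:
                    q.append(delta[0][j])
            while q:                       # BFS: resolve failure transitions
                s = q.popleft()
                out[s] |= out[fail[s]]
                for j, c in enumerate(alphabet):
                    t = trie[s].get(c)
                    if t is None:
                        delta[s][j] = delta[fail[s]][j]
                    else:
                        fail[t] = delta[fail[s]][j]
                        delta[s][j] = t
                        q.append(t)
            return delta, out

        def bit_fsm(delta, out, bit):
            """Binary FSM that sees only one bit of each input symbol."""
            start = frozenset([0])
            ids, trans, pmv = {start: 0}, [[0, 0]], [out[0]]
            q = deque([start])
            while q:
                S = q.popleft()
                for v in (0, 1):
                    T = frozenset(delta[s][j] for s in S
                                  for j in range(len(delta[0]))
                                  if (j >> bit) & 1 == v)
                    if T not in ids:
                        ids[T] = len(ids)
                        trans.append([0, 0])
                        pmv.append(0)
                        for t in T:        # partial match vector: union of outs
                            pmv[-1] |= out[t]
                        q.append(T)
                    trans[ids[S]][v] = ids[T]
            return trans, pmv

        alphabet = "ACGT"                  # symbol j carries the 2-bit code j
        patterns = ["ACG", "CGC", "GCA"]
        delta, out = ac_table(patterns, alphabet)
        fsms = [bit_fsm(delta, out, b) for b in range(2)]
        states = [0, 0]
        for pos, c in enumerate("TACGCAT"):
            code, alive = alphabet.index(c), -1
            for b, (trans, pmv) in enumerate(fsms):
                states[b] = trans[states[b]][(code >> b) & 1]
                alive &= pmv[states[b]]    # intersect partial match vectors
            for i, p in enumerate(patterns):
                if alive >> i & 1:
                    print(p, "ends at", pos)

    Each binary FSM needs only a two-way branch per state, which is what makes compact block-RAM lookup tables on the FPGA possible; ANDing the per-bit vectors (five of them for the amino-acid encoding) reproduces the exact Aho-Corasick output.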

    An Approach to Model Network Exploitations Using Exploitation Graphs

    In this article, a modeling process is defined to address challenges in analyzing attack scenarios and mitigating vulnerabilities in networked environments. Known system vulnerability data, system configuration data, and vulnerability scanner results are combined to create exploitation graphs (e-graphs) that represent attack scenarios. Experiments carried out in a cluster computing environment showed the usefulness of the proposed techniques in providing in-depth attack scenario analyses for security engineering. Critical vulnerabilities can be identified by employing graph algorithms. Several factors were used to measure the difficulty of executing an attack, and a cost/benefit analysis was used for more accurate quantitative analysis of attack scenarios. The authors also show how attack scenario analyses can guide the deployment of security products and the design of network topologies.
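    The abstract does not define the e-graph structure precisely, so the sketch below is only a guess at the flavor of the analysis: privilege states as nodes, known exploits as weighted edges, and a cheapest-path query standing in for the difficulty and cost/benefit measures. All node names, exploit labels, and difficulty scores are invented for illustration, and the networkx library is assumed.

        # Hypothetical e-graph sketch: cheapest attack path as a proxy for
        # the easiest attack scenario. All data below is invented.
        import networkx as nx

        G = nx.DiGraph()
        G.add_edge("internet", "web:user", exploit="httpd flaw", difficulty=2.0)
        G.add_edge("web:user", "web:root", exploit="local privesc", difficulty=3.0)
        G.add_edge("web:root", "db:root", exploit="trust relation", difficulty=1.0)
        G.add_edge("internet", "db:root", exploit="exposed port", difficulty=9.0)

        # Dijkstra finds the lowest-difficulty scenario; its edges are the
        # candidate vulnerabilities to patch or block first.
        path = nx.shortest_path(G, "internet", "db:root", weight="difficulty")
        print(" -> ".join(path),
              "total difficulty", nx.path_weight(G, path, weight="difficulty"))

    Rerunning the query after deleting an edge shows how much patching that one vulnerability raises the attacker's cost, which is the kind of quantitative comparison the article uses to guide security product placement.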
