32,924 research outputs found

    Software Engineering Challenges for Investigating Cyber-Physical Incidents

    Cyber-Physical Systems (CPS) are characterized by the interplay between digital and physical spaces. This characteristic has extended the attack surface that an offender could exploit to cause harm. An increasing number of cyber-physical incidents may occur depending on the configuration of the physical and digital spaces and their interplay. Traditional investigation processes are not adequate for these incidents: they may overlook the extended attack surface resulting from such interplay, causing relevant evidence to be missed and flawed hypotheses about the incidents to be tested. The software engineering research community can contribute to addressing this problem by deploying existing formalisms to model digital and physical spaces, and by using analysis techniques to reason about their interplay and evolution. In this paper, supported by a motivating example, we describe some emerging software engineering challenges in supporting investigations of cyber-physical incidents. We review and critique existing research proposed to address these challenges, and sketch an initial solution based on a meta-model for representing cyber-physical incidents and a representation of the topology of digital and physical spaces that supports reasoning about their interplay.
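    The abstract only outlines the topology-based solution; as one way to picture it, the toy Python below models a topology of physical and digital spaces as a graph and uses reachability to enumerate the extended attack surface. This is a minimal sketch under that graph assumption: every class, node, and link name is hypothetical, not taken from the paper.

    # Toy sketch (all identifiers hypothetical): a topology of physical and
    # digital spaces, with reachability used to enumerate the paths an
    # offender could exploit across the cyber-physical boundary.
    from collections import defaultdict, deque

    class Topology:
        def __init__(self):
            self.edges = defaultdict(set)   # space -> adjacent spaces

        def connect(self, a, b):
            # An undirected "reachable-from" link, e.g. a door between rooms
            # or a network link between hosts.
            self.edges[a].add(b)
            self.edges[b].add(a)

        def reachable(self, start):
            # Breadth-first search: every space an offender could reach
            # from `start`, crossing both physical and digital links.
            seen, frontier = {start}, deque([start])
            while frontier:
                node = frontier.popleft()
                for nxt in self.edges[node] - seen:
                    seen.add(nxt)
                    frontier.append(nxt)
            return seen

    topo = Topology()
    topo.connect("lobby", "server_room")        # physical adjacency
    topo.connect("server_room", "file_server")  # physical-to-digital boundary
    topo.connect("file_server", "hr_database")  # digital adjacency
    print(topo.reachable("lobby"))  # extended attack surface from the lobby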

    Reasoning About a Simulated Printer Case Investigation with Forensic Lucid

    In this work we model the ACME (a fictitious company name) "printer case incident" and specify it in Forensic Lucid, a Lucid- and intensional-logic-based programming language for cyberforensic analysis and event-reconstruction specification. The printer case involves a dispute between two parties that was previously solved using the finite-state automata (FSA) approach, and is now re-done, more usably, in Forensic Lucid. We simulate the case by encoding concepts such as evidence and the related witness accounts as an evidential statement context in a Forensic Lucid program, which is an input to the transition function that models the possible deductions in the case. We then invoke the transition function (actually its reverse) with the evidential statement context to see whether the encoded evidence agrees with a party's claims, and attempt to reconstruct the sequence of events that may explain or disprove the claim.
    Comment: 18 pages, 3 figures, 7 listings, TOC, index; this article closely relates to arXiv:0906.0049 and arXiv:0904.3789 but, to remain stand-alone, repeats some of the background and introductory content; abstract presented at HSC'09 and the full updated paper at ICDF2C'11. This is an updated/edited version after the ICDF2C proceedings, with more references and corrections.
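    Forensic Lucid itself is an intensional language, but the underlying FSA idea the abstract describes can be sketched in plain Python: encode a transition function, then search backwards from the observed final state for event sequences consistent with the evidence. The states, events, and tiny transition table below are illustrative stand-ins, not the paper's actual case model.

    # Hypothetical sketch of FSA-style event reconstruction: enumerate
    # candidate histories and keep those that end in the observed state.
    from itertools import product

    # Toy transition function for a printer dispute (names illustrative).
    TRANSITIONS = {
        ("empty", "alice_prints"): "alice_job",
        ("empty", "bob_prints"):   "bob_job",
        ("alice_job", "delete"):   "empty",
        ("bob_job", "delete"):     "empty",
    }

    def reconstruct(final_state, depth):
        """Yield event sequences of length `depth` that end in `final_state`."""
        events = {e for (_, e) in TRANSITIONS}
        for seq in product(sorted(events), repeat=depth):
            state = "empty"
            for ev in seq:
                state = TRANSITIONS.get((state, ev))
                if state is None:
                    break   # sequence impossible under the model
            if state == final_state:
                yield seq

    # Which histories explain finding Bob's job in the queue after 3 events?
    for history in reconstruct("bob_job", 3):
        print(history)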

    The Need to Support of Data Flow Graph Visualization of Forensic Lucid Programs, Forensic Evidence, and their Evaluation by GIPSY

    Lucid programs are data-flow programs and can be visually represented as data-flow graphs (DFGs) and composed visually. Forensic Lucid, a Lucid dialect, is a language to specify and reason about cyberforensic cases. It includes the encoding of the evidence (representing the context of evaluation) and the modeling of the crime scene, in order to validate claims against the model and perform event reconstruction, potentially within large swaths of digital evidence. To help investigators model the scene and evaluate it, instead of typing a Forensic Lucid program, we propose to extend the design and implementation of Lucid DFG programming to Forensic Lucid case modeling and specification, to enhance the usability of the language, the system, and its behavior. We briefly discuss the related work on visual programming and DFG modeling in an attempt to define and select one approach, or a composition of approaches, for Forensic Lucid, based on criteria such as previous implementation, wide use, and formal backing in terms of semantics and translation. In the end, we solicit readers' constructive opinions, feedback, comments, and recommendations within the context of this short discussion.
    Comment: 11 pages, 7 figures, index; extended abstract presented at VizSec'10 at http://www.vizsec2010.org/posters ; short paper accepted at PST'11
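    As a rough illustration of what a DFG encoding of such a case might look like, the hypothetical Python below builds a small data-flow graph for a Forensic Lucid-style evaluation and emits Graphviz DOT for rendering. All node and operator names are invented for the example; the paper's actual DFG design is not specified here.

    # Illustrative only: a case model as a data-flow graph, dumped to
    # Graphviz DOT so it can be inspected visually.
    class DFG:
        def __init__(self):
            self.nodes, self.edges = {}, []

        def op(self, name, label):
            self.nodes[name] = label
            return name

        def wire(self, src, dst):
            self.edges.append((src, dst))

        def to_dot(self):
            lines = ["digraph forensic_lucid {"]
            for name, label in self.nodes.items():
                lines.append(f'  {name} [label="{label}" shape=box];')
            for src, dst in self.edges:
                lines.append(f"  {src} -> {dst};")
            lines.append("}")
            return "\n".join(lines)

    g = DFG()
    ev = g.op("evidence", "evidential statement")   # context of evaluation
    tf = g.op("transition", "transition function")  # crime-scene model
    out = g.op("verdict", "claim consistent?")
    g.wire(ev, tf)
    g.wire(tf, out)
    print(g.to_dot())  # paste into Graphviz to render the DFG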

    Information and the reconstruction of quantum physics

    The reconstruction of quantum physics has been connected with the interpretation of the quantum formalism, and has continued to be so with the recent deeper consideration of the relation of information to quantum states and processes. This recent form of reconstruction has mainly involved conceiving quantum theory on the basis of informational principles, providing new perspectives on physical correlations and entanglement that can be used to encode information. By contrast to the traditional, interpretational approach to the foundations of quantum mechanics, which attempts directly to establish the meaning of the elements of the theory and often touches on metaphysical issues, the newer, more purely reconstructive approach sometimes defers this task, focusing instead on the mathematical derivation of the theoretical apparatus from simple principles or axioms. In its purest form, this sort of theory reconstruction is fundamentally the mathematical derivation of the elements of theory from explicitly presented, often operational principles involving a minimum of extra-mathematical content. Here, a representative series of specifically information-based treatments is reviewed, ranging from partial reconstructions that make connections with information to rigorous axiomatizations, including those involving the theories of generalized probability and abstract systems.
    Accepted manuscript

    Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm, known as Contrastive Divergence (CD), are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation, and cue-integration tasks. Our results contribute to a machine-learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
    Comment: (Under review)
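    For reference, the discrete baseline that the event-driven variant approximates is the standard CD-1 update. A minimal numpy sketch of that baseline is below; the layer sizes, learning rate, and bias-free form are illustrative choices, not the paper's, and the paper's actual contribution replaces these discrete steps with spiking dynamics and STDP.

    # Minimal CD-1 update for a Bernoulli RBM (no biases; sizes illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(W, v0, lr=0.01):
        """One Contrastive Divergence (CD-1) weight update."""
        # Positive phase: sample hidden units given the data vector.
        h0_prob = sigmoid(v0 @ W)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step back to a "reconstruction".
        v1_prob = sigmoid(h0 @ W.T)
        v1 = (rng.random(v1_prob.shape) < v1_prob).astype(float)
        h1_prob = sigmoid(v1 @ W)
        # Contrast data-driven and model-driven correlations.
        return W + lr * (np.outer(v0, h0_prob) - np.outer(v1, h1_prob))

    W = rng.normal(scale=0.01, size=(784, 500))   # MNIST-sized layers
    v = (rng.random(784) < 0.5).astype(float)     # stand-in for a digit image
    W = cd1_step(W, v)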