Automatic detection of DoS vulnerabilities of cryptographic protocols
This article discusses DoS vulnerabilities of cryptographic key establishment and authentication protocols. A system for computer-aided analysis of protocol resistance to DoS attacks, which employs the Petri net formalism and the Spin model checker, is presented
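As a hedged illustration (not taken from the article), the core Place/Transition-net firing rule that such computer-aided protocol analyses build on can be sketched in a few lines of Python; the net, the place names, and the token counts below are invented for the example.

```python
# Minimal Place/Transition-net semantics: a transition is enabled when every
# input place holds enough tokens; firing consumes and produces tokens.
# (Toy example -- places and weights are invented for illustration.)
def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    assert enabled(marking, consume), "transition not enabled"
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

# A client's request token is consumed by an idle server, producing a reply.
m0 = {"request": 1, "server_idle": 1}
m1 = fire(m0, {"request": 1, "server_idle": 1}, {"reply": 1, "server_idle": 1})
print(m1)   # {'request': 0, 'server_idle': 1, 'reply': 1}
```

A DoS-resistance analysis of the kind the article describes would explore the reachable markings of such a net, checking whether an attacker can drive the server out of its idle state indefinitely.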
Seventh Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools, Aarhus, Denmark, October 24-26, 2006
This booklet contains the proceedings of the Seventh Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 24-26, 2006. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop0
Reversible Computation: Extending Horizons of Computing
This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that have emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems and revolutionary reversible logic gates and circuits, but they can only be realized and have lasting effect if conceptual and firm theoretical foundations are established first
A model to study cyber attack mechanics and denial-of-service exploits over the internet's router infrastructure using colored petri nets
The Internet's router infrastructure, a scale-free computer network, is vulnerable to targeted denial-of-service (DoS) attacks. Protecting this infrastructure's stability is a vital national interest because of the dependence of economic and national security transactions on the Internet. Current defensive countermeasures that rely on monitoring specific router traffic have been shown to be costly, inefficient, impractical, and reactive rather than anticipatory.
To address these issues, this research investigation considers a new paradigm that relies on the systemic changes that occur during a cyber attack, rather than on individual router traffic anomalies. It has been hypothesized in the literature that systemic knowledge of cyber attack mechanics can be used to infer the existence of an exploit in its formative stages, before severe network degradation occurs. The study described here targeted DoS attacks against large-scale computer networks. To determine whether this new paradigm can be expressed through the study of subtle changes in the physical characteristics of the Internet's connectivity environment, this research developed a first-of-its-kind Colored Petri Net (CPN) model of the United States AT&T router connectivity topology.
By simulating the systemic effects of a DoS attack over this infrastructure, the objectives of this research were to (1) determine whether it is possible to detect small, subtle changes in the connectivity environment of the Internet's router infrastructure that occur during a cyber attack; and (2), if the first premise is valid, to ascertain the feasibility of using these changes as a means for (a) early infrastructure attack detection and (b) development of router infrastructure protection strategies against these attacks.
Using CPN simulations, this study determined that systemic network changes can be detected in the early stages of a cyber attack. Specifically, this research has provided evidence that using knowledge of the Internet's connectivity topology and its physical characteristics to protect the router infrastructure from targeted DoS attacks is feasible. In addition, it is plausible to use these techniques to detect targeted DoS attacks, which may lead to new network security tools
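The kind of systemic indicator this study describes can be sketched, purely as an illustration and not as the thesis's AT&T CPN model, by removing a high-degree hub from a toy topology and watching the largest connected component shrink; the graph below is invented for the example.

```python
# Hypothetical sketch: targeted removal of a hub node in a small
# hub-and-spoke graph, using the largest connected component as a crude
# "systemic change" indicator. Stdlib only; BFS finds component sizes.
from collections import defaultdict, deque

def largest_component(nodes, edges):
    """Size of the largest connected component over `nodes` via BFS."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:
            n = queue.popleft()
            comp += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, comp)
    return best

# Node 0 is a high-degree "router hub" connecting five spokes.
edges = [(0, i) for i in range(1, 6)] + [(1, 2), (3, 4)]
nodes = set(range(6))
before = largest_component(nodes, edges)        # intact: all 6 nodes connected
after = largest_component(nodes - {0}, edges)   # hub removed: net fragments
print(before, after)
```

A sharp drop in the indicator after losing a single node is exactly the scale-free fragility that makes targeted DoS against hub routers attractive, and that a monitoring scheme could watch for.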
Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets
A critical system must accomplish its mission despite the presence of security problems. Such systems are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information, or other kinds of attack. Systems generally have to be redesigned after a security incident occurs, which can lead to severe consequences, such as the huge cost of re-implementing or reprogramming the whole system, as well as potential economic losses. Security must therefore be conceived as an integral part of system development and as a singular need of what the system must do (i.e., a non-functional requirement of the system). Thus, when designing critical systems it is essential to study the attacks that may occur and to plan how to react to them, so that the system keeps fulfilling its functional and non-functional requirements. Even when security issues are addressed, it is also necessary to take into account the costs incurred to guarantee a given security level in critical systems. Indeed, security costs can be a very relevant factor, as they may span several dimensions, such as budget, performance, and reliability. Many of the critical systems that incorporate fault-tolerance techniques (FT systems) to cope with security issues are complex systems that use resources which may be compromised (i.e., may fail) through the activation of faults and/or errors caused by possible attacks. These systems can be modelled as discrete-event systems with shared resources, also called resource-allocation systems. This thesis focuses on FT systems with shared resources modelled by Petri nets (PNs).
These systems are generally so large that the exact computation of their performance becomes a very complex computational task, owing to the state-space explosion problem. As a result, any task that requires an exhaustive exploration of the state space is not computable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, different models are provided, using the Unified Modelling Language (UML) and Petri nets, that help bring security and fault-tolerance issues to the foreground during the system design phase, thus enabling, for example, analysis of the trade-off between security and performance. Second, several algorithms are provided to compute performance (also under failure conditions) by calculating upper performance bounds, thereby avoiding the state-space explosion problem. Finally, algorithms are provided to compute how to compensate for the performance degradation that arises from an unexpected situation in a fault-tolerant system
A novel authentication protocol based on biometric and identity-based cryptography
Recently, considerable attention has been devoted to distributed systems. It has become obvious that a high security level should be a fundamental prerequisite for organisations' processes, in both the commercial and public sectors. A crucial foundation for securing a network is the ability to reliably authenticate communication parties. However, these systems face some critical security risks and challenges when they attempt to balance security, efficiency, and functionality. Developing a secure authentication protocol can be challenging; this thesis proposes an authentication scheme that employs two authentication factors, something you know (a password) and something you are (a biometric), based on Identity-Based Cryptography and Elliptic Curve Cryptography. Two protocols have been chosen that provide mutual authentication and secure key exchange, equivalent to the Diffie-Hellman key exchange. Because of potential flaws in the protocols, guarding against attacks can be challenging. In order to alleviate some of the issues encountered with the new protocol, this thesis uses the encrypt-then-authenticate method. Formal verification methods are used to evaluate the new protocol. First, finite-state machines are used to examine and predict the behaviour of the protocol. Modelling with this method shows that the new protocol can function and behave correctly according to the protocol description, even with invalid input or time delay. Second, Petri nets are used to model, simulate, and analyse the new protocol. This thesis formulates several attack models via Petri nets in which the security of the proposed protocols is discussed precisely. Ultimately, this novel work ensures that the new protocol provides a coherent security concept and can be implemented over insecure channels while offering secure mutual authentication
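Since the Python standard library offers neither elliptic-curve nor identity-based primitives, the following sketch substitutes classic finite-field Diffie-Hellman with deliberately tiny toy parameters to illustrate the two ideas the abstract names: a Diffie-Hellman-style key exchange and the encrypt-then-authenticate ordering (encrypt first, then MAC the ciphertext). It is an assumption-laden illustration, not the thesis's protocol.

```python
# Toy illustration only -- NOT the thesis's ECC/identity-based scheme.
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman parameters; far too small for real use.
P = 0xFFFFFFFB   # largest prime below 2**32
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    """Both sides hash g^(ab) mod p into a 32-byte symmetric key."""
    return hashlib.sha256(str(pow(other_pub, priv, P)).encode()).digest()

def encrypt_then_mac(key, plaintext):
    """Toy XOR 'cipher' (plaintext <= 32 bytes), then HMAC the ciphertext."""
    stream = hashlib.sha256(key + b"enc").digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct, tag

def verify_then_decrypt(key, ct, tag):
    """Check the MAC first; only then decrypt (encrypt-then-authenticate)."""
    expected = hmac.new(key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    stream = hashlib.sha256(key + b"enc").digest()
    return bytes(c ^ s for c, s in zip(ct, stream))

a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
k = shared_key(a_priv, b_pub)        # both parties derive the same key
ct, tag = encrypt_then_mac(k, b"hello")
print(verify_then_decrypt(k, ct, tag))
```

The point of checking the tag before decrypting is that a tampered ciphertext is rejected without the receiver ever processing attacker-controlled plaintext, which is the property the encrypt-then-authenticate method buys.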
Predicting the outcomes of HIV treatment interruptions using computational modelling
In the past 30 years, HIV infection has made the transition from a fatal to a chronic disease, owing to the emergence of potent treatment that largely suppresses viral replication. However, this medication must be administered life-long on a regular basis to maintain viral suppression and is not always well tolerated. Any interruption of treatment causes residual virus to be reactivated and the infection to progress, and the underlying processes and the consequences for the immune system are still poorly understood. Nonetheless, treatment interruptions are common, owing to adherence issues or limited access to antiretroviral drugs. Early clinical studies aiming at the application of treatment interruptions in a structured way gave contradictory results concerning patient safety, discouraging further trials. In-silico models can potentially add to this knowledge, but a review of the literature indicates that most current models used for studying treatment interruptions (equation-based) neglect recent clinical findings on collagen formation in lymphatic tissue due to HIV and its crucial role in immune system stability and efficacy. The aim of this research is the construction and application of so-called ‘Bottom-Up’ models to allow improved assessment of these processes in relation to HIV treatment interruptions. In this regard, a novel computational model based on 2D Cellular Automata for lymphatic tissue depletion and associated damage to the immune system was developed. Hence, (i) using this model, the influence of the spatial distribution of collagen formation on the speed of HIV infection progression was evaluated, while aspects of computational performance were discussed. Further, (ii) direct Monte Carlo simulations were employed to explore the accumulation of tissue impairment due to repeated treatment interruptions and the consequences for long-term prognosis. Finally, (iii) an inverse Monte Carlo approach was used to reconstruct as-yet-unknown characteristics of patient groups, based on sparse data from past clinical studies on treatment interruptions, with the aim of explaining their contradictory results
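As an illustration only (the thesis's actual model is far richer, with collagen formation, cell death, and probabilistic rules), one update step of a deterministic 2D cellular automaton on a small grid can be sketched as follows; the grid, the cell states, and the spread rule here are invented for the example.

```python
# Hypothetical sketch of a 2D cellular automaton step: an "infected" cell (1)
# deterministically infects its four von Neumann neighbours; healthy cells
# are 0. Real bottom-up models add probabilities and many more states.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        new[nr][nc] = 1
    return new

# One infected cell in the centre of a 3x3 grid of healthy cells.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(sum(map(sum, step(grid))))   # 5 infected cells after one step
```

Running many such grids forward under different parameters, and averaging the outcomes, is the basic shape of the direct Monte Carlo simulations the abstract mentions; the inverse approach instead searches for parameters that reproduce observed outcome data.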
- …