Resilient networking in wireless sensor networks
This report deals with security in wireless sensor networks (WSNs),
especially at the network layer. Multiple secure routing protocols have been
proposed in the literature; however, they typically rely on cryptography to
secure routing functionality. Cryptography alone is not enough to defend
against the many attacks enabled by node compromise, so more algorithmic
solutions are needed. In this report, we focus on the behaviour of routing
protocols to determine which properties make them more resilient to attacks.
Our aim is to answer the following questions. Are there existing protocols,
not initially designed for security, that already exhibit inherently
resilient properties against attacks in which some portion of the network
nodes is compromised? If so, which specific behaviours make these protocols
more resilient? We give an overview of security strategies for WSNs in
general, including existing attacks and defensive measures. We then focus on
the network layer in particular and analyse the behaviour of four routing
protocols to determine their inherent resilience to insider attacks. The
protocols considered are Dynamic Source Routing (DSR), Gradient-Based
Routing (GBR), Greedy Forwarding (GF) and Random Walk Routing (RWR).
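To make one of the surveyed protocols concrete, the following is a minimal sketch of Greedy Forwarding (GF): each node forwards a packet to the neighbour geographically closest to the destination. The node names, coordinates and topology here are illustrative assumptions, not taken from the report.

```python
import math

def greedy_next_hop(current, neighbours, destination, positions):
    """Pick the neighbour geographically closest to the destination.

    Returns None if no neighbour is closer to the destination than the
    current node (a local minimum, where plain greedy forwarding fails).
    """
    def dist(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return math.hypot(ax - bx, ay - by)

    best = min(neighbours, key=lambda n: dist(n, destination))
    if dist(best, destination) < dist(current, destination):
        return best
    return None

def greedy_route(source, destination, topology, positions):
    """Follow greedy forwarding hop by hop; stop at the destination
    or at a dead end.  Returns (path, reached_destination)."""
    path, node = [source], source
    while node != destination:
        nxt = greedy_next_hop(node, topology[node], destination, positions)
        if nxt is None:  # stuck in a local minimum
            return path, False
        path.append(nxt)
        node = nxt
    return path, True
```

Note that GF keeps no routing state beyond neighbour positions, which is one reason such protocols are of interest for resilience: a compromised node can misroute only the traffic that actually passes through it.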
Stochastic model checking for predicting component failures and service availability
When a component fails in a critical communications service, how urgent is a repair? If we repair within 1 hour, 2 hours, or
n hours, how does this affect the likelihood of service failure? Can a formal model support assessing the impact, prioritisation, and
scheduling of repairs in the event of component failures, and forecasting of maintenance costs? These are some of the questions
posed to us by a large organisation and here we report on our experience of developing a stochastic framework based on a discrete
space model and temporal logic to answer them. We define and explore both standard steady-state and transient temporal-logic
properties concerning the likelihood of service failure within certain time bounds and the forecasting of maintenance costs, and we
introduce a new concept of envelopes of behaviour that quantify the effect of the status of lower-level components on service
availability. The resulting model is highly parameterised, and user interaction for experimentation is supported by a lightweight,
web-based interface.
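The kind of question the abstract poses (how does repair time affect the likelihood of service failure?) can be illustrated with the simplest possible stochastic model: a single repairable component as a two-state continuous-time Markov chain. This is an illustrative sketch under assumed rates, not the paper's actual model.

```python
import math

def two_state_availability(mttf_hours, mttr_hours, t_hours):
    """Transient and steady-state unavailability of one repairable
    component modelled as a two-state CTMC (up/down).

    lam = 1/MTTF is the failure rate, mu = 1/MTTR the repair rate.
    Starting in the 'up' state, the probability of being down at time t is
        P_down(t) = lam/(lam+mu) * (1 - exp(-(lam+mu)*t)),
    which tends to the steady-state unavailability lam/(lam+mu).
    """
    lam, mu = 1.0 / mttf_hours, 1.0 / mttr_hours
    steady = lam / (lam + mu)
    transient = steady * (1.0 - math.exp(-(lam + mu) * t_hours))
    return transient, steady

# How urgent is a repair?  Compare 1-, 2- and 8-hour mean repair times
# for the same failure rate (all figures assumed for illustration).
for mttr in (1, 2, 8):
    p_down, p_ss = two_state_availability(mttf_hours=1000,
                                          mttr_hours=mttr, t_hours=24)
    print(f"MTTR={mttr}h: P(down at 24h)={p_down:.5f}, "
          f"steady-state unavailability={p_ss:.5f}")
```

Tools such as the PRISM model checker evaluate exactly these steady-state and transient properties on much larger state spaces than this closed-form two-state case.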
Software timing analysis for complex hardware with survivability and risk analysis
The increasing automation of safety-critical real-time systems, such as those in cars and planes, leads to more complex and performance-demanding on-board software and the subsequent adoption of multicores and accelerators. This increases the dispersion of software execution times, owing to variable-latency resources such as caches, NoCs, advanced memory controllers and the like. Statistical analysis has been proposed to model the Worst-Case Execution Time (WCET) of software running on such complex systems by providing reliable probabilistic WCET (pWCET) estimates. However, the statistical models used so far, which are based on risk analysis, are overly pessimistic by construction. In this paper we prove that statistical survivability and risk analyses are equivalent in terms of tail analysis and, building upon survivability analysis theory, we show that Weibull tail models can be used to estimate pWCET distributions reliably and tightly. In particular, our methodology proves the correctness-by-construction of the approach, and our evaluation provides evidence of the tightness of the pWCET estimates obtained, which allows them to be reduced reliably by 40% for a railway case study with respect to state-of-the-art exponential tails.
This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven; by the Spanish Government (SEV2015-0493); by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); and by the Generalitat de Catalunya (contract 2014-SGR-1051).
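The core mechanics of a Weibull tail model for pWCET can be sketched in a few lines: the survival function gives the probability that execution time exceeds a bound, and inverting it yields the bound for a target exceedance probability. The scale and shape parameters below are assumed for illustration; in practice they would be fitted to measured execution times, and this sketch is not the paper's fitting procedure.

```python
import math

def weibull_exceedance(x, scale, shape):
    """Weibull survival function: P(execution time > x)."""
    return math.exp(-((x / scale) ** shape))

def pwcet(scale, shape, exceedance=1e-9):
    """Execution-time bound exceeded with at most the given probability,
    obtained by inverting the Weibull survival function:
        x = scale * (-ln p)^(1/shape)."""
    return scale * (-math.log(exceedance)) ** (1.0 / shape)

# Illustrative tail with scale 100 (time units) and shape 2:
bound = pwcet(scale=100.0, shape=2.0, exceedance=1e-9)
print(f"pWCET at 1e-9 exceedance: {bound:.1f} time units")
```

A shape parameter above 1 gives a tail that decays faster than an exponential, which is where the tighter (less pessimistic) bounds relative to exponential tails come from.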
Probabilistic approach to EMP assessment
The development of nuclear EMP hardness requirements must account for uncertainties in the environment, in interaction and coupling, and in the susceptibility of subsystems and components. Typical uncertainties of the last two kinds are briefly summarized, and an assessment methodology is outlined, based on a probabilistic approach that encompasses the basic concepts of reliability. It is suggested that statements of survivability be made compatible with system reliability. Validation of the approach for simple antenna/circuit systems is performed with experiments and calculations that involve a Transient Electromagnetic Range, numerical antenna modeling, separate device failure data, and a failure analysis computer program.
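The reliability framing suggested above can be made concrete with the simplest series-reliability identity: a system whose mission fails if any one component fails survives with the product of the individual component survival probabilities, assuming independent failures. This is an illustrative sketch of the general concept, not the assessment methodology of the paper, and the probabilities are assumed.

```python
def series_survivability(component_survival_probs):
    """Survival probability of a system that fails if any one of its
    components fails (series reliability), assuming independent
    component failures: P_sys = product of P_i."""
    p = 1.0
    for p_i in component_survival_probs:
        p *= p_i
    return p

# Hypothetical subsystem survival probabilities under a given EMP stress:
print(series_survivability([0.99, 0.95, 0.90]))
```

The same identity is what makes survivability statements "compatible with system reliability": component-level failure probabilities roll up into a system-level figure in the standard reliability-block way.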
Hybrid Cloud Model Checking Using the Interaction Layer of HARMS for Ambient Intelligent Systems
Soon, humans will be co-living with and taking advantage of multi-agent systems to a greater extent than at present. Such systems will involve machines or devices of many kinds, including robots, and will adapt to the special needs of each individual. However, of concern to this research effort, systems like those mentioned above may encounter situations that cannot be foreseen before execution time. Two outcomes could then materialise: the system either keeps working without corrective measures, which could lead to an entirely different end, or stops working completely. Both results should be avoided, especially in cases where the end user depends on high-level guidance provided by the system, such as in ambient intelligence applications.
This dissertation worked towards two specific goals: first, to assure that the system will always work, independently of which of the agents performs the different tasks needed to accomplish a larger objective; and second, to provide initial steps towards autonomous survivable systems that can change their future actions in order to achieve the original final goals. The use of the third layer of the HARMS model was therefore proposed to ensure the indistinguishability of the actors accomplishing each task and sub-task, regardless of the intrinsic complexity of the activity. Additionally, a framework was proposed that applies model-checking methodology at run-time to provide possible solutions to issues encountered during execution, as part of the survivability feature of the system's final goals.
Assessing and augmenting SCADA cyber security: a survey of techniques
SCADA systems monitor and control critical infrastructures of national importance, such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation and declining cost of internet connectivity have transformed these systems from strictly isolated into highly interconnected networks. This connectivity provides immense benefits, such as reliability, scalability and remote access, but at the same time exposes an otherwise isolated and secure system to global cyber-security threats. The inevitable transformation to highly connected systems thus necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure the protection of vital assets is to adopt a viewpoint similar to an attacker's in order to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before they are exploited. This paper surveys tools and techniques to uncover SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided along with their applicability.
Integrated helicopter survivability
A high level of survivability is important to protect military personnel and equipment and is
central to UK defence policy. Integrated Survivability is the systems engineering
methodology for achieving optimum survivability at an affordable cost, enabling a mission to
be completed successfully in the face of a hostile environment. "Integrated Helicopter
Survivability" is an emerging discipline that applies this systems engineering approach
within the helicopter domain. Philosophically, the overall survivability objective is "zero
attrition", even though this is unobtainable in practice.
The research question was: "How can helicopter survivability be assessed in an integrated
way so that the best possible level of survivability can be achieved within the constraints,
and how will the associated methods support the acquisition process?"
The research found that principles from safety management could be applied to the
survivability problem, in particular reducing survivability risk to as low as reasonably
practicable (ALARP). A survivability assessment process was developed to support this
approach and was linked into the military helicopter life cycle. This process positioned the
survivability assessment methods and associated input data derivation activities.
The system influence diagram method was effective at defining the problem and capturing
the wider survivability interactions, including those with the defence lines of development
(DLOD). Influence diagrams and Quality Function Deployment (QFD) methods were
effective visual tools to elicit stakeholder requirements and improve communication across
organisational and domain boundaries.
The semi-quantitative nature of the QFD method produces numbers that are not meaningful
in absolute terms. These results are suitable for helping to prioritise requirements early in
the helicopter life cycle, but they cannot provide the quantifiable estimate of risk needed to
demonstrate ALARP.
The probabilistic approach implemented within the Integrated Survivability Assessment
Model (ISAM) was developed to provide a quantitative estimate of "risk" to support the
approach of reducing survivability risks to ALARP. Limitations in the available input data
for the rate of encountering threats mean that the resulting probability of survival cannot be
used to assess actual loss rates. However, the method does support an assessment
across platform options, provided that the "test environment" remains consistent throughout
the assessment. The survivability assessment process and ISAM have been applied to an
acquisition programme, where they have been tested in support of survivability decision
making and the design process.
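A probabilistic survivability assessment of the kind ISAM performs can be illustrated with the classic susceptibility/vulnerability chain: the platform is lost in an encounter if it is engaged, hit, and the hit is lethal, and mission survival is the product over independent encounters. This is a generic sketch with assumed figures, not the actual ISAM model.

```python
def mission_survival(threat_encounters):
    """Probability of surviving a sortie, treating each threat encounter
    as independent.  Each encounter is (p_engage, p_hit, p_kill):
    the probability of being engaged, of being hit given engagement,
    and of being lost given a hit."""
    p_survive = 1.0
    for p_engage, p_hit, p_kill in threat_encounters:
        p_loss = p_engage * p_hit * p_kill
        p_survive *= (1.0 - p_loss)
    return p_survive

# Hypothetical sortie against two threat types:
sortie = [(0.3, 0.2, 0.5),   # e.g. a radar-guided threat
          (0.6, 0.1, 0.3)]   # e.g. small-arms fire
print(f"P(survive sortie) = {mission_survival(sortie):.4f}")
```

Comparing such figures across platform options only makes sense when the set of encounters (the "test environment") is held fixed, which is exactly the consistency condition noted above.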
The survivability "test environment" is an essential element of the survivability assessment
process and is required by integrated survivability tools such as ISAM. This test
environment, comprising threatening situations that span the complete spectrum of
helicopter operations, requires further development. The "test environment" would be used
throughout the helicopter life cycle, from the selection of design concepts through to the test
and evaluation of delivered solutions, and would be updated as part of the through-life
capability management (TLCM) process.
A framework of survivability analysis tools that can provide probabilistic input data to
ISAM and allow the derivation of confidence limits requires development. This systems-level
framework would be capable of informing more detailed survivability design work
later in the life cycle and could be enabled through a MATLAB®-based approach.
Survivability is an emerging system property that influences the whole system capability.
There is a need for holistic capability-level analysis tools that quantify survivability along
with other influencing capabilities such as mobility (payload/range), lethality, situational
awareness, sustainability and other mission capabilities.
It is recommended that an investigation of capability-level analysis methods across defence
be undertaken to ensure a coherent and compliant approach to systems engineering
that adopts best practice from across the domains. System dynamics techniques should be
considered for further use by Dstl and the wider MOD, particularly within the survivability
and operational analysis domains. This would improve understanding of the problem space,
promote a more holistic approach and enable a better balance of capability, within which
survivability is one essential element.
There would be value in considering accidental losses within a more comprehensive
"survivability" analysis. This approach would enable a better balance to be struck between
safety and survivability risk mitigations and would lead to an improved, more integrated
overall design.