Why We Cannot (Yet) Ensure the Cybersecurity of Safety-Critical Systems
There is a growing threat to the cyber-security of safety-critical systems.
The introduction of Commercial Off The Shelf (COTS) software, including
Linux, specialist VOIP applications and Satellite Based Augmentation Systems
across the aviation, maritime, rail and power-generation infrastructures has created
common vulnerabilities. In consequence, more people now possess the technical
skills required to identify and exploit vulnerabilities in safety-critical systems.
Arguably for the first time there is the potential for cross-modal attacks
leading to future ‘cyber storms’. This situation is compounded by the failure of
public-private partnerships to establish the cyber-security of safety-critical applications.
The fiscal crisis has prevented governments from attracting and retaining
competent regulators at the intersection of safety and cyber-security. In particular,
we argue that superficial similarities between safety and security have led
to security policies that cannot be implemented in safety-critical systems. Existing
office-based security standards, such as the ISO27k series, cannot easily be integrated
with standards such as IEC61508 or ISO26262. Hybrid standards such as
IEC 62443 lack credible validation. There is an urgent need to move beyond
high-level policies and address the more detailed engineering challenges that
threaten the cyber-security of safety-critical systems. In particular, we consider
the ways in which cyber-security concerns undermine traditional forms of safety
engineering, for example by invalidating conventional forms of risk assessment.
We also summarise the ways in which safety concerns frustrate the deployment of
conventional mechanisms for cyber-security, including intrusion detection systems.
Medical Cyber-Physical Systems Development: A Forensics-Driven Approach
The synthesis of technology and the medical industry has partly contributed
to the increasing interest in Medical Cyber-Physical Systems (MCPS). While
these systems provide benefits to patients and professionals, they also
introduce new attack vectors for malicious actors (e.g., financially- and/or
criminally-motivated actors). A successful breach involving a MCPS can impact
patient data and system availability. The complexity and operating requirements
of an MCPS complicate digital investigations. Coupled with the potentially vast
amounts of information that an MCPS produces and/or can access, this is
generating discussion not only of how to compromise these systems but, more
importantly, of how to investigate them. The paper
proposes the integration of forensics principles and concepts into the design
and development of a MCPS to strengthen an organization's investigative
posture. The framework sets the foundation for future research in the
refinement of specific solutions for MCPS investigations.
Comment: This is the pre-print version of a paper presented at the 2nd International Workshop on Security, Privacy, and Trustworthiness in Medical Cyber-Physical Systems (MedSPT 2017).
Regional Data Archiving and Management for Northeast Illinois
This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor
and manage traffic operations and planning for the northeastern Illinois region. It aims to provide clear guidance to the
regional transportation agencies, from both technical and business perspectives, about building such a comprehensive
transportation information system. Several implementation alternatives are identified and analyzed. This research is carried
out in three phases.
In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a
thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect
information on all data elements that they store, including the format, system, and granularity. Their perceptions of a data
archive system, including its potential benefits and costs, are also surveyed. In the second phase, a conceptual design of the
database is developed. This conceptual design includes system architecture, functional modules, user interfaces, and
examples of usage. In the last phase, the possible business models for the archive system to sustain itself are reviewed. We
estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the
hardware, software, labor, and resource requirements. We also identify possible revenue opportunities.
A few implementation options for the archive system are summarized in this report:
1. System hosted by a partnering agency
2. System contracted to a university
3. System contracted to a national laboratory
4. System outsourced to a service provider
The costs, advantages, and disadvantages of each of these recommended options are also provided. (ICT-R27-22)
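As a purely illustrative sketch of the kind of data element the first phase inventories (format, system, and granularity per stakeholder), a minimal archive record might look like the following. The field names are assumptions for illustration, not taken from the report.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ArchiveRecord:
    """One archived traffic observation (illustrative schema only)."""
    agency: str          # contributing stakeholder agency
    sensor_id: str       # identifier of the detector or count station
    timestamp: datetime  # start of the aggregation interval
    measure: str         # e.g. "volume", "speed", "occupancy"
    value: float         # measured quantity, in the measure's units
    granularity_s: int   # aggregation interval length, in seconds
    source_format: str   # native format of the contributing system
```

A regional archive would layer ingestion, validation, and query modules over records of roughly this shape.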
Generic Continuity of Operations/Continuity of Government Plan for State-Level Transportation Agencies, Research Report 11-01
The Homeland Security Presidential Directive 20 (HSPD-20) requires all local, state, tribal and territorial government agencies, and private sector owners of critical infrastructure and key resources (CI/KR) to create a Continuity of Operations/Continuity of Government Plan (COOP/COG). There is planning and training guidance for generic transportation agency COOP/COG work, and the Transportation Research Board has offered guidance for transportation organizations. However, the special concerns of the state-level transportation agency’s (State DOT’s) plan development are not included, notably the responsibilities for the entire State Highway System and the responsibility to support specific essential functions related to the State DOT Director’s role in the Governor’s cabinet. There is also no guidance on where COOP/COG planning and organizing fits into the National Incident Management System (NIMS) at the local or state-level department or agency. This report covers the research conducted to determine how to integrate COOP/COG into the overall NIMS approach to emergency management, including a connection between the emergency operations center (EOC) and the COOP/COG activity. The first section is a presentation of the research and its findings and analysis. The second section provides training for the EOC staff of a state-level transportation agency, using a hybrid model of FEMA’s ICS and ESF approaches, including a complete set of EOC position checklists and other training support material. The third section provides training for the COOP/COG Branch staff of a state-level transportation agency, including a set of personnel position descriptions for the COOP/COG Branch members.
The Role of Trust and Interaction in GPS Related Accidents: A Human Factors Safety Assessment of the Global Positioning System (GPS)
The Global Positioning System (GPS) uses a network of orbiting and geostationary satellites to calculate the position of a receiver over time. This technology has revolutionised a wide range of safety-critical industries and leisure applications, ranging from commercial fisheries through to mountain running. These systems provide diverse benefits, supplementing the user's existing navigation skills and reducing the uncertainty that often characterises many route-planning tasks. GPS applications can also help to reduce workload by automating tasks that would otherwise require finite cognitive and perceptual resources. However, the operation of these systems has been identified as a contributory factor in a range of recent accidents. Users often come to rely on GPS applications and, therefore, fail to notice when they develop faults or when errors occur in the other systems that consume their data. Further accidents can stem from the ‘over-confidence’ that arises when users assume automated warnings will be issued when they stray from an intended route. Unless greater attention is paid to the human factors of GPS applications, there is a danger that we will see an increasing number of these failures as positioning technologies are integrated into an increasing number of applications.
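The abstract's opening sentence summarises how GPS recovers a position from satellite ranges. As an illustrative sketch only (a 2-D toy with exact ranges and made-up anchor points, not the 3-D pseudorange-plus-clock-bias solution real receivers compute), the core least-squares idea can be shown with a small Gauss-Newton solver:

```python
import math

def trilaterate(anchors, dists, guess=(0.0, 0.0), iters=50):
    """Estimate a 2-D position from known anchor (satellite) positions
    and measured ranges, via Gauss-Newton least squares.
    Toy illustration: real GPS solves in 3-D and also estimates the
    receiver clock bias from pseudoranges."""
    x, y = guess
    for _ in range(iters):
        # Residuals: predicted range minus measured range.
        r = [math.hypot(x - ax, y - ay) - d
             for (ax, ay), d in zip(anchors, dists)]
        # Jacobian rows: unit vectors from each anchor to the estimate.
        J = [((x - ax) / max(math.hypot(x - ax, y - ay), 1e-12),
              (y - ay) / max(math.hypot(x - ax, y - ay), 1e-12))
             for (ax, ay) in anchors]
        # Solve the 2x2 normal equations J^T J * delta = -J^T r by hand.
        a = sum(jx * jx for jx, _ in J)
        b = sum(jx * jy for jx, jy in J)
        c = sum(jy * jy for _, jy in J)
        gx = sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = sum(jy * ri for (_, jy), ri in zip(J, r))
        det = a * c - b * b
        if abs(det) < 1e-12:
            break
        x += (-c * gx + b * gy) / det
        y += (b * gx - a * gy) / det
    return x, y
```

With noisy ranges this same machinery yields a best-fit position, which is why undetected faults in the range data (the abstract's concern) silently corrupt the fix rather than announcing themselves.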
Enabling Data-Driven Transportation Safety Improvements in Rural Alaska
Safety improvements require funding, and a clear need must be demonstrated to secure it. For transportation safety, data, especially data about past crashes, is the usual means of demonstrating need. However, in rural locations such data is often not available, or is not in a form amenable to use in funding applications. This research helps rural entities, often federally recognized tribes and small villages, acquire the data needed for funding applications. The work produced two main products: a traffic-counting application for an iPad or similar device, and a review of the data requirements of the major transportation funding agencies. The traffic-counting app, UAF Traffic, demonstrated its ability to count traffic and turning movements for cars and trucks, as well as ATVs, snow machines, pedestrians, bicycles, and dog sleds. The review demonstrated that all the likely funders would accept qualitative data and Road Safety Audits; however, quantitative data, where available, was helpful.
Identifying common problems in the acquisition and deployment of large-scale software projects in the US and UK healthcare systems
Public and private organizations are investing increasing amounts in the development of
healthcare information technology. These applications are perceived to offer numerous benefits.
Software systems can improve the exchange of information between healthcare facilities. They
support standardised procedures that can help to increase consistency between different service
providers. Electronic patient records ensure minimum standards across the trajectory of care when
patients move between different specializations. Healthcare information systems also offer economic
benefits through efficiency savings; for example by providing the data that helps to identify potential
bottlenecks in the provision and administration of care. However, a number of high-profile failures
reveal the problems that arise when staff must cope with the loss of these applications. In particular,
teams have to retrieve paper-based records that often lack the detail held in electronic systems.
Individuals who have only used electronic information systems face particular problems in learning
how to apply paper-based fallbacks. The following pages compare two different failures of
Healthcare Information Systems in the UK and North America. The intention is to ensure that future
initiatives to extend the integration of electronic patient records will build on the ‘lessons learned’
from previous systems.
Moving from a "human-as-problem" to a "human-as-solution" cybersecurity mindset
Cybersecurity has gained prominence, with a number of widely publicised security incidents, hacking attacks and data breaches reaching the news over the last few years. The escalation in the numbers of cyber incidents shows no sign of abating, and it seems appropriate to take a look at the way cybersecurity is conceptualised and to consider whether there is a need for a mindset change. To consider this question, we applied a "problematization" approach to assess current conceptualisations of the cybersecurity problem by government, industry and hackers. Our analysis revealed that individual human actors, in a variety of roles, are generally considered to be "a problem". We also discovered that deployed solutions primarily focus on preventing adverse events by building resistance: i.e. implementing new security layers and policies that control humans and constrain their problematic behaviours. In essence, this treats all humans in the system as if they might well be malicious actors, and the solutions are designed to prevent their ill-advised behaviours. Given the continuing incidence of data breaches and successful hacks, it seems wise to rethink the status quo approach, which we refer to as "Cybersecurity, Currently". In particular, we suggest that there is a need to reconsider the core assumptions and characterisations of the well-intentioned human's role in the cybersecurity socio-technical system. Treating everyone as a problem does not seem to work, given the current cybersecurity landscape. Benefiting from research in other fields, we propose a new mindset, i.e. "Cybersecurity, Differently". This approach rests on recognition of the fact that the problem is actually the high complexity, interconnectedness and emergent qualities of socio-technical systems.
The "differently" mindset acknowledges the well-intentioned human's ability to be an important contributor to organisational cybersecurity, as well as their potential to be "part of the solution" rather than "the problem". In essence, this new approach initially treats all humans in the system as if they are well-intentioned. The focus is on enhancing factors that contribute to positive outcomes and resilience. We conclude by proposing a set of key principles and, with the help of a prototypical fictional organisation, consider how this mindset could enhance and improve cybersecurity across the socio-technical system
The natural history of bugs: using formal methods to analyse software related failures in space missions
Space missions force engineers to make complex trade-offs between many different constraints, including cost, mass, power, functionality and reliability. These constraints create a continual need to innovate. Many advances rely upon software, for instance to control and monitor the next generation ‘electron cyclotron resonance’ ion-drives for deep space missions. Programmers face numerous challenges. It is extremely difficult to conduct valid ground-based tests for the code used in space missions. Abstract models and simulations of satellites can be misleading. These issues are compounded by the use of ‘band-aid’ software to fix design mistakes and compromises in other aspects of space systems engineering. Programmers must often re-code missions in flight. This introduces considerable risks. It should, therefore, not be a surprise that so many space missions fail to achieve their objectives. The costs of failure are considerable. Small launch vehicles, such as the U.S. Pegasus system, cost around $4 million; the failure of a single uninsured satellite can cost up to $73 million. It is clearly important that we learn as much as possible from those failures that do occur. The following pages examine the roles that formal methods might play in the analysis of software failures in space missions.