    Process of designing robust, dependable, safe and secure software for medical devices: Point of care testing device as a case study

    This paper presents a holistic methodology for the design of medical device software, encompassing a new way of eliciting requirements, a system design process, security design guidelines, cloud architecture design, a combinatorial testing process and agile project management. The paper uses point-of-care diagnostics as a case study, where the software and hardware must be robust and reliable in order to provide accurate diagnosis of diseases. As software and software-intensive systems become increasingly complex, failures can lead to significant property damage or damage to the environment. Within the medical diagnostic device software domain, such failures can result in misdiagnosis, leading to clinical complications and in some cases death. Software faults can arise from the interaction among the software, the hardware, third-party software and the operating environment. Unanticipated environmental changes and latent coding errors lead to operational faults despite the significant effort usually expended on the design, verification and validation of the software system. It is becoming increasingly apparent that different approaches are needed to guarantee that a complex software system meets all safety, security and reliability requirements, in addition to complying with standards such as IEC 62304. Many initiatives have been undertaken to develop safety- and security-critical systems at different development phases and in different contexts, ranging from infrastructure design to device design, and different approaches are used to design error-free software for safety-critical systems. By adopting the strategies and processes presented in this paper, one can overcome the challenges in developing error-free software for medical devices and other safety-critical systems.
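
    The combinatorial testing process mentioned above is not detailed in the abstract. As a loose illustration of the general technique only, the following sketch greedily selects test configurations until every pair of parameter values is covered; the device parameters and values are invented for the example.

        from itertools import combinations, product

        # Hypothetical point-of-care device parameters (not from the paper).
        parameters = {
            "assay":    ["glucose", "lactate", "cholesterol"],
            "firmware": ["v1", "v2"],
            "cloud":    ["online", "offline"],
        }

        def pairs_of(config, names):
            """All parameter-value pairs exercised by one configuration."""
            return {((a, config[a]), (b, config[b]))
                    for a, b in combinations(names, 2)}

        def pairwise_tests(parameters):
            """Greedy covering: repeatedly add the configuration that
            covers the most still-uncovered value pairs."""
            names = list(parameters)
            candidates = [dict(zip(names, c))
                          for c in product(*parameters.values())]
            uncovered = set().union(*(pairs_of(c, names) for c in candidates))
            tests = []
            while uncovered:
                best = max(candidates,
                           key=lambda c: len(pairs_of(c, names) & uncovered))
                tests.append(best)
                uncovered -= pairs_of(best, names)
            return tests

        tests = pairwise_tests(parameters)
        print(len(tests), "tests instead of the full", 3 * 2 * 2)

    Pairwise coverage is a common compromise in combinatorial testing: most interaction faults involve only two parameters, so covering all value pairs finds many of them at a fraction of the cost of exhaustive testing.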

    Modelling and simulation framework for reactive transport of organic contaminants in bed-sediments using a pure Java object-oriented paradigm

    Numerical modelling and simulation of organic contaminant reactive transport in the environment is being increasingly relied upon for a wide range of tasks associated with risk-based decision-making, such as prediction of contaminant profiles, optimisation of remediation methods, and monitoring of changes resulting from an implemented remediation scheme. The lack of integration of multiple mechanistic models into a single modelling framework, however, has prevented the field of reactive transport modelling in bed-sediments from developing a cohesive understanding of contaminant fate and behaviour in the aquatic sediment environment. This paper investigates the problems involved in the model integration process, discusses modelling and software development approaches, and presents preliminary results from use of CORETRANS, a predictive modelling framework that simulates 1-dimensional organic contaminant reaction and transport in bed-sediments.
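
    CORETRANS itself is a Java framework and its internals are not given in the abstract. Purely as a sketch of the kind of physics such a framework integrates, the snippet below steps a 1-dimensional advection-diffusion equation with first-order decay using an explicit finite-difference scheme; all coefficients are invented.

        import numpy as np

        # Illustrative parameters, not CORETRANS values.
        L, nx = 1.0, 101              # column depth (m), grid points
        dx = L / (nx - 1)
        D, v, k = 1e-6, 1e-7, 1e-8    # diffusion (m^2/s), advection (m/s), decay (1/s)
        dt = 0.4 * dx**2 / D          # explicit stability limit for diffusion
        c = np.zeros(nx)
        c[0] = 1.0                    # fixed concentration at the sediment-water interface

        for _ in range(20000):
            mid = c[1:-1]
            diff = D * (c[2:] - 2 * mid + c[:-2]) / dx**2   # diffusion term
            adv = -v * (mid - c[:-2]) / dx                  # upwind advection
            c[1:-1] = mid + dt * (diff + adv - k * mid)     # reaction as a decay sink
            c[-1] = c[-2]                                   # zero-gradient bottom boundary

        print(c[::10])  # coarse concentration profile down the column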

    Prochlo: Strong Privacy for Analytics in the Crowd

    The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architecture---Encode, Shuffle, Analyze (ESA)---for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacy-preserving software monitoring. (cont.; see the paper)
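
    The abstract names the three ESA stages without describing their mechanics. The toy pipeline below conveys only the general shape of such a design; the encoding, the crowd threshold and the telemetry records are simplified stand-ins, not Prochlo's actual mechanisms.

        import random
        from collections import Counter

        def encode(records):
            """Strip identifying fields; keep only the measured value."""
            return [{"value": r["value"]} for r in records]

        def shuffle(items, threshold=3):
            """Break linkability: drop ordering and suppress values
            reported by fewer than `threshold` users."""
            random.shuffle(items)
            counts = Counter(i["value"] for i in items)
            return [i for i in items if counts[i["value"]] >= threshold]

        def analyze(items):
            """Aggregate the anonymized stream."""
            return Counter(i["value"] for i in items)

        # Hypothetical telemetry: user id plus reported value.
        raw = [{"user": u, "value": v}
               for u, v in [(1, "crash_A"), (2, "crash_A"), (3, "crash_A"),
                            (4, "crash_B"), (5, "crash_A"), (6, "crash_C")]]
        print(analyze(shuffle(encode(raw))))  # rare values are suppressed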

    Digital service analysis and design: the role of process modelling

    Digital libraries are evolving from content-centric systems to person-centric systems. Emergent services are interactive and multidimensional, and the associated systems multi-tiered and distributed. A holistic perspective is essential to their effective analysis and design, for beyond technical considerations there are complex social, economic, organisational, and ergonomic requirements and relationships to consider. Such a perspective cannot be gained without direct user involvement, yet evidence suggests that development teams may be failing to engage effectively with users, relying instead on requirements derived from anecdotal evidence or prior experience. In such instances, there is a risk that services might be well designed but functionally useless. This paper highlights the role of process modelling in gaining such a perspective. Process modelling challenges, approaches, and success factors are considered and discussed with reference to a recent evaluation of the usability and usefulness of a UK National Health Service (NHS) digital library. Reflecting on lessons learnt, recommendations are made regarding appropriate process modelling approaches and their application.

    SODA: Generating SQL for Business Users

    The purpose of data warehouses is to enable business analysts to make better decisions. Over the years the technology has matured and data warehouses have become extremely successful. As a consequence, more and more data has been added to data warehouses and their schemas have become increasingly complex. These systems still work well for generating pre-canned reports, but with their current complexity they tend to be a poor match for non-tech-savvy business analysts who need answers to ad-hoc queries that were not anticipated. This paper describes the design, implementation, and experience of the SODA system (Search over DAta Warehouse). SODA bridges the gap between the business needs of analysts and the technical complexity of current data warehouses. It enables a Google-like search experience for data warehouses by taking keyword queries from business users and automatically generating executable SQL. The key idea is a graph pattern matching algorithm that uses the metadata model of the data warehouse. Our results with real data from a global player in the financial services industry show that SODA produces queries with high precision and recall, and makes it much easier for business users to interactively explore highly complex data warehouses.
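
    SODA's actual graph pattern matching algorithm is not reproduced in the abstract. The following toy sketch only conveys the flavour of keyword-to-SQL generation: keywords are matched against an invented metadata model and the matched tables are joined along known edges.

        # Hypothetical warehouse metadata: tables, columns and join edges.
        metadata = {
            "customers": ["customer_id", "name", "country"],
            "trades":    ["trade_id", "customer_id", "instrument", "volume"],
        }
        joins = {("customers", "trades"):
                 "customers.customer_id = trades.customer_id"}

        def keywords_to_sql(keywords):
            """Match keywords to tables or columns, then join the matched
            tables along the metadata's join edges."""
            tables, columns = set(), []
            for kw in keywords:
                for table, cols in metadata.items():
                    if kw == table:
                        tables.add(table)
                    for col in cols:
                        if kw in col:
                            tables.add(table)
                            columns.append(f"{table}.{col}")
            select = ", ".join(columns) or "*"
            conds = [cond for (a, b), cond in joins.items()
                     if a in tables and b in tables]
            where = f" WHERE {' AND '.join(conds)}" if conds else ""
            return f"SELECT {select} FROM {', '.join(sorted(tables))}{where};"

        print(keywords_to_sql(["country", "volume"]))
        # SELECT customers.country, trades.volume FROM customers, trades
        #   WHERE customers.customer_id = trades.customer_id;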

    Identifying attack surfaces in the evolving space industry using reference architectures

    The space environment is currently undergoing substantial change, and many new entrants to the market are deploying devices, satellites and systems in space; this evolution has been termed NewSpace. The change is complicated by technological developments such as machine-learning-based autonomous space systems and the Internet of Space Things (IoST). In the IoST, space systems will rely on satellite-to-x communication and interactions with wider aspects of the ground segment to a greater degree than existing systems. Such developments will inevitably change the cyber security threat landscape of space systems: there will be a greater number of attack vectors for adversaries to exploit, and previously infeasible threats can be realised and will thus require mitigation. In this paper, we present a reference architecture (RA) that can be used to abstractly model in situ applications of this new space landscape. The RA specifies high-level system components and their interactions. By instantiating the RA for two scenarios, we demonstrate how to analyse the attack surface using attack trees.
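
    Attack trees, the analysis vehicle mentioned above, combine leaf attack steps under AND/OR gates. The minimal sketch below shows the evaluation idea; the scenario content is invented and is not taken from the paper's two case studies.

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            """AND/OR attack tree node; leaves carry a feasibility flag."""
            name: str
            gate: str = "LEAF"          # "AND", "OR" or "LEAF"
            feasible: bool = False      # meaningful for leaves only
            children: list = field(default_factory=list)

        def evaluate(node):
            """An AND node needs all children feasible; an OR node any."""
            if node.gate == "LEAF":
                return node.feasible
            results = [evaluate(c) for c in node.children]
            return all(results) if node.gate == "AND" else any(results)

        # Invented IoST-flavoured scenario: compromise a satellite downlink.
        tree = Node("compromise downlink", "OR", children=[
            Node("jam satellite-to-ground link", feasible=False),
            Node("hijack ground segment", "AND", children=[
                Node("phish ground operator", feasible=True),
                Node("pivot to command server", feasible=True),
            ]),
        ])
        print(evaluate(tree))  # True: the AND branch is fully feasible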

    Cyber physical security of avionic systems

    Cyber-physical security is a significant concern for critical infrastructures. The exponential growth of cyber-physical systems (CPSs) and the strong inter-dependency between their cyber and physical components introduce integrity issues such as vulnerability to the injection of malicious data and fake sensor measurements. Traditional security models partition the CPS, from a security perspective, into just two domains: high and low. However, this absolute partition is not adequate for current CPSs, which are composed of multiple overlapping partitions. Information flow properties are one of the significant classes of cyber-physical security methods that model how the inputs of a system affect its outputs across the security partition. Information flow supports traceability, which helps in detecting vulnerabilities and anomalous sources as well as in devising mitigation measures. To address the challenges of securing CPSs, two novel approaches are introduced that represent a CPS as a graph structure. The first is an automated graph-based information flow model that identifies information flow paths in the avionics system and partitions them into security domains; it is applied to selected aspects of avionic systems to identify vulnerabilities in case of a system failure or an attack and to provide possible mitigation measures. The second uses graph neural networks (GNNs) to classify the graphs into different security domains. Together, these approaches allow the CPS to be partitioned into security domains and their optimal coverage to be identified, enabling designers, engineers and operators to ensure the integrity of the CPS and to identify failures or attacks on the system both at design time and in real time.
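
    The first approach above traces information flow paths over a graph of system components. As a minimal sketch of that idea only, the snippet below enumerates simple paths through an invented avionics-flavoured component graph and flags flows that originate in the low-security domain; the components and domain labels are hypothetical.

        # Hypothetical component graph: edges are information flows.
        flows = {
            "maintenance_port": ["flight_computer"],
            "air_data_sensor":  ["flight_computer"],
            "flight_computer":  ["autopilot", "cockpit_display"],
            "autopilot":        ["control_surfaces"],
        }
        # Invented security partition of the components.
        domain = {"maintenance_port": "low", "air_data_sensor": "high",
                  "flight_computer": "high", "autopilot": "high",
                  "cockpit_display": "high", "control_surfaces": "high"}

        def paths(src, dst, seen=()):
            """Enumerate simple information flow paths from src to dst."""
            if src == dst:
                yield (*seen, dst)
                return
            for nxt in flows.get(src, []):
                if nxt not in seen:
                    yield from paths(nxt, dst, (*seen, src))

        # Flag flows that cross from the low domain to a critical sink.
        for p in paths("maintenance_port", "control_surfaces"):
            if domain[p[0]] == "low":
                print("cross-domain flow:", " -> ".join(p))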

    A Domain Specific Approach to High Performance Heterogeneous Computing

    Users of heterogeneous computing systems face two problems: first, understanding the trade-off relationships between the observable characteristics of their applications, such as latency and quality of the result; and second, exploiting knowledge of these characteristics to allocate work to distributed computing platforms efficiently. A domain-specific approach addresses both of these problems. By considering a subset of operations or functions, models of the observable characteristics, or domain metrics, may be formulated in advance and populated at run time for task instances. These metric models can then be used to express the allocation of work as a constrained integer program, which can be solved using heuristics, machine learning or Mixed Integer Linear Programming (MILP) frameworks. These claims are illustrated using the example domain of derivatives pricing in computational finance, with the domain metrics of workload latency (makespan) and pricing accuracy. For a large, varied workload of 128 Black-Scholes and Heston model-based option pricing tasks, running on a diverse array of 16 multicore CPU, GPU and FPGA platforms, predictions made by models of both makespan and accuracy are generally within 10% of the run-time performance. When these models are used as inputs to machine learning and MILP-based workload allocation approaches, latency improvements of up to 24 and 270 times over the heuristic approach are seen.
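
    The MILP formulation of the allocation problem can be sketched with an off-the-shelf solver. The snippet below is a minimal makespan-minimising model in PuLP; the latency table is invented and stands in for the paper's predicted domain metrics.

        import pulp  # pip install pulp

        # Hypothetical predicted latencies (seconds) per task and platform.
        latency = {
            "bs_option_1": {"cpu": 4.0, "gpu": 1.0, "fpga": 2.0},
            "bs_option_2": {"cpu": 3.0, "gpu": 1.5, "fpga": 2.5},
            "heston_1":    {"cpu": 9.0, "gpu": 5.0, "fpga": 3.0},
        }
        tasks, platforms = list(latency), ["cpu", "gpu", "fpga"]

        prob = pulp.LpProblem("workload_allocation", pulp.LpMinimize)
        x = pulp.LpVariable.dicts(
            "assign", [(t, p) for t in tasks for p in platforms], cat="Binary")
        makespan = pulp.LpVariable("makespan", lowBound=0)
        prob += makespan  # objective: minimise the slowest platform's load

        for t in tasks:        # each task runs on exactly one platform
            prob += pulp.lpSum(x[(t, p)] for p in platforms) == 1
        for p in platforms:    # every platform's load bounds the makespan
            prob += pulp.lpSum(latency[t][p] * x[(t, p)] for t in tasks) <= makespan

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for (t, p), var in x.items():
            if var.value() == 1:
                print(t, "->", p)
        print("makespan:", makespan.value())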