15 research outputs found

    Real-Time Physiological Simulation and Modeling toward Dependable Patient Monitoring Systems

    We present a novel approach to describing dependability measures for intelligent patient monitoring devices. The strategy combines methods from system theory with real-time physiological simulation. For the first time, not only the technical device but also the patient is taken into consideration. Including the patient requires predicting physiology, which is achieved by a real-time physiological simulation in a continuous time domain; one of its main ingredients is a temporal reasoning element. The quality of the reasoning is expressed by a dependability analysis strategy, whereby anomalies are expressed as differences between simulation and real-world data. Deviations are detected at the current time and forecast for future points in time, and can indicate critical situations. By this method, patient-specific differences in physiological reactions are described, allowing early detection of critical states.
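The core idea above, flagging anomalies as the gap between simulated and measured physiology, can be sketched as follows. The function name, signal values, and tolerance are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of the simulation-vs-observation anomaly check:
# a deviation is the gap between simulated and measured physiology,
# flagged when it exceeds a tolerance.

def detect_deviations(simulated, observed, tolerance):
    """Return indices of time points whose simulation error exceeds tolerance."""
    return [i for i, (s, o) in enumerate(zip(simulated, observed))
            if abs(s - o) > tolerance]

# Example with illustrative heart-rate values (beats per minute).
simulated = [72, 73, 74, 75, 76]
observed  = [72, 74, 74, 90, 95]   # a sudden rise the model did not predict
print(detect_deviations(simulated, observed, tolerance=5))  # [3, 4]
```

Forecasting future deviations would apply the same comparison to the simulation's predicted trajectory rather than current readings.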

    An Information Security Threat Assessment Model based on Bayesian Network and OWA Operator


    The Assessment of Patient Clinical Outcome: Advantages, Models, Features of an Ideal Model

    Background: The assessment of patient clinical outcome focuses on measuring various aspects of the health status of a patient who is under healthcare intervention. Patient clinical outcome assessment is a very significant process in the clinical field, as it allows health care professionals to better understand the effectiveness of their health care programs and thus enhance health care quality in general. It is thus vital that a high-quality, informative review of current issues regarding the assessment of patient clinical outcome be conducted. Aims & Objectives: This review 1) summarizes the advantages of the assessment of patient clinical outcome; 2) reviews some of the existing patient clinical outcome assessment models, namely: simulation, Markov, Bayesian belief network, Bayesian and conventional statistics, and Kaplan-Meier analysis models; and 3) demonstrates the desired features that should be fulfilled by a well-established, ideal patient clinical outcome assessment model. Material & Methods: An integrative review of the literature was performed using Google Scholar to explore the field of patient clinical outcome assessment. Conclusion: This paper will directly support researchers, clinicians and health care professionals in their understanding of developments in the domain of the assessment of patient clinical outcome, thus enabling them to propose ideal assessment models.
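Of the models listed above, Kaplan-Meier analysis is the most compact to illustrate: the survival curve drops at each event time by the fraction of at-risk patients who experience the event. A minimal sketch, with illustrative follow-up data rather than anything from the paper:

```python
# Minimal Kaplan-Meier survival estimate (one of the outcome-assessment
# models listed above). At each distinct event time t, the survival
# probability is multiplied by (1 - deaths / number at risk).

def kaplan_meier(times, events):
    """Return [(t, S(t))] for each distinct time with an observed event.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_t                          # drop events and censored
    return curve

# Five illustrative patients: events at t=2, 3, 5; censored at t=3, 8.
print(kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0]))
```

The estimate handles censored patients by removing them from the at-risk count without forcing a drop in the curve, which is exactly why the abstract lists it alongside conventional statistics.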

    Stochastic fault tree analysis for agropark project appraisal

    In theory, agroparks offer a variety of economic and environmental benefits. Since agropark projects are typically capital intensive and have high societal impact, appraisal by lenders and policy makers will play a key role in the realisation of the concept. In practice, however, project appraisal is hampered by the complexity of the concept and the multitude of risks. In this paper, a methodology based on stochastic fault-tree analysis (FTA) was developed to support project managers and policy makers in making agropark investment decisions. The methodology is illustrated with an example agropark project.
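Stochastic FTA, as used above, treats basic-event probabilities as uncertain and propagates that uncertainty to the top event, typically by Monte Carlo sampling. The two-gate tree and probability ranges below are purely illustrative assumptions, not the paper's model:

```python
import random

# Hedged sketch of stochastic fault-tree analysis: basic-event
# probabilities are uncertain (drawn from ranges), so the top-event
# probability becomes a distribution estimated by Monte Carlo sampling.

def top_event(p_a, p_b, p_c):
    # Illustrative tree: TOP = A AND (B OR C), for independent events.
    return p_a * (1 - (1 - p_b) * (1 - p_c))

def monte_carlo(n=10_000, seed=42):
    random.seed(seed)
    samples = []
    for _ in range(n):
        # Assumed uniform uncertainty ranges for each basic event.
        p_a = random.uniform(0.05, 0.15)
        p_b = random.uniform(0.10, 0.30)
        p_c = random.uniform(0.01, 0.05)
        samples.append(top_event(p_a, p_b, p_c))
    samples.sort()
    mean = sum(samples) / n
    return mean, samples[int(0.05 * n)], samples[int(0.95 * n)]

mean, lo, hi = monte_carlo()
print(f"top event: mean={mean:.4f}, 90% interval=({lo:.4f}, {hi:.4f})")
```

For appraisal, the interval (not just the mean) is the useful output: it tells a lender how bad the project risk could plausibly be.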

    An Integrated Approach in Risk Management Process for Identifying Information Security Threats using Medical Research Design

    In this paper, we introduce a new method for performing risk analysis studies by adopting and adapting a medical research design, namely a survival analysis approach based on a prospective cohort study, into the risk management process framework. Under the survival analysis approach, the Cox Proportional Hazards (PH) model is applied in order to identify potential information security threats. The risk management process in this research is based on the Australian/New Zealand Standard for Risk Management (AS/NZS ISO 31000:2009). AS/NZS ISO 31000:2009 provides a sequencing of the core parts of the risk management process, namely establishing the context, risk identification, risk analysis, risk evaluation and risk treatment. The integration of the risk management process with this medical approach brings useful new insights. Thus, the contribution of the paper is a new method for performing risk analysis studies in the information security domain.
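The Cox PH model named above relates a hazard to covariates as h(t | x) = h0(t) · exp(b1·x1 + … + bk·xk), so fitted coefficients rank how strongly each factor raises risk. A minimal sketch of the relative-hazard computation; the coefficients and threat covariates are hypothetical, not from the paper:

```python
import math

# Illustrative sketch of the Cox proportional-hazards relation adapted
# to threats: h(t | x) = h0(t) * exp(b . x). The ratio exp(b . x) tells
# how much a covariate profile scales the baseline hazard h0(t).

def relative_hazard(coeffs, covariates):
    """exp(b . x): hazard relative to the baseline profile (x = 0)."""
    return math.exp(sum(b * x for b, x in zip(coeffs, covariates)))

# Hypothetical covariates: [unpatched_services, exposed_ports],
# with assumed fitted coefficients.
coeffs = [0.9, 0.4]
asset_a = [1, 2]   # one unpatched service, two exposed ports
asset_b = [0, 0]   # baseline asset

print(relative_hazard(coeffs, asset_a))  # exp(0.9 + 0.8) ≈ 5.47
print(relative_hazard(coeffs, asset_b))  # exp(0) = 1.0
```

In a cohort-style study, the coefficients would be estimated from observed time-to-incident data; the sketch only shows how a fitted model scores assets.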

    Developing a Bayesian Network risk model to enhance Lean Six Sigma

    In today's global market, manufacturing organizations are striving to improve their production performance in order to maintain a competitive advantage. For the past few decades, many efforts have been made by both researchers and practitioners to develop managerial and technical approaches to improve manufacturing processes. Among them, Lean and Six Sigma have become the two most recognized methodologies, and together they comprise the primary components of process improvement strategies. However, with the manufacturing system and its external environment becoming more and more complex, a great range of risk factors can affect the results of Lean Six Sigma initiatives. Consequently, the organization is constantly exposed to the risk of not being able to produce a quality product that meets the customer's requirements. The existence of risk is often neglected because there is no easy way to perform risk analysis for Lean Six Sigma activities, due to their complexity. The purpose of this study is to develop a risk-informed model that provides a systematic evaluation of potential risks to enhance the implementation of Lean Six Sigma initiatives. The methodology is based on Bayesian Networks, incorporated with other risk management techniques. Combining a graphical approach to represent cause-and-effect relationships between events of interest with probabilistic inference to estimate their likelihoods, a Bayesian Network provides an effective method to evaluate the reliability of Lean Six Sigma. The developed model can be used for assessing the potential risks associated with Lean Six Sigma initiatives and prioritizing efforts to minimize their impacts. The model can serve as a primary component of the decision-making toolbox for maximizing the effectiveness of Lean Six Sigma initiatives and subsequently increasing the competitiveness of a manufacturing firm.
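The combination described above, a causal graph plus probabilistic inference, can be shown with a deliberately tiny network. The risk factors, probabilities, and variable names below are illustrative assumptions, not the thesis model:

```python
# Minimal Bayesian-network sketch of the cause-and-effect idea above:
# two hypothetical risk factors (supplier variation, operator error)
# influence whether a Lean Six Sigma initiative misses its quality
# target. All probabilities are assumed for illustration.

P_SUPPLIER = 0.2            # P(supplier variation)
P_OPERATOR = 0.1            # P(operator error)
P_MISS = {                  # P(miss target | supplier, operator)
    (True, True): 0.90, (True, False): 0.50,
    (False, True): 0.40, (False, False): 0.05,
}

def p_miss_target():
    """Marginal probability of missing the target, by enumeration."""
    total = 0.0
    for s in (True, False):
        for o in (True, False):
            p_s = P_SUPPLIER if s else 1 - P_SUPPLIER
            p_o = P_OPERATOR if o else 1 - P_OPERATOR
            total += p_s * p_o * P_MISS[(s, o)]
    return total

def p_supplier_given_miss():
    """Posterior P(supplier variation | missed target), via Bayes' rule."""
    joint = sum(
        P_SUPPLIER * (P_OPERATOR if o else 1 - P_OPERATOR) * P_MISS[(True, o)]
        for o in (True, False)
    )
    return joint / p_miss_target()

print(round(p_miss_target(), 4))
print(round(p_supplier_given_miss(), 4))
```

The posterior is the payoff for risk prioritization: observing a missed target sharply raises the belief in supplier variation, pointing improvement effort at that factor first.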

    SAFE-FLOW : a systematic approach for safety analysis of clinical workflows

    The increasing use of technology in delivering clinical services brings substantial benefits to the healthcare industry. At the same time, it introduces potential new complications to clinical workflows that generate new risks and hazards with the potential to affect patients’ safety. These workflows are safety critical and can have a damaging impact on all the involved parties if they fail. Due to the large number of processes included in the delivery of a clinical service, it can be difficult to determine the individuals or the processes that are responsible for adverse events. Using methodological approaches and automated tools to carry out an analysis of the workflow can help in determining the origins of potential adverse events and consequently help in avoiding preventable errors. There is a scarcity of studies addressing this problem; this was a partial motivation for this thesis. The main aim of the research is to demonstrate the potential value of computer science based dependability approaches to healthcare and, in particular, the appropriateness and benefits of these dependability approaches to overall clinical workflows. A particular focus is to show that model-based safety analysis techniques can be usefully applied to such areas, and then to evaluate this application. This thesis develops the SAFE-FLOW approach for safety analysis of clinical workflows in order to establish the relevance of such application. SAFE-FLOW's detailed steps and guidelines for its application are explained. Then, SAFE-FLOW is applied to a case study and is systematically evaluated. The proposed evaluation design provides a generic evaluation strategy that can be used to evaluate the adoption of safety analysis methods in healthcare. It is concluded that the safety of clinical workflows can be significantly improved by performing safety analysis on workflow models. The evaluation results show that SAFE-FLOW is feasible and has the potential to provide various benefits: it provides a mechanism for the systematic identification of both adverse events and safeguards, which helps in identifying the causes of possible adverse events before they happen and can assist in the design of workflows to avoid such occurrences. The clear definition of the workflow, including its processes and tasks, provides a valuable opportunity for the formulation of safety improvement strategies.

    Proceedings of the International Workshop on the Design of Dependable Critical Systems “Hardware, Software, and Human Factors in Dependable System Design”

    As technology advances, technical systems become increasingly complex not only in terms of functionality and structure but also regarding their handling and operation. In order to keep such complex safety-critical and mission-critical systems controllable, they are required to be highly dependable. Since the costs for designing, testing, operating, and maintaining such systems significantly increase with the dependability requirements, new design approaches for the cost effective development and production of dependable systems are required, covering hardware, software, and human factor aspects. This workshop aims at presenting and discussing the latest developments in this field, spanning the entire spectrum from theoretical works on system architecture and dependability measures to practical applications in safety and mission critical domains

    Transparent User Authentication For Mobile Applications

    The use of smartphones in our daily lives has grown steadily, due to the combination of mobility and round-the-clock multi-connectivity. In particular, smartphones are used to perform activities such as sending emails, transferring money via mobile Internet banking, making calls, texting, surfing the Internet, viewing documents, storing medical, confidential and personal information, shopping online and playing games. Some of these applications are considered sensitive and confidential, and the risks are high in the event of the loss of sensitive data or a privacy breach. In addition, after the point of entry, using techniques such as a PIN or password, the user of the device can perform almost all tasks, of different risk levels, without having to re-authenticate periodically to re-validate the user’s identity. Furthermore, the current point-of-entry authentication mechanisms consider all the applications on a mobile device to have the same level of importance and so do not apply any further access control rules. As a result, with the rapid growth of smartphone use in daily life, authentication to secure the sensitive data stored on these devices is of paramount importance. In this research, it is argued that within a single mobile application there are different processes operating on the same data but with differing risks attached. The unauthorised disclosure or modification of mobile data has the potential to lead to a number of undesirable consequences for the user. Thus, there is no single level of risk associated with a given application, and the risk level changes during use. In this context, a novel mobile applications data risk assessment model is proposed to appreciate the risk involved within an application (intra-process security).
Accordingly, there is a need to suggest a method to be applied continuously and transparently (i.e., without obstructing the user’s activities) to authenticate legitimate users, which is maintained beyond point of entry, without the explicit involvement of the user. To this end, a transparent and continuous authentication mechanism provides a basis for convenient and secure re-authentication of the user. The mechanism is used to gather user data in the background without requiring any dedicated activity, by regularly and periodically checking user behaviour to provide continuous monitoring for the protection of the smartphone. In order to investigate the feasibility of the proposed system, a study involving data collected from 76 participants over a one-month period using 12 mobile applications was undertaken. A series of four experiments were conducted based upon data from one month of normal device usage. The first experiment sought to explore the intra-process (i.e., within-app) and inter-process (i.e., access-only app) access levels across different time windows. The experimental results show that this approach achieved desirable outcomes for applying a transparent authentication system at an intra-process level, with an average of 6% intrusive authentication requests. Having achieved promising experimental results, it was identified that there were some users who undertook an insufficient number of activities on the device and, therefore, achieved a high level of intrusive authentication requests. As a result, there was a need to investigate whether a specific combination of time windows would perform better with a specific type of user. To do this, the numbers of intrusive authentication requests were computed based on three usage levels (high, medium and low) at both the intra- and inter-process access levels. 
This approach achieved better results when compared with the first set of results: the average percentage of intrusive authentication requests was 3%, which indicates a clear enhancement. The second and third experiments investigated only the intra-process and inter-process, respectively, to examine the effect of the access level. Finally, the fourth experiment investigated the impact of specific biometric modalities on overall system performance. In this research study, a Non-Intrusive Continuous Authentication (NICA) framework was applied by utilising two security mechanisms: Alert Level (AL) and Integrity Level (IL). During specific time windows, the AL process is used to seek valid samples. If there are no samples, the identity confidence is periodically reduced by a degradation function, which is 10% of current confidence, in order to save power while the mobile device is inactive. In the case of the mobile user requesting to perform a task, the IL is applied to check the legitimacy of that user. If the identity confidence level is equal to or greater than the specified risk action level, transparent access is allowed. Otherwise, an intrusive authentication request is required in order to proceed with the service. In summary, the experimental results show that this approach achieved sufficiently high results to fulfil the security obligations. The shortest time window of AL = 2 min / IL = 5 min produced an average intrusive authentication request rate of 18%, whereas the largest time window (AL = 20 min / IL = 20 min) provided 6%. Interestingly, when the participants were divided into three levels of usage, the average intrusive authentication request rate was 12% and 3% for the shortest time window (AL = 2 min / IL = 5 min) and the largest time window (AL = 20 min / IL = 20 min), respectively. 
Therefore, this approach has been demonstrated to provide transparent and continuous protection to ensure the validity of the current user by understanding the risk involved within a given application.
Royal Embassy of Saudi Arabia Cultural Bureau in U
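The AL/IL mechanism described above reduces to two small rules: confidence decays by 10% of its current value per idle alert window, and a task is granted transparently only while confidence meets the task's risk action level. A hedged sketch; the function names and threshold values are assumptions, not the thesis implementation:

```python
# Sketch of the NICA Alert Level / Integrity Level logic described above.
# Per idle AL window with no valid biometric samples, identity confidence
# is degraded by 10% of its current value; the IL check then compares
# confidence against a per-task risk action level.

def degrade(confidence, idle_windows):
    """Apply the 10%-of-current-confidence degradation per idle AL window."""
    for _ in range(idle_windows):
        confidence *= 0.9
    return confidence

def check_integrity(confidence, risk_action_level):
    """IL check: transparent access only if confidence meets the risk level."""
    return "transparent" if confidence >= risk_action_level else "intrusive"

conf = degrade(1.0, idle_windows=4)                   # 0.9^4 = 0.6561
print(round(conf, 4))
print(check_integrity(conf, risk_action_level=0.5))   # transparent
print(check_integrity(conf, risk_action_level=0.8))   # intrusive
```

This shows why the results above vary with risk level: after the same idle period, a low-risk task still passes transparently while a high-risk task triggers an intrusive request.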