
    Architecture-based Qualitative Risk Analysis for Availability of IT Infrastructures

    An IT risk assessment must deliver the best possible quality of results in a time-effective way. Organisations typically customise general-purpose standard risk assessment methods to satisfy their requirements. In this paper we present the QualTD Model and method, which is meant to be employed together with standard risk assessment methods for the qualitative assessment of availability risks of IT architectures, or parts of them. The QualTD Model is based on our previous quantitative model, but geared to industrial practice since it does not require quantitative data, which is often too costly to acquire. We validate the model and method in a real-world case by performing a risk assessment on the authentication and authorisation system of a large multinational company and by evaluating the results with respect to the goals of the system's stakeholders. We also review the most popular standard risk assessment methods and analyse which of them can actually be integrated with our QualTD Model.
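    The abstract describes propagating qualitative availability risks over the dependency structure of an IT architecture. The following is a minimal sketch of that idea: an ordinal risk scale and a worst-case propagation rule over a small dependency graph. The component names, intrinsic levels and the propagation rule are illustrative assumptions, not the published QualTD definitions.

```python
# Illustrative sketch: qualitative availability-risk propagation over a
# dependency graph (names and the "worst level wins" rule are assumptions).

LEVELS = ["low", "medium", "high"]            # ordinal qualitative scale

def combine(levels):
    """Return the worst (highest) qualitative level among the inputs."""
    return max(levels, key=LEVELS.index) if levels else "low"

# component -> components it depends on
depends_on = {
    "portal":       ["auth_service", "app_server"],
    "auth_service": ["directory"],
    "app_server":   ["database"],
    "directory":    [],
    "database":     [],
}

# qualitative likelihood that each component is unavailable on its own
intrinsic = {"portal": "low", "auth_service": "low", "app_server": "low",
             "directory": "medium", "database": "high"}

def availability_risk(component, seen=None):
    """Combine a component's own level with the levels propagated
    from everything it depends on."""
    seen = seen or set()
    if component in seen:                     # guard against cycles
        return intrinsic[component]
    seen = seen | {component}
    upstream = [availability_risk(d, seen) for d in depends_on[component]]
    return combine(upstream + [intrinsic[component]])

for c in depends_on:
    print(c, "->", availability_risk(c))
```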

    Investigation into reliability based methods to include risk of failure in life cycle cost analysis of reinforced concrete bridge rehabilitation

    Reliability-based life cycle cost analysis is becoming an important consideration in decision-making on bridge design, maintenance and rehabilitation. An optimal solution should ensure reliability during the service life while minimizing the life cycle cost. Risk of failure is an important component of whole-of-life cycle cost for both new and existing structures. The research presented here aimed to develop a methodology for evaluating the risk of failure of reinforced concrete bridges to assist decision-making on rehabilitation. The proposed methodology combines fault tree analysis and probabilistic time-dependent reliability analysis to achieve both qualitative and quantitative assessment of the risk of failure. Various uncertainties are considered, including the degradation of resistance due to the initiation of a particular distress mechanism, increasing load effects, changes in resistance as a result of rehabilitation, environmental variables, material properties and model errors. The proposed methodology was shown to provide users with two alternative approaches, qualitative or quantitative assessment of the risk of failure, depending on the availability of detailed data. This work will assist managers of bridge infrastructure in making decisions on the optimization of rehabilitation options for aging bridges.
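    The quantitative branch of the approach rests on probabilistic time-dependent reliability: the probability that a degrading resistance falls below the load effect grows with time. The sketch below illustrates that idea with a simple Monte Carlo estimate; the distributions and the linear degradation rate are illustrative assumptions, not calibrated bridge data.

```python
# Illustrative sketch: time-dependent probability of failure for a degrading
# member, estimated by Monte Carlo (all parameters are assumed values).
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                      # Monte Carlo samples

R0 = rng.normal(100.0, 10.0, N)  # initial resistance (e.g. kN*m)
S  = rng.gumbel(55.0, 6.0, N)    # annual maximum load effect

def prob_failure(t_years, degradation_per_year=0.8):
    """P(failure) at time t, with resistance degrading linearly."""
    R_t = R0 - degradation_per_year * t_years
    return np.mean(R_t <= S)

for t in (0, 10, 25, 50):
    print(f"t = {t:2d} yr  Pf = {prob_failure(t):.4f}")
```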

    Design and initial validation of the Raster method for telecom service availability risk assessment

    Crisis organisations depend on telecommunication services; unavailability of these services reduces the effectiveness of crisis response. Crisis organisations should therefore be aware of availability risks and need a suitable risk assessment method. Such a method needs to account for the exceptional circumstances in which crisis organisations operate and for the commercial structure of modern telecom services. We found that existing risk assessment methods are unsuitable for this problem domain. As a result, crisis organisations either perform no risk assessment at all, simply trust their supplier, or rely on service level agreements, which are not meaningful during crisis situations. We have therefore developed a new risk assessment method, which we call RASTER. We have tested RASTER in a case study at the crisis organisation of a government agency, and improved the method based on the analysis of the case results. Our initial validation suggests that the method can yield practical results.

    Assessing Value for Money in PFI Projects: A Comparative Study of Practices in the UK and Italy

    The Value for Money (VFM) assessment is a critical process in procuring a Private Finance Initiative (PFI), and it requires accurate ex-ante performance measurement methodologies. The British Government has set new requirements for evaluating VFM through a new assessment model composed of three main stages: programme level assessment, project level assessment, and procurement level assessment. The objective of the new VFM assessment model is to address the costly, inflexible and opaque side of PFIs in order to deliver cost-effective procurement and improve the quality of public service provision. A theoretical analysis of the implementation of PFI shows the UK as the leading user of this form of procurement in Europe, with Italy second. However, there is a disparity in how PFI is actually implemented in the two countries, and especially in how VFM is assessed. Aiming to identify the best practices of this evaluation process for the most achievable VFM, this paper presents the new VFM assessment model of the UK and a suggestion for its potential application to the Italian PFI procurement process to improve outcomes therein.

    Quantify resilience enhancement of UTS through exploiting connected community and internet of everything emerging technologies

    This work aims at investigating and quantifying the enhancement in Urban Transport System (UTS) resilience enabled by the adoption of emerging technologies such as the Internet of Everything (IoE) and the new trend of the Connected Community (CC). A conceptual extension of the Functional Resonance Analysis Method (FRAM) and its formalization have been proposed and used to model UTS complexity. The scope is to identify the system functions and their interdependencies, with a particular focus on those that relate to and impact people and communities. Network analysis techniques have been applied to the FRAM model to identify and estimate the most critical community-related functions. The notion of Variability Rate (VR) has been defined as the amount of output variability generated by an upstream function that can be tolerated/absorbed by a downstream function without significantly increasing its subsequent output variability. A fuzzy-based quantification of the VR from expert judgment has been developed for use when quantitative data are not available. Our approach has been applied to a critical scenario (water bomb/flash flooding) in two cases: with and without CC and IoE implemented in the UTS. The results show a remarkable VR enhancement when CC and IoE are deployed.
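    The fuzzy-based quantification turns expert linguistic judgments into a crisp Variability Rate when measured data are missing. A minimal sketch of that step is given below; the triangular membership functions, linguistic labels and centroid defuzzification are illustrative assumptions, not the formalization used in the paper.

```python
# Illustrative sketch: crisp Variability Rate (VR) from expert judgments
# using triangular fuzzy numbers (labels and scales are assumptions).

# triangular fuzzy numbers (a, b, c) for each linguistic label on a 0-1 scale
FUZZY = {
    "low":    (0.0, 0.0, 0.4),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.6, 1.0, 1.0),
}

def centroid(tri):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + b + c) / 3.0

def variability_rate(judgments):
    """Average the defuzzified expert judgments into a crisp VR."""
    crisp = [centroid(FUZZY[j]) for j in judgments]
    return sum(crisp) / len(crisp)

# three experts judge how much upstream variability a "traffic control"
# function can absorb before its own output starts to vary
print(round(variability_rate(["medium", "high", "medium"]), 2))  # ~0.62
```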

    Towards Optimal IT Availability Planning: Methods and Tools

    The availability of an organisation's IT infrastructure is of vital importance for supporting business activities. IT outages are a source of competitive liability, chipping away at a company's financial performance and reputation. To achieve the maximum possible IT availability within the available budget, organisations need to carry out a set of analysis activities to prioritise efforts and take decisions based on business needs. This set of analysis activities is called IT availability planning.

    Most (large) organisations address IT availability planning from one or more of three main angles: information risk management, business continuity and service level management. Information risk management consists of identifying, analysing, evaluating and mitigating the risks that can affect the information processed by an organisation and the information-processing (IT) systems. Business continuity consists of creating a logistic plan, called a business continuity plan, which contains the procedures and all the information needed to recover an organisation's critical processes after a major disruption. Service level management mainly consists of organising, documenting and ensuring a certain quality level (e.g. the availability level) for the services offered by IT systems to the business units of an organisation.

    Several standard documents provide guidelines for setting up the processes of risk, business continuity and service level management. However, to be as generally applicable as possible, these standards do not include implementation details. Consequently, to do IT availability planning each organisation needs to develop the concrete techniques that suit its needs. To be of practical use, these techniques must be accurate enough to deal with the increasing complexity of IT infrastructures, yet remain feasible within the budget available to organisations. As we argue in this dissertation, the basic approaches currently adopted by organisations are feasible but often lack accuracy.

    In this thesis we propose a graph-based framework for modelling the availability dependencies of the components of an IT infrastructure, and we develop techniques based on this framework to support availability planning. In more detail, we present:
    1. the Time Dependency model, which is meant to support IT managers in selecting a cost-optimal set of countermeasures to mitigate availability-related IT risks;
    2. the Qualitative Time Dependency model, which is meant to be used to systematically assess availability-related IT risks in combination with existing risk assessment methods;
    3. the Time Dependency and Recovery model, which provides IT managers with a tool to set or validate the recovery time objectives of the components of an IT architecture, which are then used to create the IT-related part of a business continuity plan;
    4. A2THOS, to verify whether availability SLAs regulating the provisioning of IT services between business units of the same organisation can be respected when the implementation of these services is partially outsourced to external companies, and to choose outsourcing offers accordingly.

    We ran case studies with the data of a primary insurance company and a large multinational company to test the proposed techniques. The results indicate that organisations such as insurance or manufacturing companies, which use IT to support their business, can benefit from optimising the availability of their IT infrastructure: it is possible to develop techniques that support IT availability planning while remaining feasible within budget. The framework we propose shows that the structure of the IT architecture can be practically employed with such techniques to increase their accuracy over current practice. A dependency-graph sketch of the recovery-time validation idea follows below.
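    The sketch below illustrates, in the spirit of the Time Dependency and Recovery model, how recovery time objectives (RTOs) can be validated against the dependency structure of an IT architecture. The composition rule (a component can only be restored once its dependencies are back, so its recovery time is its own repair time plus that of its slowest dependency) and all figures are illustrative assumptions, not the thesis's formal model.

```python
# Illustrative sketch: validating RTOs over an IT dependency graph
# (component names, times and the composition rule are assumptions).

depends_on = {
    "claims_process": ["web_app"],
    "web_app":        ["db", "auth"],
    "db":             [],
    "auth":           [],
}
repair_time = {"claims_process": 1, "web_app": 2, "db": 6, "auth": 3}  # hours
rto         = {"claims_process": 8, "web_app": 6, "db": 6, "auth": 4}  # hours

def recovery_time(node):
    """Own repair time plus the slowest dependency's recovery time."""
    deps = depends_on[node]
    return repair_time[node] + (max(recovery_time(d) for d in deps) if deps else 0)

for node in depends_on:
    rt = recovery_time(node)
    status = "OK" if rt <= rto[node] else "RTO missed"
    print(f"{node:15s} recovery = {rt} h  (RTO {rto[node]} h)  {status}")
```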

    Bringing self assessment home: repository profiling and key lines of enquiry within DRAMBORA

    Digital repositories are a manifestation of complex organizational, financial, legal, technological, procedural, and political interrelationships. Accompanying each of these are innate uncertainties, exacerbated by the relative immaturity of understanding prevalent within the digital preservation domain. Recent efforts have sought to identify core characteristics that must be demonstrable by successful digital repositories, expressed in the form of check-list documents intended to support the processes of repository accreditation and certification. In isolation, though, the available guidelines lack practical applicability; confusion over evidential requirements and difficulties associated with the diversity that exists among repositories (in terms of mandate, available resources, supported content and legal context) are particularly problematic. A gap exists between the available criteria and the ways and extent to which conformity can be demonstrated. The Digital Repository Audit Method Based on Risk Assessment (DRAMBORA) is a methodology for undertaking repository self-assessment, developed jointly by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE). DRAMBORA requires repositories to expose their organization, policies and infrastructures to rigorous scrutiny through a series of highly structured exercises, enabling them to build a comprehensive registry of their most pertinent risks, arranged into a structure that facilitates effective management. It draws on experience accumulated throughout 18 evaluative pilot assessments undertaken in an internationally diverse selection of repositories, digital libraries and data centres (including institutions and services such as the UK National Digital Archive of Datasets, the National Archives of Scotland, Gallica at the National Library of France and the CERN Document Server). Other organizations, such as the British Library, have been using sections of DRAMBORA within their own risk assessment procedures. Despite the attractive benefits of a bottom-up approach, there are implicit challenges posed by neglecting a more objective perspective. Following a sustained period of pilot audits undertaken by DPE, DCC and the DELOS Digital Preservation Cluster aimed at evaluating DRAMBORA, it was acknowledged that, had the respective project members not been present to facilitate each assessment and contribute their objective, external perspectives, the results might have been less useful. Consequently, DRAMBORA has developed in a number of ways, to enable knowledge transfer from the responses of comparable repositories, and to incorporate more opportunities for structured question sets, or key lines of enquiry, that provoke more comprehensive awareness of the applicability of particular threats and opportunities.
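    The core artefact of a DRAMBORA-style self-assessment is a risk registry that can be ranked to direct management attention. The sketch below shows one possible shape for such a registry entry; the field names and the 1-6 probability/impact scoring scale are illustrative assumptions, not the DRAMBORA specification.

```python
# Illustrative sketch: a tiny self-assessment risk registry with a
# severity ranking (fields and scales are assumed, not DRAMBORA's own).
from dataclasses import dataclass

@dataclass
class Risk:
    identifier: str
    description: str
    owner: str
    probability: int   # e.g. 1 (rare) .. 6 (frequent)
    impact: int        # e.g. 1 (negligible) .. 6 (catastrophic)

    @property
    def severity(self) -> int:
        return self.probability * self.impact

registry = [
    Risk("R01", "Loss of funding for storage refresh", "Director", 3, 6),
    Risk("R02", "Obsolescence of ingest file formats", "Curator", 4, 4),
    Risk("R03", "Departure of key technical staff", "Manager", 5, 3),
]

# rank risks so attention goes to the most severe first
for r in sorted(registry, key=lambda r: r.severity, reverse=True):
    print(f"{r.identifier}  severity={r.severity:2d}  {r.description}")
```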