8,333 research outputs found

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    This paper presents the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students of the Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster and to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997 and affected by a flood on 25 October 2011. Following other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired with typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or with non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.
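
    The abstract above describes sharing field-collected geospatial data through a WebGIS platform. As a purely illustrative aid, the minimal Python sketch below shows one common way such survey records could be packaged as GeoJSON, a format most WebGIS platforms ingest directly; the file name, coordinates, and attribute fields are invented for the example and are not taken from the project.

```python
import json

# Hypothetical field records: GNSS-surveyed damage observations
# (coordinates and attributes are illustrative, not project data).
survey_points = [
    {"lon": 9.6818, "lat": 44.1461, "feature": "retaining wall", "damage": "partial collapse"},
    {"lon": 9.6832, "lat": 44.1475, "feature": "footpath", "damage": "debris cover"},
]

def to_geojson(points):
    """Package surveyed points as a GeoJSON FeatureCollection,
    which a WebGIS layer can usually load directly."""
    features = []
    for p in points:
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [p["lon"], p["lat"]]},
            "properties": {k: v for k, v in p.items() if k not in ("lon", "lat")},
        })
    return {"type": "FeatureCollection", "features": features}

# Write the layer to a file that could be uploaded to a WebGIS platform.
with open("vernazza_survey.geojson", "w") as f:
    json.dump(to_geojson(survey_points), f, indent=2)
```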

    Designing business continuity response

    Companies are increasingly confronted with fast-changing risk situations, leading to substantial challenges for business continuity and resilience professionals. Furthermore, growing availability requirements and the dependence on providers and suppliers demand an effective and efficient response to disruptions and interruptions in order to protect an organization's brand, reputation, and financial objectives. As preparing for 'expecting the unexpected' can be very costly, it is essential to highlight the benefits and advantages brought by proper business continuity planning. This thesis contributes to current research by presenting a formal approach that extends the capabilities of risk-aware business process management, which bridges the gap between the business process management, risk management, and business continuity management domains. The presented extension enables the consideration of resource allocation aspects within risk-aware business process modeling and simulation. Through this extension it is possible to evaluate the effects of workarounds and resource re-allocations, which is a crucial part of business continuity plans. The results of this analysis serve as significant input for business continuity planning. In order to test the feasibility of the approach, we implemented a prototype of the formal model using Simulink. Additionally, we introduce a business continuity meta-model capable of capturing essential business continuity requirements, implemented as a project within the OpenModels Initiative.
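
    The thesis abstract above mentions a Simulink prototype for analyzing the effects of workarounds and resource re-allocations. The short Python sketch below is not that prototype; it is a much simplified stand-in illustrating the underlying idea of comparing expected service loss with and without a workaround, where the disruption distribution, switch-over delay, and residual capacity are all assumed numbers.

```python
import random

random.seed(42)

def lost_service_hours(disruption_hours, workaround=None):
    """Service hours lost during one disruption.
    With a workaround, a degraded process resumes after a switch-over delay."""
    if workaround is None:
        return disruption_hours
    switch_delay, residual_capacity = workaround
    degraded_hours = max(disruption_hours - switch_delay, 0)
    return min(switch_delay, disruption_hours) + degraded_hours * (1 - residual_capacity)

def expected_loss(workaround=None, trials=10_000):
    # Disruption length modelled as exponential with an assumed 8 h mean,
    # as such an input might come out of business continuity risk analysis.
    return sum(
        lost_service_hours(random.expovariate(1 / 8.0), workaround)
        for _ in range(trials)
    ) / trials

print("No workaround:   %.1f h lost on average" % expected_loss())
print("With workaround: %.1f h lost on average" % expected_loss(workaround=(1.0, 0.6)))
```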

    Business-driven IT Management

    Business-driven IT management (BDIM) aims at ensuring successful alignment of business and IT through a thorough understanding of the impact of IT on business results, and vice versa. In this dissertation, we review the state of the art of BDIM research and position our intended contribution within the BDIM research space along the dimensions of decision support (as opposed to automation) and its application to IT service management processes. Within these research dimensions, we advance the state of the art by 1) contributing a decision-theoretical framework for BDIM and 2) presenting two novel BDIM solutions in the IT service management space. First, we present a simple BDIM solution for prioritizing incidents, which can be used as a template for creating BDIM solutions in other IT service management processes. Then, we present a more comprehensive solution for optimizing the business-related performance of an IT support organization in dealing with incidents. Our decision-theoretical framework and models for BDIM bring the concepts of business impact and risk to the fore, and are able to cope with both monetizable and intangible aspects of business impact. We start from a constructive and quantitative re-definition of terms that are widely used in IT service management but for which a rigorous definition was never given: business impact, cost, benefit, risk, and urgency. On top of that, we build a coherent methodology for linking IT-level metrics with business-level metrics and make progress toward solving the business-IT alignment problem. Our methodology uses a constructive and quantitative definition of alignment with business objectives, taken as the likelihood – to the best of one's knowledge – that such objectives will be met. This is used as the basis for building an engine for business impact calculation that is in fact an alignment computation engine. We show a sample BDIM solution for incident prioritization built using the decision-theoretical framework, the methodology, and the tools developed, and we show how it could serve as a blueprint for BDIM decision-support solutions in other IT service management processes, such as change management. However, the full power of BDIM is best understood through the second, fully fledged BDIM application presented in this thesis. While incident management is again used as the scenario, the main contribution of this second application is a solution for business-driven organizational redesign to optimize the performance of an IT support organization. The solution is quite rich and features components that orchestrate advanced techniques in visualization, simulation, data mining, and operations research. We show that the techniques we use – in particular the simulation of an IT organization enacting the incident management process – bring considerable benefits both when performance is measured in terms of traditional IT metrics (mean time to resolution of incidents), and even more so when business impact metrics are brought into the picture, thereby providing a justification for investing time and effort in creating BDIM solutions. In terms of impact, the work presented in this thesis produced about twenty conference and journal publications and has so far resulted in three patent applications. Moreover, this work has greatly influenced the design and implementation of the Business Impact Optimization module of HP DecisionCenterℱ, a leading commercial software product for IT optimization whose core has been re-designed to work as described here.
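
    Since the dissertation abstract above centers on prioritizing incidents by business impact, the following Python sketch illustrates the general idea of ranking incidents by an expected hourly impact that mixes direct revenue loss with a risk-weighted SLA penalty. The class, field names, penalty value, and figures are assumptions made for the example and do not reproduce the thesis' impact model.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    ident: str
    affected_service: str
    revenue_loss_per_hour: float   # monetizable impact (assumed figures)
    sla_breach_risk: float         # assumed probability of breaching the SLA if delayed

def business_impact(incident, sla_penalty=5_000.0):
    """Expected hourly business impact: direct revenue loss plus
    a risk-weighted SLA penalty. A simple stand-in for an impact engine."""
    return incident.revenue_loss_per_hour + incident.sla_breach_risk * sla_penalty

incidents = [
    Incident("INC-101", "web shop checkout", 12_000.0, 0.30),
    Incident("INC-102", "internal wiki", 50.0, 0.05),
    Incident("INC-103", "payment gateway", 8_000.0, 0.80),
]

# Work the queue in order of decreasing expected business impact.
for inc in sorted(incidents, key=business_impact, reverse=True):
    print(f"{inc.ident:8s} {inc.affected_service:20s} impact ~ {business_impact(inc):10,.0f} per hour")
```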

    Pervasive computing reference architecture from a software engineering perspective (PervCompRA-SE)

    Pervasive computing (PervComp) is one of the most challenging research topics nowadays. Its complexity exceeds that of the outdated mainframe and client-server computation models. Its systems are highly volatile, mobile, and resource-limited, and they stream a lot of data from different sensors. In spite of these challenges, the domain entails, by default, a lengthy list of desired quality features such as context sensitivity, adaptable behavior, concurrency, service omnipresence, and invisibility. Fortunately, device manufacturers have improved the enabling technology, such as sensors, network bandwidth, and batteries, to pave the road for pervasive systems with high capabilities. On the other hand, this domain has gained an enormous amount of attention from researchers ever since it was first introduced in the early 1990s. Yet such systems are still classified as visionary systems that are expected to be woven into people's daily lives. At present, PervComp systems still have no unified architecture, have a limited scope of context-sensitivity and adaptability, and many essential quality features are insufficiently addressed in PervComp architectures. The reference architecture (RA) that we call PervCompRA-SE in this research provides solutions for these problems through a comprehensive and innovative pair of business and technical architectural reference models. Both models were based on deep analytical activities and were evaluated using different qualitative and quantitative methods. In this thesis we surveyed a wide range of research projects in PervComp in various subdomain areas to specify our methodological approach and identify the quality features in the PervComp domain that are most commonly found in these areas. The thesis presents a novel approach that utilizes theories from sociology, psychology, and process engineering. It analyzes the business and architectural problems in two separate chapters covering the business reference architecture (BRA) and the technical reference architecture (TRA), and introduces the solutions for these problems in the same chapters. We devised an associated comprehensive ontology with semantic meanings and measurement scales. Both the BRA and TRA were validated throughout the course of the research work and evaluated as a whole using traceability, benchmark, survey, and simulation methods. The thesis introduces a new reference architecture in the PervComp domain which was developed using a novel requirements engineering method. It also introduces a novel statistical method for tradeoff analysis and conflict resolution between requirements. The adaptation of activity theory, human perception theory, and process re-engineering methods to develop the BRA and the TRA proved to be very successful. Our approach of reusing the ontological dictionary to monitor system performance was also innovative. Finally, the thesis evaluation methods represent a role model for researchers on how to use both qualitative and quantitative methods to evaluate a reference architecture. Our results show that the requirements engineering process, along with the trade-off analysis, was very important to delivering PervCompRA-SE. We discovered that the invisibility feature, which was one of the envisioned quality features for PervComp, is demolished, and that the qualitative evaluation methods were just as important as the quantitative ones in order to recognize the overall quality of the RA by machines as well as by human beings.
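
    The abstract above mentions a statistical method for tradeoff analysis and conflict resolution between requirements. The small Python sketch below shows only a generic flavor of such an analysis: stakeholder ratings are averaged into priorities, and assumed conflicting pairs are resolved in favor of the higher-priority feature. The features, ratings, and conflict pairs are illustrative assumptions, not the method developed in the thesis.

```python
# Illustrative quality features and stakeholder ratings (1-5); values assumed.
ratings = {
    "context sensitivity":  [5, 4, 5],
    "adaptable behavior":   [4, 4, 3],
    "invisibility":         [3, 2, 2],
    "service omnipresence": [4, 5, 4],
}

# Assumed pairwise conflicts: features that tend to pull an architecture
# in opposite directions (e.g. richer adaptation vs. staying invisible).
conflicts = [("adaptable behavior", "invisibility"),
             ("service omnipresence", "invisibility")]

# Priority of each feature as the mean stakeholder rating.
priority = {feature: sum(r) / len(r) for feature, r in ratings.items()}

print("Feature priorities (mean stakeholder rating):")
for feature, score in sorted(priority.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {feature:22s} {score:.2f}")

print("\nConflicting pairs (lower-priority feature yields):")
for a, b in conflicts:
    keep, give_way = (a, b) if priority[a] >= priority[b] else (b, a)
    print(f"  {a} vs {b} -> favour '{keep}' over '{give_way}'")
```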

    High Reliability Organization Theory As An Input To Manage Operational Risk In Project Management

    This paper demonstrates how the adoption of High Reliability Organization Theory (HROT) delivers value to mainstream organizations. It presents the terminology surrounding High Reliability Organizations (HROs) and how researchers define the characteristics and core principles of such organizations. These organizations have been well studied by professionals from numerous disciplines, allowing us to understand what makes an HRO successful. This paper adds to that work by exploring how HROT may be applied to mainstream organizations and elaborates on the importance of mindfulness, specifically as it relates to sensitivity to operations. The findings are synthesized through an actual project that successfully leveraged HROT principles to improve reliability and address operational risk. The paper concludes that there are considerable opportunities to exploit HROT in project, program, and process management to achieve high reliability and value in a non-HRO.

    Towards risk-aware communications networking


    Strategic Roadmaps and Implementation Actions for ICT in Construction


    A Systemic Approach to Next Generation Infrastructure Data Elicitation and Planning Using Serious Gaming Methods

    Infrastructure systems are vital to the functioning of our society and economy. However, these systems are increasingly complex and more interdependent than ever, making them difficult to manage. In order to respond to increasing demand, environmental concerns, and natural and man-made threats, infrastructure systems have to adapt and transform. Traditional engineering design approaches and planning tools have proven inadequate for planning and managing these complex socio-technical system transitions. The design and implementation of next-generation infrastructure systems require holistic methodologies encompassing organizational and societal aspects in addition to technical factors. To that end, a serious-gaming-based risk assessment methodology is developed to assist infrastructure data elicitation and planning. The methodology combines the use of various models, commercial off-the-shelf solutions, and a gaming approach to aggregate the inputs of various subject matter experts (SMEs) to predict future system characteristics. The serious-gaming-based approach enables experts to obtain a thorough understanding of the complexity and interdependency of the system while offering a platform to experiment with various strategies and scenarios. To demonstrate its abilities, the methodology was applied to the National Airspace System (NAS) overhaul and its transformation to the Next Generation Air Transportation System (NextGen). The implemented methodology yielded a comprehensive safety assessment and data generation mechanism, embracing the social and technical aspects of the NAS transformation for the next 15 years.
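
    The abstract above describes aggregating subject matter expert (SME) inputs from game sessions to predict future system characteristics. As a hedged illustration of one simple aggregation scheme, the Python sketch below computes a confidence-weighted mean of expert estimates per planning horizon; the years, growth figures, and confidence values are invented for the example and are not drawn from the NAS/NextGen study.

```python
# Hypothetical SME estimates elicited in game rounds: predicted traffic
# growth (percent per year) paired with a self-reported confidence in [0, 1].
estimates = {
    "2025": [(2.5, 0.8), (3.0, 0.6), (1.8, 0.9)],
    "2030": [(4.0, 0.7), (3.2, 0.5), (2.9, 0.8)],
}

def weighted_forecast(samples):
    """Confidence-weighted mean of expert estimates for one planning horizon."""
    total_weight = sum(conf for _, conf in samples)
    return sum(value * conf for value, conf in samples) / total_weight

for year, samples in estimates.items():
    print(f"{year}: aggregated growth estimate {weighted_forecast(samples):.2f} % per year")
```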