
    Understanding the influence of psychology and vicarious experience on property flood resilience choices

    There is an acknowledged need to improve the resilience of those at risk of flooding in the UK. The majority of the at-risk population do not actively adopt mitigation measures even when they have experienced multiple flood events. If uptake of resilience methods is not increased, the physical and financial impacts will continue to escalate, as will psychological harm, with wider implications for health care costs. Previous studies largely focus upon explicating the barriers to resilient adaptation; a hitherto under-researched aspect is an understanding of the driving factors that can elicit active mitigation in the household sector, other than repeated inundation of the home. This research builds upon existing behavioural theories to develop a conceptual framework specific to the needs of the UK flood risk management context. The framework was explored via a survey of members of community flood groups; the topics covered included details of a wide range of flood mitigation measures adopted, together with the precise nature and extent of flood experiences. The survey instrument incorporated two psychometric tests measuring personality factors (self-efficacy and locus of control) which have been implicated in a range of hazard preparedness behaviours, but have not previously been subjected to formal assessment in this context in the UK. The results yielded new insight into the link between preparedness behaviours, personality traits and different types of flood experience. In contrast to previous UK research, the majority of the respondents (92%) had taken one or more mitigation actions in addition to joining a flood group. Furthermore, a very high proportion of respondents in the sample had begun to take action when lacking direct flood experience (26%) or having had only vicarious (or other indirect forms of) flood exposure (36%). Respondents scored significantly higher than the general adult population for general self-efficacy (GSE) (
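As a purely illustrative aside (not the study's data or analysis), a comparison like the one reported for GSE is typically made with a one-sample t-test of the respondents' scores against a published adult population norm; the sketch below uses made-up scores and a placeholder norm value.

```python
# Illustrative only: not the study's data or analysis. A one-sample t-test is a
# common way to compare a survey sample's General Self-Efficacy (GSE) scores with
# a published adult population norm. Scores and the norm below are placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gse_scores = rng.normal(loc=32.0, scale=4.0, size=120)  # hypothetical respondent scores (10-40 GSE scale)
population_norm = 29.5                                  # placeholder norm for the general adult population

t_stat, p_value = stats.ttest_1samp(gse_scores, population_norm)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")           # a small p-value indicates the sample mean differs from the norm
```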

    For the want of a nail: the Western Allies' quest to synchronize maneuver and logistics during operations Torch and Overlord

    Doctor of Philosophy, Department of History, Donald J. Mrozek. Understanding why the Western Allies failed to penetrate the western border of Germany in the fall of 1944 is a longer and more involved story than most histories of the topic imply. Allied performance in the European Theater of Operations during World War II is directly linked to their performance in the Mediterranean Theater of Operations (MTO) and, before that, in the North African theater. This study focuses on how the Western Allies conducted campaigns: how they ran combined headquarters in order to plan and supervise joint, theater-level operations, and how those activities changed over time as the key leaders involved gained combat experience. After looking at the efforts of the Allies over this longer window of time, a new conclusion about why the pursuit phase of Overlord failed to penetrate the Westwall becomes clear. LTG J. C. H. Lee's Communications Zone (COMZ) was unprepared to fulfill the logistical requirements of the Allied fall campaign in France in 1944, contributing directly to disappointment over the outcome of the campaign. For those who expected two years of combat experience to result in more effective performance in subsequent action, the failure was surely surprising. This study examines why COMZ could not manage the theater's logistics and distribution system, and how Supreme Headquarters, Allied Expeditionary Force (SHAEF) failed to correct this shortcoming as it sought to synchronize joint operations with logistical requirements and the limitations they imposed. By contrasting the operational methods used by the United States (U.S.) and United Kingdom (U.K.) and by looking at how Torch and Overlord unfolded, this study reaches four conclusions. First, COMZ was woefully unprepared to execute its combat mission in August 1944, and its failures lengthened the war considerably. Second, this failure was directly linked to the U.S. Army's inability to integrate lessons learned at European Theater of Operations, U.S. Army (ETOUSA). Third, the work demonstrates how critical the integration of maneuver and sustainment is at the operational level of war and how U.S. doctrine and practice predating the war made this difficult to recognize. Finally, successful command at the theater and operational level relies upon consensus and cooperation, unlike the more directive nature of tactical control. COMZ and SHAEF were not prepared to fulfill their roles in August and September because the U.S. experience in World War One, and the doctrine that emerged from that experience, resulted in the adoption of a model for theater command that was eventually rejected in 1944. Although useful lessons were gained during Torch and implemented at Allied Force Headquarters (AFHQ), ETOUSA and Lee's Service of Supply (SOS) did not integrate them. Those lessons were obscured when key personalities rotated or the org chart changed; it took time for AFHQ, North African Theater of Operations, U.S. Army (NATOUSA), and the functional components to gel. ETOUSA and SOS faced different challenges, were busy with Bolero, and suffered through personnel turnover and restrictions of their own. A final round of reorganization swept through the U.K. over the winter of 1943 and 1944, when much of the command team relocated from the MTO to London. These organizational changes left in question who exactly was in charge of the various aspects of the sustainment mission during Overlord.
Lee proved less effective than his peers when it came to producing results that were valued by the operational commands, and SHAEF and the army groups gradually poached ownership of planning and integrating logistical support from SOS/COMZ as a result. Lee held on to running the communications zone in France, but he did not properly prepare for the mission. By the time SHAEF realized COMZ did not know how to do its job, it was too late to save the fall campaign. Just how bad things had gotten by October and November was masked by poor recordkeeping during the pursuit, confusion over what was really happening within the subordinate commands, and a narrative advanced by Eisenhower in January 1945 designed to paint a more flattering picture of recent events. Eisenhower manipulated facts in a report submitted to the Combined Chiefs of Staff in order to justify his decisions in France, dismiss any reported “mistakes” made during the fall, and ensure he retained personal control over the three army groups rather than reappointing a subordinate overall ground commander. In the process, Eisenhower initiated the cover-up that would make it so difficult to establish why the pursuit broke down.

    A risk management framework for downstream petroleum product transportation and distribution in Nigeria

    PhD Thesis. In Nigeria, downstream transportation and distribution of petroleum products is mainly done using pipelines and truck tanker transport systems. These systems have been linked to substantial accidents/incidents, with consequences for human safety and the environment. This thesis proposes a risk management framework for the pipeline and road truck tanker transport systems. The study is based on a preliminary review of the entire downstream petroleum industry regulations, which identifies key legislation and stakeholder interests within the context of accident prevention and response. This was then integrated into a tailored mixed-method risk assessment of the pipeline and truck transport systems. The risk assessment made use of accident reports and inputs from semi-structured interviews and focus group discussions with relevant stakeholder organisations. For the pipeline systems, 96.46% of failures were attributed to the activities of saboteurs and third-party interference. The failure frequency of the pipeline (per km-year) was found to be very high (0.351) when compared to failure frequencies in the UK (0.23×10⁻³) and the US (0.135×10⁻³). It was discovered that limitations in pipeline legislation and national vested interests limit regulatory and operational capabilities. As a result, the operator lacks the human and technical capability for pipeline integrity management and surveillance. Similarly, the findings from the truck system revealed that 79% of accidents are due to human factors. The tanker regulators have no structured approach to regulating petroleum road trucking, and operating companies adhere poorly to safety standards. From an accident/incident response perspective, it was discovered that local response capability is lacking and that the vulnerability of affected communities is increased by poor knowledge of the hazards associated with petroleum products. A framework was proposed for each of the transport systems. For the pipeline system, the framework leverages the powers of the Petroleum Minister to provide best-practice pipeline risk management directives. It also proposes strategies which combine the use of social tactics for engaging host communities in pipeline surveillance with technical tactics to enhance pipeline integrity. For the truck risk management framework, control points for the prevention of truck accidents were identified. It adheres to principles of commitment to change and regulatory/peer collaboration for the deployment of management actions. Suitable policy recommendations were made based on the regulatory and operational interests of stakeholder organisations. Petroleum Technology Development Fund (PTDF)
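For readers unfamiliar with the per km-year measure quoted above, the short sketch below shows how such a failure frequency is computed (incidents divided by pipeline length multiplied by years observed) and compared with the UK and US benchmarks cited in the abstract. The incident count, pipeline length, and observation period are illustrative placeholders, not the thesis's data.

```python
# Illustrative only: the per km-year failure frequency quoted above is incidents
# divided by (pipeline length x years observed). The incident count, length, and
# period below are placeholders, not the thesis's dataset; the UK and US values
# are the benchmarks cited in the abstract.

def failures_per_km_year(incidents: int, length_km: float, years: float) -> float:
    return incidents / (length_km * years)

estimate = failures_per_km_year(incidents=3_510, length_km=5_000, years=2.0)
uk_benchmark, us_benchmark = 0.23e-3, 0.135e-3

print(f"estimated frequency: {estimate:.3f} per km-year "
      f"(UK benchmark {uk_benchmark:.2e}, US benchmark {us_benchmark:.2e})")
```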

    Supporting feature-level software maintenance

    Software maintenance is the process of modifying a software system to fix defects, improve performance, add new functionality, or adapt the system to a new environment. A maintenance task is often initiated by a bug report or a request for new functionality. Bug reports typically describe problems with incorrect behaviors or functionalities. These behaviors or functionalities are known as features. Even in very well-designed systems, the source code that implements features is often not completely modularized. The delocalized nature of features makes maintaining them challenging. Since maintenance tasks are expressed in terms of features, the goal of this dissertation is to support software maintenance at the feature level. We focus on two tasks in particular: feature location and impact analysis via feature coupling.
    Feature location is the process of identifying the source code that implements a feature, and it is an essential first step to any maintenance task. There are many existing techniques for feature location that incorporate various types of analyses such as static, dynamic, and textual. In this dissertation, we recognize the advantages of leveraging several types of analyses and introduce a new approach to feature location based on combining dynamic analysis, textual analysis, and web mining algorithms applied to software. The use of web mining for feature location is a novel contribution, and we show that our new techniques based on web mining are significantly more effective than the current state of the art.
    After using feature location to identify a feature's source code, maintenance can be completed on that feature. Impact analysis should then be performed to revalidate the system and determine which other features may have been affected by the modifications. We define three feature coupling metrics that capture the relationship between features based on structural information, textual information, and their combination. Our novel feature coupling metrics can be used for impact analysis to quantify the strength of coupling between pairs of features. We performed three empirical studies on open-source software systems to assess the feature coupling metrics and established three major results. First, there is a moderate to strong statistically significant correlation between feature coupling and faults. Second, feature coupling can be used to correctly determine about half of the other features that would be affected by a change to a given feature. Finally, we found that the metrics align with developers' opinions about pairs of features that are actually coupled.
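As a rough sketch of how textual analysis and a web-mining algorithm can be combined for feature location (a generic illustration, not the dissertation's exact technique), the example below scores methods by cosine similarity to a query and by PageRank over a call graph, then ranks them by a weighted combination. All identifiers and the weighting parameter alpha are hypothetical.

```python
# Hypothetical sketch: rank candidate methods for a feature-location query by
# combining a textual score (cosine similarity to the query) with a web-mining
# score (PageRank over a call graph among the methods).

from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def pagerank(graph, damping=0.85, iters=50):
    """Plain power-iteration PageRank over a {node: [callees]} call graph."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, callees in graph.items():
            if callees:
                share = damping * rank[n] / len(callees)
                for c in callees:
                    new[c] += share
            else:  # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

def locate_feature(query_terms, method_terms, call_graph, alpha=0.5):
    """Rank methods by a weighted mix of textual and structural importance."""
    text = {m: cosine(query_terms, terms) for m, terms in method_terms.items()}
    structural = pagerank(call_graph)
    score = lambda m: alpha * text[m] + (1 - alpha) * structural.get(m, 0.0)
    return sorted(method_terms, key=score, reverse=True)

# Toy usage with made-up identifiers.
methods = {
    "Editor.saveFile":  ["save", "file", "write", "disk"],
    "Editor.render":    ["draw", "screen", "repaint"],
    "FileWriter.write": ["write", "file", "bytes"],
}
calls = {"Editor.saveFile": ["FileWriter.write"], "Editor.render": [], "FileWriter.write": []}
print(locate_feature(["save", "file"], methods, calls))
```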

    Machine Learning for Software Fault Detection: Issues and Possible Solutions

    Over the past years, thanks to the availability of new technologies and advanced hardware, research on artificial intelligence, more specifically machine and deep learning, has flourished. This newly found interest has led many researchers to start applying machine and deep learning techniques also in the field of software engineering, including in the domain of software quality. In this thesis, we investigate the performance of machine learning models for the detection of software faults with a threefold purpose. First, we aim to establish which models are the most suitable to use; second, we aim to find the common issues which prevent commonly used models from performing well in the detection of software faults; finally, we propose possible solutions to these issues. The analysis of the performance of the machine learning models highlighted two main issues: unbalanced data and time dependency within the data. To address these issues, we tested multiple techniques: treating the faults as anomalies and artificially generating more samples to solve the unbalanced data problem, and using deep learning models that take into account the history of each data sample to solve the time dependency issue. We found that using oversampling techniques to balance the data and deep learning models designed for time series classification substantially improves the detection of software faults. The results shed some light on the issues related to machine learning for the prediction of software faults.
These results indicate a need to consider the time dependency of the data used in software quality research, which deserves more attention from researchers. In addition, improving the detection of software faults could help practitioners improve the quality of their software. In the future, more advanced deep learning models can be investigated. This includes the use of other metrics as predictors and the use of more advanced time series analysis tools that better take into account the time dependency of the data.
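As a rough illustration of the two remedies described in this abstract, the sketch below balances a fault dataset with oversampling and trains a small recurrent classifier on each module's metric history. The dataset, metric shapes, window length, and model choice are assumptions for demonstration (relying on the third-party imbalanced-learn and TensorFlow/Keras packages), not the thesis's actual experimental setup.

```python
# Illustrative sketch of the two remedies above: balance the fault data by
# oversampling, then classify each module from the *history* of its metrics
# rather than a single snapshot. All data and settings are made up.

import numpy as np
from imblearn.over_sampling import SMOTE          # synthetic minority oversampling
from tensorflow.keras import layers, models      # small recurrent classifier

rng = np.random.default_rng(0)

# Fake dataset: 500 modules, 10 releases of history, 4 metrics per release,
# with roughly 10% faulty modules (the imbalance that hurts plain classifiers).
X = rng.normal(size=(500, 10, 4)).astype("float32")
y = (rng.random(500) < 0.10).astype(int)

# 1) Oversample the minority class. SMOTE works on 2-D data, so flatten the
#    time dimension, resample, then restore the (samples, steps, metrics) shape.
X_flat, y_bal = SMOTE(random_state=0).fit_resample(X.reshape(len(X), -1), y)
X_bal = X_flat.reshape(-1, 10, 4)

# 2) A small recurrent model that consumes the metric history of each module.
model = models.Sequential([
    layers.Input(shape=(10, 4)),
    layers.LSTM(16),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=5, batch_size=32, verbose=0)
```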

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is to push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet—in all its glory of information and misinformation—is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day.

    Model Transformation Languages with Modular Information Hiding

    Model transformations, together with models, form the principal artifacts in model-driven software development. Industrial practitioners report that transformations on larger models quickly become large and complex themselves. To alleviate the maintenance effort this entails, this thesis presents a modularity concept with explicit interfaces, complemented by software visualization and clustering techniques. All three approaches are tailored to the specific needs of the transformation domain.
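To make "explicit interfaces" for transformation modules concrete, here is a purely illustrative sketch, written in Python rather than an actual model transformation language and not following the thesis's own notation: each module declares which model element types it requires and provides, so a composed chain of modules can be checked against those declarations.

```python
# Illustrative only: a sketch of modular information hiding with explicit
# interfaces for model transformations. Each module declares the element types
# it requires and provides; the mapping itself stays hidden behind the interface.

from dataclasses import dataclass
from typing import Callable

@dataclass
class TransformationInterface:
    requires: set          # metamodel types the module reads
    provides: set          # metamodel types the module creates or updates

@dataclass
class TransformationModule:
    name: str
    interface: TransformationInterface
    run: Callable[[dict], dict]   # the model-to-model mapping, kept opaque here

def check_composition(modules, source_types):
    """Report required element types not supplied by the source model or an earlier module."""
    available = set(source_types)
    problems = []
    for m in modules:
        missing = m.interface.requires - available
        if missing:
            problems.append(f"{m.name} is missing inputs: {sorted(missing)}")
        available |= m.interface.provides
    return problems

# Toy chain: the second module's requirement is satisfied by the first module's output.
m1 = TransformationModule("Class2Table", TransformationInterface({"Class"}, {"Table"}), lambda model: model)
m2 = TransformationModule("Table2SQL", TransformationInterface({"Table"}, {"SQLStatement"}), lambda model: model)
print(check_composition([m1, m2], source_types={"Class"}))   # [] means the interfaces line up
```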

    A framework for design assurance in developing embedded systems

    Doctor of Philosophy, Department of Electrical and Computer Engineering, Stephen A. Dyer, Steven Warren. Embedded systems control nearly every device we encounter. Examples abound: appliances, scientific instruments, building environmental controls, avionics, communications, smart phones, and transportation subsystems. These embedded systems can fail in various ways: performance, safety, and meeting market needs. Design errors often cause failures in performance or safety. Market failures, particularly delayed schedule release or running over budget, arise from poor processes. Rigorous methods can significantly reduce the probability of failure. Industry has produced and widely published “best practices” that promote rigorous design and development of embedded systems. Unfortunately, 20 to 35% of development teams do not use them, which leads to operational failures or missed schedules and budgets. This dissertation increases the potential for success in designing and developing embedded systems through the following:
    1. It identifies, through literature review, the reasons and factors that cause teams to avoid best practices, which in turn contribute to development failures.
    2. It provides a framework, as a psychologically unbiased mediator, to help teams institute best practices. The framework is both straightforward to implement and use and simple to learn.
    3. It examines the feasibility of both crowdsourcing and the Delphi method to aid, through anonymous comments on proposed projects, unbiased mediation and estimation within the framework. In two separate case studies, both approaches resulted in underestimation of both required time and required effort. The wide variance in the surveys’ results from crowdsourcing indicated that crowdsourcing was not particularly useful. On the other hand, estimates and forecasts converged in both projects when the Delphi method was employed. Both approaches required six or more weeks to obtain final results.
    4. It develops a recommendation model, as a plug-in module to the framework, for the build-versus-buy decision in the design of subsystems. It takes a description of a project, compares designing a custom unit with integrating a commercial unit into the final product, and generates a recommendation for the build-versus-buy decision. An examination of 18 separate case studies assesses the sensitivity of 14 parameters in making the build-versus-buy decision when developing embedded systems. Findings are as follows: team expertise and available resources are most important; partitioning tasks and reducing interdependence are next in importance; the quality and support of commercial units are less important; and finally, premiums and product lifecycles have the least effect on the cost of development. A recommendation model incorporates the results of the sensitivity study and successfully runs on 16 separate case studies. It shows the feasibility and features of a tool that can recommend a build-or-buy decision (a sketch of such a weighted comparison appears after this list).
    5. It develops a first-order estimation model as another plug-in module to the framework. It aids in planning the development of embedded systems. It takes a description of a project and estimates the required time, required effort, and challenges associated with the project. It is simple to implement and easy to use; it can be a spreadsheet, a MATLAB model, or a webpage, and each provides output similar to that of the build-versus-buy model.
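The recommendation model itself is not given in the abstract, so the following is only an illustrative sketch of a weighted build-versus-buy comparison whose weights loosely mirror the sensitivity ordering in item 4 (team expertise and resources heaviest, premiums and product lifecycle lightest). The factor names, weights, and 0-10 scores are assumptions, not the dissertation's parameters.

```python
# Illustrative sketch only: a weighted build-versus-buy comparison. Factors and
# weights are assumptions loosely ordered to mirror the sensitivity findings
# above; they are not the dissertation's recommendation model.

FACTORS = {                      # weight: higher means more influence on the decision
    "team_expertise":       4.0,
    "available_resources":  4.0,
    "task_partitioning":    3.0,
    "low_interdependence":  3.0,
    "commercial_quality":   2.0,
    "vendor_support":       2.0,
    "price_premium":        1.0,
    "product_lifecycle":    1.0,
}

def recommend(build_scores: dict, buy_scores: dict) -> str:
    """Each score is 0..10 for how well that option satisfies the factor (default 5)."""
    build = sum(FACTORS[f] * build_scores.get(f, 5) for f in FACTORS)
    buy = sum(FACTORS[f] * buy_scores.get(f, 5) for f in FACTORS)
    choice = "BUILD" if build >= buy else "BUY"
    return f"build ({build:.0f}) vs buy ({buy:.0f}) -> recommend {choice}"

# Toy project: strong in-house expertise but scarce schedule and resources.
print(recommend(
    build_scores={"team_expertise": 9, "available_resources": 3},
    buy_scores={"team_expertise": 5, "available_resources": 8, "vendor_support": 7},
))
```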