
    Constraint checking during error recovery

    The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.
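    The abstract above does not give the graph model itself, but the idea of encoding concurrency constraints as a graph and checking a concurrent execution against it can be sketched as follows. All activity names and the conflict-checking function are illustrative assumptions, not from the paper; constrained pairs of activities are stored as edges of an undirected graph, and a set of concurrently running activities is validated by checking that no edge connects two of its members.

```python
# Hypothetical sketch (activity names are illustrative, not from the paper):
# mutual-exclusion constraints between spacecraft activities are edges in an
# undirected graph; a concurrent set of activities is valid only if no edge
# connects two of its members.

from itertools import combinations

# Each edge says the two activities must never execute concurrently.
CONSTRAINTS = {
    frozenset({"battery_reconditioning", "science_downlink"}),
    frozenset({"thruster_firing", "camera_exposure"}),
}

def violations(concurrent_activities):
    """Return every constrained pair found in the concurrent set."""
    return [tuple(sorted(pair))
            for pair in (frozenset(p) for p in combinations(concurrent_activities, 2))
            if pair in CONSTRAINTS]

# Error recovery wants to fire thrusters while a camera exposure is running:
print(violations({"thruster_firing", "camera_exposure", "telemetry"}))
# A conflict-free concurrent set:
print(violations({"battery_reconditioning", "telemetry"}))
```

    The paper's algorithm validates all possible interleavings ahead of time; this sketch only checks one concurrent set, which is the inner test such a validation would repeat for every reachable combination of activities.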

    Towards understanding vulnerability: Investigating disruptions in cropping schedules in irrigated rice fields in West Java

    Unsafe conditions may increase the vulnerability of farmers to natural hazards and reduce their capacity to prevent or recover from disaster impacts. This study aimed to investigate disruptions in cropping schedules to understand unsafe conditions that contribute to vulnerability in irrigated fields served by the Ir. Djuanda (Jatiluhur) reservoir in West Java. Firstly, the deviation of ongoing cropping schedules from the official cropping calendar was evaluated using the time-series Enhanced Vegetation Index (EVI) derived from MODerate-resolution Imaging Spectroradiometer (MODIS) imagery. Secondly, reasons for disruptions in cropping schedules were explored through in-depth interviews with farmers, extension officers, and water managers and analyzed using qualitative content analysis. Thirdly, the progression from potential causes to consequences of the disruption was identified using a Bow-Tie analysis. Unsafe conditions were identified using the result of the Bow-Tie analysis. Finally, several ways to reduce vulnerability were suggested. This study has shown that cropping schedules deviate from the official cropping calendar in the study area. Reasons for disruptions in cropping schedules include economic motives, weather variability, geographic locations, coping strategies, farmers’ interactions, and agricultural infrastructure. The Bow-Tie analysis visualized the progression from potential causes, through disruptions in cropping schedules, to potential disaster impacts. Unsafe conditions were identified and categorized into dangerous locations, unsustainable farming activities, unsuitable coping strategies, fragile infrastructure, and inaccurate perceptions. Addressing unsafe conditions is likely to reduce vulnerability in irrigated rice fields.
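    A Bow-Tie analysis, as used above, arranges threats on the left of a central "top event" and consequences on the right, with barriers attached to either side. The minimal sketch below illustrates that structure; all labels, barrier names, and the "unprotected threats" query are illustrative assumptions, not findings from the study.

```python
# Hypothetical Bow-Tie structure (all labels are illustrative):
# threats lead to a central top event (the disruption), which fans out
# into consequences; barriers can be attached on either side.

from dataclasses import dataclass, field

@dataclass
class BowTie:
    top_event: str
    threats: dict = field(default_factory=dict)       # threat -> preventive barriers
    consequences: dict = field(default_factory=dict)  # consequence -> mitigating barriers

bt = BowTie(top_event="disrupted cropping schedule")
bt.threats["delayed irrigation water"] = ["rotational water delivery"]
bt.threats["unseasonal rainfall"] = []
bt.consequences["harvest failure"] = ["crop insurance"]

# Unsafe conditions surface as threats with no preventive barrier.
unprotected = [t for t, barriers in bt.threats.items() if not barriers]
print(unprotected)  # ['unseasonal rainfall']
```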

    Resource Allocation Optimization through Task Based Scheduling Algorithms in Distributed Real Time Embedded Systems

    A distributed embedded system is a type of distributed system that consists of a large number of nodes, each node having lower computational power than a node of a regular distributed system (such as a cluster). A real-time system is one in which every task has an associated deadline and the system works with a continuous stream of data supplied in real time. Such systems find wide application in fields such as the automobile industry, in fly-by-wire, brake-by-wire and steer-by-wire systems. Scheduling and efficient allocation of resources are extremely important in such systems because a distributed embedded real-time system must deliver its output within a certain time frame, failing which the output becomes useless. In this paper, we treat the number of processing units as a resource and optimize its allocation to the various tasks. We use techniques such as model-based redundancy, heartbeat monitoring and checkpointing for fault detection and failure recovery. Our fault-tolerance framework uses an existing list-based scheduling algorithm for task scheduling. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message-passing system.
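    Heartbeat monitoring, one of the fault-detection techniques the abstract names, can be sketched as follows. The class, node names and timeout value are illustrative assumptions rather than the paper's design: each node periodically reports a heartbeat, and the monitor flags any node whose last heartbeat is older than a deadline so it can be shut down before the system becomes unsafe.

```python
# Hypothetical sketch of heartbeat-based fault detection (names and
# timeout values are illustrative): nodes report heartbeats; a monitor
# flags nodes that have been silent longer than the deadline.

class HeartbeatMonitor:
    def __init__(self, deadline):
        self.deadline = deadline      # maximum allowed silence, in time units
        self.last_seen = {}           # node id -> timestamp of last heartbeat

    def beat(self, node, now):
        self.last_seen[node] = now

    def suspected_faulty(self, now):
        return sorted(n for n, t in self.last_seen.items()
                      if now - t > self.deadline)

mon = HeartbeatMonitor(deadline=3)
mon.beat("brake_node", now=0)
mon.beat("steer_node", now=0)
mon.beat("brake_node", now=4)       # steer_node stays silent
print(mon.suspected_faulty(now=5))  # ['steer_node']
```

    In a real real-time system the deadline would be derived from the task periods, and a suspected node would trigger the recovery path (e.g. checkpoint rollback) rather than just being reported.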

    Welcoming high reliability organising in construction management

    To achieve project objectives, construction project managers have to manoeuvre through complex coordination structures. They have to simultaneously deal with limited budgets, tight schedules, demanding stakeholders and a fragmented supply chain. Despite their extensive coordination efforts, project managers are frequently confronted with unexpected delays that force them to improvise and re-plan. As a consequence, budgets and schedules tend to overrun and project organisations appear out of control rather than stable and reliable. To enrich our understanding of these phenomena, we propose using the theoretical lens of High Reliability Organising (HRO). HRO stems from research into high-hazard industries and is relatively new to construction management. It provides five generic guiding principles that help practitioners anticipate and contain unwanted events. Given that the use of HRO beyond high-hazard contexts is not universally accepted within the scientific community, we ask whether it is justified to apply the HRO lens to the organisation and coordination of 'mainstream' construction projects. We elaborate on this issue by addressing its main theoretical concepts, its origin and its application beyond the fields of risk and safety. We further explain why reductionist interpretations of HRO concepts unnecessarily limit HRO's research domain. We propose a pragmatic reinterpretation of HRO that provides access to the field of construction management. Finally, we present preliminary results of our study into delays and overruns in inner-city subsurface utility reconstruction projects. Our theoretical and empirical arguments provide a stepping-stone for future HRO research projects in the construction management field.

    Is Dance Good for the Body or Not? : An Examination of Body Awareness and Injury Prevention for Specialised Tertiary Dance Students

    The purpose of this research paper is to discover more about tertiary dance and the effects that dance has on the body. I will discuss the pressures that dance places on the body, looking specifically at the years of full-time study as a tertiary student. I will address dance issues such as common injuries, the reasons these injuries occur, prevention strategies, the effect that dance has on the mind, and training conditions generally. Research into tertiary dance education programs, dance injuries, injury prevention, and general dance patterns will be supported by survey responses to come to some conclusions about the question 'Is dance good for the body, or not?' Dance is a challenging, aesthetically pleasing, innovative art form, where the participants, the dancers, are consistently aiming for the best possible individual appearance, performance quality, technique, and unique style. This means that the risk of pushing a fraction further than what is physically possible and working the body too hard is elevated. The danger of injury is always present in the back of a dancer's mind. As Orthopaedic Surgeon Reza Salleh said to me during an injury rehabilitation session, 'Injuries are a dancer's occupational hazard.' The first and most obvious finding from the surveys conducted as part of this research and my study of the participants is that students who are enrolled in or have graduated from a tertiary dance program strongly believe that they have learnt more about their bodies and are better prepared for injury prevention and maintenance due to their tertiary studies. The injury rate was different for each survey participant; however, the age range where most injuries occurred was between 18 and 22. The increased pressure that the dancer experiences when taking this step into full-time study can have several effects on the body. It is a time of vulnerability and change, and the dancer will take part in many activities that they have potentially never practised before, leaving them feeling unsafe and nervous in some aspects of class or rehearsal activity. From this study I have discovered that the period when students are studying full-time in a tertiary education program is when they are exposed to many new, mostly unfamiliar physical practices. It is in these years that injury occurrences increase, due to heavy scheduling, exposure to new and difficult genres of techniques and skills, and the drive to reach full potential before the last day of the final year, as the gates open to the professional world and the comfort of the institution is left behind.

    Design of an integrated airframe/propulsion control system architecture

    The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that uses both reliability and performance. A detailed account is given of the testing associated with a subset of the architecture, and the paper concludes with general observations on applying the methodology to the architecture.

    An example of requirements for Advanced Subsonic Civil Transport (ASCT) flight control system using structured techniques

    The requirements are presented for an Advanced Subsonic Civil Transport (ASCT) flight control system, generated using structured techniques. The requirements definition starts by performing a mission analysis to identify the high-level control system requirements and functions necessary to satisfy the mission. The result of the study is an example set of control system requirements partially represented using a derivative of Yourdon's structured techniques. Also provided is a research focus for studying structured design methodologies, and in particular design-for-validation philosophies.

    Uniparallel Execution and its Uses.

    We introduce uniparallelism: a new style of execution that allows multithreaded applications to benefit from the simplicity of uniprocessor execution while scaling performance with increasing processors. A uniparallel execution consists of a thread-parallel execution, where each thread runs on its own processor, and an epoch-parallel execution, where multiple time intervals (epochs) of the program run concurrently. The epoch-parallel execution runs all threads of a given epoch on a single processor; this enables the use of techniques that are effective on a uniprocessor. To scale performance with increasing cores, a thread-parallel execution runs ahead of the epoch-parallel execution and generates speculative checkpoints from which to start future epochs. If these checkpoints match the program state produced by the epoch-parallel execution at the end of each epoch, the speculation is committed and output externalized; if they mismatch, recovery can be safely initiated as no speculative state has been externalized. We use uniparallelism to build two novel systems: DoublePlay and Frost. DoublePlay benefits from the efficiency of logging the epoch-parallel execution (as threads in an epoch are constrained to a single processor, only infrequent thread context-switches need to be logged to recreate the order of shared-memory accesses), allowing it to outperform all prior systems that guarantee deterministic replay on commodity multiprocessors. While traditional methods detect data races by analyzing the events executed by a program, Frost introduces a new, substantially faster method called outcome-based race detection to detect the effects of a data race by comparing the program state of replicas for divergences. 
    Unlike DoublePlay, which runs a single epoch-parallel execution of the program, Frost runs multiple epoch-parallel replicas with complementary schedules, which are a set of thread schedules crafted to ensure that replicas diverge only if a data race occurs and to make it very likely that harmful data races cause divergences. Frost detects divergences by comparing the outputs and memory states of replicas at the end of each epoch. Upon detecting a divergence, Frost analyzes the replica outcomes to diagnose the data race bug and selects an appropriate recovery strategy that masks the failure.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/89677/1/kaushikv_1.pd
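    The commit check at the heart of uniparallelism, as described in the abstract, can be sketched in highly simplified form. The function names and the toy integer "program state" are illustrative assumptions: the thread-parallel execution supplies a speculative checkpoint for each epoch boundary, the epoch-parallel execution replays the epoch from the last committed state, and the epoch commits only if the replayed state matches the speculation; on a mismatch, recovery starts from the last committed state with no speculative state externalized.

```python
# Highly simplified sketch of the uniparallel commit check (names and the
# integer "state" model are illustrative, not the actual system): commit
# each epoch whose replayed state matches the speculative checkpoint;
# on a mismatch, stop and recover from the last committed state.

def run_uniparallel(epochs, speculative_checkpoints, replay_epoch, initial_state):
    state = initial_state
    committed = []
    for epoch, speculated in zip(epochs, speculative_checkpoints):
        replayed = replay_epoch(state, epoch)
        if replayed == speculated:
            state = replayed          # speculation confirmed: safe to externalize
            committed.append(epoch)
        else:
            return committed, state   # mismatch: recover, nothing unsafe leaked
    return committed, state

# Toy epochs: each "epoch" adds a number to an integer program state.
replay = lambda state, delta: state + delta
epochs = [1, 2, 3]
good_specs = [1, 3, 6]                # matches every replayed state
print(run_uniparallel(epochs, good_specs, replay, 0))   # ([1, 2, 3], 6)
bad_specs = [1, 99, 6]                # second speculation is wrong
print(run_uniparallel(epochs, bad_specs, replay, 0))    # ([1], 1)
```

    In the real systems the "state" is full process state, the checkpoints are generated by the thread-parallel execution running ahead, and the replay runs all threads of an epoch on one processor; the sketch only captures the match-then-commit control flow.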