
    Separation of Circulating Tokens

    Self-stabilizing distributed control is often modeled by token abstractions. A system with a single token may implement mutual exclusion; a system with multiple tokens may ensure that immediate neighbors do not simultaneously enjoy a privilege. For a cyber-physical system, tokens may represent physical objects whose movement is controlled. The problem studied in this paper is to ensure that a synchronous system with m circulating tokens maintains a distance of at least d between tokens. This problem is first considered in a ring where d is given while m and the ring size n are unknown. The protocol solving this problem can be uniform, with all processes running the same program, or non-uniform, with some processes acting only as token relays. The protocol for this first problem is simple and can be expressed in Petri net formalism. A second problem is to maximize d when m is given and n is unknown. For the second problem, the paper presents a non-uniform protocol with a single corrective process. Comment: 22 pages, 7 figures, epsf and pstricks in LaTeX
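    The paper expresses its protocols in Petri net terms; as rough intuition only, the first problem can be pictured with a toy round-based simulation of a gap-based rule of our own devising (not the paper's protocol): in each synchronous round, a token advances one slot only if the clockwise gap to the token ahead exceeds d. Starting from arbitrary distinct positions, the gaps even out until consecutive tokens are at least d apart.

```python
# Toy simulation of token separation on a synchronous ring of n slots.
# Illustrative rule, NOT the paper's protocol: a token advances one
# slot per round only when the gap to the token ahead exceeds d.
# Assumes m >= 2 tokens at distinct positions and d >= 1.

def gaps(positions, n):
    """Clockwise gap from each token to the next (positions sorted)."""
    m = len(positions)
    return [(positions[(i + 1) % m] - positions[i]) % n for i in range(m)]

def step(positions, n, d):
    """One synchronous round: every token whose gap exceeds d moves."""
    g = gaps(positions, n)
    return sorted((p + 1) % n if g[i] > d else p
                  for i, p in enumerate(positions))

def stabilize(positions, n, d, max_rounds=10_000):
    """Run rounds until every gap is >= d; return (rounds, positions)."""
    positions = sorted(p % n for p in positions)
    for rounds in range(max_rounds):
        if min(gaps(positions, n)) >= d:
            return rounds, positions
        positions = step(positions, n, d)
    raise RuntimeError("did not stabilize")

# Three clustered tokens on a 12-slot ring spread out to distance >= 3.
rounds, final = stabilize([0, 1, 2], n=12, d=3)
```

    One visible limitation of this toy rule: when n equals m times d exactly, the tokens settle at distance exactly d and stop moving, so sustained circulation needs some slack (n greater than m times d); the paper's protocols address the problem properly.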

    A Taxonomy of Daemons in Self-stabilization

    We survey existing scheduling hypotheses made in the self-stabilization literature, commonly referred to under the notion of daemon. We show that four main characteristics (distribution, fairness, boundedness, and enabledness) are enough to encapsulate the various differences presented in existing work. Our naming scheme makes it easy to compare daemons of particular classes, and to extend existing possibility or impossibility results to new daemons. We further examine existing daemon transformer schemes and provide the exact transformed characteristics of those transformers in our taxonomy. Comment: 26 pages

    New advances in vehicular technology and automotive engineering

    In the early years of the last century, the automobile was seen as a simple luxury accessory: an expensive asset that no ordinary citizen could afford. It took a long time, and Henry Ford's establishment of the first series-production plants, for this to change. This new industrial paradigm made it easy for the average American to acquire an automobile, whether for leisure or for work. Since then, automotive research has grown exponentially to the levels observed today. Automobiles are now indispensable goods; in other words, the automobile is a basic necessity in a wide range of aspects of daily life: it lets workers commute from their homes to their workplaces, transports students, supports household tasks, carries the sick to hospitals in ambulances, moves materials, and so on; the list does not end. The new goal pursued by the automotive industry is to provide electric vehicles at low cost and with high reliability. This commitment is justified by the predicted peak of oil extraction around the 2050s and by the need to reduce CO2 emissions into the atmosphere, as well as to reduce demand for this ever more valuable natural resource. To achieve this goal and to improve on conventional oil-based cars, the automotive industry is increasingly engaged in applied research on technology and in fundamental research on new materials. The most important idea to retain from this introduction is the direct and indirect penetration of vehicles and the vehicular industry into today's life. In this spirit, this book tries not only to fill a gap by presenting fresh subjects related to vehicular technology and automotive engineering, but also to provide guidelines for future research.
This book benefits from valuable contributions by worldwide experts in the automotive field. The number and type of contributions were judiciously selected to cover a broad range of research. The reader will find the most recent and cutting-edge sources of information divided into four major groups: electronics (power, communications, optics, batteries, alternators and sensors), mechanics (suspension control, torque converters, deformation analysis, structural monitoring), materials (nanotechnology, nanocomposites, lubricants, biodegradables, composites, structural monitoring) and manufacturing (supply chains). We are sure that you will enjoy this book and will profit from its technical and scientific contents. Finally, we are thankful to all of those who contributed to this book and made it possible.

    Asynchronous neighborhood task synchronization

    Faults are likely to occur in distributed systems. The motivation for designing self-stabilizing systems is to be able to automatically recover from a faulty state. As per Dijkstra's definition, a system is self-stabilizing if it converges to a desired state from an arbitrary state in a finite number of steps. The paradigm of self-stabilization is considered the most unified approach to designing fault-tolerant systems: any type of fault, e.g., transient faults, process crashes and restarts, link failures and recoveries, and Byzantine faults, can be handled by a self-stabilizing system. Many applications in distributed systems involve multiple phases, and solving them requires some degree of synchronization between phases. In this thesis research, we introduce a new problem, called asynchronous neighborhood task synchronization (NTS). In this problem, processes execute infinite instances of tasks, where a task consists of a set of steps. There are several requirements: simultaneous execution of steps by neighbors is allowed only if the steps are different, and every neighborhood is synchronized in the sense that all neighboring processes execute the same instance of a task. Although the NTS problem is applicable in non-faulty environments, it is more challenging to solve in the presence of various types of faults. In this research, we present a self-stabilizing solution to the NTS problem. The proposed solution is space optimal, fault containing, fully localized, and fully distributed. One of the most desirable properties of our algorithm is that it works under any (including unfair) daemon. We also discuss various applications of the NTS problem.
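    The neighborhood constraints can be pictured with a naive, non-fault-tolerant toy of our own (not the thesis's self-stabilizing algorithm): give each process a step counter and let it execute its next step only when it does not lag behind any neighbor, with equal-counter ties broken by process id. Under this rule, two neighbors never execute the same step in the same round, and every neighborhood stays within one step.

```python
# Toy round-based model of neighborhood step synchronization.
# Illustrative only: not self-stabilizing and not the thesis's algorithm.

def may_step(p, counters, neighbors):
    """p may execute its next step iff it does not lag behind any
    neighbor; equal-counter ties go to the smaller id, so neighbors
    never run the same step in the same round."""
    return all(counters[p] < counters[q] or
               (counters[p] == counters[q] and p < q)
               for q in neighbors[p])

def run(neighbors, rounds):
    counters = {p: 0 for p in neighbors}
    for _ in range(rounds):
        movers = [p for p in neighbors if may_step(p, counters, neighbors)]
        for p in movers:          # synchronous round: all movers advance
            counters[p] += 1
    return counters

# Path 0 - 1 - 2: neighboring counters stay within one step.
path = {0: [1], 1: [0, 2], 2: [1]}
counters = run(path, 10)
```

    The thesis's actual contribution is to achieve this kind of neighborhood agreement under arbitrary (including unfair) daemons and from arbitrary faulty states, which this sketch makes no attempt at.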

    Automated Analysis and Optimization of Distributed Self-Stabilizing Algorithms

    Self-stabilization [2] is a versatile technique for recovery from erroneous behavior due to transient faults or wrong initialization. A system is self-stabilizing if (1) starting from an arbitrary initial state it can automatically reach a set of legitimate states in a finite number of steps, and (2) it remains in legitimate states in the absence of faults. Weak-stabilization [3] and probabilistic-stabilization [4] were later introduced in the literature to deal with the resource consumption of self-stabilizing algorithms and with impossibility results. Since a system perturbed by a fault may deviate from correct behavior for a finite amount of time, it is paramount to minimize this time, especially in the domains of robotics and networking. This type of fault tolerance is called non-masking because the faulty behavior is not completely masked from the user [1]. Designing correct stabilizing algorithms can be tedious, and designing such algorithms to satisfy certain average recovery time constraints (e.g., for performance guarantees) adds further complications. Therefore, developing an automatic technique that takes as input the specification of the desired system and synthesizes as output a stabilizing algorithm with minimum (or otherwise upper-bounded) average recovery time is both useful and challenging. In this thesis, our main focus is on designing automated techniques to optimize the average recovery time of stabilizing systems using model checking and synthesis techniques. First, we prove that synthesizing weak-stabilizing distributed programs from scratch and repairing stabilizing algorithms under average recovery time constraints are NP-complete in the state space of the program. To cope with this complexity, we propose a polynomial-time heuristic that, compared to existing stabilizing algorithms, provides lower average recovery time in many of our case studies.
Second, we study the problem of fine-tuning probabilistic-stabilizing systems to improve their performance. We take advantage of the two properties of self-stabilizing algorithms to model them as absorbing discrete-time Markov chains. This reduces the computation of average recovery time to finding a weighted sum of the elements of the inverse of a matrix. Finally, we study the impact of scheduling policies on the recovery time of stabilizing systems. In particular, we propose a method to augment self-stabilizing programs with k-central and k-bounded schedulers to study different factors, such as the geographical distance of processes and the achievable level of parallelism.
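    The Markov-chain reduction can be made concrete: if Q is the sub-stochastic transition block among the transient (illegitimate) states of an absorbing discrete-time Markov chain, the expected recovery times satisfy t = 1 + Q t, i.e. t = (I - Q)^(-1) 1. A minimal sketch (our illustration; the two-state chain and its probabilities are made up) that solves this system by fixed-point iteration instead of explicit matrix inversion:

```python
# Expected recovery time of a stabilizing system modeled as an
# absorbing Markov chain.  Q[i][j] is the probability of moving from
# transient (illegitimate) state i to transient state j; each row may
# sum to less than 1, the deficit being the probability of reaching a
# legitimate (absorbing) state.  Solves t = 1 + Q t iteratively.

def expected_recovery_time(Q, tol=1e-12, max_iter=1_000_000):
    n = len(Q)
    t = [0.0] * n
    for _ in range(max_iter):
        new = [1.0 + sum(Q[i][j] * t[j] for j in range(n)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, t)) < tol:
            return new
        t = new
    raise RuntimeError("iteration did not converge")

# Hypothetical 2-state chain: from state 0 the system stays w.p. 1/2,
# moves to state 1 w.p. 1/4, and recovers w.p. 1/4; from state 1 it
# stays w.p. 1/2 and recovers w.p. 1/2.
t = expected_recovery_time([[0.5, 0.25],
                            [0.0, 0.5]])
# Exact solution of t = 1 + Q t for this chain: t = [3, 2] rounds.
```

    The average recovery time is then the weighted sum of t[i] over the initial distribution that the faults induce on the illegitimate states.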

    Essays on Firms and Labor Markets in Developing Countries

    This thesis consists of three chapters on firms and labor markets in developing countries. The first chapter employs a labor market experiment to compare the effects of a demand-side and a supply-side policy aimed at addressing youth unemployment: vocational training and firm-provided training. The findings demonstrate that vocational training is a more cost-effective strategy for helping young job seekers transition into the labor market. This is because vocationally trained workers, whose skills are certified and can be demonstrated to potential employers, receive significantly higher rates of job offers when unemployed. The second chapter investigates how youth search for jobs in a developing country context and identifies the key factors influencing their ability to secure good jobs. To accomplish this, it relies on a field experiment that combines two standard labor market interventions into three treatment arms: vocational training alone, vocational training combined with matching youth to firms, and matching alone. The analysis highlights the foundational but separate roles of skills and expectations in job search, how interventions cause youth to become optimistic or discouraged, and how this matters for long-run sorting and individual labor market outcomes. The third chapter examines how imperfect consumer information about available goods in the market influences firms' location choices within a city. By combining novel data collection with a quantitative equilibrium model of consumer search and firm location, the study reveals that information frictions lead to substantial firm agglomeration within cities and hinder high-quality firms' ability to attract customers, enabling lower-quality competitors to survive. Through counterfactual scenarios, this chapter demonstrates the importance of considering consumer information frictions for an accurate assessment of the welfare implications of urban policies.

    Comparing Recent Advances in Estimating and Measuring Oil Slick Thickness: An MPRI Technical Report

    Characterization of the degree and extent of surface oil during and after an oil spill is a critical part of emergency response and Natural Resource Damage Assessment (NRDA) activities. More specifically, understanding floating oil thickness in real time can guide response efforts by directing limited assets to priority cleanup areas; aid in 'volume released' estimates; enhance fate, transport and effects modeling capabilities; and support natural resource injury determinations. An international workshop brought researchers from agencies, academia and industry who were advancing in situ and remote oil characterization tools and methods together with stakeholders and end users who rely on information about floating oil thickness for mission-critical assignments (e.g., regulatory, assessment, cleanup, research). In total, over a dozen researchers presented and discussed their findings from tests using various sensors and sensor platforms. The workshop resulted in discussions and recommendations for better ways to leverage limited resources and opportunities for advancing research and developing tools and methods for oil spill thickness measurements and estimates that could be applied during spill responses. One of the primary research gaps identified by the workshop participants was the need for side-by-side testing and validation of these different methods, to better understand their respective strengths, weaknesses and technical readiness levels, so that responders would be better able to decide which methods are appropriate under which conditions, and to answer the various questions associated with response actions. Approach: 1) Convene a more in-depth, multi-day researcher workshop to discuss and develop a specific workplan for side-by-side validation and verification experiments testing oil thickness measurements.
2) Conduct the validation and verification experiments in controlled environments: the Coastal Response Research Center (CRRC) highbay at the University of New Hampshire (UNH), and the Ohmsett National Oil Spill Response Research & Renewable Energy Test Facility.

    Cooperative Research and Development for Advanced Microturbines Program on Advanced Integrated Microturbine System


    2009 Annual Research Symposium Abstract Book

    2009 annual volume of abstracts for science research projects conducted by students at Trinity College

    Benelux meeting on systems and control, 23rd, March 17-19, 2004, Helvoirt, The Netherlands

    Book of abstracts