
    Globalization

    [Excerpt] While the chapters in the previous section examined employment relations in different national contexts, in this chapter we focus on employment relations in the international or global context. We begin by outlining different perspectives on globalization and examining how globalization has evolved over time. Based on this discussion, we provide a definition of globalization that best accounts for contemporary patterns of global interdependence. We then provide a brief overview of the arguments for and against globalization and discuss the implications that economic globalization presents for employment relations.

    Addressing Complexity and Intelligence in Systems Dependability Evaluation

    Engineering and computing systems are increasingly complex, intelligent, and open adaptive. When it comes to the dependability evaluation of such systems, the characteristics of “complexity” and “intelligence” pose particular challenges. The first aspect of complexity is the dependability modelling of large systems with many interconnected components and dynamic behaviours such as priority, sequencing, and repairs. To address this, the thesis proposes a novel hierarchical solution to dynamic fault tree analysis using semi-Markov processes. A second aspect of complexity is the modelling of environmental conditions that may impact dependability. For instance, weather and logistics can influence maintenance actions and hence the dependability of an offshore wind farm. The thesis proposes a semi-Markov-based maintenance model called the “Butterfly Maintenance Model” (BMM) to model this complexity and accommodate it in dependability evaluation. A third aspect of complexity is the open nature of systems of systems, such as swarms of drones, which makes complete design-time dependability analysis infeasible. To address this aspect, the thesis proposes a dynamic dependability evaluation method using fault trees and Markov models at runtime. The challenge of “intelligence” arises because Machine Learning (ML) components do not exhibit programmed behaviour; their behaviour is learned from data. Traditional dependability analysis, however, assumes that systems are programmed or designed. When a system has learned from data, a distributional shift of operational data away from the training data may cause the ML component to behave incorrectly, e.g., to misclassify objects. To address this, a new approach called SafeML is developed that uses statistical distance measures to monitor the performance of ML against such distributional shifts.
The thesis develops the proposed models and evaluates them on case studies, highlighting improvements to the state of the art, as well as limitations and future work.
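The distribution-shift monitoring idea behind SafeML can be illustrated with a two-sample statistical distance. The sketch below is a hypothetical illustration, not SafeML's actual API: it compares the training-time and operation-time samples of one scalar feature using the two-sample Kolmogorov-Smirnov statistic and flags a shift when it exceeds a threshold. The function names and threshold value are assumptions for this example.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical cumulative distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif b[j] < a[i]:
            j += 1
        else:
            # Tie: advance past the equal value in both samples
            v = a[i]
            while i < len(a) and a[i] == v:
                i += 1
            while j < len(b) and b[j] == v:
                j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def shift_detected(train, live, threshold=0.2):
    """Flag a potential distributional shift between training and
    operational data for one scalar feature (threshold is illustrative)."""
    return ks_statistic(train, live) > threshold
```

In practice a monitor of this kind would be applied per feature (or to learned representations) over a sliding window of operational inputs, with the threshold calibrated on held-out training data.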

    Optimal test case selection for multi-component software system

    The omnipresence of software has forced the industry to produce efficient software in a short time. These demands can be met through code reusability and software testing. Code reusability is achieved by developing software as components/modules rather than as a single block. Software teams are growing larger to keep up with massive requirements, and large teams can work together more easily when software is developed in a modular fashion. Software that crashes often is of little use; testing makes software more reliable, so modularity and reliability are the needs of the day. Testing is usually carried out using test cases that target a class of software faults or a specific module, and different test cases have distinct effects on the reliability of the software system. The proposed research develops a model to determine the optimal test case selection policy for a modular software system with specific test cases in a stipulated testing time. The model captures the failure behavior of each component with a conditional NHPP (non-homogeneous Poisson process) and the interactions among components with a CTMC (continuous-time Markov chain). The initial number of bugs and the bug detection rate follow known distributions. Dynamic programming is used to determine the optimal test case policy. The complete model is simulated in Matlab. The Markov decision process is computationally intensive, but the implementation of the algorithm is carefully optimized to eliminate repeated calculations, saving roughly 25-40% of processing time across different variations of the problem.
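To illustrate the kind of model the abstract describes, the sketch below pairs a Goel-Okumoto NHPP mean value function per module, m(t) = a(1 - e^(-bt)), with a simple greedy allocation of a fixed testing budget. The greedy loop is a stand-in for the thesis's dynamic programming formulation, and all module names and parameter values here are invented for the example.

```python
import math

def expected_bugs_found(a, b, t):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - e^(-b*t)),
    where a is the expected initial number of bugs in a module and b is
    its bug detection rate."""
    return a * (1.0 - math.exp(-b * t))

def allocate_test_time(modules, budget, step=1.0):
    """Greedily assign testing time in `step` increments to whichever
    module offers the largest marginal gain in expected bugs detected.
    `modules` maps a module name to its (a, b) parameters."""
    alloc = {name: 0.0 for name in modules}
    spent = 0.0
    while spent < budget:
        best, best_gain = None, -1.0
        for name, (a, b) in modules.items():
            gain = (expected_bugs_found(a, b, alloc[name] + step)
                    - expected_bugs_found(a, b, alloc[name]))
            if gain > best_gain:
                best, best_gain = name, gain
        alloc[best] += step
        spent += step
    return alloc
```

With made-up parameters such as `{"core": (50, 0.5), "ui": (5, 0.1)}` and a budget of 10 time units, the allocator concentrates testing on the bug-rich "core" module until its marginal return drops below that of "ui", mirroring the diminishing-returns behavior that makes optimal test allocation nontrivial.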

    NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge

    This paper presents and discusses algorithms, hardware, and software architecture developed by the TEAM CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR’s demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying), in various environments. We discuss the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition. The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the Defense Advanced Research Projects Agency (DARPA).


    Managing Systemic Risk in Legal Systems

    The American legal system has proven remarkably robust even in the face of vast and often tumultuous political, social, economic, and technological change. Yet our system of law is not unlike other complex social, biological, and physical systems in exhibiting local fragility in the midst of its global robustness. Understanding how this “robust yet fragile” (RYF) dilemma operates in legal systems is important to the extent law is expected to assist in managing systemic risk—the risk of large local or even system-wide failures—in other social systems. Indeed, legal system failures have been blamed as partly responsible for disasters such as the recent financial system crisis and the Deepwater Horizon oil spill. If we cannot effectively manage systemic risk within the legal system, however, how can we expect the legal system to manage systemic risk elsewhere? This Article employs a complexity science model of the RYF dilemma to explore why systemic risk persists in legal systems and how to manage it. Part I defines complexity in the context of the institutions and instruments that make up the legal system. Part II defines the five dimensions of robustness that support functionality of the legal system: (1) reliability, (2) efficiency, (3) scalability, (4) modularity, and (5) evolvability. Part III then defines system fragility by examining the internal and external constraints that impede legal system robustness and the fail-safe system control strategies for managing their effects. With those basic elements of the RYF dilemma model in place, Part IV defines systemic risk and explores the paradoxical role of increasingly organized complexity brought about by fail-safe strategies as a source of legal system failure. There is no way around the RYF dilemma—some degree of systemic risk is inherent in any complex adaptive system—but the balance between robustness and fragility is something we can hope to influence.
To explore how, Part V applies the RYF dilemma model to a concrete systemic risk management context—oil drilling in the deep Gulf of Mexico. The legal regime governing offshore oil exploration and extraction has been blamed as contributing to the set of failures that led to the catastrophic Deepwater Horizon spill and is at the center of reform initiatives. Using this case study, I argue that the RYF dilemma model provides valuable insights into how legal systems fail and how to manage legal systemic risk.
