
    Climate Change and Critical Agrarian Studies

    Climate change is perhaps the greatest threat to humanity today and plays out as a cruel engine of myriad forms of injustice, violence and destruction. The effects of climate change from human-made emissions of greenhouse gases are devastating and accelerating, yet they are uncertain and uneven in terms of both geography and socio-economic impacts. Emerging from the dynamics of capitalism since the industrial revolution — as well as from industrialisation under state-led socialism — the consequences of climate change are especially profound for the countryside and its inhabitants. The book interrogates the narratives and strategies that frame climate change and examines the institutionalised responses in agrarian settings, highlighting the exclusions and inclusions that result. It explores how different people — in relation to class and other co-constituted axes of social difference such as gender, race, ethnicity, age and occupation — are affected by climate change, as well as by the climate adaptation and mitigation responses being implemented in rural areas. The book then explores how climate change, and the responses to it, affect processes of social differentiation, trajectories of accumulation and, in turn, agrarian politics. Finally, the book examines what strategies are required to confront climate change and the underlying political-economic dynamics that cause it, reflecting on what this means for agrarian struggles across the world. The 26 chapters in this volume explore how the relationship between capitalism and climate change plays out in the rural world and, in particular, how agrarian struggles connect with the huge challenge of climate change. Through a wide variety of case studies alongside more conceptual chapters, the book makes the often-missing connection between climate change and critical agrarian studies, arguing that making the connection between climate and agrarian justice is crucial.

    Algorithms for Geometric Facility Location: Centers in a Polygon and Dispersion on a Line

    We study three geometric facility location problems in this thesis. First, we consider the dispersion problem in one dimension. We are given an ordered list of (possibly overlapping) intervals on a line. We wish to choose exactly one point from each interval such that their left-to-right ordering on the line matches the input order. The aim is to choose the points so that the distance between the closest pair of points is maximized, i.e., they must be socially distanced while respecting the order. We give a new linear-time algorithm for this problem that produces a lexicographically optimal solution. We also consider some generalizations of this problem. For the next two problems, the domain of interest is a simple polygon with n vertices. The second problem concerns the visibility center. The convention is to think of a polygon as the top view of a building (or art gallery) where the polygon boundary represents opaque walls. Two points in the domain are visible to each other if the line segment joining them does not intersect the polygon exterior. The distance to visibility from a source point to a target point is the minimum geodesic distance from the source to a point in the polygon visible to the target. The question is: where should a single guard be located within the polygon to minimize the maximum distance to visibility? For m point sites in the polygon, we give an O((m + n) log (m + n)) time algorithm to determine their visibility center. Finally, we address the problem of locating the geodesic edge center of a simple polygon—a point in the polygon that minimizes the maximum geodesic distance to any edge. For a triangle, this point coincides with its incenter. The geodesic edge center is a generalization of the well-studied geodesic center (a point that minimizes the maximum distance to any vertex). Center problems are closely related to farthest Voronoi diagrams, which are well-studied for point sites in the plane, and less well-studied for line segment sites in the plane. When the domain is a polygon rather than the whole plane, only the case of point sites has been addressed—surprisingly, more general sites (with line segments being the simplest example) have been largely ignored. En route to our solution, we revisit, correct, and generalize (sometimes in a non-trivial manner) existing algorithms and structures tailored to work specifically for point sites. We give an optimal linear-time algorithm for finding the geodesic edge center of a simple polygon.
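
    The thesis' linear-time, lexicographically optimal algorithm for the ordered dispersion problem is not reproduced here, but the problem itself is easy to illustrate. The Python sketch below (all names hypothetical) uses a simpler, standard alternative: binary search on the answer combined with a greedy left-to-right feasibility check, giving a floating-point approximation of the optimal minimum gap rather than the exact linear-time result.

```python
def feasible(intervals, d):
    """Greedy check: can one point be chosen from each interval, in the given
    left-to-right order, with consecutive points at least d apart?"""
    prev = None
    for lo, hi in intervals:
        p = lo if prev is None else max(lo, prev + d)
        if p > hi:
            return False
        prev = p
    return True


def max_min_gap(intervals, iters=60):
    """Binary search on the separation d (floating-point approximation)."""
    lo = 0.0
    hi = max(r for _, r in intervals) - min(l for l, _ in intervals)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(intervals, mid):
            lo = mid          # mid is achievable, so try a larger separation
        else:
            hi = mid
    return lo


# Example: three overlapping, ordered intervals; the optimal minimum gap is 3.0.
print(round(max_min_gap([(0, 1), (0, 4), (3, 6)]), 6))
```

    The greedy check pushes every point as far left as the separation allows, so the required ordering is preserved automatically; the binary search then looks for the largest separation the greedy check can realise.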

    Adapting Distributed Embedded Applications During Operation

    The availability of third-party apps is among the key success factors for software ecosystems: users benefit from more features and faster innovation, while third-party solution vendors can leverage the platform to create successful offerings. However, this requires a certain decoupling of the engineering activities of the different parties, which has not yet been achieved for distributed control systems. While late and dynamic integration of third-party components would be required, the resulting control systems must provide high reliability with respect to real-time requirements, which leads to integration complexity. Closing this gap would particularly contribute to the vision of software-defined manufacturing, where an ecosystem of modern IT-based control system components could lead to faster innovation thanks to their higher level of abstraction and the availability of various frameworks. This thesis therefore addresses the research question: How can we use modern IT technologies to enable independent evolution and easy third-party integration of software components in distributed control systems where deterministic end-to-end reactivity is required, and in particular, how can we apply distributed changes to such systems consistently and reactively during operation? The thesis describes the challenges and related approaches in detail and points out that existing approaches do not fully address this research question. To close this gap, a formal specification of a runtime platform concept is presented in conjunction with a model-based engineering approach. The engineering approach decouples the engineering steps of component definition, integration, and deployment. The runtime platform supports this approach by isolating the components while still offering predictable end-to-end real-time behavior. Independent evolution of software components is supported through a concept for synchronous reconfiguration during full operation, i.e., dynamic orchestration of components. Time-critical state transfer is supported as well and leads to at most a bounded quality degradation. Reconfiguration planning is supported by analysis concepts, including simulation of a formally specified system and its reconfiguration, and analysis of potential quality degradation with the evolving dataflow graph (EDFG) method. A platform-specific realization of the concepts, the real-time container architecture, is described as a reference implementation. The model and the prototype are evaluated, with respect to the feasibility and applicability of the concepts, in two case studies. The first case study is a minimalistic distributed control system used in different setups with different component variants and reconfiguration plans to compare the model and the prototype and to gather runtime statistics. The second case study is a smart factory showcase system with more challenging application components and interface technologies. The conclusion is that the concepts are feasible and applicable, even though both the concepts and the prototype still require further work, for example to reach shorter cycle times.
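
    To make the idea of synchronous reconfiguration during full operation more concrete, here is a minimal Python sketch; it is not the thesis' real-time container architecture, and all names are invented. Components are stepped in a fixed order every cycle, and a queued component swap, including a state hand-over, takes effect only at a cycle boundary, so each cycle runs against one consistent configuration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple


@dataclass
class Component:
    name: str
    step: Callable[[Dict, Dict], Dict]        # (inputs, state) -> outputs
    state: Dict = field(default_factory=dict)


class CyclicExecutive:
    """Runs components in a fixed order each cycle; swaps are applied only at
    cycle boundaries so every cycle sees one consistent configuration."""

    def __init__(self, components: List[Component]):
        self.components = components
        self.pending: Optional[Tuple[str, Component, Callable[[Dict], Dict]]] = None

    def request_swap(self, old_name: str, new_comp: Component,
                     transfer: Callable[[Dict], Dict] = lambda s: dict(s)):
        """Queue a component replacement; it takes effect at the next cycle."""
        self.pending = (old_name, new_comp, transfer)

    def run_cycle(self, inputs: Dict) -> Dict:
        if self.pending is not None:          # synchronous reconfiguration point
            old_name, new_comp, transfer = self.pending
            for i, c in enumerate(self.components):
                if c.name == old_name:
                    new_comp.state = transfer(c.state)   # state hand-over
                    self.components[i] = new_comp
            self.pending = None
        data = dict(inputs)
        for c in self.components:             # deterministic execution order
            data.update(c.step(data, c.state))
        return data


# Hypothetical usage: replace a filter component between two cycles.
def raw_filter(inp, st):
    return {"out": inp["sensor"]}

def avg_filter(inp, st):
    st.setdefault("hist", []).append(inp["sensor"])
    return {"out": sum(st["hist"]) / len(st["hist"])}

exe = CyclicExecutive([Component("filter", raw_filter)])
print(exe.run_cycle({"sensor": 4.0}))                        # {'sensor': 4.0, 'out': 4.0}
exe.request_swap("filter", Component("filter", avg_filter))  # applied at next cycle
print(exe.run_cycle({"sensor": 2.0}))                        # {'sensor': 2.0, 'out': 2.0}
```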

    DESIGN AND VERIFICATION OF AUTONOMOUS SYSTEMS IN THE PRESENCE OF UNCERTAINTIES

    Autonomous systems offer hope of moving away from mechanized, unsafe, manual, and often inefficient practices. The last decade has seen several small but important steps towards making this dream a reality. These advancements have helped us achieve limited autonomy in several domains, such as driving, factory floors, surgeries, wearables, and home assistants. Nevertheless, autonomous systems are required to operate in a wide range of environments with uncertainties (viz., sensor errors, timing errors, the dynamic nature of the environment, etc.). Such environmental uncertainties, even when present in small amounts, can have a drastic impact on the safety of the system, hampering the goal of achieving a higher degree of autonomy, especially in safety-critical domains. To this end, the dissertation discusses formal techniques that are able to verify and design autonomous systems for safety even in the presence of such uncertainties, allowing for their trustworthy deployment in the real world. Specifically, the dissertation discusses monitoring techniques for autonomous systems from available (noisy) logs, and safety-verification techniques for autonomous system controllers under timing uncertainties. Secondly, using heterogeneous learning-based cloud computing models that can balance uncertainty in output and computation cost, the dissertation presents techniques for designing safe and performance-optimal autonomous systems.
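
    As a toy illustration of monitoring from noisy logs, the Python sketch below conservatively flags every log sample whose worst-case true value could violate a minimum-separation safety property. The single property, the error bound eps, and all names are assumptions for illustration, not the dissertation's technique.

```python
def monitor_min_separation(log, d_safe, eps):
    """log: list of (timestamp, measured_distance). A sample is certified safe
    only if even the worst-case true distance (measured - eps) stays above
    d_safe; otherwise it is flagged as a potential violation."""
    violations = [(t, d) for t, d in log if d - eps < d_safe]
    return len(violations) == 0, violations


# Hypothetical log of distances to the nearest obstacle, with +/- 0.3 sensor noise.
ok, flagged = monitor_min_separation(
    [(0, 5.0), (1, 4.2), (2, 3.9)], d_safe=4.0, eps=0.3)
print(ok, flagged)   # False [(1, 4.2), (2, 3.9)]
```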

    On the motion planning & control of nonlinear robotic systems

    In the last decades, we have seen a soaring interest in autonomous robots, boosted not only by academia and industry but also by the ever-increasing demand from civil users. As a matter of fact, autonomous robots are rapidly spreading into all aspects of human life: we can see them cleaning houses, navigating through city traffic, or harvesting fruits and vegetables. Almost all commercial drones already exhibit unprecedented, sophisticated skills that make them suitable for these applications, such as obstacle avoidance, simultaneous localisation and mapping, path planning, visual-inertial odometry, and object tracking. The major limitations of such robotic platforms lie in the limited payload they can carry, in their cost, and in their limited autonomy due to finite battery capacity. For this reason, researchers have started to develop new algorithms able to run even on resource-constrained platforms, both in terms of computation capabilities and the limited types of onboard sensors, focusing especially on very cheap sensors and hardware. Relying on only a few sensors has allowed UAV sizes to be scaled down considerably, while more efficient algorithms, which perform the same task in less time, help compensate for the limited autonomy. However, the developed robots are not yet mature enough to operate fully autonomously without human supervision, due to dimensions that are still too large (especially for aerial vehicles), which make these platforms unsafe around humans, and to the non-negligible probability of numerical and decision errors that robots may make. From this perspective, this thesis aims to review and improve the current state-of-the-art solutions for autonomous navigation from a purely practical point of view. In particular, we focus in depth on the problems of robot control, trajectory planning, environment exploration, and obstacle avoidance.
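
    As one concrete example of the kind of lightweight, local obstacle-avoidance routine that suits resource-constrained platforms, the sketch below implements a single step of a 2D artificial potential field. It is a generic textbook baseline rather than an algorithm taken from the thesis, and all parameter names and values are illustrative.

```python
import math


def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    """One step of a 2D artificial potential field: attraction towards the goal
    plus repulsion from obstacles closer than the influence radius d0."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:                    # repel only inside the influence radius
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0         # avoid division by zero at the goal
    return pos[0] + step * fx / norm, pos[1] + step * fy / norm


# Hypothetical usage: move from the origin towards (2, 0) past a nearby obstacle.
p = (0.0, 0.0)
for _ in range(5):
    p = apf_step(p, goal=(2.0, 0.0), obstacles=[(1.0, 0.1)])
print(p)
```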

    Application of knowledge management principles to support maintenance strategies in healthcare organisations

    Healthcare is a vital service that touches people's lives on a daily basis, providing treatment and resolving patients' health problems through its staff. Human lives ultimately depend on the skilled hands of the staff and of those who manage the infrastructure that supports the daily operations of the service, making it a compelling subject for a dedicated research study. However, the UK healthcare sector is undergoing rapid change, driven by rising costs, technological advancements, changing patient expectations, and increasing pressure to deliver sustainable healthcare. With the global rise in healthcare challenges, the need for sustainable healthcare delivery has become imperative. Sustainable healthcare delivery requires the integration of various practices that enhance the efficiency and effectiveness of healthcare infrastructural assets. One critical area that requires attention is the management of healthcare facilities. Healthcare facilities are considered one of the core elements in the delivery of effective healthcare services, as shortcomings in the provision of facilities management (FM) services in hospitals may have far more drastic negative effects than in other types of buildings. An essential element in healthcare FM is the relationship between action and knowledge. With a full understanding of infrastructural assets, it is possible to improve and manage buildings, make them suitable to the needs of users, and ensure the functionality of the structure and its processes. The premise of FM is that an organisation's effectiveness and efficiency are linked to the physical environment in which it operates, and that improving the environment can yield direct benefits in operational performance. The goal of healthcare FM is to support the achievement of organisational mission and goals by designing and managing space and infrastructural assets in the best combination of suitability, efficiency, and cost. In operational terms, performance refers to how well a building contributes to fulfilling its intended functions. Therefore, comprehensive deployment of efficient FM approaches is essential for ensuring quality healthcare provision while positively impacting overall patient experiences. In this regard, incorporating knowledge management (KM) principles into hospitals' FM processes contributes significantly to ensuring sustainable healthcare provision and enhancing patient experiences. Organisations implementing KM principles are better positioned to navigate the constantly evolving business ecosystem. Furthermore, KM is vital in process and service improvement, strategic decision-making, and organisational adaptation and renewal. In this regard, KM principles can be applied to improve hospital FM, thereby ensuring sustainable healthcare delivery. Knowledge management assumes that organisations that manage their organisational and individual knowledge more effectively will be able to cope more successfully with the challenges of the new business ecosystem. There is also the argument that KM plays a crucial role in improving processes and services, strategic decision-making, and adapting and renewing an organisation. The goal of KM is to aid action, providing "a knowledge pull" rather than the information overload most people experience in healthcare FM. Other motivations for seeking better KM in healthcare FM include patient safety, evidence-based care, and cost efficiency as the dominant drivers.
    The strongest evidence for the success of such approaches is found at knowledge bottlenecks, such as infection prevention and control, safe working, compliance, automated systems and reminders, and recall based on best practices. The ability to cultivate, nurture and maximise knowledge at multiple levels and in multiple contexts is one of the most significant challenges for those responsible for KM. However, despite the potential benefits, the application of KM principles in hospital facilities is still limited. There is a lack of understanding of how KM can be effectively applied in this context, and few studies have explored the potential challenges and opportunities associated with implementing KM principles in hospital facilities for sustainable healthcare delivery. This study explores the application of KM principles to support maintenance strategies in healthcare organisations. The study also explores the challenges and opportunities, for healthcare organisations and FM practitioners, in operationalising a framework which draws out the interconnectedness between healthcare FM and KM. The study begins by defining healthcare FM and its importance in the healthcare industry. It then discusses the concept of KM and the different types of knowledge that are relevant in the healthcare FM sector. The study also examines the challenges that healthcare FM faces in managing knowledge and how the application of KM principles can help to overcome these challenges. The study then explores the different KM strategies that can be applied in healthcare FM. The benefits of KM include improved patient outcomes, reduced costs, increased efficiency, and enhanced collaboration among healthcare professionals. Additionally, issues such as creating a culture of innovation, technology, and benchmarking are considered. In addition, a framework that integrates the essential concepts of KM in healthcare FM is presented and discussed. The field of KM is introduced as a complex adaptive system with numerous possibilities and challenges. In this context, and in consideration of healthcare FM, five objectives have been formulated to achieve the research aim. These objectives include appraising the concept of KM and how knowledge is created, stored, transferred, and utilised in healthcare FM, evaluating the impact of organisational structure on job satisfaction, and exploring how cultural differences affect knowledge sharing and performance in healthcare FM organisations. This study uses a combination of qualitative data collection methods, such as meetings, observations, document analysis (internal and external), and semi-structured interviews, to uncover the subjective experiences and attitudes of healthcare FM employees and to understand the phenomenon within a real-world context, using open questions to allow probing where appropriate and to facilitate KM development in the delivery and practice of healthcare FM. The study describes the research methodology using the theoretical concept of the "research onion". The qualitative research was conducted in NHS acute and non-acute hospitals in Northwest England. Findings from the research revealed that while the concept of KM has grown significantly in recent years, KM in healthcare FM has received little or no attention. The target population was fifty (five FM directors, five academics, five industry experts, ten managers, ten supervisors, five team leaders and ten operatives).
    These seven groups were purposively selected as the target population because they play a crucial role in KM enhancement in healthcare FM. Face-to-face interviews were conducted with all participants based on their pre-determined availability. Of the target population of 50, only 25 were successfully interviewed, to the point of saturation. Data collected from the interviews were coded and analysed using NVivo to identify themes and patterns related to KM in healthcare FM. The study is divided into eight major sections. First, it discusses literature findings regarding healthcare FM and KM, including underlying trends in FM, KM in general, and KM in healthcare FM. Second, the research establishes the study's methodology, introducing the five research objectives, the research questions and the hypothesis; this chapter also introduces the literature on methodology elements, including philosophical views and inquiry strategies. The interview and data analysis chapters examine the feedback from the interviews. Lastly, the conclusion and recommendations summarise the research objectives and suggest further research. Overall, this study highlights the importance of KM in healthcare FM and provides insights for healthcare FM directors, managers, supervisors, academia, researchers and operatives on effectively leveraging knowledge to improve patient care and organisational effectiveness.

    Multi-Criteria Optimization of Real-Time DAGs on Heterogeneous Platforms under P-EDF

    This paper tackles the problem of optimal placement of complex real-time embedded applications on heterogeneous platforms. Applications are composed of directed acyclic graphs (DAGs) of tasks, with each DAG having a minimum inter-arrival period for its activation requests and an end-to-end deadline within which all of the computations must terminate after each activation. The platforms of interest are heterogeneous power-aware multi-core platforms with DVFS capabilities, including big.LITTLE Arm architectures, and platforms with GPU or FPGA hardware accelerators with Dynamic Partial Reconfiguration capabilities. Tasks can be deployed on CPUs using partitioned EDF-based scheduling. Additionally, some of the tasks may have an alternate implementation available for one of the accelerators on the target platform, which are assumed to serve requests in non-preemptive FIFO order. The system can be optimized by minimizing power consumption while respecting precise timing constraints, by maximizing the applications' slack while respecting given power consumption constraints, or by a combination of these in a multi-objective formulation. We propose an off-line optimization of this problem based on mixed-integer quadratic constraint programming (MIQCP). The optimization provides the DVFS configuration of all the CPUs (or accelerators) capable of frequency switching and the placement to be followed by each task in the DAGs, including the software-vs-hardware implementation choice for tasks that can be hardware-accelerated. For relatively large problems, we developed heuristic solvers capable of providing suboptimal solutions in significantly reduced time compared to the MIQCP strategy, thus widening the applicability of the proposed framework. We validate the approach by running a set of randomly generated DAGs on Linux under SCHED_DEADLINE, deployed onto two real boards, one with an Arm big.LITTLE architecture and the other with FPGA acceleration, verifying that the experimental runs meet the theoretical expectations in terms of timing and power optimization goals.
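
    The MIQCP formulation and the paper's heuristic solvers are not reproduced here; as a rough illustration of the heuristic flavour, the Python sketch below greedily places independent implicit-deadline tasks (a simplification of the DAG model) onto heterogeneous cores under partitioned EDF and then lowers each core's DVFS level as far as a utilisation-based schedulability test allows. All names and the test itself are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Task:
    name: str
    wcet_ref: float        # WCET at the fastest (reference) speed
    period: float          # minimum inter-arrival time == relative deadline


@dataclass
class Core:
    name: str
    speeds: List[float]    # normalised DVFS levels, e.g. [0.5, 0.75, 1.0]
    speed: float = 1.0     # start at the fastest available level
    util: float = 0.0


def place(tasks: List[Task], cores: List[Core]) -> Optional[Dict[str, str]]:
    """Worst-fit decreasing placement under partitioned EDF (utilisation <= 1
    per core), followed by greedily lowering each core's frequency."""
    mapping: Dict[str, str] = {}
    for t in sorted(tasks, key=lambda t: t.wcet_ref / t.period, reverse=True):
        best = None
        for c in cores:
            u = (t.wcet_ref / c.speed) / t.period
            if c.util + u <= 1.0 and (best is None or c.util < best.util):
                best = c                       # worst fit: keep the cores balanced
        if best is None:
            return None                        # heuristic failed; an exact solver would be needed
        best.util += (t.wcet_ref / best.speed) / t.period
        mapping[t.name] = best.name
    for c in cores:                            # pick the lowest feasible DVFS level
        for s in sorted(c.speeds):
            if c.util * (c.speed / s) <= 1.0:
                c.util *= c.speed / s
                c.speed = s
                break
    return mapping


# Hypothetical usage: two tasks on a big.LITTLE-style pair of cores.
cores = [Core("big", [0.5, 1.0]), Core("little", [0.25, 0.5], speed=0.5)]
tasks = [Task("t1", wcet_ref=2.0, period=10.0), Task("t2", wcet_ref=1.0, period=10.0)]
print(place(tasks, cores), [(c.name, c.speed, round(c.util, 2)) for c in cores])
```

    Balancing load across cores during placement leaves more headroom for the frequency-lowering pass afterwards, which is the main lever for reducing power in this simplified model.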
