3,086 research outputs found

    Project scheduling under uncertainty using fuzzy modelling and solving techniques

    In the real world, projects are subject to numerous uncertainties at different levels of planning. Fuzzy project scheduling is one of the approaches that deal with uncertainty in the project scheduling problem. In this paper, we provide a new technique that keeps uncertainty at all steps of the modelling and solving procedure by considering a fuzzy modelling of the workload inspired by the fuzzy/possibilistic approach. Based on this modelling, two project scheduling techniques, Resource Constrained Scheduling and Resource Leveling, are considered and generalized to handle fuzzy parameters. We refer to these problems as the Fuzzy Resource Constrained Project Scheduling Problem (FRCPSP) and the Fuzzy Resource Leveling Problem (FRLP). A Greedy Algorithm and a Genetic Algorithm are provided to solve FRCPSP and FRLP respectively, and are applied to civil helicopter maintenance within the framework of a French industrial project called Helimaintenance.
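
    The paper's own algorithms are not reproduced here; as a rough illustration of carrying fuzziness through the solving step, the sketch below (Python, hypothetical data, triangular fuzzy numbers compared by centroid) runs a greedy precedence-feasible forward pass with fuzzy durations. Resource checks, which the FRCPSP adds on top, are omitted for brevity.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TFN:
        """Triangular fuzzy number with support [a, c] and mode b."""
        a: float
        b: float
        c: float
        def __add__(self, o):
            return TFN(self.a + o.a, self.b + o.b, self.c + o.c)
        def centroid(self):
            return (self.a + self.b + self.c) / 3.0

    def fmax(x, y):
        # component-wise maximum, a common approximation of the fuzzy max
        return TFN(max(x.a, y.a), max(x.b, y.b), max(x.c, y.c))

    # hypothetical activities: name -> (fuzzy duration, predecessors)
    acts = {
        "A": (TFN(2, 3, 5), []),
        "B": (TFN(1, 2, 4), ["A"]),
        "C": (TFN(3, 4, 6), ["A"]),
        "D": (TFN(1, 1, 2), ["B", "C"]),
    }

    finish, done = {}, []
    while len(done) < len(acts):
        eligible = [n for n in acts
                    if n not in done and all(p in done for p in acts[n][1])]
        def est(n):  # fuzzy earliest start: max over predecessor finish times
            s = TFN(0, 0, 0)
            for p in acts[n][1]:
                s = fmax(s, finish[p])
            return s
        nxt = min(eligible, key=lambda n: est(n).centroid())  # greedy pick
        finish[nxt] = est(nxt) + acts[nxt][0]
        done.append(nxt)

    print({n: (f.a, f.b, f.c) for n, f in finish.items()})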

    Fully-deterministic execution of IEC-61499 models for Distributed Avionics Applications

    © 2018 by the authors. The development of time-critical Distributed Avionics Applications (DAAs) pushes beyond the limits of existing modeling methodologies for designing dependable systems. Aerospace and industrial automation entail high-integrity applications where execution time is essential for dependability, which tempts us to use modeling technologies from one domain in another. The challenge is to demonstrate that they can be used effectively across domains while assuring temporally dependable applications. This paper shows that an IEC 61499-modeled DAA can satisfy temporal dependability requirements on end-to-end flow latency when it is properly scheduled and realized on a fully deterministic avionics platform that combines Integrated Modular Avionics (IMA) computation with Time-Triggered Protocol (TTP) communication. Outcomes from the execution design of an IEC 61499-based DAA model for an IMA-TTP platform are used to check runtime correctness through DAA control stability. IEC 61499 is a modeling standard for industrial automation, meant to facilitate distribution and reconfiguration of applications. The DAA case study is a Distributed Fluid Control System (DFCS) for the Airbus A380 fuel system. Latency analysis results from timing metrics, as well as closed-loop control simulation results, are presented. Experimental outcomes suggest that an IEC 61499-based DFCS model can achieve the desired runtime latency for temporal dependability when executed on an IMA-TTP platform. Concluding remarks and future research directions are also discussed.
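
    As a back-of-the-envelope illustration of the kind of end-to-end flow latency bound such a deterministic IMA-TTP platform admits (not the paper's actual analysis), each hop along a flow waits at most one full cycle for its partition window or TTP slot and then uses it; the cycle and window figures below are hypothetical.

    def hop_worst_case(cycle_ms, window_ms):
        # released just after its window closed: wait almost a full cycle,
        # then occupy the window or slot itself
        return cycle_ms + window_ms

    # hypothetical flow path: (hop, (cycle in ms, window/slot length in ms))
    hops = [
        ("sensor partition (IMA)",   (25.0, 2.0)),
        ("TTP slot sensor->ctrl",    (10.0, 0.5)),
        ("control partition (IMA)",  (25.0, 3.0)),
        ("TTP slot ctrl->actuator",  (10.0, 0.5)),
        ("actuator partition (IMA)", (25.0, 1.0)),
    ]

    total = 0.0
    for name, (cycle, window) in hops:
        bound = hop_worst_case(cycle, window)
        total += bound
        print(f"{name:26s} <= {bound:5.1f} ms")
    print(f"end-to-end flow latency    <= {total:5.1f} ms")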

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the 'cyberworld' of computing and communications with the physical world. These applications are called cyber physical systems (CPS). The increased involvement of real-world entities naturally leads to a greater demand for trustworthy systems. Hence, we use the term "system trustworthiness" here to mean the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) are a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including design, modeling, simulation, dependability, and so on. It is imperative to address the issues that are critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.

    Using Deep Neural Networks for Scheduling Resource-Constrained Activity Sequences

    One of the best-known scheduling problems is the scheduling of activities subject to precedence relations between those activities and to resource constraints. In the literature this problem is known as the Resource-Constrained Project Scheduling Problem (RCPSP). Its objective is to minimize the makespan of an activity sequence by deciding when each individual activity should start, without exceeding the resource constraints. If the activity durations are known and deterministic, the start times can be fixed a priori without any risk of the schedule becoming infeasible. In practice, however, activity durations are often not deterministic but based on expert estimates or historical data, so the realized durations may deviate from the estimated ones. In that case a reactive scheduling strategy is preferable. Such a reactive strategy does not fix the start times of the activities at the beginning of the project, but decides them at each decision point in the project, i.e., at the project start and whenever one or more activities finish and release the resources they occupied. This thesis presents a new reactive scheduling strategy for the resource-constrained project scheduling problem. In contrast to other contributions in the literature, which apply exact, heuristic, and meta-heuristic methods, the approach developed here is based on artificial neural networks and machine learning. The neural networks process the information describing the current state of the activity sequence and produce priority values for the activities that can be started at the current decision point. Machine learning, and in particular supervised learning, is used to train the neural networks on example training data generated by simulation. Six different neural network architectures are considered; they differ both in the input information available to them and in the type of neural network that processes this information. Three types of networks are considered: networks with fully connected layers, 1-dimensional convolutional neural networks, and 2-dimensional convolutional neural networks. Within each architecture, hyperparameters such as the learning rate, the number of training epochs, the number of layers, and the number of neurons per layer are tuned by Bayesian optimization. During hyperparameter tuning, ranges of hyperparameter values that should be used to improve performance were also identified. The best trained network is then compared against thirty-four other reactive heuristic methods. The results of this comparison show that the proposed approach outperforms most heuristics with respect to minimizing the makespan of the activity sequence. Only three heuristics achieve shorter makespans, but their computation times are longer by many orders of magnitude. One assumption of this thesis is that deviations in activity durations can occur during execution even though the durations are generally modeled as deterministic. Consequently, a sensitivity analysis is performed to examine whether the proposed reactive scheduling strategy remains competitive when activity durations deviate from the assumed values.
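
    The thesis text gives no code; the sketch below (Python with numpy, hypothetical feature encoding and an untrained stand-in network) shows the shape of one reactive decision step: the network scores the eligible activities, and the highest-scoring ones are started while resource capacity allows.

    import numpy as np

    rng = np.random.default_rng(0)

    # per-activity features at this decision point (assumed encoding:
    # remaining work, resource demand, number of successors)
    features = {
        "B": np.array([2.0, 3.0, 1.0]),
        "C": np.array([4.0, 2.0, 2.0]),
        "E": np.array([1.0, 5.0, 0.0]),
    }

    # a tiny untrained 3-16-1 fully connected net standing in for the
    # trained priority model of the thesis
    W1, b1 = rng.normal(size=(16, 3)), np.zeros(16)
    W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

    def priority(x):
        h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
        return float(W2 @ h + b2)

    capacity = 6                            # free renewable resource units
    demand = {"B": 3, "C": 2, "E": 5}
    for act in sorted(features, key=lambda a: priority(features[a]),
                      reverse=True):
        if demand[act] <= capacity:         # start while resources allow
            capacity -= demand[act]
            print(f"start {act}, remaining capacity {capacity}")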

    Self-Aware Scheduling for Mixed-Criticality Component-Based Systems

    A basic mixed-criticality requirement in real-time systems is temporal isolation, which ensures that applications receive a guaranteed (CPU) service and impose only a bounded interference on other applications. Operating system support for temporal isolation is often either inefficient, in terms of utilisation and achieved latencies, or complex and hard to implement or model correctly. Correct models are, however, a prerequisite when response times are bounded by formal analyses. We provide a novel approach to this challenge by applying self-aware computing methodologies that involve run-time monitoring to detect (and correct) model deviations of a budget-based scheduler.
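
    As a sketch of the monitoring idea, not the authors' scheduler, the loop below compares the per-period budget consumption assumed by the analysis model with measured values and corrects the model once the relative deviation exceeds a tolerance; names and numbers are assumptions.

    TOLERANCE = 0.10           # accepted relative deviation (assumption)
    model_budget_us = 500.0    # per-period consumption assumed by the model

    def monitor(observed_us):
        """Flag and correct the model when measurements drift away from it."""
        global model_budget_us
        avg = sum(observed_us) / len(observed_us)
        deviation = abs(avg - model_budget_us) / model_budget_us
        if deviation > TOLERANCE:
            print(f"model deviation {deviation:.0%}: "
                  f"{model_budget_us:.0f} -> {avg:.0f} us")
            model_budget_us = avg        # run-time model correction
        return model_budget_us

    # three monitoring windows of measured per-period consumption (us)
    for window in ([480, 505, 495], [520, 510, 515], [640, 655, 660]):
        monitor(window)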

    Heterogeneity-aware scheduling and data partitioning for system performance acceleration

    Over the past decade, heterogeneous processors and accelerators have become increasingly prevalent in modern computing systems. Compared with previous homogeneous parallel machines, the hardware heterogeneity in modern systems provides new opportunities and challenges for performance acceleration. Classic operating system optimisation problems, such as task scheduling, and application-specific optimisation techniques, such as the adaptive data partitioning of parallel algorithms, must work together to address hardware heterogeneity. Significant effort has been invested in this problem, but existing work either focuses on one specific type of heterogeneous system or algorithm, or offers a high-level framework without insight into how heterogeneity differs between types of system. A general software framework is required that can not only be adapted to multiple types of systems and workloads, but is also equipped with techniques to address a variety of hardware heterogeneity. This thesis presents approaches to designing general heterogeneity-aware software frameworks for system performance acceleration. It covers a wide variety of systems, including an OS scheduler targeting on-chip asymmetric multi-core processors (AMPs) on mobile devices, a hierarchical many-core supercomputer, and multi-FPGA systems for high-performance computing (HPC) centers. Considering heterogeneity in on-chip AMPs, such as thread criticality, core sensitivity, and relative fairness, it proposes a collaborative approach to co-design the task selector and core allocator of the OS scheduler. Considering the typical sources of heterogeneity in HPC systems, such as the memory hierarchy, bandwidth limitations, and asymmetric physical connections, it proposes an application-specific automatic data partitioning method for a modern supercomputer and a topological-ranking-based scheduling heuristic for a multi-FPGA reconfigurable cluster. Experiments on both a full-system simulator (GEM5) and real systems (the Sunway TaihuLight supercomputer and Xilinx multi-FPGA clusters) demonstrate significant advantages of the proposed approaches over the state of the art on a variety of workloads. "This work is supported by St Leonards 7th Century Scholarship and Computer Science PhD funding from University of St Andrews; by UK EPSRC grant Discovery: Pattern Discovery and Program Shaping for Manycore Systems (EP/P020631/1)." -- Acknowledgement
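
    One building block common to such frameworks, splitting work across devices in proportion to measured throughput, fits in a few lines; the devices and rates below are hypothetical, not taken from the thesis.

    def partition(n_items, rates):
        """Split n_items across devices in proportion to items/s rates."""
        total = sum(rates.values())
        shares = {d: int(n_items * r / total) for d, r in rates.items()}
        fastest = max(rates, key=rates.get)      # absorb rounding remainder
        shares[fastest] += n_items - sum(shares.values())
        return shares

    # hypothetical measured throughputs (items per second)
    print(partition(10_000, {"big_core": 3.0, "little_core": 1.0, "fpga": 6.0}))
    # -> {'big_core': 3000, 'little_core': 1000, 'fpga': 6000}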

    A time-predictable many-core processor design for critical real-time embedded systems

    Critical Real-Time Embedded Systems (CRTES) are in charge of controlling fundamental parts of embedded systems, e.g. energy-harvesting solar panels in satellites, steering and braking in cars, or flight management systems in airplanes. To do so, CRTES require strong evidence of correct functional and timing behavior. The former guarantees that the system operates correctly in response to its inputs; the latter ensures that its operations are performed within a predefined time budget. CRTES incorporate an ever-increasing number of more complex functions. Examples include "smarter" Advanced Driver Assistance System (ADAS) functionality in modern cars or advanced collision avoidance systems in Unmanned Aerial Vehicles (UAVs). All these new features, implemented in software, lead to an exponential growth in both performance requirements and software development complexity. Furthermore, there is a strong need to integrate multiple functions into the same computing platform to reduce the number of processing units, mass and space requirements, etc. Overall, there is a clear need to increase the computing power of current CRTES in order to support new sophisticated and complex functionality, and to integrate multiple systems into a single platform. The use of multi- and many-core processor architectures is increasingly seen in the CRTES industry as the solution to cope with the performance demands and cost constraints of future CRTES. Many-cores supply higher performance by exploiting the parallelism of applications while providing better performance per watt, as cores are kept simpler than in complex single-core processors. Moreover, their parallelization capabilities allow multiple functions to be scheduled onto the same processor, maximizing hardware utilization. However, the use of multi- and many-cores in CRTES also brings a number of challenges related to providing evidence about the correct operation of the system, especially in the timing domain. Hence, despite the advantages of many-cores and the fact that they are nowadays a reality in the embedded domain (e.g. Kalray MPPA, Freescale NXP P4080, TI Keystone II), their use in CRTES still requires finding efficient ways of providing reliable evidence about the correct operation of the system. This thesis investigates the use of many-core processors in CRTES as a means to satisfy the performance demands of future complex applications while providing the necessary timing guarantees. To do so, it advances the state of the art towards the exploitation of the parallel capabilities of many-cores in CRTES in two computing domains. In the hardware domain, this thesis proposes new many-core designs that enable deriving reliable and tight timing guarantees. In the software domain, we present efficient scheduling and timing analysis techniques to exploit the parallelization capabilities of many-core architectures and to derive tight and trustworthy Worst-Case Execution Time (WCET) estimates for CRTES.
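
    As a small illustration of the software side, not a design from the thesis, partitioned scheduling on a many-core can be sketched as first-fit assignment of tasks to cores by WCET utilization, after which each core can be analysed in isolation; the task set is hypothetical.

    def first_fit(tasks, n_cores):
        """tasks: (name, wcet, period); returns core -> [task names]."""
        load = [0.0] * n_cores
        assignment = {c: [] for c in range(n_cores)}
        for name, wcet, period in sorted(tasks, key=lambda t: -t[1] / t[2]):
            u = wcet / period
            core = next((c for c in range(n_cores) if load[c] + u <= 1.0), None)
            if core is None:
                raise ValueError(f"{name} does not fit on any core")
            load[core] += u
            assignment[core].append(name)
        return assignment

    # hypothetical periodic tasks: (name, WCET, period), same time unit
    tasks = [("nav", 2.0, 5.0), ("fuel", 1.0, 10.0), ("log", 4.0, 20.0),
             ("ctl", 3.0, 6.0), ("io", 1.0, 4.0)]
    print(first_fit(tasks, n_cores=2))
    # -> {0: ['ctl', 'nav', 'fuel'], 1: ['io', 'log']}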

    Scheduling Mixed-Criticality Real-Time Systems

    This dissertation addresses the following question in the design of scheduling policies and resource allocation mechanisms for contemporary embedded systems implemented on integrated computing platforms: in a multitasking system where it is hard to estimate a task's worst-case execution time, how do we assign task priorities so that 1) the safety-critical tasks are guaranteed to complete within a specified length of time, and 2) the non-critical tasks are also guaranteed to complete within a predictable length of time if no task actually consumes its worst-case execution time? This dissertation answers this question based on the mixed-criticality real-time system model, which defines multiple worst-case execution scenarios and demands that a scheduling policy provide provable timing guarantees to each level of critical tasks with respect to each type of scenario. Two scheduling algorithms are proposed to serve this model. The OCBP algorithm is aimed at discrete one-shot tasks with an arbitrary number of criticality levels. The EDF-VD algorithm is aimed at recurrent tasks with two criticality levels (safety-critical and non-critical). Both algorithms are proved to optimally minimize the percentage of computational resource waste within two criticality levels. More in-depth investigations into the relationship among the computational resource requirements of different criticality levels are also provided for both algorithms.
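
    The EDF-VD virtual-deadline construction for two criticality levels is standard in the literature (Baruah et al.); a minimal sketch of its sufficient schedulability test, on a hypothetical task set, follows.

    def edf_vd_test(tasks):
        """tasks: (criticality, C_LO, C_HI, period), implicit deadlines."""
        u_lo_lo = sum(c / t for lvl, c, _, t in tasks if lvl == "LO")
        u_hi_lo = sum(c / t for lvl, c, _, t in tasks if lvl == "HI")
        u_hi_hi = sum(ch / t for lvl, _, ch, t in tasks if lvl == "HI")
        if u_lo_lo + u_hi_hi <= 1.0:
            return True, 1.0            # plain EDF suffices, no scaling
        x = u_hi_lo / (1.0 - u_lo_lo)   # virtual-deadline scaling factor
        # HI tasks get virtual deadlines x * period; sufficient condition:
        return x * u_lo_lo + u_hi_hi <= 1.0, x

    taskset = [("HI", 1.0, 2.0, 5.0),   # (level, C_LO, C_HI, T)
               ("HI", 1.0, 3.0, 10.0),
               ("LO", 3.0, 3.0, 8.0)]
    print(edf_vd_test(taskset))         # -> (True, 0.48)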