
    Combining Task-level and System-level Scheduling Modes for Mixed Criticality Systems

    Different scheduling algorithms for mixed-criticality systems have recently been proposed. The common denominator of these algorithms is to discard low-criticality tasks whenever high-criticality tasks lack computation resources. This is achieved by switching the scheduling mode from Normal to Critical. We distinguish two main categories of algorithms: system-level mode switch and task-level mode switch. System-level mode switch algorithms allow low-criticality (LC) tasks to execute only in Normal mode. Task-level mode switch algorithms allow the mode of an individual high-criticality (HC) task to be switched from low (LO) to high (HI) so that it gains priority over all LC tasks. This paper investigates an online scheduling algorithm for mixed-criticality systems that supports dynamic mode switches at both the task level and the system level. When an HC task job overruns its LC budget, only that particular job is switched to HI mode. If the job cannot be accommodated, the system switches to Critical mode. To free resources for the HC jobs, the LC tasks are degraded by stretching their periods until the job that triggered Critical mode completes its execution. Stretching is applied until the required resources become available. We have mechanized and implemented the proposed algorithm using Uppaal. To study the efficiency of our scheduling algorithm, we examine a case study and compare our results to state-of-the-art algorithms.
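
The decision logic described above can be sketched roughly as follows (a minimal illustration with hypothetical names and data structures; the actual algorithm and its schedulability test are defined in the paper and mechanized in Uppaal):

```python
from dataclasses import dataclass

@dataclass
class Job:
    task_id: int
    hc: bool            # belongs to a high-criticality task?
    lc_budget: float    # execution budget assumed in Normal mode
    hi_budget: float    # pessimistic HI-mode budget
    mode: str = "LO"    # per-job mode (task-level switch)

def on_budget_overrun(job, system, can_accommodate):
    """Task-level switch first; fall back to a system-level switch.

    `can_accommodate` is a placeholder schedulability test that checks
    whether the job's HI budget fits without degrading LC tasks.
    """
    if not job.hc:
        return
    job.mode = "HI"                      # task-level mode switch
    if not can_accommodate(job, system):
        system["mode"] = "Critical"      # system-level mode switch
        # degrade LC tasks by stretching their periods until the
        # HI-mode job that triggered Critical mode completes
        for t in system["lc_tasks"]:
            t["period"] *= system["stretch_factor"]
```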

    Partitioned Scheduling of Multi-Modal Mixed-Criticality Real-Time Systems on Multiprocessor Platforms

    Real-time systems are becoming increasingly complex. A modern car, for example, requires a multitude of control tasks, such as braking, active suspension, and collision avoidance. These tasks not only exhibit different degrees of safety criticality but also change their criticalities as the driving mode changes. For instance, the suspension task is a critical part of the stability of the car at high speed, but it is only a comfort feature at low speed. Therefore, it is crucial to ensure timing guarantees for the system with respect to the tasks’ criticalities, not only within each mode but also during mode changes. This paper presents a partitioned multiprocessor scheduling scheme for multi-modal mixed-criticality real-time systems. Our scheme consists of a packing algorithm and a scheduling algorithm for each processor that take into account both mode changes and criticalities. The packing algorithm maximizes the schedulable utilization across modes using the sustained criticality of each task, which captures the overall criticality of the task across modes. The scheduling algorithm combines Rate-Monotonic scheduling with a mode transition enforcement mechanism that relies on the transitional zero-slack instants of tasks to control low-criticality tasks during mode changes, so as to preserve the schedulability of high-criticality tasks. We also present an implementation of our scheduler in the Linux operating system, as well as an experimental evaluation to illustrate its practicality. Our evaluation shows that our scheme can provide close to twice as much tolerance to overloads (ductility) as a mode-agnostic scheme.
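
As an illustration of the packing step, a simple criticality-aware greedy pass might look like the sketch below (illustrative only; `sustained_criticality` and `fits` stand in for the paper's definitions and schedulability test):

```python
def pack(tasks, num_procs, sustained_criticality, fits):
    """Greedy partitioning: place the most critical tasks first and
    prefer the processor with the most remaining room (worst-fit).

    `fits(proc_tasks, task)` is a placeholder schedulability test that
    must account for every mode the task participates in.
    """
    procs = [[] for _ in range(num_procs)]
    for task in sorted(tasks, key=sustained_criticality, reverse=True):
        # try processors with the lowest current utilization first
        for proc in sorted(procs, key=lambda p: sum(t["util"] for t in p)):
            if fits(proc, task):
                proc.append(task)
                break
        else:
            raise ValueError(f"unschedulable task set: {task['name']}")
    return procs
```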

    The Design, Analysis, & Application Of Multi-Modal Real-Time Embedded Systems

    For many hand-held computing devices (e.g., smartphones), multiple operational modes are preferred because of their flexibility. In addition to their designated purposes, some of these devices provide a platform for different types of services, including the rendering of high-quality multimedia. On such devices, temporal isolation among co-executing applications is very important to ensure that each application receives an acceptable level of quality of service. In order to provide strong guarantees on services, multimedia applications and real-time control systems maintain timing constraints in the form of deadlines for recurring tasks. A flexible real-time multi-modal system will ideally give system designers the option to change both resource-level modes and application-level modes. Existing schedulability analyses for real-time multi-modal systems (MMS) with software/hardware modes are computationally intractable. In addition, a fast schedulability analysis is desirable in a design-space exploration that determines the best parameters of a multi-modal system. The thesis of this dissertation is: the determination of resource parameters with guaranteed schedulability for real-time systems that may change computational requirements over time is expensive in terms of runtime; however, decoupling schedulability analysis from determining the minimum processing resource parameters of a real-time multi-modal system results in pseudo-polynomial complexity for the combined goals of determining MMS schedulability and optimal resource parameters. Effective schedulability analysis and optimized resource usage are essential for an MMS that may co-execute with other applications, in order to reduce the size and cost of an embedded system. Traditional real-time systems research has addressed the issues of schedulability under mode changes and temporal isolation separately and independently. For instance, schedulability analysis of real-time multi-mode systems has commonly assumed that the system executes on a dedicated platform. On the other hand, research on temporal isolation in real-time scheduling has often assumed that the application and resource requirements of each subsystem are fixed at runtime. Only recently have researchers started to address the problem of guaranteeing hard deadlines of temporally-isolated subsystems for multi-modal systems. However, most of this research suffers from two fundamental drawbacks: 1) full support for resource- and application-level mode changes does not exist, and/or 2) determining schedulability for such systems has exponential-time complexity. As a result, the current literature cannot guarantee optimal resource usage for multi-modal systems. In this dissertation, we address these two fundamental drawbacks by providing a theoretical framework and an associated tractable schedulability analysis for hard-real-time multi-modal subsystems. Then, by leveraging the schedulability analysis, we address the problem of optimizing a multi-modal system with respect to resource usage. To accelerate the schedulability analysis, we develop a parallel algorithm using the Message Passing Interface (MPI) to check the invariants of a schedulable real-time MMS. This parallel algorithm significantly improves the execution time for checking schedulability (e.g., our parallel algorithm requires only approximately 45 minutes to analyze a 16-mode system on 8 cores, whereas the analysis takes 9 hours when executed on a single core).
However, even this reduction is still expensive for techniques such as design-space exploration (DSE) that repeatedly apply schedulability analysis to determine the optimal system resource parameters. Today's massively parallel GPU platforms can be a cost-effective alternative for scaling the number of compute nodes and further reducing the computation time. An efficient GPU-based schedulability analysis can also be used online to reconfigure the system by re-evaluating schedulability when parameters change dynamically. In this dissertation, we also extend our parallel schedulability analysis algorithm to a GPU. Finally, we performed a case study of a radar-assisted cruise control system to show the usability of a multi-modal system consisting of fixed-priority non-preemptive tasks.
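
The MPI parallelization described above could, in spirit, look like the following sketch using mpi4py (hypothetical `check_invariant` helper and configuration list; the dissertation's actual invariants and data structures differ):

```python
from mpi4py import MPI

def parallel_schedulability(configurations, check_invariant):
    """Split the mode-configuration space across MPI ranks; the MMS is
    deemed schedulable only if the invariant holds in every configuration.
    """
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    # each rank checks a strided slice of the configuration space
    local_ok = all(check_invariant(cfg) for cfg in configurations[rank::size])
    # combine the per-rank verdicts on every rank
    return all(comm.allgather(local_ok))
```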

    Design of the Splash Programming Language Supporting Real-Time Stream Processing and Sensor Fusion for Autonomous Machines

    Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, College of Engineering, February 2020. Seongsoo Hong. Autonomous machines have begun to be widely used in various application domains due to recent remarkable advances in machine intelligence. As these autonomous machines are equipped with diverse sensors, multicore processors and distributed computing nodes, the complexity of the underlying software platform is increasing at a rapid pace, overwhelming developers with implementation details. This leads to a demand for a new programming framework with an easy-to-use programming abstraction. In this thesis, we present a graphical programming framework named Splash that explicitly addresses the programming challenges that arise during the development of an autonomous machine. We set four design goals to address these challenges. First, Splash should provide an easy-to-use, effective programming abstraction. Second, it must support real-time stream processing for deep-learning based machine intelligence. Third, it must provide programming support for the real-time control systems of autonomous machines, such as sensor fusion and mode change. Finally, it should support performance optimization of software systems running on a heterogeneous multicore distributed computing platform. Splash allows programmers to specify genuine, end-to-end timing constraints. It also provides a best-effort runtime system that tries to meet the annotated timing constraints, together with exception handling mechanisms that monitor violations of such constraints. To implement these runtime mechanisms, Splash provides two underlying timing semantics: (1) an abstract global clock that is shared by the machines in the distributed system, and (2) a birthmark that programmers can write on every stream data item. Splash offers a multithreaded process model to support concurrent programming. In this model, a programmer can write a multithreaded program using Splash threads, which we call sthreads. An sthread is a logical entity of independent execution. In addition, Splash provides a language construct named build unit that allows programmers to allocate sthreads to the processes and threads of an underlying operating system. Splash provides three additional language semantics to support real-time stream processing and real-time control systems. First, it provides rate control semantics to solve the problems of uncontrolled jitter and unbounded FIFO queues caused by variability in communication delay and execution time. Second, it supports fusion semantics to handle timing issues caused by asynchronous sensors in the system. Finally, it provides mode change semantics to meet the varying requirements of real-time control systems. In this thesis, we describe each of these language semantics and the runtime mechanisms that realize them in detail. To show the utility of our framework, we have written a lane keeping assist system (LKAS) in Splash as an example. We evaluated rate control, sensor fusion, mode change and build unit-based allocation. First, using the rate controller, the jitter was reduced from 30.61 milliseconds to 1.66 milliseconds. Also, the average lateral deviation and heading angle were reduced from 0.180 meters to 0.016 meters and from 0.043 rad to 0.008 rad, respectively. Second, we showed that the fusion operator works as intended, with a run-time overhead of only 7 microseconds on average. Third, the mode change mechanism operated correctly and incurred a run-time overhead of only 0.53 milliseconds.
Finally, as we increased the number of build units from 1 to 8, the average end-to-end latency increased from 75.79 microseconds to 2022.96 microseconds. These results show that the language semantics and runtime mechanisms proposed in this thesis are designed and implemented correctly, and that Splash can be used to effectively develop applications for an autonomous machine.
Thanks to the rapid advances in deep-learning based machine intelligence, autonomous machines are being used in a wide range of fields. Because these devices carry diverse sensors, multicore processors and distributed computing nodes, the complexity of the underlying software platforms that support them is growing rapidly. Accordingly, there is a growing need for a programming framework that lets developers handle this complex software structure effectively. This dissertation proposes Splash, a graphical programming framework for solving the problems that arise in the development of autonomous machines. The name Splash is formed from the initial letters of the first three words of "stream processing language for autonomous machine", and expresses the intent to develop a programming language and runtime system for handling stream data that flows like water. To handle this complex software structure effectively, we set four design goals. First, Splash must hide detailed implementation issues from developers and provide an easy-to-use programming abstraction. Second, Splash must support real-time stream processing for machine intelligence. Third, Splash must support features widely used in real-time control systems, such as sensor fusion, mode change and exception handling. Fourth, Splash must support performance optimization of software systems running on heterogeneous multicore distributed computing platforms. For real-time stream processing, Splash lets developers specify genuine end-to-end timing constraints in their programs. It also provides a best-effort runtime system that recognizes the specified timing constraints and tries its best to meet them, together with exception handling mechanisms that monitor and handle violations of those constraints. To implement these runtime mechanisms, Splash provides two basic timing semantics: first, a global time base that all machines in the distributed system can share; second, a birthmark that is recorded on every stream data item entering Splash. Splash offers a multithreaded processing model for concurrent programming. A Splash programmer develops programs using sthreads, logical units of execution, and Splash provides a language construct called a build unit to help allocate sthreads to the processes and threads of the actual operating system. On top of the timing semantics and the multithreaded processing model, Splash additionally supports three language semantics for real-time stream processing and real-time control systems. The first is rate control semantics, which solves the jitter and unbounded queue problems caused by communication and processing delays of stream data. The second is fusion semantics, which resolves the timing issues caused by temporally unsynchronized sensor inputs during sensor fusion. The last is mode change semantics, which supports changing the execution logic to satisfy the varying requirements of control systems. This dissertation describes each language semantics in detail and designs and implements the runtime mechanisms that realize them. To validate the utility of Splash, we developed an LKAS application in Splash and ran experiments on the Splash runtime system, evaluating the rate control mechanism, the sensor fusion mechanism, the mode change mechanism and build unit-based allocation using selected performance metrics. First, with the Splash rate controller, jitter was reduced from 30.61 ms to 1.66 ms, which improved the vehicle's lateral deviation and heading angle from 0.180 m to 0.016 m and from 0.043 rad to 0.008 rad, respectively. Second, the proposed fusion operator behaved as designed and incurred an average overhead of only 7 us. Third, the mode change feature was verified to operate correctly, with an average timing overhead of only 0.53 ms. Finally, for a synthetic workload, as the number of build units mapped to the components was increased to 1, 2, 4 and 8, the average end-to-end latency increased to 75.79 us, 330.80 us, 591.87 us and 2022.96 us. These results show that the language semantics and runtime mechanisms proposed in this dissertation are designed and implemented as intended, and that applications for autonomous machines can be developed effectively with them.
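
As an illustration of the rate-control semantics, the toy shaper below releases buffered stream items no faster than a fixed period and bounds the FIFO, which is the effect described above (a sketch only, not the actual Splash runtime):

```python
import time
from collections import deque

class RateController:
    """Emit at most one item per `period_s`, dropping the oldest items if
    the FIFO would otherwise grow without bound.
    """
    def __init__(self, period_s, max_queue=16):
        self.period_s = period_s
        self.queue = deque(maxlen=max_queue)   # bounded FIFO
        self.next_release = time.monotonic()

    def push(self, item):
        self.queue.append(item)

    def pull(self):
        now = time.monotonic()
        if not self.queue or now < self.next_release:
            return None                        # nothing released yet
        self.next_release = max(self.next_release + self.period_s, now)
        return self.queue.popleft()
```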

    A Hierarchical Scheduling Model for Dynamic Soft-Realtime System

    We present a new hierarchical approximation and scheduling approach for applications and tasks with multiple modes on a single processor. Our model allows for a temporal and spatial distribution of the feasibility problem for a variable set of tasks with non-deterministic and fluctuating costs at runtime. In case of overloads, an optimal degradation strategy selects one of several application modes or even temporarily deactivates applications. Hence, transient and permanent bottlenecks can be overcome with an optimal, dynamically decided system quality. This paper gives the first comprehensive and complete overview of all aspects of our research, including a novel CBS concept to confine entire applications, an evaluation of our system using a video-on-demand application, an outline for adding further resource dimensions, and aspects of our prototype implementation based on the RTSJ.
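
A minimal sketch of the kind of degradation decision described above: pick one mode per application (or switch the application off) so that total utilization fits the processor and total quality is maximal (a brute-force toy; the paper's strategy decides this optimally and dynamically at runtime):

```python
from itertools import product

def select_modes(apps, capacity=1.0):
    """Pick one mode per application, modeling deactivation as a
    (quality 0, utilization 0) mode, so total utilization fits `capacity`
    and total quality is maximal. Brute force is fine for a handful of apps.

    `apps` maps an app name to a list of (quality, utilization) modes.
    """
    names = list(apps)
    choices = [apps[n] + [(0.0, 0.0)] for n in names]   # last entry = off
    best, best_quality = None, -1.0
    for combo in product(*choices):
        util = sum(u for _, u in combo)
        quality = sum(q for q, _ in combo)
        if util <= capacity and quality > best_quality:
            best, best_quality = dict(zip(names, combo)), quality
    return best
```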

    Transient force atomic force microscopy: systems approaches to emerging applications

    In existing dynamic-mode operation of Atomic Force Microscopes (AFMs), steady-state signals such as amplitude and phase are used for the detection and imaging of material. Due to the high quality factor of the cantilever probe, the corresponding methods are inherently slow. In this dissertation, a novel methodology for fast interrogation of material that exploits the transient part of the cantilever motion is developed. This method effectively addresses the perceived fundamental limitation on bandwidth due to high quality factors and is particularly suited for the detection of small-time-scale tip-sample interactions. Analysis and experiments show that the method yields a significant increase in bandwidth and resolution compared to the steady-state-based methods.
In atomic force microscopy, bandwidth or resolution can be affected by active quality factor (Q) control. However, in existing methods the trade-off between resolution and bandwidth remains inherent. An observer-based Q control method provides greater flexibility in managing this trade-off during imaging. It also facilitates theoretical analysis lacking in existing methods.
In this dissertation we also develop a method for exact constructive controllability of quantum-mechanical systems. The method has three steps: first, a path from the initial state to the final state is determined and intermediate points are chosen such that any two consecutive points are close; next, small sinusoidal control signals are used to drive the system between the points; and finally, a quantum measurement technique is used to exactly achieve the desired state. The methodology is demonstrated for the control of spin-half particles in a Stern-Gerlach setting.
Finally, a novel closed-loop real-time scheduling algorithm is developed based on dynamic estimation of task execution times, using both the deadline-miss ratio and the task-rejection ratio in the system. This approach is highly preferable for firm/soft real-time systems since it provides a firm performance guarantee in terms of a high guarantee ratio. A proportional-integral controller and an H-infinity controller are designed for closed-loop scheduling. Simulation studies showed that closed-loop dynamic scheduling outperforms open-loop scheduling under all practical conditions.
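
The closed-loop scheduling idea in the last paragraph can be illustrated with a textbook proportional-integral update that adjusts the admitted load from the observed deadline-miss ratio (illustrative gains and variable names, not the dissertation's controller design):

```python
class PISchedulerController:
    """Drive the deadline-miss ratio toward a setpoint by scaling the
    admitted utilization: more misses -> admit less work, and vice versa.
    """
    def __init__(self, miss_setpoint=0.02, kp=0.5, ki=0.1):
        self.miss_setpoint = miss_setpoint
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, measured_miss_ratio, admitted_util):
        error = self.miss_setpoint - measured_miss_ratio
        self.integral += error
        correction = self.kp * error + self.ki * self.integral
        # clamp to a sane range; work beyond this is rejected, which
        # raises the task-rejection ratio instead of causing misses
        return min(1.0, max(0.1, admitted_util + correction))
```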

    Delay Bound: Fractal Traffic Passes through Network Servers

    Delay analysis plays a key role in real-time systems in computer communication networks. This paper presents our results on the delay analysis of fractal traffic passing through servers. There are three contributions. First, we explain why the conventional theory of queuing systems ceases to hold in the general case when the arrival traffic is fractal. Then, we propose a concise method of delay computation for hard real-time systems. Finally, the delay computation for fractal traffic passing through servers is presented.
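
For context, a standard way to state such a delay bound in the network-calculus style is the maximal horizontal deviation between an arrival curve $\alpha$ and a service curve $\beta$; the paper's own bound for fractal (self-similar) traffic may take a different form:

```latex
D \;\le\; h(\alpha,\beta) \;=\; \sup_{t \ge 0}\, \inf\bigl\{\, \tau \ge 0 \;:\; \alpha(t) \le \beta(t+\tau) \,\bigr\}
```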

    Performance Analysis for Multi-Core Multi-Mode Systems with Shared Resources: Methods and Application to AUTOSAR

    In order to implement multi-core systems for single-mode and multi-mode real-time applications, as found in modern automobiles, the development process requires appropriate methods and tools for timing and performance verification. In this context, this thesis first proposes novel approaches for the analysis of worst-case blocking times and response times for single-mode real-time applications that share resources in partitioned multi-core systems. For this purpose, a compositional performance analysis methodology is adopted and extended to take into account the contention of tasks on the processor cores and on the shared resources under different combinations of processor scheduling policies and shared-resource arbitration strategies. Highly relevant is the compatibility of the proposed analysis methods with the specifications of the automotive AUTOSAR standard, which defines the combination of (1) preemptive, non-preemptive and cooperative core-local scheduling with (2) lock-based arbitration of core-local shared resources and spinlock-based arbitration of inter-core shared resources. Further, this thesis proposes novel timing analysis solutions for multi-mode distributed real-time systems. For such systems, the settling time of a mode change, called the mode change transition latency, is identified as an important system parameter that has previously been neglected. This thesis contributes a novel analysis algorithm which gives a maximum bound on each mode change transition latency of multi-mode distributed applications. Knowing the settling time of each mode change, the impact of multiple mode changes and of possible overload situations can be handled in the early development phases of real-time systems. Finally, an approach for safely handling shared resources across mode changes is presented and a corresponding timing analysis method is contributed. The new analysis solution combines modeling and analysis elements of the multi-core and multi-mode related analysis solutions and focuses on the specification of the AUTOSAR standard. This enables system designers to handle the timing behavior of more complex systems in which the problems of mode management, multi-core scheduling and shared-resource arbitration coexist. The applicability and usefulness of the contributed analysis solutions are highlighted by experimental evaluations, which are enabled by the implementation of the proposed analysis methods in a performance analysis tool framework.
To be able to use multi-core systems for time-critical single-mode and multi-mode applications in safety-critical environments, the development process requires suitable analysis methods and tools for determining timing behavior and performance. As the first contribution of this dissertation, new analysis methods are introduced to determine the worst-case response times and blocking times of static real-time applications in single-mode embedded multi-core systems with shared resources. The developed methods build on an existing compositional performance analysis approach and extend it to handle different combinations of partitioned multiprocessor scheduling policies and synchronization mechanisms. Of particular practical relevance is the ability to analyze the combination of (1) preemptive, non-preemptive and cooperative processor scheduling with (2) spinlock-based synchronization mechanisms, as standardized today in AUTOSAR-compliant automotive software architectures. As the second contribution, this dissertation introduces a new approach for analyzing the timing effects of multiple mode transitions in networked multi-mode embedded systems. The method presented in this work enables the computation of the settling time of each mode transition and thereby provides important support for system design. In this way, the effects of mode transitions, including temporary overload situations, can be controlled and incorporated early into the system design. As the final contribution of this dissertation, an approach for handling access conflicts on shared resources in multi-mode embedded multi-core systems is presented and a corresponding analysis method is introduced. The new analysis combines modeling and analysis elements of the previously introduced approaches and enables the investigation of the worst-case timing behavior of far more complex embedded multi-core systems, again taking the specifications of the AUTOSAR standard into account. Finally, all analysis methods are implemented in a tool framework and applied in various experiments that highlight their practical applicability.
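
A hedged sketch of one quantity such an analysis bounds: under FIFO spin locks, a single request to an inter-core shared resource spins for at most the longest critical section of every other core that uses the same resource (a classical simplification; the thesis's AUTOSAR-specific analysis is considerably more detailed):

```python
def max_spin_delay(resource, requesting_core, cs_lengths):
    """Bound the spin time of one request to `resource` issued from
    `requesting_core` under FIFO spin locks: at most one (longest)
    critical section per remote core that accesses the same resource.

    `cs_lengths[core][res]` is the worst-case critical-section length
    with which `core` holds resource `res`.
    """
    return sum(
        sections[resource]
        for core, sections in cs_lengths.items()
        if core != requesting_core and resource in sections
    )
```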

    Optimisation in multi-mode systems

    We study cost optimisation in multi-mode systems with discrete costs. We first solve the problem in one dimension and then study it in multiple dimensions. As a motivating example, we study temperature control in buildings using a heating, ventilation and air-conditioning (HVAC) system while paying as little as possible. By optimising the behaviour of HVAC systems, a lot of energy can be saved. We are interested in finding optimal solutions as well as approximate solutions with guarantees.
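
A toy sketch of the one-dimensional, discrete-cost setting: choose an HVAC mode per time step so the temperature stays within a comfort band at minimal total cost (hypothetical dynamics and costs; the paper treats the general problem, its multi-dimensional extension, and approximations with guarantees):

```python
def cheapest_schedule(horizon, temps, modes, start, band):
    """Dynamic program over (time step, discretized temperature).

    `modes` maps a mode name to (cost_per_step, temperature_delta);
    `temps` is the set of allowed discretized temperatures;
    `band` is the (low, high) comfort interval that must be respected.
    Returns (total cost, list of chosen modes).
    """
    INF = float("inf")
    lo, hi = band
    best = {start: (0.0, [])}                  # temperature -> (cost, plan)
    for _ in range(horizon):
        nxt = {}
        for temp, (cost, plan) in best.items():
            for name, (step_cost, delta) in modes.items():
                t2 = temp + delta
                if not (lo <= t2 <= hi) or t2 not in temps:
                    continue                   # violates comfort band / grid
                c2 = cost + step_cost
                if c2 < nxt.get(t2, (INF, []))[0]:
                    nxt[t2] = (c2, plan + [name])
        best = nxt
    return min(best.values(), key=lambda cp: cp[0], default=(INF, []))
```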