
    Joint Time- and Event-Triggered Scheduling in the Linux Kernel

    There is increasing interest in using Linux in the real-time domain due to the emergence of cloud and edge computing, the need to decrease costs, and the growing number of complex functional and non-functional requirements of real-time applications. Linux presents a valuable opportunity: it has rich hardware support, an open-source development model, and a well-established programming environment, and it avoids vendor lock-in. Although Linux was initially developed as a general-purpose operating system, real-time capabilities have been added to the kernel over many years to increase its predictability and reduce its scheduling latency. Unfortunately, Linux currently has no support for time-triggered (TT) scheduling, which is widely used in the safety-critical domain for its determinism, low run-time scheduling latency, and strong isolation properties. We present an enhancement of the Linux scheduler as a new low-overhead TT scheduling class to support offline table-driven scheduling of tasks on multicore Linux nodes. Inspired by the slot-shifting algorithm, we complement the new scheduling class with a low-overhead slot-shifting manager running on a non-time-triggered core, which provides guaranteed execution time to real-time aperiodic tasks by using the slack of the time-triggered tasks and avoids high-overhead table regeneration when adding new periodic tasks. Furthermore, we evaluate our implementation on server-grade hardware with an Intel Xeon Scalable Processor. (To appear in the Operating Systems Platforms for Embedded Real-Time Applications (OSPERT) workshop 2023, co-hosted with the 35th Euromicro Conference on Real-Time Systems.)
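
    A minimal sketch of the slot-shifting admission idea described above, under assumed structures: an offline table divides the timeline into intervals, each with a known amount of slack not claimed by time-triggered jobs, and an aperiodic task is admitted only if enough slack accumulates before its deadline, with no table regeneration. The names and data layout are illustrative, not the authors' kernel implementation.

        from dataclasses import dataclass

        @dataclass
        class Interval:
            end: int    # interval end time, in slots
            slack: int  # spare slots in this interval not claimed by TT jobs

        def admit_aperiodic(intervals: list[Interval], wcet: int, deadline: int) -> bool:
            """Accept an aperiodic job iff enough TT slack exists before its deadline."""
            available = 0
            for iv in intervals:
                if iv.end > deadline:
                    break
                available += iv.slack
            return available >= wcet

        table = [Interval(end=10, slack=3), Interval(end=20, slack=2)]
        print(admit_aperiodic(table, wcet=4, deadline=20))  # True: 5 slack slots available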

    Unsupervised feature selection for sensor time-series in pervasive computing applications

    The paper introduces an efficient feature selection approach for multivariate time-series of heterogeneous sensor data within a pervasive computing scenario. An iterative filtering procedure is devised to reduce information redundancy measured in terms of time-series cross-correlation. The algorithm is capable of identifying non-redundant sensor sources in an unsupervised fashion, even in the presence of a large proportion of noisy features. In particular, the proposed feature selection process does not require expert intervention to determine the number of selected features, which is a key advancement with respect to time-series filters in the literature. This characteristic allows learning systems in pervasive computing applications to be enriched with a fully automated feature selection mechanism that can be triggered and performed at run time during system operation. A comparative experimental analysis on real-world data from three pervasive computing applications is provided, showing that the algorithm addresses major limitations of unsupervised filters in the literature when dealing with sensor time-series. Specifically, an assessment is presented both in terms of reduction of time-series redundancy and in terms of preservation of informative features with respect to associated supervised learning tasks.
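
    The iterative filter lends itself to a compact sketch. The version below is a simplification: it repeatedly drops one series from the most correlated pair until no retained pair exceeds a redundancy threshold, using Pearson correlation at lag zero as a stand-in for the paper's cross-correlation measure, and a fixed threshold in place of the paper's automatic stopping rule.

        import numpy as np

        def select_features(series: np.ndarray, threshold: float = 0.95) -> list[int]:
            """series: (n_features, n_samples). Returns indices of retained series."""
            corr = np.abs(np.corrcoef(series))
            np.fill_diagonal(corr, 0.0)
            keep = list(range(series.shape[0]))
            while len(keep) > 1:
                sub = corr[np.ix_(keep, keep)]
                i, j = np.unravel_index(np.argmax(sub), sub.shape)
                if sub[i, j] < threshold:
                    break
                # drop the member of the most redundant pair that is, on average,
                # more correlated with the rest of the retained set
                drop = keep[i] if sub[i].mean() >= sub[j].mean() else keep[j]
                keep.remove(drop)
            return keep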

    Rethinking Sampled-Data Control for Unmanned Aircraft Systems

    Unmanned aircraft systems are expected to provide increasingly varied functionalities and outstanding application performance while utilizing the available resources. In this paper, we explore the recent advances and challenges at the intersection of real-time computing and control and show how rethinking sampling strategies can improve performance and resource utilization. We showcase a novel design framework, cyber-physical co-regulation, which can efficiently link together computational and physical characteristics of the system, increasing robust performance and avoiding the pitfalls of event-triggered sampling strategies. A comparison experiment of different sampling and control strategies was conducted and analyzed. We demonstrate that co-regulation has resource savings similar to event-triggered sampling but maintains the robustness of traditional fixed-periodic sampling, forming a compelling alternative to traditional vehicle control design.
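
    As a rough illustration of the co-regulation idea, the sketch below adjusts the sampling period and the controller gain together as a function of tracking error, so the loop consumes CPU only when the physical state demands it. The adjustment law and all constants are assumptions for illustration, not the framework from the paper.

        def coregulate(error: float, period: float,
                       p_min: float = 0.01, p_max: float = 0.1) -> tuple[float, float]:
            """Return (next_period, gain) for the next control cycle."""
            if abs(error) > 0.5:
                period = max(p_min, period * 0.8)  # tighten sampling under large error
            else:
                period = min(p_max, period * 1.1)  # relax sampling as the plant settles
            gain = 2.0 * (p_max / period) ** 0.5   # rescale gain with the sampling rate
            return period, gain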

    ATMP: An Adaptive Tolerance-based Mixed-criticality Protocol for Multi-core Systems

    The challenge of mixed-criticality scheduling is to keep tasks of higher criticality running in case of resource shortages caused by faults. Traditionally, mixed-criticality scheduling has focused on methods to handle faults where tasks overrun their optimistic worst-case execution time (WCET) estimate. In this paper we present the Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), which generalises the concept of mixed-criticality scheduling to also handle faults of another nature, such as the failure of cores in a multi-core system. ATMP is an adaptation method triggered by resource shortage at runtime. The first step of ATMP is to re-partition the tasks to the available cores, and the second step is to optimise the utility at each core using the tolerance-based real-time computing model (TRTCM). The evaluation shows that the utility optimisation of ATMP can achieve a smoother degradation of service compared to simply abandoning tasks.
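
    A hedged sketch of the two ATMP steps named above: a greedy re-partition of tasks onto the surviving cores, followed by per-core service degradation instead of abandoning tasks. The uniform rate-scaling used for the second step is an illustrative stand-in for the TRTCM utility optimisation, and the task set is made up.

        def repartition(tasks: dict[str, float], cores: list[str]):
            """Worst-fit: assign heaviest tasks first to the least-loaded core."""
            load = {c: 0.0 for c in cores}
            assignment = {}
            for name, util in sorted(tasks.items(), key=lambda t: -t[1]):
                core = min(load, key=load.get)
                assignment[name] = core
                load[core] += util
            return assignment, load

        def degrade(load: float, capacity: float = 1.0) -> float:
            """Rate scale factor applied to tasks on an overloaded core."""
            return min(1.0, capacity / load)

        tasks = {"nav": 0.5, "log": 0.3, "cam": 0.4, "ctl": 0.8}
        assignment, load = repartition(tasks, cores=["c0", "c1"])  # core c2 has failed
        for core, l in load.items():
            print(core, "runs at", round(degrade(l), 2), "of nominal rate")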

    Time-Triggered Program Monitoring

    Debugging is an important phase in the embedded software development cycle because of its high proportion of the overall cost of product development. Debugging is difficult for real-time applications, as such programs are time-sensitive and must meet deadlines, often in a resource-constrained environment. A common approach for real-time systems is to monitor the execution instead of stepping through the program, because stepping will usually violate all deadline constraints. We consider a time-triggered approach for program monitoring at runtime: a monitor runs as a separate process in parallel with the application program and samples the program's state periodically to evaluate a set of properties, resulting in bounded and predictable overhead. However, that overhead can still be high, depending on the granularity of the monitoring effort. To reduce it, we instrument the program with markers that allow the monitor to sample less frequently. This leads to the interesting problems of (a) where to place the markers in the code and (b) how to manipulate the markers. While related work investigates the first part, in this work we investigate the second. We study different instrumentation schemes and propose two new schemes based on bitvectors that significantly reduce the overhead of time-triggered execution monitoring. Time-triggered execution monitoring also suffers from several drawbacks: the monitor requires certain synchronization features at the operating-system level, may suffer from various concurrency and synchronization dependencies in a real-time setting, and requires the embedded environment to provide multi-tasking features. To address these problems, we propose a new method called time-triggered self-monitoring, in which the program under inspection is instrumented so that it samples its own state periodically, without requiring assistance from an external monitor or an internal timer. The experimental results show that a time-triggered self-monitored program performs significantly better in terms of execution time, binary code size, and context switches when compared to the same program monitored by an external time-triggered monitor.
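
    The sketch below illustrates, under assumed names and structure, the two mechanisms discussed above: control-flow markers recorded cheaply in a bitvector, and self-monitoring, where the instrumented program samples its own state every N units of work instead of relying on an external monitor process or timer.

        class SelfMonitor:
            def __init__(self, sample_every: int):
                self.sample_every = sample_every
                self.counter = 0
                self.markers = 0  # bitvector of markers seen since the last sample

            def mark(self, bit: int) -> None:
                """Called at instrumented control-flow points; a cheap bit-set."""
                self.markers |= 1 << bit

            def tick(self, state: dict) -> None:
                """Called on a hot code path; samples once the budget elapses."""
                self.counter += 1
                if self.counter >= self.sample_every:
                    self.evaluate(state)
                    self.counter, self.markers = 0, 0

            def evaluate(self, state: dict) -> None:
                # property check over the sampled state and the recorded path bits
                assert state["altitude"] >= 0, f"violation (path=0b{self.markers:b})"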

    Time Triggered Scheduling Analysis for Real-Time Applications on Multicore Platforms

    REACTION 2014, 3rd International Workshop on Real-time and Distributed Computing in Emerging Applications, Rome, Italy, December 2nd, 2014. Scheduling of real-time applications for multicore platforms has become an important research topic. For analyzing the timing satisfaction of real-time tasks, most research in the literature assumes independent tasks. However, industrial applications usually have fully tangled dependencies among the tasks. Independence of the tasks provides a very nice abstraction, whereas dependent structures due to the tangled executions of the tasks are closer to real systems. This paper studies scheduling policies and schedulability analysis based on independent tasks by hiding the execution dependencies behind additional timing parameters. Our scheduling policy relates to the well-known periodic task model, but in contrast, tasks are able to communicate with each other. A feasible task set requires an analysis for each core and for the communication infrastructure, which can be performed individually by decoupling computation from communication in a distributed system. By using a Time-Triggered Constant Phase (TTCP) scheduler, each task receives certain time-slots in the hyper-period of the task set, which ensures a time-predictable communication impact. In this paper, we provide several algorithms to derive the time-slot for each task. Furthermore, we present a fast heuristic algorithm to calculate the time-slot for each task, which is capable of reaching a core utilization of 90% for typical industrial applications. Finally, experiments show the effectiveness of our heuristic and its performance in different settings.
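
    The constant-phase idea admits a small worked example. In the sketch below, each task is given a fixed phase (offset) and then occupies its slots every period, so all of its slots within the hyper-period are known in advance; the greedy first-fit phase search is a simplification standing in for the algorithms and heuristic in the paper.

        from math import lcm

        def ttcp_table(tasks: dict[str, tuple[int, int]]) -> dict[str, int]:
            """tasks: name -> (period, wcet) in slots. Returns name -> phase."""
            hyper = lcm(*(p for p, _ in tasks.values()))
            busy = [False] * hyper
            phases = {}
            for name, (period, wcet) in sorted(tasks.items(), key=lambda t: t[1][0]):
                for phase in range(period - wcet + 1):
                    slots = [j * period + phase + k
                             for j in range(hyper // period) for k in range(wcet)]
                    if not any(busy[s] for s in slots):
                        for s in slots:
                            busy[s] = True
                        phases[name] = phase
                        break
                else:
                    raise ValueError(f"no feasible constant phase for {name}")
            return phases

        print(ttcp_table({"a": (4, 1), "b": (8, 2), "c": (8, 3)}))  # {'a': 0, 'b': 1, 'c': 5}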

    The high-rate data challenge: computing for the CBM experiment

    The Compressed Baryonic Matter experiment (CBM) is a next-generation heavy-ion experiment to be operated at the FAIR facility, currently under construction in Darmstadt, Germany. A key feature of CBM is its very high interaction rate, exceeding that of contemporary nuclear collision experiments by several orders of magnitude. Such interaction rates forbid a conventional, hardware-triggered readout; instead, experiment data will stream freely from self-triggered front-end electronics. In order to reduce the huge raw data volume to a recordable rate, data will be selected exclusively on CPU, which necessitates partial event reconstruction in real time. Consequently, the traditional segregation of online and offline software vanishes; an integrated online and offline data processing concept is called for. In this paper, we report on concepts and developments in computing for CBM, as well as on the status of preparations for its first physics run.
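
    A toy sketch of the triggerless-readout idea, with entirely made-up structures (this is not CBM software): time-stamped hits stream in continuously with no hardware trigger, and a software stage groups them into time-slices and keeps only slices passing a selection, standing in for the real-time partial event reconstruction.

        from itertools import groupby

        def online_select(hits, slice_ns: int = 1000, min_hits: int = 3):
            """hits: time-ordered iterable of (timestamp_ns, detector_id) tuples.
            Yields only the time-slices worth recording."""
            for _, grp in groupby(hits, key=lambda h: h[0] // slice_ns):
                slice_hits = list(grp)
                if len(slice_hits) >= min_hits:  # stand-in for reconstruction + selection
                    yield slice_hits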

    Fusion of Real-time Tsunami Simulation and Remote Sensing for Mapping the Impact of Tsunami Disaster

    Bringing together state-of-the-art high-performance computing, remote sensing, and spatial information sciences, we establish a method for real-time tsunami inundation forecasting, damage estimation, and mapping to enhance disaster response. Right after a major (near-field) earthquake is triggered, we perform real-time tsunami inundation forecasting on a high-performance computing platform. Given the maximum flow depth distribution, we quantitatively estimate the exposed population using census data, as well as the numbers of potential deaths and damaged structures by applying tsunami fragility curves. After the potential tsunami-affected areas are estimated, the analysis moves on to the "detection" phase using remote sensing. Recent advances in remote sensing technologies expand the capability to detect the spatial extent of tsunami-affected areas and structural damage. In particular, a semi-automated method to estimate building damage in tsunami-affected areas is developed using optical sensor data and a set of pre- and post-event high-resolution SAR (Synthetic Aperture Radar) data. The method is verified through case studies of the 2011 Tohoku tsunami and other potential tsunami scenarios, and a prototype system is now under development in Kochi Prefecture, one of the at-risk coastal regions facing the Nankai Trough earthquake. In the trial operation, we verify the capability of the method as a new tsunami early warning and response system for stakeholders and responders.
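
    The damage-estimation step lends itself to a worked example. A tsunami fragility curve maps maximum flow depth to a probability of structural damage, commonly modeled as a lognormal CDF; the median and log-standard-deviation below are placeholders, not parameters fitted to the Tohoku data.

        from math import erf, log, sqrt

        def fragility(depth_m: float, median_m: float = 2.0, beta: float = 0.7) -> float:
            """P(damage | max flow depth), as a lognormal CDF."""
            if depth_m <= 0:
                return 0.0
            return 0.5 * (1.0 + erf(log(depth_m / median_m) / (beta * sqrt(2.0))))

        # expected damaged structures in one grid cell of the inundation map
        buildings, depth = 120, 3.5
        print(round(buildings * fragility(depth)))  # ~95 of 120 buildings damaged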