GCSR: A Graphical Language With Algebraic Semantics for the Specification of Real-Time Systems
Graphical Communicating Shared Resources (GCSR) is a formal language for specifying real-time systems, including their functional and resource requirements. A GCSR specification consists of a set of nodes connected by directed, labeled edges, which describe possible execution flows. Nodes represent instantaneous selection among execution flows, or time- and resource-consuming system activities. In addition, a node can represent a system subcomponent, which allows modular, hierarchical, and thus scalable system specifications. Edges are labeled with instantaneous communication actions, or with time to describe the duration of activities in the source node. GCSR supports the explicit representation of resources and of priorities to resolve resource contention. The semantics of GCSR is the Algebra of Communicating Shared Resources, a timed process algebra whose operational semantics makes GCSR specifications executable. Furthermore, the process algebra provides behavioral equivalence relations between GCSR specifications. These equivalence relations can be used to replace a GCSR specification with an equivalent one inside another, and to minimize a GCSR specification in terms of the number of nodes and edges. The paper defines the GCSR language, describes GCSR specification reductions that preserve the specification's behaviors, and illustrates GCSR with example design specifications.
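The node-and-edge structure described above can be sketched as a small data structure. This is an illustrative, non-normative sketch: the node kinds, field names, and the toy specification are assumptions made for this example, not the formal GCSR syntax.

```python
# Illustrative sketch of a GCSR-style specification graph.
# Node kinds and field names are assumptions for this example.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str                 # "choice": instantaneous selection
                              # "activity": time- and resource-consuming
                              # "subcomponent": hierarchical reference
    resources: frozenset = frozenset()   # resources used while active
    priority: int = 0         # resolves contention for shared resources

@dataclass
class Edge:
    src: str
    dst: str
    label: str = ""           # instantaneous communication action
    duration: int = 0         # time spent in the source activity node

class GCSRSpec:
    """A set of nodes connected by directed, labeled edges."""
    def __init__(self):
        self.nodes: dict[str, Node] = {}
        self.edges: list[Edge] = []

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def connect(self, edge: Edge) -> None:
        self.edges.append(edge)

    def successors(self, name: str) -> list[str]:
        return [e.dst for e in self.edges if e.src == name]

# A toy specification: a choice node leading to an activity that holds a CPU.
spec = GCSRSpec()
spec.add(Node("select", kind="choice"))
spec.add(Node("compute", kind="activity",
              resources=frozenset({"cpu"}), priority=1))
spec.connect(Edge("select", "compute", label="start?"))
spec.connect(Edge("compute", "select", duration=3))
```

A reduction that preserves behavior would then rewrite such a graph into one with fewer nodes and edges while keeping it bisimilar under the ACSR semantics.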
The Soundness and Completeness of ACSR (Algebra of Communicating Shared Resources)
Recently, significant progress has been made in the development of timed process algebras for the specification and analysis of real-time systems; one of these is ACSR, the Algebra of Communicating Shared Resources. ACSR supports synchronous timed actions and asynchronous instantaneous events. Timed actions are used to represent the usage of resources and to model the passage of time; events are used to capture synchronization between processes. To specify real systems accurately, ACSR supports a notion of priority that can be used to arbitrate among timed actions competing for the use of resources and among events that are ready for synchronization. Equivalence between ACSR terms is defined in terms of strong bisimulation. The paper contains a set of algebraic laws that are proven sound and complete for finite ACSR agents.
vCAT: Dynamic Cache Management Using CAT Virtualization
This paper presents vCAT, a novel design for dynamic shared cache management on multicore virtualization platforms based on Intel’s Cache Allocation Technology (CAT). Our design achieves strong isolation at both task and VM levels through cache partition virtualization, which works in a way similar to memory virtualization but faces challenges unique to cache and CAT. To demonstrate the feasibility and benefits of our design, we provide a prototype implementation of vCAT, and we present an extensive set of microbenchmarks and performance evaluation results on the PARSEC benchmarks and synthetic workloads, for both static and dynamic allocations. The evaluation results show that (i) vCAT can be implemented with minimal overhead, (ii) it can be used to mitigate shared cache interference, which could otherwise increase task WCET by up to 7.2×, (iii) static management in vCAT can increase system utilization by up to 7× compared to a system without cache management, and (iv) dynamic management substantially outperforms static management in terms of schedulable utilization (an increase of up to 3× in our multi-mode example use case).
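The partitioning primitive underneath CAT is a capacity bitmask (CBM) over cache ways, where hardware requires each mask to be contiguous. The following is a minimal sketch of that bitmask arithmetic only; vCAT's virtualization layer and the hypervisor interface are not modeled, and the function names are illustrative.

```python
# Sketch of Intel CAT-style cache-way partitioning: each class of
# service receives a *contiguous* capacity bitmask (CBM) over the
# cache ways. Only the bitmask arithmetic is shown here.

def contiguous_cbm(first_way: int, n_ways: int) -> int:
    """Bitmask selecting n_ways consecutive cache ways from first_way."""
    return ((1 << n_ways) - 1) << first_way

def partition_ways(total_ways: int, demands: list[int]) -> list[int]:
    """Assign disjoint contiguous way masks, one per demand entry."""
    masks, cursor = [], 0
    for n in demands:
        if cursor + n > total_ways:
            raise ValueError("not enough cache ways")
        masks.append(contiguous_cbm(cursor, n))
        cursor += n
    return masks
```

Because the masks are disjoint, partitions are isolated: two entities whose masks do not overlap never evict each other's cache lines.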
Analysis and Implementation of Global Preemptive Fixed-Priority Scheduling with Dynamic Cache Allocation
We introduce gFPca, a cache-aware global preemptive fixed-priority (FP) scheduling algorithm with dynamic cache allocation for multicore systems, and we present its analysis and implementation. We introduce a new overhead-aware analysis that integrates several novel ideas to safely and tightly account for the cache overhead. Our evaluation shows that the proposed overhead-accounting approach is highly accurate, and that gFPca improves the schedulability of cache-intensive tasksets substantially compared to the cache-agnostic global FP algorithm. Our evaluation also shows that gFPca outperforms the existing cache-aware non-preemptive global FP algorithm in most cases. Through our implementation and empirical evaluation, we demonstrate the feasibility of cache-aware global scheduling with dynamic cache allocation and highlight scenarios in which gFPca is especially useful in practice.
Holistic resource allocation for multicore real-time systems
This paper presents CaM, a holistic cache and memory bandwidth resource allocation strategy for multicore real-time systems. CaM is designed for partitioned scheduling, where tasks are mapped onto cores, and the shared cache and memory bandwidth resources are partitioned among cores to reduce resource interference due to concurrent accesses. Based on our extension of LITMUS^RT with Intel’s Cache Allocation Technology and MemGuard, we present an experimental evaluation of the relationship between the allocation of cache and memory bandwidth resources and a task’s WCET. Our resource allocation strategy exploits this relationship to map tasks onto cores and to compute the resource allocation for each core. By grouping tasks with similar characteristics (in terms of resource demands) onto the same core, it enables the tasks on each core to fully utilize the assigned resources. In addition, based on the tasks’ execution time behaviors with respect to their assigned resources, we can determine a desirable allocation that maximizes schedulability under resource constraints. Extensive evaluations using real-world benchmarks show that CaM offers near-optimal schedulability performance while being highly efficient, and that it substantially outperforms existing solutions.
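The grouping step can be illustrated with a much-simplified sketch: co-locate tasks with similar (cache, memory-bandwidth) demands so each core's partition is well utilized. The function name, tuple layout, and the greedy sort-and-split heuristic are assumptions for this example; CaM's actual strategy also searches over per-core resource budgets to maximize schedulability.

```python
# Simplified sketch of demand-based task grouping: sort tasks by their
# (cache, bandwidth) demand and split them evenly across cores, so each
# core receives tasks with similar resource needs.

def group_by_demand(tasks, n_cores):
    """tasks: list of (name, cache_ways, bandwidth) demand tuples.
    Returns one list of tasks per core, grouped by similar demand."""
    ordered = sorted(tasks, key=lambda t: (t[1], t[2]))
    per_core = -(-len(ordered) // n_cores)        # ceiling division
    return [ordered[i:i + per_core]
            for i in range(0, len(ordered), per_core)]
```

With tasks of similar demand on one core, a single per-core cache and bandwidth budget can fit all of them without over-provisioning for an outlier.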
Process Algebraic Approach to the Schedulability Analysis and Workload Abstraction of Hierarchical Real-Time Systems
Real-time embedded systems have increased in complexity: as microprocessors become more powerful, the software complexity of real-time embedded systems has increased steadily. The requirements for increased functionality and adaptability make the development of real-time embedded software complex and error-prone. Component-based design has been widely accepted as a compositional approach to facilitate the design of complex systems. It provides a means for decomposing a complex system into simpler subsystems and composing the subsystems in a hierarchical manner. A system composed of real-time subsystems with hierarchy is called a hierarchical real-time system.
This paper describes a process algebraic approach to the schedulability analysis of hierarchical real-time systems. To facilitate modeling and analyzing hierarchical real-time systems, we conservatively extend an existing process algebraic theory for the schedulability of real-time systems based on ACSR-VP (Algebra of Communicating Shared Resources with Value-Passing). We explain how to model in ACSR-VP a resource model that may be partitioned for a subsystem. We also introduce a schedulability relation to define the schedulability of hierarchical real-time systems, and show that satisfaction checking of the relation is reducible to deadlock checking in ACSR-VP and can be done automatically with the tool support of VERSA (Verification, Execution and Rewrite System for ACSR). With the schedulability relation, we present algorithms for abstracting real-time system workloads.
Optimizing the Resource Requirements of Hierarchical Scheduling Systems
Compositional reasoning on hierarchical scheduling systems is a well-founded formal method that can construct schedulable and optimal system configurations in a compositional way. However, a compositional framework formulates the resource requirement of a component, called an interface, by assuming that a resource is always supplied by the parent components in the most pessimistic way. For this reason, the component interface demands more resources than the amount that is really sufficient to satisfy the sub-components. We provide two new supply bound functions that give tighter bounds on the resource requirements of individual components. The tighter bounds are calculated by using more information about the scheduling system.
We evaluate our new tighter bounds using a model-based schedulability framework for hierarchical scheduling systems realized as Uppaal models. The timed models are checked using the model checking tools Uppaal and Uppaal SMC, and we compare our results with the state-of-the-art tool CARTS.
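The pessimistic baseline that such tighter bounds improve on is the classic supply bound function of a periodic resource model (Π, Θ), which assumes the worst-case supply pattern from the parent component. A minimal sketch of that baseline (the formula is the standard one from compositional scheduling theory, not this paper's new bounds):

```python
# Classic worst-case supply bound function sbf(t) of a periodic
# resource model (Pi = period, Theta = budget): after an initial
# blackout of up to 2*(Pi - Theta), the model supplies Theta units
# per period. The paper's two new supply bound functions tighten
# this pessimistic baseline using extra scheduling information.
import math

def sbf_periodic(period: float, budget: float, t: float) -> float:
    blackout = period - budget               # longest gap with no supply
    if t < blackout:
        return 0.0
    k = math.floor((t - blackout) / period)  # complete replenishment periods
    return k * budget + max(0.0, t - 2 * blackout - k * period)
```

For example, with Π = 5 and Θ = 2, no supply is guaranteed for intervals shorter than 3 time units, and thereafter supply accumulates at 2 units per period in the worst case.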
Autophagy protein NRBF2 has reduced expression in Alzheimer's brains and modulates memory and amyloid-beta homeostasis in mice
Background: Dysfunctional autophagy is implicated in Alzheimer's disease (AD) pathogenesis. Alterations in the expression of many autophagy-related genes (ATGs) have been reported in AD brains; however, the disparity of the changes confounds the role of autophagy in AD. Methods: To further understand the autophagy alteration in AD brains, we analyzed transcriptomic (RNA-seq) datasets of several brain regions (BA10, BA22, BA36 and BA44 in 223 patients compared to 59 healthy controls) and measured the expression of 130 ATGs. We used autophagy-deficient mouse models to assess the impact of depletion of the identified ATGs on memory, autophagic activity and amyloid-beta (Aβ) production. Results: We observed significant downregulation of multiple components of the two autophagy kinase complexes BECN1-PIK3C3 and ULK1/2-FIP200, specifically in the parahippocampal gyrus (BA36). Most importantly, we demonstrated that deletion of NRBF2, a component of the BECN1-PIK3C3 complex that also associates with the ULK1/2-FIP200 complex, impairs memory in mice, alters long-term potentiation (LTP), reduces autophagy in mouse hippocampus, and promotes Aβ accumulation. Furthermore, AAV-mediated NRBF2 overexpression in the hippocampus not only rescues the impaired autophagy and memory deficits in NRBF2-depleted mice, but also reduces beta-amyloid levels and improves memory in an AD mouse model. Conclusions: Our data not only implicate NRBF2 deficiency as a risk factor for the cognitive impairment associated with AD, but also support NRBF2 as a potential therapeutic target for AD.
Exploiting TTP Co-Occurrence via GloVe-Based Embedding With MITRE ATT&CK Framework
The digital transformation of various systems has brought great convenience to our daily lives, but it has also increased the number of cyberattacks. As cyberattacks have increased, so have the reports analyzing them; MITRE publishes the ATT&CK matrix, which catalogs the tactics and techniques of attacks based on real-world examples. As the flow of attacks has become more understandable through TTP information, researchers have been using it with deep learning models to detect or predict attacks, which makes embedding essential to train such models. In previous studies on embedding TTPs, embedding was limited to simple statistical methods such as one-hot encoding and TF-IDF. Such methods consider neither the order of TTPs nor the conceptual similarity between TTPs, and therefore do not capture the rich information that TTPs contain. In this paper, we propose embedding TTPs with GloVe, a method based on a co-occurrence matrix. To properly evaluate the semantic embedding performance on TTPs, we also propose a measure called Tactic Match Rate (TMR). In the experimental results, 8 of 14 tactics showed a TMR of more than 0.5; in particular, the “TA0007 (Discovery)” tactic showed the highest TMR, 0.87. Correlation analysis shows that a tactic's embedding performance is affected by the frequency of its techniques within the same tactic, with a correlation of up to 0.96. We also experimentally demonstrate that the neutrality of TTPs affects learning performance.
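The statistic GloVe is trained on can be sketched directly for TTP sequences: a distance-weighted co-occurrence matrix over the techniques observed in each report. Only this construction is shown below; GloVe itself then fits a log-bilinear model to these counts. The technique IDs are real ATT&CK-style identifiers, but the example report sequences are invented for illustration.

```python
# Build a distance-weighted TTP co-occurrence matrix, the input
# statistic for GloVe-style embedding. Pairs closer together in a
# report's TTP sequence contribute more weight (1/distance).
from collections import defaultdict

def ttp_cooccurrence(reports, window=2):
    """Count co-occurring TTP pairs within a sliding window,
    weighted by 1/distance as in GloVe's corpus statistics."""
    counts = defaultdict(float)
    for seq in reports:
        for i, ti in enumerate(seq):
            lo, hi = max(0, i - window), min(len(seq), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[(ti, seq[j])] += 1.0 / abs(i - j)
    return counts

# Invented example sequences (recon -> exploit -> execution -> discovery).
reports = [["T1595", "T1190", "T1059", "T1082"],
           ["T1190", "T1059", "T1082"]]
X = ttp_cooccurrence(reports)
```

Because T1190 and T1059 are adjacent in both sequences, their co-occurrence weight is high, which is exactly the order and similarity signal that one-hot encoding and TF-IDF discard.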