28 research outputs found

    Eager Stack Cache Memory Transfers

    The growing complexity of modern computer architectures increasingly complicates the prediction of the run-time behavior of software. For real-time systems, where a safe estimate of a program's worst-case execution time is needed, time-predictable computer architectures promise to resolve this problem. The stack cache, for instance, allows the compiler to efficiently cache a program's stack, while static analysis of its behavior remains easy. This work introduces an optimization of the stack cache that anticipates memory transfers that might be initiated by future stack cache control instructions. These eager memory transfers reduce the average-case latency of those control instructions, much like "prefetching" techniques known from conventional caches. However, the mechanism proposed here is guaranteed to have no impact on the worst-case execution time estimates computed by static analysis. Measurements on a dual-core platform using the Patmos processor and time-division-multiplexing-based memory arbitration show that our technique can eliminate up to 62% (7%) of the memory transfers from (respectively to) the stack cache on average over all programs of the MiBench benchmark suite.
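    To make the idea concrete, the following minimal sketch models a stack cache whose reserve instruction only stalls for blocks that still lack a coherent copy in main memory, and which uses idle TDM memory slots to write such blocks back ahead of time. The cache size, the block-level bookkeeping, and the demo scenario are illustrative assumptions, not the actual Patmos design; the fill side (ensure) is omitted for brevity.

```python
# Sketch of a stack cache with an optional eager-spill policy (assumptions,
# not the exact Patmos parameters). Occupancy and dirtiness are tracked as
# simple block counters; eager spilling and eviction both work bottom-up.

CACHE_SIZE = 8  # capacity in blocks (assumption)

class StackCache:
    def __init__(self, eager=False):
        self.eager = eager
        self.occupancy = 0       # stack blocks currently held in the cache
        self.dirty = 0           # cached blocks without a copy in memory
        self.stall_spills = 0    # blocks spilled on the critical path
        self.eager_spills = 0    # blocks spilled during idle memory slots

    def reserve(self, blocks):
        """sres: reserve space for a new frame, spilling on overflow."""
        clean = self.occupancy - self.dirty          # already written back
        evicted = max(0, self.occupancy + blocks - CACHE_SIZE)
        stalled = max(0, evicted - clean)            # dirty evictions stall
        self.stall_spills += stalled
        self.dirty = self.dirty - stalled + blocks
        self.occupancy = min(self.occupancy + blocks, CACHE_SIZE)

    def free(self, blocks):
        """sfree: release space; never initiates a memory transfer."""
        self.occupancy -= blocks
        self.dirty = min(self.dirty, self.occupancy)

    def idle_slot(self):
        """Unused TDM slot: eagerly write back one dirty block."""
        if self.eager and self.dirty > 0:
            self.dirty -= 1
            self.eager_spills += 1

# With eager spilling the second reserve finds its victims already written
# back and no longer stalls; without it, two blocks spill on the critical path.
for eager in (False, True):
    sc = StackCache(eager)
    sc.reserve(5)
    for _ in range(4):          # four idle memory slots before the next sres
        sc.idle_slot()
    sc.reserve(5)
    print(eager, sc.stall_spills, sc.eager_spills)
```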

    Worst-Case Execution Time Analysis of Predicated Architectures

    The time-predictable design of computer architectures for use in (hard) real-time systems is becoming more and more important due to the increasing complexity of modern computer architectures. The design of predictable processor pipelines recently received considerable attention. The goal is to find a trade-off between predictability and computing power. Branches and jumps are particularly problematic for high-performance processors. For one, branches are executed late in the pipeline. This leads either to high branch penalties (flushing) or to complex software/hardware techniques (branch predictors). Another side effect of branches is that they make it difficult to exploit instruction-level parallelism due to control dependencies. Predicated computer architectures allow a predicate to be attached to the instructions in a program. An instruction is then only executed when the predicate evaluates to true and otherwise behaves like a simple nop instruction. Predicates can thus be used to convert control dependencies into data dependencies, which helps to address both of the aforementioned problems. A downside of predicated instructions is that they complicate precise worst-case execution time (WCET) analysis of programs that use them. Predicated memory accesses, for instance, may or may not have an impact on the processor's cache and thus need to be considered by the cache analysis. Predication potentially affects all analysis phases of a WCET analysis tool. We thus explore a preprocessing step that explicitly unfolds the control-flow graph, which allows us to apply standard analyses that are themselves not aware of predication.
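    The following small Python model (not Patmos assembly syntax) illustrates the predication idea described above: each instruction carries a predicate and commits its effect only when that predicate is true, so an if/else can be if-converted into two predicated moves and the control dependency becomes a data dependency on the predicate.

```python
# Illustrative model of predicated execution: an instruction is a
# (predicate, destination, operation) triple and acts as a nop when
# its predicate is false.

def run(program, regs, preds):
    """Execute a straight-line predicated program over a register file."""
    for pred, dest, op in program:
        if preds[pred]:              # predicate false -> behaves like a nop
            regs[dest] = op(regs)
    return regs

# If-conversion of:  if (a > b) x = a; else x = b;
regs = {"a": 3, "b": 7, "x": 0}
taken = regs["a"] > regs["b"]
preds = {"p": taken, "np": not taken}

program = [
    ("p",  "x", lambda r: r["a"]),   # (p)  x = a
    ("np", "x", lambda r: r["b"]),   # (!p) x = b
]
print(run(program, regs, preds))     # x == 7, without any branch
```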

    Efficient Context Switching for the Stack Cache: Implementation and Analysis

    The design of tailored hardware has proven a successful strategy to reduce the timing-analysis overhead for (hard) real-time systems. The stack cache is an example of such a design that provides good average-case performance while remaining easy to analyze. So far, however, the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved and restored when a task is preempted. We propose (a) an analysis exploiting the simplicity of the stack cache to bound the overhead induced by task preemption and (b) an extension of the design that allows this overhead to be (partially) hidden by virtualizing stack caches.
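    Since the whole occupied part of the cache is saved and restored on preemption, a back-of-the-envelope bound on the context-switch overhead is proportional to the occupancy at the preemption point. The sketch below illustrates this; the block size, burst size, and TDM burst latency are assumptions chosen for illustration only.

```python
# Rough bound on stack cache context-switch overhead: save + restore of the
# blocks occupied at the preemption point. All constants are assumptions.

BLOCK_WORDS = 4          # words per stack cache block (assumption)
WORDS_PER_BURST = 4      # words transferred per memory burst (assumption)
BURST_LATENCY = 21       # cycles per burst under TDM arbitration (assumption)

def preemption_overhead(occupancy_blocks):
    """Cycles to save and later restore `occupancy_blocks` cached blocks."""
    words = occupancy_blocks * BLOCK_WORDS
    bursts = -(-words // WORDS_PER_BURST)   # ceiling division
    return 2 * bursts * BURST_LATENCY       # save + restore

# A task preempted with 6 blocks in its stack cache:
print(preemption_overhead(6))
```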

    Machine Learning Algorithms For Breast Cancer Prediction And Diagnosis

    The number of deaths caused by breast cancer increases every year. It is the most frequent type of cancer and a major cause of death in women worldwide, so any improvement in the prediction and diagnosis of the disease is of capital importance. Consequently, high accuracy in cancer prediction matters for choosing treatment and improving patients' survivability. Machine learning techniques can contribute greatly to the prediction and early diagnosis of breast cancer; the topic has become a research hotspot and these techniques have proven effective. In this study, we applied five machine learning algorithms, Support Vector Machine (SVM), Random Forest, Logistic Regression, Decision Tree (C4.5), and K-Nearest Neighbours (KNN), to the Breast Cancer Wisconsin Diagnostic dataset and then carried out a performance evaluation and comparison of these classifiers. The main objective of this paper is to predict and diagnose breast cancer using machine learning algorithms and to find the most effective one with respect to confusion matrix, accuracy, and precision. We observe that the Support Vector Machine outperformed all other classifiers and achieved the highest accuracy (97.2%). All of the work was done in the Anaconda environment based on the Python programming language and the Scikit-learn library.
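    A minimal Scikit-learn sketch of the comparison described above is shown below, using the Wisconsin diagnostic dataset bundled with the library. The train/test split ratio and the (default) hyperparameters are assumptions, not the paper's exact experimental setup, so the resulting accuracies will differ from the reported 97.2%.

```python
# Compare five classifiers on the Breast Cancer Wisconsin Diagnostic dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Logistic Regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=5000)),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, round(accuracy_score(y_test, pred), 3))
    print(confusion_matrix(y_test, pred))
```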

    Breast Cancer Prediction and Diagnosis through a New Approach based on Majority Voting Ensemble Classifier

    Researchers have extensively used machine learning and data mining methods to build prediction models and classify data in domains such as aviation, computer science, education, finance, and marketing, and particularly in the medical field, where these methods serve as support systems for diagnosis and analysis and help make better decisions. This paper assesses the performance of individual and ensemble machine learning techniques, in terms of accuracy, specificity, sensitivity, and precision, in order to choose the most effective one. The main objective is to determine the best machine learning approach for breast cancer diagnosis and prediction. To this end, we applied the individual algorithms Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Naïve Bayes (NB), Decision Tree (C4.5), and Simple Logistic, as well as the well-known ensemble methods Majority Voting and Random Forest, with 10-fold cross-validation on the Breast Cancer Diagnosis dataset obtained from the UCI Repository. The experimental results show that the Majority Voting ensemble built from the three top classifiers, SVM, K-NN, and Simple Logistic, gives the highest accuracy, 98.1%, with the lowest error rate, 0.01%, and outperforms all individual classifiers. This study demonstrates that the proposed Majority Voting ensemble was the best classification model, with the highest accuracy for breast cancer prediction and diagnosis. All experiments were carried out in the Weka data mining tool.
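    The paper's experiments were run in Weka; purely as an illustration of the majority-voting idea, the Scikit-learn sketch below combines SVM, K-NN, and logistic regression (a stand-in for Weka's Simple Logistic) with hard voting and 10-fold cross-validation. Dataset, hyperparameters, and preprocessing are assumptions, so the score will not match the reported 98.1%.

```python
# Hard-voting ensemble of the three top classifiers, 10-fold cross-validated.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

voting = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("logit", make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=5000))),
    ],
    voting="hard",  # each classifier casts one vote on the predicted label
)

scores = cross_val_score(voting, X, y, cv=10)
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```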

    Timing Analysis for Predictable Architectures

    With the rising complexity of the underlying computer hardware, the analysis of the timing behavior of real-time software is becoming more and more complex and imprecise. Time-predictable computer architectures have thus been proposed to provide hardware support for timing analysis. The goal is to deliver tighter worst-case execution time (WCET) estimates while keeping the analysis overhead minimal. These estimates are typically provided by standalone WCET analysis tools. The emergence of time-predictable architectures is, however, quite recent. While several designs have been introduced, efforts are still needed to assess their effectiveness in actually enhancing worst-case performance. For many time-predictable hardware designs, timing analysis is either non-existent or lacks proper support. Consequently, time-predictable architectures are barely supported in existing WCET analysis tools. The general contribution of this thesis is to help fill this gap and turn some opportunities into concrete advantages. For this, we take an interest in the Patmos processor. The existing support around Patmos allows for an effective exploration of techniques to enhance worst-case performance. The main contributions include: (1) handling of predicated execution in timing analysis, (2) a comparison of the precision of stack cache occupancy analyses, (3) an analysis of preemption costs for the stack cache, (4) preemption mechanisms for the stack cache, and (5) a prefetching-like technique for the stack cache. In addition, we present our WCET analysis tool Odyssey, which implements timing analyses for Patmos.

    A Comparative Study of the Precision of Stack Cache Occupancy Analyses

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference between accesses to regular data and stack data. While loads and stores are guaranteed cache hits, explicit operations are required to manage the stack cache. The (timing) behavior of these operations depends on the cache occupancy, which has to be bounded during timing analysis. The precision of the computed occupancy bounds naturally impacts the precision of the timing analysis. In this work, we compare the precision of stack cache occupancy bounds computed by two different approaches: (1) classical inter-procedural data-flow analysis and (2) a specialized stack cache analysis (SCA). Our evaluation, using the MiBench benchmarks, shows that the SCA technique usually provides more precise occupancy bounds.
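    As a toy illustration of what such an occupancy bound looks like, the sketch below takes per-function reserved frame sizes and a call tree and computes the worst-case occupancy over all call paths, capped at the cache capacity. The frame sizes, call tree, and cache size are invented inputs; neither of the two analyses compared in the paper is reproduced here.

```python
# Toy occupancy bound: occupancy while a function runs is the sum of the
# frames reserved along the call path, saturated at the cache capacity.

CACHE_SIZE = 8  # blocks (assumption)

frame = {"main": 2, "f": 3, "g": 4, "h": 1}              # reserved blocks
calls = {"main": ["f", "g"], "f": ["h"], "g": ["h"], "h": []}

def max_occupancy(func, path_occ=0):
    """Worst-case occupancy over all call paths rooted at `func`."""
    occ = min(CACHE_SIZE, path_occ + frame[func])
    return max([occ] + [max_occupancy(callee, occ) for callee in calls[func]])

print(max_occupancy("main"))   # bound a timing analysis could rely on
```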

    Fuzzy Performance Measurement System for the Maintenance Function

    Maintenance aims to sustain the manufacturing process by ensuring that the production tools are well managed. However, modern production means are more automated and therefore more complex, so they require effective maintenance whose planning and implementation are increasingly difficult. This paper therefore focuses on the elaboration of a maintenance performance measurement system. For an appropriate and reliable performance evaluation, we adopt a multi-level, multi-criteria approach in which every criterion influencing maintenance performance, either directly or indirectly, is specified. Indicators for the specified criteria are then measured using fuzzy logic to overcome limitations due to the lack of available resources and data. For a global maintenance measurement, the elementary measurements are aggregated into a total measurement through a multiple-criteria decision-making method such as MACBETH.
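    Purely as an illustration of the fuzzy scoring and aggregation idea, the sketch below maps raw maintenance indicators to [0, 1] scores with triangular membership functions and combines them with a weighted sum. The indicators, membership parameters, and weights are invented, and the weighted sum is only a simple stand-in for the MACBETH aggregation used in the paper.

```python
# Fuzzy scoring of hypothetical maintenance indicators and a simple
# weighted aggregation (stand-in for a full MCDM method such as MACBETH).

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical elementary indicators: raw value, 'good' profile (a, b, c), weight.
indicators = {
    "MTBF (h)":         (420.0, (200.0, 500.0, 800.0), 0.4),
    "MTTR (h)":         (3.5,   (0.0,   2.0,   8.0),   0.3),
    "Preventive share": (0.6,   (0.3,   0.8,   1.0),   0.3),
}

elementary = {name: triangular(value, *profile)
              for name, (value, profile, _) in indicators.items()}
overall = sum(weight * elementary[name]
              for name, (_, _, weight) in indicators.items())

print(elementary)
print("Aggregated maintenance performance:", round(overall, 3))
```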