Modeling and visualizing networked multi-core embedded software energy consumption
In this report we present a network-level multi-core energy model and a
software development process workflow that allows software developers to
estimate the energy consumption of multi-core embedded programs. This work
focuses on a high performance, cache-less and timing predictable embedded
processor architecture, XS1. Prior modelling work is improved to increase
accuracy, then extended to be parametric with respect to voltage and frequency
scaling (VFS) and then integrated into a larger scale model of a network of
interconnected cores. The modelling is supported by enhancements to an open
source instruction set simulator to provide the first network timing aware
simulations of the target architecture. Simulation based modelling techniques
are combined with methods of results presentation to demonstrate how such work
can be integrated into a software developer's workflow, enabling the developer
to make informed, energy-aware coding decisions. A set of single-threaded,
multi-threaded and multi-core benchmarks is used to exercise and evaluate the
models and to provide use-case examples of how results can be presented and
interpreted. The models all yield accuracy within an average ±5% error
margin.
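The abstract does not reproduce the model itself; as a rough illustration of what a VFS-parametric energy model looks like, the sketch below combines a dynamic term scaling with C·V²·f and a static leakage term. All coefficients are invented placeholders, not XS1 values.

```python
# Hypothetical sketch of a VFS-parametric energy model; coefficients
# (c_eff, i_leak, cpi) are illustrative placeholders, not XS1 values.

def core_energy(n_instructions, volts, freq_hz,
                c_eff=1.0e-9, i_leak=0.002, cpi=1.0):
    """Estimate energy (joules) for one core executing a workload.

    Dynamic energy scales with C*V^2 per switched cycle; static energy
    is leakage current times voltage times runtime, so lowering the
    frequency stretches runtime and grows the static share.
    """
    cycles = n_instructions * cpi
    runtime_s = cycles / freq_hz
    e_dynamic = c_eff * volts ** 2 * cycles   # E_dyn ~ C * V^2 per cycle
    e_static = i_leak * volts * runtime_s     # E_stat = I_leak * V * t
    return e_dynamic + e_static

def network_energy(cores):
    """Total energy for a network of cores, ignoring link costs."""
    return sum(core_energy(**c) for c in cores)
```

A network-level estimate then sums per-core terms; the real model additionally accounts for inter-core communication, which this sketch omits.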
Web-Interface for querying and visualizing Alcoholic Liver Disease Patients’ data from database using GraphQL
Alcoholism is one of the most serious and most common problems faced by modern societies. Approximately 5–10% of the population in European countries abuses alcohol, with prolonged alcohol consumption causing liver fibrosis and cirrhosis (alcoholic liver disease, ALD). Alcoholic liver disease comprises the development of fatty liver, alcoholic hepatitis, and finally cirrhosis of the liver. The early stages of fibrosis and alcoholic hepatitis are asymptomatic, and when the disease finally manifests, the clinical picture is acute. In clinical practice, the diagnosis of ALD is based on the patient's history of alcohol use, symptomatology, and laboratory tests (e.g. liver enzymes, blood pressure, blood glucose). This dissertation aims to create a database for the collection and classification of all laboratory, clinical, and other examinations of patients.
Data searches, plots, and charts are created in real time using GraphQL queries and middleware query caching. The interface design accounts for changing data as well as reuse of the tool with different kinds of data from other tests or experiments, and it can be used on all types of computing systems as it is containerized and responsive. This bioinformatics tool will help physicians and researchers simplify the selection, analysis, and visualization of all data using graphs and diagrams. As a result, the tool eases the day-to-day schedule of physicians and researchers, letting them focus more on the essence of research, i.e. drawing conclusions about the main categories of data that lead patients to alcoholic liver disease, and less on process.
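As an illustration of the kind of real-time querying described, the sketch below builds a GraphQL query for a patient's lab results. The schema (field names such as `patient` and `labResults`, and the patient id) is assumed for illustration, not taken from the thesis.

```python
# Illustrative sketch only: the GraphQL schema below (patient, labResults,
# test field names) is assumed, not taken from the thesis.

import json

def build_patient_query(patient_id, tests):
    """Build a GraphQL query fetching selected lab tests for one patient."""
    fields = "\n      ".join(tests)
    return (
        "query {\n"
        f"  patient(id: {json.dumps(patient_id)}) {{\n"
        "    labResults {\n"
        "      date\n"
        f"      {fields}\n"
        "    }\n"
        "  }\n"
        "}"
    )

# The query string becomes the body of an HTTP POST to the GraphQL endpoint:
query = build_patient_query("ALD-042", ["ast", "alt", "bloodGlucose"])
payload = json.dumps({"query": query})
```

Because the client names exactly the fields it needs, the same endpoint serves different charts without new server-side endpoints, which is what makes the tool reusable across experiments.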
Achieving Accurate Predictions of Future Events Under Hardware Heterogeneity
Heterogeneous hardware is increasingly available in modern systems, and research breakthroughs suggest that heterogeneity will keep increasing in the future. Appropriate utilization of heterogeneity can yield significant gains in performance and power consumption; poor utilization, however, can be detrimental. Intelligent scheduling and resource management is a crucial challenge we must overcome to harvest the full potential of heterogeneous hardware, and its importance grows as systems become larger and include greater hardware diversity. This dissertation presents techniques that aid scheduling and resource management in the presence of heterogeneous hardware by accurately predicting upcoming runtime events. With a proactive and accurate view of the near future, schedulers can utilize the underlying hardware more efficiently and take full advantage of the available benefits. By adapting a majority-element heuristic, this dissertation significantly improves the accuracy of predicting memory addresses about to be accessed, while reducing prediction-related costs by a factor of ten thousand compared to previously proposed predictive approaches. Coupled with novel microarchitectural modifications, accurate address predictions are shown to improve the performance of heterogeneous memory architectures. Machine learning-based performance predictors are further presented, capable of predicting a program's performance when executed on a given general-purpose core. Trained to model the subtleties of the interaction between hardware and software, these predictors generate highly accurate predictions even for cores with different Instruction Set Architectures. Using these performance predictions for job scheduling is shown to improve overall system performance.
The trained predictors are further examined and interpreted to visualize the correlations between features picked up and amplified during training. Finally, this dissertation demonstrates that scheduling algorithms cannot guarantee an optimal schedule in realistic execution scenarios, due to the underlying hardware heterogeneity, the wide range of runtime requirements of software, and the prediction error of performance predictors. In response, deep neural networks are trained to select one scheduling approach from a list of options with varied overheads and correctness guarantees. The approach chosen is the one most likely to return the highest-performance schedule with the lowest overhead, given a particular instance of the job-to-core assignment problem.
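The majority-element heuristic mentioned above is plausibly a variant of the classic Boyer-Moore majority vote; the sketch below applies that vote to address strides in O(1) space. The use of strides (deltas between consecutive addresses) rather than raw addresses is an assumption made here for illustration.

```python
class MajorityPredictor:
    """Boyer-Moore majority vote over observed address strides.

    Keeps one candidate stride and a counter: a matching stride increments
    the counter, a mismatch decrements it, and a zero counter adopts the
    new stride as candidate. If one stride dominates the recent stream,
    the candidate converges to it. Applying the vote to strides rather
    than raw addresses is an assumption of this sketch.
    """

    def __init__(self):
        self.last_addr = None
        self.candidate = None
        self.count = 0

    def observe(self, addr):
        """Feed one memory access into the vote."""
        if self.last_addr is not None:
            delta = addr - self.last_addr
            if self.count == 0:
                self.candidate, self.count = delta, 1
            elif delta == self.candidate:
                self.count += 1
            else:
                self.count -= 1
        self.last_addr = addr

    def predict_next(self):
        """Predicted next address: last address plus the majority stride."""
        if self.last_addr is None or self.candidate is None:
            return None
        return self.last_addr + self.candidate
```

The O(1) state is what makes the cost reduction over table-based predictors plausible: there is no history table to store or search.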
Cache-Aware Virtual Page Management
With contemporary research focusing primarily on benchmark-driven performance evaluation, the study of fundamental memory characteristics has fallen by the wayside.
This thesis presents a systematic study of the expected performance characteristics for contemporary multi-core CPUs.
These characteristics are the primary influence on benchmarking variability and need to be quantified if more accurate benchmark results are desired.
With the aid of a new, highly customizable, micro-benchmark suite, these CPU-specific attributes are evaluated and contrasted.
The benchmark tool provides the framework for accurately measuring instruction throughput and integrates hardware performance counters to gain insight into machine-level caching performance.
Additionally, the Linux operating system's impact on cache utilization is evaluated.
With careful virtual memory management, cache-misses may be reduced, significantly contributing to benchmark result stability.
Finally, a popular cache performance model, stack distance profile, is evaluated with respect to contemporary CPU architectures.
While particularly popular in multi-core contention-aware scheduling projects, modern incarnations of the model fail to account for trends in CPU cache hardware, leading to measurable inaccuracy.
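For readers unfamiliar with the stack distance profile evaluated above, the sketch below computes it the straightforward way: the LRU stack distance of each access in a trace, from which the hit ratio of any fully associative LRU cache size follows directly.

```python
def stack_distance_profile(trace):
    """LRU stack distance for each access in a trace.

    The stack distance of an access is the number of distinct addresses
    touched since the previous access to the same address (infinite for
    cold misses). A fully associative LRU cache of size C hits exactly
    the accesses whose distance is below C.
    """
    stack = []       # LRU stack, most recently used at the front
    profile = []
    for addr in trace:
        if addr in stack:
            depth = stack.index(addr)
            profile.append(depth)
            stack.pop(depth)
        else:
            profile.append(float("inf"))
        stack.insert(0, addr)
    return profile

def hit_ratio(trace, cache_size):
    """Fraction of accesses an LRU cache of the given size would hit."""
    profile = stack_distance_profile(trace)
    hits = sum(1 for d in profile if d < cache_size)
    return hits / len(profile)
```

Real caches are set-associative and physically indexed, which is precisely the mismatch the thesis points to: the model above ignores set conflicts and page placement.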
Crystal gazer: profile-driven write-rationing garbage collection for hybrid memories
Non-volatile memories (NVM) offer greater capacity than DRAM but suffer from high latency and low write endurance. Hybrid memories combine DRAM and NVM to form scalable memory systems with the promise of high capacity, low energy consumption, and high endurance. Automatically managing hybrid NVM-DRAM memories to achieve this promise, without changing user applications or their programming models, remains an open question. This paper uses garbage collection in managed languages to exploit NVM capacity while preventing NVM wear-out in hybrid memories, with no changes to the programming model. We introduce profile-driven write-rationing garbage collection: allocation sites that produce frequently written objects are predicted based on previous program executions. Objects are initially allocated in a DRAM nursery space. The collector copies surviving nursery objects from highly written sites to a mature DRAM space and read-mostly objects to a mature NVM space. Write-intensity prediction for 15 Java benchmarks accurately places objects in the correct space, eliminating the expensive object monitoring of prior write-rationing garbage collectors. Furthermore, our technique exposes a Pareto tradeoff between DRAM usage and NVM lifetime, unlike prior work. Experimental results on NUMA hardware that emulates hybrid NVM-DRAM memory demonstrate that profile-driven write-rationing garbage collection reduces the number of writes to NVM compared to prior work, extending its lifetime, maximizes the use of NVM for its capacity, and achieves good performance.
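A minimal sketch of the placement decision as described: allocation sites are classified from an earlier profiling run, and nursery survivors are promoted to DRAM or NVM accordingly. The threshold and data shapes are invented for illustration; the real collector operates inside a managed runtime such as a JVM.

```python
# Illustrative sketch of profile-driven placement; the threshold and the
# profile format are invented, not taken from the paper.

def classify_sites(write_profile, threshold=8.0):
    """Split allocation sites into a write-hot set.

    write_profile maps an allocation-site id to the average number of
    writes per surviving object observed in earlier (profiling) runs.
    """
    return {site for site, writes in write_profile.items()
            if writes >= threshold}

def promote(obj_site, hot_sites):
    """Pick the mature space for a nursery survivor: frequently written
    objects go to DRAM, read-mostly objects to NVM."""
    return "DRAM" if obj_site in hot_sites else "NVM"
```

Because the classification happens offline, no per-object write monitoring is needed at run time, which is the cost the paper reports eliminating relative to prior write-rationing collectors.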
Cache-based Timing Side-channels in Partitioning Hypervisors
Master's dissertation in Industrial Electronics and Computers Engineering.
In recent years, the automotive industry has seen its technological complexity increase to keep pace with
computing innovations such as autonomous driving, connectivity and mobility. As such, the need to reduce
this complexity without compromising the intended metrics is imperative.
The advent of hypervisors in the automotive domain presents a solution to reduce the complexity of
the systems by enabling software portability and isolation between virtual machines (VMs).
Although virtualization creates the illusion of strict isolation and exclusive resource access, the
convergence of critical and non-critical systems into shared chips presents a security problem. This shared
hardware has microarchitectural features that can be exploited through their temporal behavior, creating
sensitive data leakage channels between co-located VMs. In mixed-criticality systems, the exploitation of
these channels can lead to safety issues on systems with real-time constraints compromising the whole
system.
The implemented side-channel attacks demonstrated well-defined channels, across two real-time
partitioning hypervisors in mixed-criticality systems, that enable the inference of a co-located VM’s
cache activity. Furthermore, these channels proved to be mitigable using cache coloring as a
countermeasure, thus increasing the determinism of the system at the expense of average performance.
From a safety perspective, this dissertation emphasizes the need to weigh the tradeoffs of the trending
architectural features that favor performance over predictability and determinism.
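Cache coloring, the countermeasure evaluated in this dissertation, partitions the shared cache by restricting each VM to physical pages whose set-index bits fall in disjoint "colors", so co-located VMs can no longer evict each other's lines. A simplified sketch follows; the cache geometry values are examples, not those of any particular platform.

```python
# Simplified page-coloring sketch; the cache geometry is an example,
# not that of any particular platform.

PAGE_SHIFT = 12      # 4 KiB pages
LINE_SHIFT = 6       # 64-byte cache lines
NUM_SETS = 1024      # sets in the shared last-level cache

# Colors = the set-index bits that lie above the page offset, i.e. the
# bits the hypervisor can control through physical page allocation.
NUM_COLORS = (NUM_SETS << LINE_SHIFT) >> PAGE_SHIFT   # 16 with these values

def page_color(phys_page_number):
    """Color of a physical page: its low set-index bits above the offset."""
    return phys_page_number % NUM_COLORS

def pages_for_vm(all_pages, vm_colors):
    """Restrict a VM's allocator to pages of its assigned colors, so its
    data can only occupy the matching slice of the shared cache."""
    return [p for p in all_pages if page_color(p) in vm_colors]
```

Giving two VMs disjoint color sets removes the shared conflict misses a prime+probe attacker measures, at the cost of shrinking each VM's usable cache, which matches the determinism-versus-average-performance tradeoff reported above.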
10381 Summary and Abstracts Collection -- Robust Query Processing
Dagstuhl seminar 10381 on robust query processing (held 19.09.10 -
24.09.10) brought together a diverse set of researchers and practitioners
with a broad range of expertise for the purpose of fostering discussion
and collaboration regarding causes, opportunities, and solutions for
achieving robust query processing.
The seminar strove to build a unified view across
the loosely-coupled system components responsible for
the various stages of database query processing.
Participants were chosen for their experience with database
query processing and, where possible, their prior work in academic
research or in product development towards robustness in database query
processing.
In order to pave the way to motivate, measure, and protect future advances
in robust query processing, seminar 10381 focused on developing tests
for measuring the robustness of query processing.
In these proceedings, we first review the seminar topics, goals,
and results, then present abstracts or notes of some of the seminar break-out
sessions.
We also include, as an appendix,
the robust query processing reading list that
was collected and distributed to participants before the seminar began,
as well as summaries of a few of those papers that were
contributed by some participants.