2,652 research outputs found

    Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces

    Get PDF
    Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of individuals, encompassing both medical and non-medical applications. Historically, the challenge of neurological injury being static after an initial recovery phase has driven researchers to explore innovative avenues. Since the 1970s, BCIs have been at the forefront of these efforts. As research has progressed, BCI applications have expanded, showing potential in a wide range of applications, including those for less severely impaired individuals (e.g. in the context of hearing aids) and completely healthy individuals (e.g. in the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware that supports real-world use. The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a single mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, operates autonomously, requires no external interaction, and weighs approximately 56 g including the 16-channel EEG sensors. The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and enabling rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours on a 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies. Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
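    A quick plausibility check of the stated runtime (arithmetic added here, not a figure from the abstract): a 1800 mAh battery sustaining more than 31 hours of continuous operation corresponds to an average system current draw of at most roughly

        1800 mAh / 31 h ≈ 58 mA,

    i.e. about 0.2 W at a nominal single-cell lithium voltage of 3.7 V.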

    Dataflow Programming and Acceleration of Computationally-Intensive Algorithms

    Get PDF
    The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in an exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. The processing and computation of this massive amount of unstructured information necessitate substantial computing capabilities and the implementation of new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization: creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry, and several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amounts of textual knowledge they contain to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using Natural Language Processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative) within the documents. Additionally, a heterogeneous framework was designed using CPU/GPU and FPGAs to analyse the computational performance of document mapping.
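    To make the semantic-search feature concrete, a minimal sketch of TF-IDF document retrieval follows. The toolkit (scikit-learn), the toy corpus, and the query are illustrative assumptions; the abstract does not name the implementation.

        # Minimal TF-IDF similarity search over a toy document set (illustrative
        # assumption, not the thesis's actual pipeline).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        documents = [
            "Track maintenance procedure for signal equipment inspection.",
            "Operational guide for level crossing safety checks.",
            "Architecture requirements for the rolling stock data platform.",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        doc_matrix = vectorizer.fit_transform(documents)

        query = "safety inspection of signalling equipment"
        query_vec = vectorizer.transform([query])

        # Rank documents by cosine similarity to the query.
        scores = cosine_similarity(query_vec, doc_matrix).ravel()
        for idx in scores.argsort()[::-1]:
            print(f"{scores[idx]:.3f}  {documents[idx]}")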

    Modern computing: Vision and challenges

    Get PDF
    Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI on edge devices. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    Untersuchung von Performanzveränderungen auf Quelltextebene (Investigating Performance Changes at the Source Code Level)

    Get PDF
    Changes to the source code of a software system can lead to changed performance. To prevent regressions and to verify the effect of source code changes that are expected to bring improvements, it is necessary both to measure the impact of source code changes on performance and to gain a deep understanding of the runtime behaviour of the source code constructs involved. Specifying benchmarks or load tests to detect regressions requires immense manual effort, and understanding the changes afterwards often requires further experiments. This thesis develops the approach Peass (Performance analysis of software systems). Peass is based on the assumption that performance changes can be detected by measuring the performance of unit tests. Peass consists of (1) a method for regression test selection, i.e. for determining between which commits the performance may have changed, based on static source code analysis and analysis of the runtime behaviour, (2) a method for transforming unit tests into performance tests and for statistically reliable and reproducible measurement of performance, and (3) a method for supporting the diagnosis of root causes of performance changes. The Peass approach thus makes it possible to automatically examine performance changes that are measurable through the workload of unit tests. The validity of the approach is evaluated by showing that (1) typical performance problems in artificial test cases and (2) real performance changes tagged by developers can be found by Peass. Furthermore, a case study in an ongoing software development project shows that Peass is able to detect relevant performance changes.

    Contents:
    1 Introduction 1.1 Motivation 1.2 Approach 1.3 Research Questions 1.4 Contributions 1.5 Structure of the Thesis
    2 Foundations 2.1 Software Performance Engineering 2.2 Model-Based Approach 2.2.1 Overview 2.2.2 Performance Antipatterns 2.3 Measurement-Based Approach 2.3.1 Measurement Process 2.3.2 Analysis of Measured Values 2.4 Measurement in Artificial Environments 2.4.1 Benchmarking 2.4.2 Load Tests 2.4.3 Performance Tests 2.5 Measurement in Real Environments: Monitoring 2.5.1 Overview 2.5.2 Realization 2.5.3 Tools
    3 Regression Test Selection 3.1 Approach 3.1.1 Basic Idea 3.1.2 Prerequisites 3.1.3 Two-Stage Process 3.2 Static Test Selection 3.2.1 Selected Changes 3.2.2 Process 3.2.3 Implementation 3.3 Trace Comparison 3.3.1 Selected Changes 3.3.2 Process 3.3.3 Implementation 3.3.4 Combination with Static Analysis 3.4 Evaluation 3.4.1 Implementation 3.4.2 Accuracy 3.4.3 Correctness 3.4.4 Discussion of Validity 3.5 Related Work 3.5.1 Functional Regression Test Selection 3.5.2 Regression Test Selection for Performance Tests
    4 Measurement Process 4.1 Comparison of Measurement and Analysis Methods 4.1.1 Procedure 4.1.2 Error Analysis 4.1.3 Workload Size of the Artificial Unit Test Pairs 4.2 Measurement Method 4.2.1 Structure of an Iteration 4.2.2 Stopping Measurements 4.2.3 Garbage Collection per Iteration 4.2.4 Handling of Standard Output 4.2.5 Summary of the Measurement Method 4.3 Analysis Method 4.3.1 Choice of the Statistical Test 4.3.2 Outlier Removal 4.3.3 Parallelization 4.4 Evaluation 4.4.1 Comparison with JMH 4.4.2 Reproducibility of Results 4.4.3 Conclusion 4.5 Related Work 4.5.1 Stopping Measurements 4.5.2 Change Detection 4.5.3 Anomaly Detection
    5 Root Cause Analysis 5.1 Reducing the Overhead of Measuring Individual Methods 5.1.1 Generation of Example Projects 5.1.2 Measurement of Method Execution Durations 5.1.3 Options for Overhead Reduction 5.1.4 Measurement Results 5.1.5 Verification with MooBench 5.2 Measurement Configuration of the Root Cause Analysis 5.2.1 Foundations 5.2.2 Error Analysis 5.2.3 Approach 5.2.4 Measurement Results 5.3 Related Work 5.3.1 Monitoring Overhead 5.3.2 Root Cause Analysis for Performance Changes 5.3.3 Root Cause Analysis for Performance Problems
    6 Evaluation 6.1 Validation through Artificial Performance Problems 6.1.1 Reproduction through Benchmarks 6.1.2 Transformation of the Benchmarks 6.1.3 Checking Problems with Peass 6.2 Evaluation through Real Performance Problems 6.2.1 Examination of Documented Performance Changes in Open Projects 6.2.2 Examination of the Performance Changes in GeoMap
    7 Summary and Outlook 7.1 Summary 7.2 Outlook
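    A toy illustration of the measure-and-compare idea follows. Peass itself is a Java tool; this Python sketch, including the function pair and the choice of the Mann-Whitney U test, is an assumption meant only to show the principle of repeated measurement plus a statistical comparison.

        # Toy version of comparing the performance of two code versions via
        # repeated measurement and a statistical test (sketch, not Peass itself).
        import time
        from scipy.stats import mannwhitneyu

        def version_a(n=10_000):
            return sum(i * i for i in range(n))

        def version_b(n=10_000):
            total = 0
            for i in range(n):
                total += i * i
            return total

        def measure(fn, repetitions=50):
            durations = []
            for _ in range(repetitions):
                start = time.perf_counter()
                fn()
                durations.append(time.perf_counter() - start)
            return durations

        a, b = measure(version_a), measure(version_b)
        # A low p-value suggests a real performance difference between versions.
        stat, p = mannwhitneyu(a, b)
        print(f"p = {p:.4f}")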

    Defining Safe Training Datasets for Machine Learning Models Using Ontologies

    Get PDF
    Machine Learning (ML) models have been gaining popularity in recent years in a wide variety of domains, including safety-critical domains. While ML models have shown high accuracy in their predictions, they are still considered black boxes, meaning that developers and users do not know how the models make their decisions. While this is simply a nuisance in some domains, in safety-critical domains it makes ML models difficult to trust. To fully utilize ML models in safety-critical domains, there needs to be a method to improve trust in their safety and accuracy without human experts checking each decision. This research proposes a method to increase trust in ML models used in safety-critical domains by ensuring the safety and completeness of the model’s training dataset. Since most of the complexity of the model is built through training, ensuring the safety of the training dataset could help to increase trust in the safety of the model. The method proposed in this research uses a domain ontology and an image quality characteristic ontology to validate the domain completeness and image quality robustness of a training dataset. This research also presents an experiment as a proof of concept for this method, where ontologies are built for the emergency road vehicle domain.
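    A minimal sketch of the completeness-check idea follows: the domain ontology fixes the set of concepts a safe training set must cover, and the check reports anything missing. All class and quality names here are invented for illustration; the abstract does not give the ontologies' contents.

        # Hypothetical dataset-completeness check against two ontologies.
        required_classes = {          # concepts from a domain ontology
            "ambulance", "fire_truck", "police_car",
        }
        required_qualities = {        # conditions from an image-quality ontology
            "daylight", "night", "rain", "fog",
        }

        dataset_labels = {
            ("ambulance", "daylight"), ("ambulance", "night"),
            ("fire_truck", "daylight"), ("police_car", "rain"),
        }

        covered_classes = {cls for cls, _ in dataset_labels}
        covered_qualities = {q for _, q in dataset_labels}

        print("missing classes:  ", required_classes - covered_classes)
        print("missing qualities:", required_qualities - covered_qualities)
        # A full cross-product check would also flag pairs such as
        # ("fire_truck", "fog") that never occur in the dataset.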

    SUTMS - Unified Threat Management Framework for Home Networks

    Get PDF
    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved, internet broadband costs decreased and home internet usage shifted to e-commerce and business-critical applications. Today’s home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services, and hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges underline the importance of home-based Unified Threat Management (UTM) systems. There is a need for a unified threat management framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS is able to provide 99.99% accuracy with 56.83% memory improvements. IPS stands out as the most resource-intensive UTM service; SUTMS successfully reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified by SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS). SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
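    A hypothetical sketch of flow-based anomaly flagging in the spirit of the described flow detection module follows; the field names, ports, and thresholds are invented for illustration and are not SUTMS's actual rules.

        # Invented flow-record anomaly heuristic (illustration only).
        from dataclasses import dataclass

        @dataclass
        class Flow:
            src: str
            dst_port: int
            packets: int
            bytes_per_packet: float

        def is_anomalous(flow: Flow) -> bool:
            # Tiny payloads at high packet counts can indicate scanning or
            # bot beaconing; unusual destination ports add suspicion.
            if flow.packets > 1000 and flow.bytes_per_packet < 64:
                return True
            if flow.dst_port not in (53, 80, 123, 443) and flow.packets > 5000:
                return True
            return False

        flows = [
            Flow("192.168.1.10", 443, 120, 900.0),
            Flow("192.168.1.23", 6667, 8000, 48.0),  # IRC-like beaconing pattern
        ]
        for f in flows:
            print(f.src, "anomalous" if is_anomalous(f) else "normal")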

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    Full text link
    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field in the last 15 years has established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and classifies and presents the technical details of state-of-the-art software and hardware approximation techniques. (Comment: under review at ACM Computing Surveys.)
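    As a concrete instance of a software-level approximation of the kind surveyed, loop perforation skips a fraction of loop iterations and rescales the result, trading accuracy for time. The following standalone sketch is illustrative and not taken from the article.

        # Loop perforation: compute over every stride-th element and scale the
        # partial result back up, for roughly a stride-fold reduction in work.
        def sum_exact(values):
            return sum(values)

        def sum_perforated(values, stride=4):
            # Accuracy is traded for speed; the error depends on how
            # representative the sampled elements are of the whole.
            return sum(values[::stride]) * stride

        data = [float(i % 97) for i in range(1_000_000)]
        print(sum_exact(data), sum_perforated(data))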

    OpenLB User Guide: Associated with Release 1.6 of the Code

    Full text link
    OpenLB is an object-oriented implementation of lattice Boltzmann methods (LBM). It is the first implementation of a generic platform for LBM programming shared with the open source community (GPLv2). Since the first release in 2007, the code has been continuously improved and extended, as documented by thirteen releases and the corresponding release notes available on the OpenLB website (https://www.openlb.net). The OpenLB code is written in C++ and is used by application programmers as well as developers, with the ability to implement custom models. OpenLB supports complex data structures that allow simulations in complex geometries and parallel execution using MPI, OpenMP, and CUDA on high-performance computers. The source code uses the concepts of interfaces and templates, so that efficient, direct, and intuitive implementations of the LBM become possible. The efficiency and scalability have been checked and proven by code reviews. This user manual and a source code documentation generated by DoxyGen are available on the OpenLB project website.
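    OpenLB itself is generic C++; the following deliberately tiny NumPy sketch only illustrates the collide-and-stream cycle that LBM codes implement, here for 1D diffusion on a D1Q3 lattice with BGK collision and periodic boundaries. It is not OpenLB code.

        # Minimal D1Q3 diffusion LBM: collide (BGK) then stream, periodically.
        import numpy as np

        nx, tau, steps = 200, 0.8, 500
        w = np.array([2 / 3, 1 / 6, 1 / 6])     # lattice weights
        e = np.array([0, 1, -1])                # lattice velocities

        rho = np.ones(nx)
        rho[nx // 2 - 5 : nx // 2 + 5] = 2.0    # initial density bump
        f = w[:, None] * rho[None, :]           # start at equilibrium

        for _ in range(steps):
            rho = f.sum(axis=0)
            feq = w[:, None] * rho[None, :]     # diffusion equilibrium
            f += (feq - f) / tau                # BGK collision
            for i in range(3):                  # streaming (periodic)
                f[i] = np.roll(f[i], e[i])

        print(rho.min(), rho.max())             # bump spreads out over time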

    Novel DVFS Methodologies For Power-Efficient Mobile MPSoC

    Get PDF
    Low-power mobile computing systems such as smartphones and wearables have become an integral part of our daily lives and are used in various ways to enhance them. The majority of modern mobile computing systems are powered by a multi-processor System-on-a-Chip (MPSoC), where multiple processing elements are utilized on a single chip. Given that these devices are battery operated most of the time and thus have a limited power supply, the key challenge is to deliver performance while reducing power consumption. Moreover, the reliability of these devices in terms of lifespan is affected by peak thermal behaviour, which also makes such devices vulnerable to temperature side-channel attacks. This thesis is concerned with performing Dynamic Voltage and Frequency Scaling (DVFS) on processing elements such as the CPU and GPU, and on memory units such as RAM, to address the aforementioned challenges. Firstly, we design a computer-vision-based machine learning technique to classify applications automatically into different workload categories so that DVFS can be performed on the CPU to reduce the power consumption of the device while executing the application. Secondly, we develop a reinforcement-learning-based agent to perform DVFS on the CPU and GPU while considering the user's interaction with such devices, to optimize power consumption and thermal behaviour. Next, we develop a heuristic-based automated agent to perform DVFS on the CPU, GPU, and RAM to optimize the same while executing an application. Finally, we explore the effect of DVFS on CPUs leading to vulnerabilities against temperature side-channel attacks, and we design a methodology to secure such devices against this attack while improving their reliability in terms of lifespan.
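    A hypothetical sketch of a heuristic DVFS governor loop of the kind the thesis automates follows; the frequency levels, thresholds, and sample readings are invented for illustration, not taken from the thesis.

        # Invented heuristic governor: pick a CPU frequency level from
        # utilization and temperature readings (illustration only).
        FREQ_LEVELS_MHZ = [500, 1000, 1500, 2000]

        def choose_level(utilization: float, temp_c: float, level: int) -> int:
            if temp_c > 75.0:              # preserve thermal headroom first
                return max(level - 1, 0)
            if utilization > 0.85:         # performance demand
                return min(level + 1, len(FREQ_LEVELS_MHZ) - 1)
            if utilization < 0.30:         # save power when nearly idle
                return max(level - 1, 0)
            return level

        level = 1
        for util, temp in [(0.9, 60.0), (0.95, 72.0), (0.5, 78.0), (0.2, 65.0)]:
            level = choose_level(util, temp, level)
            print(f"util={util:.2f} temp={temp:.0f}C -> {FREQ_LEVELS_MHZ[level]} MHz")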