
    An Optimization Based Design for Integrated Dependable Real-Time Embedded Systems

    Moving from the traditional federated design paradigm, integration of mixed-criticality software components onto common computing platforms is increasingly being adopted by the automotive, avionics and control industries. This method faces new challenges such as the integration of varied functionalities (dependability, responsiveness, power consumption, etc.) under platform resource constraints and the prevention of error propagation. Based on the principles of model-driven architecture and platform-based design, we present a systematic mapping process for such integration, adhering to a transformation-based design methodology. Our aim is to transform initial platform-independent application specifications into post-integration platform-specific models. In this paper, a heuristic-based resource allocation approach is described for the consolidated mapping of safety-critical and non-safety-critical applications onto a common computing platform, meeting in particular dependability/fault-tolerance and real-time requirements. We develop a supporting tool suite for the proposed framework, where VIATRA (VIsual Automated model TRAnsformations) is used as a transformation tool at different design steps. We validate the process and provide experimental results to show the effectiveness, performance and robustness of the approach.
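
    The following is a minimal, hypothetical sketch (Python) of the kind of greedy, criticality-aware allocation heuristic the abstract describes; the task/node names and utilization model are assumptions for illustration, not the paper's VIATRA-based tool chain. Safety-critical replicas are kept on distinct nodes while per-node utilization budgets are respected.

        # Hypothetical sketch, not the paper's actual mapping process: a greedy
        # first-fit allocation that maps mixed-criticality applications onto
        # shared nodes while keeping safety-critical replicas on distinct nodes
        # and respecting per-node utilization budgets.
        from dataclasses import dataclass, field

        @dataclass
        class Task:
            name: str
            utilization: float        # fraction of one node's CPU budget
            safety_critical: bool
            replicas: int = 1         # fault tolerance via spatial redundancy

        @dataclass
        class Node:
            name: str
            capacity: float = 1.0
            load: float = 0.0
            hosted: list = field(default_factory=list)

        def allocate(tasks, nodes):
            """Place each task (and its replicas); replicas of a safety-critical
            task never share a node, so one node failure cannot kill all copies."""
            placement = {}
            # Place safety-critical tasks first so the separation constraint is easier to meet.
            for task in sorted(tasks, key=lambda t: not t.safety_critical):
                used = []
                for _ in range(task.replicas):
                    for node in nodes:
                        if task.safety_critical and node in used:
                            continue  # replica separation constraint
                        if node.load + task.utilization <= node.capacity:
                            node.load += task.utilization
                            node.hosted.append(task.name)
                            used.append(node)
                            break
                    else:
                        raise RuntimeError(f"no feasible node for {task.name}")
                placement[task.name] = [n.name for n in used]
            return placement

        if __name__ == "__main__":
            tasks = [Task("brake_ctrl", 0.4, True, replicas=2),
                     Task("infotainment", 0.5, False)]
            print(allocate(tasks, [Node("ecu0"), Node("ecu1")]))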

    Extensions of Task-based Runtime for High Performance Dense Linear Algebra Applications

    On the road to exascale computing, the gap between hardware peak performance and application performance is increasing as the system scale, chip density and inherent complexity of modern supercomputers expand. Even if we put aside the difficulty of expressing algorithmic parallelism and of efficiently executing applications at large scale, other open questions remain. The ever-growing scale of modern supercomputers induces a fast decline of the Mean Time To Failure. A generic, low-overhead, resilient extension becomes a desired aptitude for any programming paradigm. This dissertation addresses these two critical issues: designing an efficient unified linear algebra development environment using a task-based runtime, and extending a task-based runtime with fault-tolerant capabilities to build a generic framework providing both soft and hard error resilience to the task-based programming paradigm. To bridge the gap between hardware peak performance and application performance, a unified programming model is designed to take advantage of a lightweight task-based runtime to manage the resource-specific workload, and to control the data flow and parallel execution of tasks. Under this unified development, linear algebra tasks are abstracted across different underlying heterogeneous resources, including multicore CPUs, GPUs and Intel Xeon Phi coprocessors. Performance portability is guaranteed and this programming model is adapted to a wide range of accelerators, supporting both shared- and distributed-memory environments. To address the resilience challenges on large-scale systems, fault-tolerant mechanisms are designed for a task-based runtime to protect applications against both soft and hard errors. For soft errors, three additions to a task-based runtime are explored. The first recovers the application by re-executing the minimum number of tasks, the second logs intermediary data between tasks to minimize the necessary re-execution, while the last takes advantage of algorithmic properties to recover the data without re-execution. For hard errors, we propose two generic approaches, which augment the data-logging mechanism for soft errors. The first utilizes a non-volatile storage device to save logged data, while the second saves local logged data on a remote node to protect against node failure. Experimental results have confirmed that our soft and hard error fault-tolerance mechanisms exhibit the expected correctness and efficiency.
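
    A minimal illustrative sketch (Python) of the data-logging idea mentioned above; the LoggingRuntime class and its methods are assumptions for illustration, not the dissertation's runtime, which extends an existing task-based system. Each task's inputs are logged before execution so a task hit by a soft error can be re-executed in isolation rather than restarting the whole computation.

        # Illustrative sketch of the data-logging addition, not the dissertation's
        # actual runtime: each task's inputs are logged before it runs, so a task
        # hit by a soft error can be re-executed in isolation.
        import copy

        class LoggingRuntime:
            def __init__(self):
                self.log = {}          # task id -> deep copy of its inputs
                self.results = {}      # task id -> output

            def run_task(self, task_id, fn, *inputs):
                # Log intermediary data first, so recovery needs no upstream re-execution.
                self.log[task_id] = copy.deepcopy(inputs)
                self.results[task_id] = fn(*inputs)
                return self.results[task_id]

            def recover(self, task_id, fn):
                # Re-execute only the corrupted task from its logged inputs.
                self.results[task_id] = fn(*self.log[task_id])
                return self.results[task_id]

        if __name__ == "__main__":
            rt = LoggingRuntime()
            scaled = rt.run_task("scale", lambda x: [2 * v for v in x], [1, 2, 3])
            total = rt.run_task("sum", sum, scaled)
            # A soft error corrupts the "sum" result: recompute just that task.
            assert rt.recover("sum", sum) == total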

    Proxy compilation for Java via a code migration technique

    There is an increasing trend for intermediate representations (IRs) to be used to deliver programs in more and more languages, such as Java. Although Java can provide many advantages, including wider portability and better optimisation opportunities on execution, it introduces extra overhead by requiring an IR translation for program execution. For maximum execution performance, an optimising compiler is placed in the runtime to selectively optimise code regions regarded as "hotspots". This common approach has been effectively deployed in many implementations of programming languages. However, the computational resources demanded by this approach make it less efficient, or even difficult to deploy directly, in a resource-constrained environment. One implementation approach is to use a remote compilation technique to support compilation during execution. The work presented in this dissertation supports the thesis that execution performance can be improved by the use of efficient optimising compilation provided by a proxy dynamic optimising compiler. After surveying various approaches to the design and implementation of remote compilation, a proxy compilation system called Apus is defined. To demonstrate the effectiveness of using a dynamic optimising compiler as a proxy compiler, a complete proxy compilation system is written based on a research-oriented Java Virtual Machine (JVM). The proxy compilation system is discussed in detail, showing how to deliver remote binaries and manage a cache of binaries by using a code migration approach. The proxy compilation client shows how the proxy compilation service is integrated with the selective optimisation system to maximise execution performance. The results of empirical measurements of the system are given, showing the efficiency of code optimisation from either the proxy compilation service or a local binary cache. The conclusion of this work is that Java execution performance can be improved by efficient optimising compilation with a proxy compilation service using a code migration technique.
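
    A hedged sketch (Python) of the general client-side flow described above; the ProxyClient name, cache layout and service interface are illustrative assumptions, not the Apus implementation. A method's optimised binary is looked up in a local cache, with a fall-back request to the remote proxy compiler on a miss.

        # Hedged sketch of the client-side flow; not the Apus implementation.
        import hashlib

        class ProxyClient:
            def __init__(self, compile_service):
                self.cache = {}                    # signature hash -> optimised binary
                self.compile_service = compile_service

            def get_binary(self, method_name, bytecode):
                key = hashlib.sha256(method_name.encode() + bytecode).hexdigest()
                if key in self.cache:              # hit: reuse previously migrated code
                    return self.cache[key]
                binary = self.compile_service(method_name, bytecode)  # remote proxy compile
                self.cache[key] = binary           # keep for later invocations
                return binary

        if __name__ == "__main__":
            fake_service = lambda name, bc: b"OPTIMISED:" + bc
            client = ProxyClient(fake_service)
            print(client.get_binary("Foo.bar()V", b"\x2a\xb7\x00\x01"))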

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, the focus of this book is to deal with reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are pro-actively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments

    This book presents the collection of fifty papers which were presented at the Second International Conference on BUSINESS SUSTAINABILITY 2011 - Management, Technology and Learning for Individuals, Organisations and Society in Turbulent Environments, held in Póvoa de Varzim, Portugal, from the 22nd to the 24th of June, 2011. The main motive of the meeting was the growing awareness of the importance of the sustainability issue. This importance has emerged from the growing uncertainty of market behaviour that leads to the characterization of the market, i.e. the environment, as turbulent. Actually, the characterization of the environment as uncertain and turbulent reflects the fact that the traditional technocratic and/or socio-technical approaches cannot effectively and efficiently deal with the present situation. In other words, the rise of the sustainability issue means the quest for new instruments to deal with uncertainty and/or turbulence. The sustainability issue has a complex nature, and solutions are sought in a wide range of domains and instruments to achieve and manage it. The domains range from environmental sustainability (referring to the natural environment) through organisational and business sustainability towards social sustainability. Concerning the instruments for sustainability, they range from traditional engineering and management methodologies towards "soft" instruments such as knowledge, learning, and creativity. The papers in this book address virtually the whole sustainability problem space to a greater or lesser extent. However, although the uncertainty and/or turbulence, or in other words the dynamic properties, come from the coupling of management, technology, learning, individuals, organisations and society, meaning that everything is at the same time effect and cause, we wanted to put the emphasis on business, with the intention to address primarily companies and their businesses. For this reason, the main title of the book is "Business Sustainability 2.0", but with the approach of coupling Management, Technology and Learning for individuals, organisations and society in Turbulent Environments. Also, the notation "2.0" is intended to promote the publication as a step further from our previous publication - "Business Sustainability I" - as would be the case for a new version of software. Concerning the Second International Conference on BUSINESS SUSTAINABILITY, its particularity was that it served primarily as a learning environment in which the papers published in this book were the ground for further individual and collective growth in understanding and perception of sustainability and the capacity for building new instruments for business sustainability. In that respect, the methodology of the conference work was basically dialogical, meaning promoting dialogue on the papers, but also including formal paper presentations. In this way, the conference presented a rich space for satisfying different authors' and participants' needs. Additionally, promoting the widest and most global learning environment and participation, in accordance with the Conference's assumed mission to promote Proactive Generative Collaborative Learning, the Conference Organisation shares the papers presented in this book openly with the community, as well as the papers presented at the previous Conference(s). These papers can be accessed from the conference webpage (http://labve.dps.uminho.pt/bs11).
    In these terms, this book could also be understood as a complementary instrument for the Conference authors and participants, but also for the wider readership interested in sustainability issues. The book brought together 107 authors from 11 countries, namely Australia, Belgium, Brazil, Canada, France, Germany, Italy, Portugal, Serbia, Switzerland, and the United States of America. The authors "ranged" from senior and renowned scientists to young researchers, providing a rich learning environment. In the end, the editors hope that this book will be useful, meeting the expectations of the authors and the wider readership, serving to enhance individual and collective learning, and providing an incentive for further scientific development and the creation of new papers. Also, the editors would like to use this opportunity to announce the intention to continue with new editions of the conference and subsequent editions of accompanying books on the subject of BUSINESS SUSTAINABILITY, the third of which is planned for 2013.

    Reliable Software for Unreliable Hardware - A Cross-Layer Approach

    A novel cross-layer reliability analysis, modeling, and optimization approach is proposed in this thesis that leverages multiple layers in the system design abstraction (i.e., hardware, compiler, system software, and application program) to exploit the available reliability-enhancing potential at each system layer and to exchange this information across multiple system layers.

    Fine-grained reasoning about the security and usability trade-off in modern security tools

    Defense techniques detect or prevent attacks based on their ability to model the attacks. A balance between security and usability should always be established in any kind of defense technique. Attacks that exploit the weak points in security tools are very powerful and thus can go undetected. One source of those weak points arises when security is compromised for usability reasons: if a security tool completely secures a system against attacks, the whole system becomes unusable because of the large number of false alarms or the very restrictive policies it creates; conversely, if the security tool decides not to secure a system against certain attacks, those attacks will simply and easily succeed. The key contribution of this dissertation is that it digs deeply into modern security tools and reasons about the inherent security and usability trade-offs based on identifying the low-level factors contributing to known issues. This is accomplished by implementing full systems and then testing those systems in realistic scenarios. The thesis that this dissertation tests is that we can reason about security and usability trade-offs in fine-grained ways by building and testing full systems. Furthermore, this dissertation provides practical solutions and suggestions for reaching a good balance between security and usability. We study two modern security tools, Dynamic Information Flow Tracking (DIFT) and Antivirus (AV) software, for their importance and wide usage. DIFT is a powerful technique that is used in various aspects of security systems. It works by tagging certain inputs and propagating the tags along with the inputs in the target system. However, current DIFT systems do not track implicit information flow, because if all DIFT propagation rules are applied directly in a conservative way, the target system will be full of tagged data (a problem called overtagging) and thus useless, since the tags tell us very little about the actual information flow of the system. So, current DIFT systems drop some security for usability. In this dissertation, we reason about the sources of the overtagging problem and provide practical ways to deal with it, whereas previous approaches have focused on abstract descriptions of the main causes of the problem based on limited experiments. The second security tool we consider in this dissertation is antivirus (AV) software. AV is a very important tool that protects systems against worms and viruses by scanning data against a database of signatures. Despite its importance and wide usage, AV has received little attention from the security research community. In this dissertation, we examine the AV internals and reason about the possibility of creating timing-channel attacks against AV software. The attacker could infer information about the AV based only on the time the AV spends scanning benign inputs. The other aspect of AV this dissertation explores is the low-level performance impact of AV on systems. Even though the performance overhead of AV is a well-known issue, the exact reasons behind this overhead are not well studied. In this dissertation, we design a methodology that utilizes Event Tracing for Windows (ETW), a technology that accounts for all OS events, to reason about the AV performance impact from the OS point of view. We show that the main performance impact of the AV on a task is the additional time the task spends waiting on events.
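
    The tag-propagation idea and the implicit-flow gap can be illustrated with a toy sketch (Python); the Tainted class is a hypothetical stand-in, not the DIFT system studied in the dissertation. Explicit flows through arithmetic carry the tag, while a value assigned inside a branch on secret data carries none, which is exactly the kind of flow current systems drop for usability.

        # Toy sketch of explicit-flow tag propagation; not an actual DIFT system.
        class Tainted:
            def __init__(self, value, tainted=False):
                self.value, self.tainted = value, tainted

            def __add__(self, other):
                # Explicit flow: the result is tagged if either operand is.
                o_val = other.value if isinstance(other, Tainted) else other
                o_tag = other.tainted if isinstance(other, Tainted) else False
                return Tainted(self.value + o_val, self.tainted or o_tag)

        secret = Tainted(42, tainted=True)
        public = Tainted(1)

        explicit = secret + public                  # tag propagates through arithmetic
        print(explicit.value, explicit.tainted)     # 43 True

        # Implicit flow: the branch leaks whether secret.value > 0, but the value
        # assigned in each arm carries no tag, so the leak goes untracked.
        implicit = Tainted(1) if secret.value > 0 else Tainted(0)
        print(implicit.value, implicit.tainted)     # 1 False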

    Mechanisms for improving information quality in smartphone crowdsensing systems

    Given its potential for a large variety of real-life applications, smartphone crowdsensing has recently gained tremendous attention from the research community. Smartphone crowdsensing is a paradigm that allows ordinary citizens to participate in large-scale sensing surveys by using user-friendly applications installed on their smartphones. In this way, fine-grained sensing information is obtained from smartphone users without employing fixed and expensive infrastructure, and with negligible maintenance costs. Existing smartphone sensing systems depend completely on the participants' willingness to submit up-to-date and accurate information regarding the events being monitored. Therefore, it becomes paramount to scalably and effectively determine, enforce, and optimize the information quality of the sensing reports submitted by the participants. To this end, mechanisms to improve information quality in smartphone crowdsensing systems were designed in this work. Firstly, the FIRST framework is presented, a reputation-based mechanism that leverages the concept of mobile trusted participants to determine and improve the information quality of collected data. Secondly, the problem of maximizing the likelihood of successful execution of sensing tasks when participants with uncertain mobility execute them is mathematically modeled and studied. Two incentive mechanisms based on game and auction theory are then proposed to solve this problem efficiently and scalably. Experimental results demonstrate that the mechanisms developed in this thesis outperform the existing state of the art in improving information quality in smartphone crowdsensing systems.
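
    A minimal sketch of reputation-weighted fusion (Python); the function names, update rule and thresholds are assumptions for illustration, not the FIRST framework itself. Reports are fused with reputation weights, and each participant's reputation is nudged up or down depending on agreement with the fused value.

        # Minimal sketch; update rule and thresholds are illustrative assumptions.
        def fuse_reports(reports, reputation):
            """reports: {participant: reading}; reputation: {participant: weight in (0, 1]}."""
            total = sum(reputation[p] for p in reports)
            return sum(r * reputation[p] for p, r in reports.items()) / total

        def update_reputation(reports, reputation, fused, tolerance=2.0, step=0.05):
            # Nudge reputation up when a report agrees with the fused estimate, down otherwise.
            for p, reading in reports.items():
                agrees = abs(reading - fused) <= tolerance
                reputation[p] = (min(1.0, reputation[p] + step) if agrees
                                 else max(0.05, reputation[p] - step))
            return reputation

        if __name__ == "__main__":
            reports = {"alice": 21.0, "bob": 20.5, "mallory": 35.0}
            reputation = {"alice": 0.9, "bob": 0.8, "mallory": 0.2}
            fused = fuse_reports(reports, reputation)
            print(round(fused, 2), update_reputation(reports, reputation, fused))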

    A real-time networked camera system: a scheduled distributed camera system reduces the latency

    This report presents the results of the Real-time Networked Camera System project, commissioned by the SAN Group at TU/e. Distributed systems are motivated by two reasons: the first is the physical environment as a requirement, and the second is to provide a better Quality of Service (QoS). This project describes a distributed system with a video processing application. The aim is to treat the distributed system as one system, thus minimizing delays while keeping predictability in a real-time context. Time is the most crucial ingredient for real-time systems, in the sense that the tasks within the application should meet their deadlines. With respect to the distributed system we need to consider a couple of issues. The first is to have a distributed system and a modular application that is mapped to multiple system nodes. The second is to schedule the modules collectively, and the third is to propose a solution when shared resources (such as the network) are required by several nodes at the same time. In order to provide a distributed system, we connect two cameras to one PC via a network switch. Video processing has two parts: the first part consists of creating a frame, encoding the frame, and streaming it to the network, and the second part deals with receiving the frame, decoding the frame, and displaying the frame. The first part runs on the cameras and the second part runs on the PC. In order to give real-time behaviour to the system, the system components should provide real-time behaviour. The cameras are installed with µC/OS-II (an open-source real-time kernel). We investigated a real-time operating system and its installation on the PC. In order to provide resource management for the shared resources, we designed and implemented admission control, which controls access to the required connection to the PC. We designed and implemented a component that delays the start of any of the cameras in order to synchronize the network utilization. We also designed an enforcement component to allow the tasks to run only as much as they should, and to monitor the frames streamed to the network. The results show that with admission control, cameras only send as many frames as the network can transport. The start delay given to the system shows that overlap can be prevented, but we could not fully evaluate it because of the semi-tested/unreleased code provided by the camera providers. The source code we used was test code that was not yet mature.
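
    A simplified sketch of the admission-control idea described above (Python); the AdmissionControl class and the bitrate-based model are assumptions, not the project's actual component. A camera stream is admitted only if the sum of admitted bitrates stays within the shared link's capacity, so cameras never send more frames than the network can transport.

        # Simplified sketch; not the project's actual admission-control component.
        class AdmissionControl:
            def __init__(self, link_capacity_mbps):
                self.capacity = link_capacity_mbps
                self.admitted = {}     # camera id -> reserved bitrate (Mbps)

            def request(self, camera_id, bitrate_mbps):
                # Admit only if the shared link can still carry the extra traffic.
                if sum(self.admitted.values()) + bitrate_mbps <= self.capacity:
                    self.admitted[camera_id] = bitrate_mbps
                    return True        # camera may start streaming
                return False           # reject: would overload the shared network

            def release(self, camera_id):
                self.admitted.pop(camera_id, None)

        if __name__ == "__main__":
            ac = AdmissionControl(link_capacity_mbps=90)
            print(ac.request("cam0", 40))   # True
            print(ac.request("cam1", 40))   # True
            print(ac.request("cam2", 40))   # False: 120 Mbps would exceed 90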