
    Energy-aware optimization for embedded systems with chip multiprocessor and phase-change memory

    Over the last two decades, the functions of embedded systems have evolved from simple real-time control and monitoring to more complicated services. Embedded systems equipped with powerful chips can provide the performance that computationally demanding information processing applications need. However, due to power constraints, the easy way of gaining performance by scaling up chip frequencies is no longer feasible. Recently, low-power architecture design has been the main trend in embedded systems. In this dissertation, we present our approaches to addressing energy-related issues in embedded system design: thermal issues in the 3D chip multiprocessor (CMP), the endurance issue in phase-change memory (PCM), the battery issue in embedded systems, the impact of inaccurate information in embedded systems, and the use of cloud computing to move workloads to remote computing facilities. We propose a real-time constrained task scheduling method to reduce peak temperature on a 3D CMP, including an online 3D CMP temperature prediction model and a set of algorithms for scheduling tasks to different cores in order to minimize the peak on-chip temperature. To address the challenging issues in applying PCM in embedded systems, we propose a PCM main memory optimization mechanism that utilizes the scratch pad memory (SPM). Furthermore, we propose an MLC/SLC configuration optimization algorithm to enhance the efficiency of hybrid DRAM + PCM memory. We also propose an energy-aware task scheduling algorithm for parallel computing in battery-powered mobile systems. When scheduling tasks in embedded systems, we make scheduling decisions based on information such as the estimated execution time of tasks; we therefore design a method to evaluate the impact of inaccurate information on resource allocation in embedded systems. Finally, to move workloads from embedded systems to remote cloud computing facilities, we present a resource optimization mechanism for heterogeneous federated multi-cloud systems, along with two online dynamic algorithms for resource allocation and task scheduling that take resource contention into account.
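
    The abstract above only names the scheduling problem; the following is a minimal, hypothetical sketch of a thermal-aware greedy scheduler of the same general flavour, not the dissertation's actual method. The thermal model, constants, and all identifiers are illustrative assumptions.

```python
# Hypothetical greedy thermal-aware scheduler: for each task (earliest deadline
# first), pick the core whose predicted peak temperature after running the task
# is lowest, skipping assignments that would miss the deadline.
# AMBIENT, COOL_RATE and the linear heating model are illustrative assumptions.
from dataclasses import dataclass

AMBIENT = 45.0     # assumed idle/ambient temperature (deg C)
COOL_RATE = 0.5    # assumed fractional cooling towards ambient per idle second

@dataclass
class Core:
    core_id: int
    temperature: float       # current temperature (deg C)
    ready_time: float = 0.0  # time at which the core becomes free (s)

@dataclass
class Task:
    task_id: int
    exec_time: float         # estimated execution time (s)
    deadline: float          # absolute deadline (s)
    heat_per_second: float   # assumed heating rate while executing (deg C/s)

def predicted_peak(core: Core, task: Task, now: float) -> float:
    """Cool towards ambient while idle, then heat linearly during execution."""
    idle = max(core.ready_time, now) - now
    temp = AMBIENT + (core.temperature - AMBIENT) * (1.0 - COOL_RATE) ** idle
    return temp + task.heat_per_second * task.exec_time

def schedule(tasks: list[Task], cores: list[Core], now: float = 0.0) -> dict[int, int]:
    """Map task_id -> core_id, greedily minimizing the predicted peak temperature."""
    mapping: dict[int, int] = {}
    for task in sorted(tasks, key=lambda t: t.deadline):   # EDF-style ordering
        feasible = [c for c in cores
                    if max(c.ready_time, now) + task.exec_time <= task.deadline]
        if not feasible:
            continue                                        # deadline cannot be met
        best = min(feasible, key=lambda c: predicted_peak(c, task, now))
        best.temperature = predicted_peak(best, task, now)
        best.ready_time = max(best.ready_time, now) + task.exec_time
        mapping[task.task_id] = best.core_id
    return mapping

# Example: two tasks on a three-core stack with different starting temperatures.
cores = [Core(0, 60.0), Core(1, 52.0), Core(2, 70.0)]
tasks = [Task(1, 2.0, 10.0, 4.0), Task(2, 3.0, 6.0, 2.0)]
print(schedule(tasks, cores))
```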

    Novel online data allocation for hybrid memories on tele-health systems

    Developments in wearable devices such as Body Sensor Networks (BSNs) have greatly improved the capabilities of the tele-health industry. Large amounts of data are collected from every local BSN in real time. These data are processed by embedded systems, including smartphones and tablets, and are then transferred to distributed storage systems for further processing. Traditional on-chip SRAMs cause critical power leakage issues and occupy relatively large chip areas. Therefore, hybrid memories, which combine volatile memories with non-volatile memories, are widely adopted to reduce latency and energy cost on multi-core systems. However, most current work addresses static data allocation for hybrid memories; such mechanisms cannot adapt data placement at runtime. Hence, we propose online data allocation for hybrid memories on embedded tele-health systems. In this paper, we present dynamic programming and heuristic approaches. Considering the difference between profiled data accesses and actual data accesses, the proposed algorithms use a feedback mechanism to improve the accuracy of data allocation during runtime. Experimental results demonstrate that, compared to greedy approaches, the proposed algorithms achieve 20%-40% performance improvement on different benchmarks. (C) 2016 Elsevier B.V. All rights reserved. This work is supported by NSF CNS-1457506 and NSF CNS-1359557. Chen, L.; Qiu, M.; Dai, W.; Hassan Mohamed, H. (2017). Novel online data allocation for hybrid memories on tele-health systems. Microprocessors and Microsystems, 52:391-400. https://doi.org/10.1016/j.micpro.2016.08.003
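
    As a rough illustration of online hybrid-memory data allocation with runtime feedback, the sketch below greedily places data blocks in SRAM or NVM by predicted access cost and blends predictions with observed accesses each epoch. It is not the paper's dynamic programming or heuristic algorithm; the cost figures, blending factor, and identifiers are assumptions.

```python
# Hypothetical online allocator for a hybrid SRAM/NVM memory: rank blocks by
# the cost saved when placed in SRAM, fill the SRAM slots greedily, and use a
# feedback step that blends profiled access counts with observed counts.
from dataclasses import dataclass

# assumed per-access costs (arbitrary units); NVM writes are the expensive case
COST = {"sram": {"read": 1.0, "write": 1.0},
        "nvm":  {"read": 2.0, "write": 6.0}}
ALPHA = 0.5   # assumed blending factor between prediction and observation

@dataclass
class Block:
    name: str
    pred_reads: float
    pred_writes: float

def benefit(block: Block) -> float:
    """Cost saved by placing the block in SRAM instead of NVM."""
    nvm = block.pred_reads * COST["nvm"]["read"] + block.pred_writes * COST["nvm"]["write"]
    sram = block.pred_reads * COST["sram"]["read"] + block.pred_writes * COST["sram"]["write"]
    return nvm - sram

def allocate(blocks: list[Block], sram_slots: int) -> dict[str, str]:
    """Greedy allocation: the blocks with the highest SRAM benefit go to SRAM."""
    ranked = sorted(blocks, key=benefit, reverse=True)
    return {b.name: ("sram" if i < sram_slots else "nvm") for i, b in enumerate(ranked)}

def feedback(blocks: list[Block], observed: dict[str, tuple[float, float]]) -> None:
    """Blend profiled predictions with observed (reads, writes) from the last epoch."""
    for b in blocks:
        if b.name in observed:
            reads, writes = observed[b.name]
            b.pred_reads = ALPHA * b.pred_reads + (1 - ALPHA) * reads
            b.pred_writes = ALPHA * b.pred_writes + (1 - ALPHA) * writes

# Example epoch: allocate, observe actual accesses, feed back, re-allocate.
blocks = [Block("ecg_buf", 100, 80), Block("log", 10, 200), Block("cfg", 5, 1)]
print(allocate(blocks, sram_slots=1))
feedback(blocks, {"log": (5.0, 400.0)})
print(allocate(blocks, sram_slots=1))
```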

    Trust and reputation management for securing collaboration in 5G access networks: the road ahead

    Trust represents the belief or perception of an entity, such as a mobile device or a node, regarding the extent to which future actions and reactions are appropriate in a collaborative relationship. Reputation represents the network-wide belief or perception of the trustworthiness of an entity. Each entity computes and assigns a trust or reputation value to another entity, which increases or decreases with the appropriateness of that entity's actions and reactions, in order to maintain a healthy collaborative relationship. Trust and reputation management (TRM) has been investigated to improve the security of traditional networks, particularly access networks. In 5G, the access networks are multi-hop networks formed by entities which may not be trustworthy, and so such networks are prone to attacks, such as Sybil and crude attacks. TRM addresses such attacks to enhance overall network performance, including reliability, scalability, and stability. Nevertheless, the investigation of TRM in 5G, the next generation of wireless networks, is still in its infancy. TRM must cater for the characteristics of 5G: firstly, ultra-densification due to the exponential growth of mobile users and data traffic; secondly, high heterogeneity due to the differing characteristics of mobile users, such as different transmission characteristics (e.g., different transmission power) and different user equipment (e.g., laptops and smartphones); and thirdly, high variability due to the dynamicity of the entities' behaviours and operating environment. TRM must also cater for the core features of 5G (e.g., millimeter wave transmission and device-to-device communication) and the core technologies of 5G (e.g., massive MIMO and beamforming, and network virtualization). In this paper, we review TRM schemes in 5G and in traditional networks that can be leveraged in 5G. We also provide insights into some of the important open issues and vulnerabilities in 5G networks that can be resolved using a TRM framework.
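
    To make the notion of a trust value that "increases and decreases with the appropriateness of actions" concrete, here is a minimal beta-reputation style sketch with evidence aging. It is an illustration only, not a scheme from the surveyed literature; the aging factor and prior counts are assumptions.

```python
# Illustrative beta-reputation style trust score: each entity keeps positive
# and negative pseudo-counts about a peer and derives a trust value in [0, 1].
# An aging factor discounts old evidence so trust tracks changing behaviour.
from dataclasses import dataclass

AGING = 0.9   # assumed discount applied to past evidence at every update

@dataclass
class TrustRecord:
    positive: float = 1.0   # prior pseudo-counts (uniform prior)
    negative: float = 1.0

    def update(self, cooperative: bool) -> None:
        """Age old evidence, then add the new observation."""
        self.positive *= AGING
        self.negative *= AGING
        if cooperative:
            self.positive += 1.0
        else:
            self.negative += 1.0

    @property
    def trust(self) -> float:
        """Expected probability that the peer behaves appropriately."""
        return self.positive / (self.positive + self.negative)

# Example: a relay node that cooperates at first, then starts misbehaving.
record = TrustRecord()
for outcome in [True, True, True, False, False, False]:
    record.update(outcome)
    print(f"trust = {record.trust:.2f}")
```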

    GDP: using dataflow properties to accurately estimate interference-free performance at runtime

    Multi-core memory systems commonly share resources between processors. Resource sharing improves utilization at the cost of increased inter-application interference, which may lead to priority inversion, missed deadlines, and unpredictable interactive performance. A key component for effectively managing multi-core resources is performance accounting, which aims to accurately estimate interference-free application performance. Previously proposed accounting systems are either invasive or transparent. Invasive accounting systems can be accurate, but slow down latency-sensitive processes. Transparent accounting systems do not affect performance, but tend to provide less accurate performance estimates. We propose a novel class of performance accounting systems that achieves both performance transparency and superior accuracy. We call the approach dataflow accounting; the key idea is to track dynamic dataflow properties and use these to estimate interference-free performance. Our main contribution is Graph-based Dynamic Performance (GDP) accounting. GDP dynamically builds a dataflow graph of load requests and of the periods in which the processor commits instructions. This graph concisely represents the relationship between memory loads and forward progress in program execution. More specifically, GDP estimates interference-free stall cycles by multiplying the critical path length of the dataflow graph with the estimated interference-free memory latency. GDP is very accurate, with mean IPC estimation errors of 3.4% and 9.8% for our 4- and 8-core processors, respectively. When GDP is used in a cache partitioning policy, we observe average system throughput improvements of 11.9% and 20.8% compared to partitioning using the state-of-the-art Application Slowdown Model.
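
    The stall-cycle estimate described above (critical path length of the dataflow graph multiplied by the estimated interference-free memory latency) can be sketched as follows. Constructing the graph from real load requests and commit periods is far more involved; the toy graph and latency value here only illustrate the arithmetic.

```python
# Sketch of the estimate named in the abstract: interference-free stall cycles
# ~= (critical path length of the load dataflow graph) x (interference-free
# memory latency). The graph is given as node -> list of predecessor loads.

def critical_path_length(deps: dict[str, list[str]]) -> int:
    """Longest chain of dependent loads in the DAG (counted in nodes)."""
    memo: dict[str, int] = {}

    def depth(node: str) -> int:
        if node not in memo:
            preds = deps.get(node, [])
            memo[node] = 1 + (max(map(depth, preds)) if preds else 0)
        return memo[node]

    return max(depth(n) for n in deps) if deps else 0

def interference_free_stalls(deps: dict[str, list[str]],
                             interference_free_latency: float) -> float:
    """Estimated stall cycles the application would see when running alone."""
    return critical_path_length(deps) * interference_free_latency

# Example: load C depends on A and B, load D depends on C -> critical path 3.
graph = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}
est = interference_free_stalls(graph, interference_free_latency=200.0)
print(f"estimated interference-free stall cycles: {est:.0f}")
```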

    The Core Pillar: Ensuring Success Of The Early Warnings For All Initiative

    • Disasters are a result of our social (including political, technological, and economic) environment; these enabling environmental factors must be considered fully in the Early Warnings for All Initiative to make sure warnings serve everyone.
    • People-centred approaches and active stakeholder partnerships are needed to establish effective warnings.
    • The current Early Warnings for All Initiative Executive Action Plan risks failure because the four pillars operate in silos and people-centred approaches are not considered across all four pillars.
    • We propose implementing a "Core Pillar" to facilitate cross-pillar collaboration and integration, including engagement of the wider community and the most vulnerable.
    • Without this, the Early Warnings for All Initiative may fail, resulting in billions spent on warnings that are not fit for the needs of those facing the risks and that will not achieve the outlined impacts, with warnings continuing to operate in silos and potentially causing more harm than benefit.

    What lies beneath? The role of informal and hidden networks in the management of crises

    Crisis management research traditionally focuses on the role of formal communication networks in the escalation and management of organisational crises. Here, we consider instead informal and unobservable networks. The paper explores how hidden informal exchanges can impact organisational decision-making and performance, particularly around inter-agency working: knowledge distributed across and between organisations is often exchanged through informal means and is not captured effectively by formal decision-making processes. Early warnings and weak signals about potential risks and crises are therefore often missed. We consider the implications of these dynamics for crisis avoidance and crisis management.

    The Footprint of Things: A hybrid approach towards the collection, storage and distribution of life cycle inventory data

    Life cycle assessment (LCA) is a well-established methodology for assessing the environmental impacts of products and services. Unfortunately, an essential part of the LCA method, collecting inventory data, is extremely time consuming. The quality of manually conducted LCA studies is often limited by uncertainty in the inventory data or by a narrow scope. Past attempts to overcome these challenges by automating data collection with the Internet of Things have relied on fully centralized architectures. The drawback of a central repository is the complex coordination required between all actors involved in the supply chains of products and services. This paper proposes an alternative hybrid approach that combines a primarily distributed system with a supplementary central repository, reducing the need for coordination. This hybrid approach is named "the Footprint of Things". We present a system design that embeds the automatic reporting of life cycle inventory data, such as energy and material flows, into all product components involved in a service delivery. A major strength of this system design is its capacity for real-time and more precise impact calculation of ICT services.
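
    As a rough sketch of the hybrid reporting idea, the code below keeps detailed energy and material flow records local to each product component and pushes only aggregated summaries to a central registry. All class and flow names are hypothetical and not taken from the paper.

```python
# Hypothetical hybrid inventory reporting: detailed records stay distributed
# with each component; the central registry holds only compact summaries,
# easing coordination between actors in the supply chain.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class FlowRecord:
    component_id: str
    flow: str        # e.g. "electricity_kWh", "steel_kg" (illustrative names)
    amount: float

@dataclass
class LocalInventory:
    """Distributed part: detailed flow records kept by the component itself."""
    component_id: str
    records: list[FlowRecord] = field(default_factory=list)

    def report(self, flow: str, amount: float) -> None:
        self.records.append(FlowRecord(self.component_id, flow, amount))

    def summary(self) -> dict[str, float]:
        totals: dict[str, float] = defaultdict(float)
        for r in self.records:
            totals[r.flow] += r.amount
        return dict(totals)

class CentralRegistry:
    """Central part: stores only aggregated summaries per component."""
    def __init__(self) -> None:
        self.summaries: dict[str, dict[str, float]] = {}

    def publish(self, inventory: LocalInventory) -> None:
        self.summaries[inventory.component_id] = inventory.summary()

# Example: a router component reports its flows, then publishes a summary.
router = LocalInventory("router-42")
router.report("electricity_kWh", 0.8)
router.report("electricity_kWh", 1.1)
registry = CentralRegistry()
registry.publish(router)
print(registry.summaries)
```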

    Forecast-Based Interference: Modelling Multicore Interference from Observable Factors

    While there is significant interest in the use of COTS multicore platforms for real-time systems, there has been very little work on practical methods to calculate the interference multiplier (i.e. the increase in execution time due to interference) between tasks on such systems. COTS multicore platforms present two distinct challenges: firstly, the variable interference between tasks competing for shared resources such as cache, and secondly, the complexity of the hardware mechanisms and policies used, which may result in a system that is very difficult if not impossible to analyse, assuming that the exact details of the hardware are even disclosed. This paper proposes a new technique, Forecast-Based Interference analysis, which mitigates both of these issues by combining measurement-based techniques with statistical techniques and forecast modelling to enable the prediction of an interference multiplier for a given set of tasks in an automated and reliable manner. The combination of execution times and interference multipliers can be used both in design, e.g. for specifying timing watchdogs, and in analysis, e.g. for verifying schedulability.
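
    A minimal sketch of the general measurement-plus-forecast idea (not the paper's Forecast-Based Interference analysis itself): measure a task's slowdown under a few levels of co-runner stress, fit a simple line, and extrapolate the interference multiplier to full contention. The measurements and stress levels below are placeholders.

```python
# Illustrative measurement + forecast: interference multiplier = time under
# contention / time alone, fitted against an observable stress factor and
# extrapolated to the worst case. A real method would use a richer model.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit y = a + b*x, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Placeholder measurements: task execution times (ms) under increasing stress.
alone_time = 10.0
stress_levels = [0.0, 0.25, 0.5, 0.75]      # fraction of shared bandwidth used
observed_times = [10.0, 11.2, 12.6, 14.1]   # measured with co-runners

multipliers = [t / alone_time for t in observed_times]
a, b = fit_line(stress_levels, multipliers)

# Forecast the multiplier at full contention, then a pessimistic execution time
# that could feed a timing watchdog or schedulability analysis.
forecast = a + b * 1.0
print(f"forecast multiplier at full stress: {forecast:.2f}")
print(f"forecast execution time: {alone_time * forecast:.1f} ms")
```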

    Resource management and application customization for hardware accelerated systems

    Computational demands are continuously increasing, driven by the growing resource requirements of applications. In the era of big data, large-scale applications, and real-time applications, there is an enormous need to process large amounts of data quickly. To meet these demands, computer systems have shifted towards multi-core solutions. Technology scaling has allowed the incorporation of ever larger numbers of transistors and cores into chips. Nevertheless, area constraints, power consumption limitations, and thermal dissipation limit the ability to design and sustain ever larger chips. To overcome these limitations, system designers have turned to hardware accelerators. These accelerators can take the form of modules attached to each core of a multi-core system, forming a network-on-chip of cores with attached accelerators. Another form of hardware accelerator is the Graphics Processing Unit (GPU). GPUs are connected to a general-purpose system through a host-device model and are used to offload parts of a workload. Additionally, accelerators can be function-dedicated units that are part of a chip, to which the main processor offloads specific workloads. In this dissertation we present: (a) a microcoded synchronization mechanism for systems with hardware accelerators that provide distributed shared memory, (b) a Streaming Multiprocessor (SM) allocation policy for single-application execution on GPUs, (c) an SM allocation policy for concurrent applications executing on GPUs, and (d) a framework to map neural network (NN) weights to approximate multiplier accuracy levels. The aforementioned mechanisms all belong to the resource management domain. Specifically, the methodologies introduce ways to boost system performance by using hardware accelerators. In tandem with improved performance, the methodologies explore and balance the trade-offs that the use of hardware accelerators introduces.
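
    As an illustration of what an SM allocation policy for concurrent GPU applications might look like, the sketch below hands out SMs one at a time to the application with the largest marginal speedup. It is a hypothetical hill-climbing heuristic, not the dissertation's policy, and the speedup curves are made up.

```python
# Hypothetical SM allocation for concurrent GPU applications: given a speedup
# estimate per SM count for each application, assign SMs one at a time to the
# application whose marginal speedup from one more SM is largest.
from typing import Callable

def allocate_sms(apps: dict[str, Callable[[int], float]], total_sms: int) -> dict[str, int]:
    """Greedily distribute total_sms among applications by marginal speedup."""
    alloc = {name: 0 for name in apps}
    for _ in range(total_sms):
        # marginal gain of granting one more SM to each application
        gains = {name: speedup(alloc[name] + 1) - speedup(alloc[name])
                 for name, speedup in apps.items()}
        winner = max(gains, key=gains.get)
        alloc[winner] += 1
    return alloc

# Placeholder speedup curves: one app scales well, the other saturates quickly.
apps = {
    "matmul": lambda n: n ** 0.9,            # near-linear scaling
    "bfs":    lambda n: 4 * (1 - 0.5 ** n),  # saturates around 4x
}
print(allocate_sms(apps, total_sms=16))
```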