37 research outputs found

    On The Non-linear Distortion Effects in an OFDM-RoF Link

    No full text
    Radio over Fiber (RoF) is a promising technique for microcell and picocell applications in the deployment of future ubiquitous wireless data networks. However, the performance of RoF systems can be severely degraded by non-linear effects in the channel. Orthogonal Frequency Division Multiplexing (OFDM), a standard for broadband wireless networks, is also being proposed for deployment with RoF systems to improve overall system performance. In this research, the performance of an OFDM-based RoF link with Mach-Zehnder modulator distortion effects is first analyzed at 5.8 GHz. The mean-squared error of the proposed OFDM-RoF system is evaluated against a conventional single-carrier RoF link, both under modulator distortion and for a fixed Signal to Noise Ratio (SNR) of 20 dB using the undistorted OFDM signal. Nominal and offset biasing pre-distortion techniques are then applied to linearize the OFDM-RoF link. Finally, a comparison of these pre-distortion techniques, in terms of distortion-free dynamic range and SNR, motivates the choice of the offset pre-distortion technique for the proposed system.
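For context, the non-linearity discussed above stems from the Mach-Zehnder modulator's cosine-squared intensity transfer. A minimal sketch of that transfer curve (parameter names and values are illustrative, not taken from the paper):

```python
import numpy as np

def mzm_intensity(v, v_pi=1.0, bias=0.5):
    """Mach-Zehnder modulator intensity transfer: a cos^2 non-linearity.
    `bias` is the DC operating point in units of V_pi; 0.5 is quadrature,
    where the curve is locally most linear."""
    return np.cos((np.pi / 2) * (v / v_pi + bias)) ** 2

# At quadrature the transfer is odd-symmetric about its midpoint, so small
# drive signals see a near-linear slope; large-swing (high-PAPR) OFDM
# samples are compressed by the cosine curvature, producing distortion.
small_swing = mzm_intensity(np.array([-0.05, 0.0, 0.05]))
```

Biasing away from quadrature (the "offset biasing" mentioned above) trades this local linearity against other distortion products, which is what the paper's pre-distortion comparison quantifies.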

    On the error vector magnitude as a performance metric and comparative analysis

    No full text
    In this paper, we present the error vector magnitude (EVM) as a figure of merit for assessing the quality of digitally modulated telecommunication signals. We define EVM for a common industry standard and derive the relationships among EVM, signal to noise ratio (SNR) and bit error rate (BER). We also compare the different performance metrics and show that EVM can be as useful as SNR and BER. A few simulation results are presented to illustrate the performance of EVM based on these relationships.
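The paper's derivation is not reproduced here, but the standard data-aided relationship EVM_rms ≈ 1/√SNR (for a unit-power constellation in AWGN) can be checked numerically; a minimal sketch:

```python
import numpy as np

def evm_rms(received, ideal):
    """RMS error vector magnitude, normalized to average ideal symbol power."""
    err = received - ideal
    return np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ideal) ** 2))

rng = np.random.default_rng(0)
n = 200_000
# Unit-power QPSK symbols.
ideal = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

snr_db = 20.0
snr = 10 ** (snr_db / 10)
# Complex AWGN with total variance 1/SNR (split between I and Q).
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(1 / (2 * snr))

measured = evm_rms(ideal + noise, ideal)
predicted = 1 / np.sqrt(snr)  # EVM ≈ 1/sqrt(SNR), i.e. 10% at 20 dB
```

The measured EVM converges to the predicted value as the number of symbols grows, which is the kind of relationship the abstract refers to.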

    Adaptive energy minimization of OpenMP parallel applications on many-core systems

    No full text
    Energy minimization of parallel applications is an emerging challenge for current and future generations of many-core computing systems. In this paper, we propose a novel and scalable energy minimization approach that suitably applies DVFS in the sequential part and jointly considers DVFS and dynamic core allocation in the parallel part. Fundamental to this approach is an iterative learning-based control algorithm that adapts the voltage/frequency scaling and core allocations dynamically based on workload predictions, guided by the CPU performance counters at regular intervals. The adaptation is facilitated through performance annotations in the application code, defined in a modified OpenMP runtime library. The proposed approach is validated on an Intel Xeon E5-2630 platform with up to 24 CPUs running NAS parallel benchmark applications. We show that our proposed approach can effectively adapt to different architectures and core allocations and minimize energy consumption by up to 17% compared to existing approaches for a given performance requirement.
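The paper's controller is counter-driven and OpenMP-integrated; as a rough illustration of one plausible inner decision, the sketch below predicts the next interval's workload from past counters and picks the lowest frequency that still meets the interval deadline. All names, frequency values and the EWMA predictor are assumptions for illustration, not the paper's algorithm:

```python
def ewma_predict(history, alpha=0.5):
    """Exponentially weighted prediction of next-interval workload (cycles)
    from per-interval performance-counter readings."""
    pred = history[0]
    for w in history[1:]:
        pred = alpha * w + (1 - alpha) * pred
    return pred

def select_frequency(freqs_mhz, predicted_cycles, interval_s, slack=0.05):
    """Pick the lowest available frequency that finishes the predicted work
    within the control interval, keeping a small slack margin; fall back to
    the maximum frequency if none suffices."""
    for f in sorted(freqs_mhz):
        if predicted_cycles / (f * 1e6) <= interval_s * (1 - slack):
            return f
    return max(freqs_mhz)

# Hypothetical platform: three P-states, 100 ms control interval.
freqs = [1200, 1800, 2400]
pred = ewma_predict([0.9e8, 1.1e8])  # cycles observed in past intervals
```

A real governor would also fold in the dynamic core-allocation decision and close the loop on measured, not just predicted, progress.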

    Learning-based runtime management of energy-efficient and reliable many-core systems

    No full text
    This paper highlights and demonstrates our research to date addressing the energy-efficiency and reliability challenges of many-core systems through intelligent runtime management algorithms. The algorithms are implemented through cross-layer interactions between three layers: application, runtime and hardware, forming our core theme of layers working together. The annotated application tasks communicate their performance, energy or reliability requirements to the runtime. Given these requirements, the runtime exercises the hardware through various control knobs and receives feedback on these controls through the performance monitors. The aim is to learn, at runtime, the hardware controls that best achieve energy efficiency and improved reliability while meeting the specified application requirements.

    Thermal-aware adaptive energy minimization of OpenMP parallel applications

    No full text
    Energy minimization of parallel applications considering the thermal distribution among processor cores is an emerging challenge for current and future generations of many-core computing systems. This paper proposes an adaptive energy minimization approach that hierarchically applies dynamic voltage/frequency scaling (DVFS), thread-to-core affinity and dynamic concurrency control (DCT) to address this challenge. The aim is to minimize energy consumption and achieve a balanced thermal distribution among cores, thereby improving the lifetime reliability of the system, while meeting a specified power budget. Fundamental to this approach is an iterative learning-based control algorithm that adapts the DVFS settings and core allocations dynamically based on the CPU workloads and thermal distributions of the processor cores, guided by the CPU performance counters at regular intervals. The adaptation is facilitated through modified OpenMP library-based power budget annotations. The proposed approach is extensively validated on an Intel Xeon E5-2630 platform with up to 12 CPUs running NAS parallel benchmark applications.

    Learning transfer-based adaptive energy minimization in embedded systems

    No full text
    Embedded systems execute applications with different performance requirements. These applications exercise the hardware differently depending on the type of computation being carried out, generating workloads that vary with time. We demonstrate that energy minimization under such workload and performance variations within (intra) and across (inter) applications is particularly challenging. To address this challenge we propose an online energy minimization approach capable of adapting to these variations. At the core of the approach is a reinforcement learning algorithm that selects appropriate voltage/frequency scaling (VFS) levels based on workload predictions to meet the applications' performance requirements. The adaptation is then facilitated and expedited through learning transfer, which uses the interaction between the application, runtime and hardware layers to adjust the power control levers. The proposed approach is implemented as a power governor in Linux and validated on an ARM Cortex-A8 running different benchmark applications. We show that, under intra- and inter-application variations, our approach can effectively reduce energy consumption by up to 33% compared to existing approaches. Scaling the approach to multi-core systems, we also show that it can reduce energy by up to 18% with a 2X reduction in learning time compared with a recently reported approach.
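The governor's learning core can be pictured as tabular Q-learning over workload states and V/F-level actions, with "learning transfer" seeding a new application's table from an already-trained one. This is a hedged sketch of that idea, not the paper's governor; state/action encodings and the reward are hypothetical:

```python
import random

class VFSLearner:
    """Tabular Q-learning sketch: states are workload bins, actions are
    V/F levels. The reward (supplied by the caller) would trade energy
    against a performance-miss penalty."""

    def __init__(self, n_states, n_actions, alpha=0.2, gamma=0.9, eps=0.1, q=None):
        self.q = q if q is not None else [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def act(self, s):
        """Epsilon-greedy V/F-level selection for workload bin s."""
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self.q[s]
        return row.index(max(row))

    def update(self, s, a, reward, s_next):
        """Standard one-step Q-learning update."""
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

    def transfer_to(self, other):
        """Learning transfer: seed another application's table with this
        one's values so it starts from learned, not zero, estimates."""
        other.q = [row[:] for row in self.q]
```

Seeding the table is one simple way transfer can cut learning time, consistent with the 2X reduction the abstract reports for its (different, cross-layer) mechanism.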

    DeSyRe: on-Demand System Reliability

    No full text
    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    Investigation into low power and reliable system-on-chip design

    No full text
    The demand for multiprocessor systems-on-chip (MPSoC) with low power consumption and high reliability in the presence of soft errors is likely to continue to increase. However, low power and reliable MPSoC design is challenging due to the conflicting trade-off between the power minimisation and reliability objectives. This thesis is concerned with the development and validation of techniques to facilitate effective design of low power and reliable MPSoCs. Special emphasis is placed upon system-level design techniques for MPSoCs with voltage-scaling-enabled processors, highlighting the trade-offs between performance, power consumption and reliability.

    An important aspect of system-level design is validating reliability in the presence of soft errors through simulation. The first part of the thesis addresses the development of a SystemC fault injection simulator based on a novel fault injection technique. Using an MPEG-2 decoder and other examples, it is shown that the simulator benefits from minimal design intrusion and high fault representation. The simulator is used throughout the thesis to facilitate the study of MPSoC reliability.

    The on-chip communication architecture plays a vital role in determining the performance and reliability of MPSoCs. The second part of the thesis focuses on a comparative study between two types of on-chip communication architecture: network-on-chip (NoC) and the advanced microcontroller bus architecture (AMBA). The comparisons are carried out using real application traffic based on an MPEG-2 video decoder, demonstrating the trade-off between performance and reliability.

    The third part of the thesis concentrates on low power and reliable system-level design techniques. Two new techniques are presented, both capable of generating designs optimised for low power consumption and reliability. The first demonstrates power minimisation through appropriate voltage scaling of the MPSoC cores, such that real-time constraints are met and reliability is maintained at an acceptable level. The second deals with joint optimisation of power minimisation and reliability improvement for time-constrained MPSoCs. Extensive experiments are conducted for these two techniques using different applications, including an MPEG-2 video decoder. It is shown that the proposed techniques give significant power reduction and reliability improvement compared to existing techniques.
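The thesis's simulator is SystemC-based and is not reproduced here, but the core mechanism of soft-error injection, flipping one bit of a stored value and comparing the run's output against a fault-free "golden" result, can be sketched generically (all names here are illustrative):

```python
import random

def inject_bit_flip(word, width=32, rng=random):
    """Model a single-event upset: flip one uniformly chosen bit of a value."""
    bit = rng.randrange(width)
    return word ^ (1 << bit), bit

def failure_rate(compute, inputs, n_trials=1000, rng=random):
    """Minimal injection campaign: corrupt one trial input per run and count
    how often the output diverges from the golden (fault-free) result."""
    failures = 0
    for _ in range(n_trials):
        x = rng.choice(inputs)
        golden = compute(x)
        corrupted, _ = inject_bit_flip(x)
        if compute(corrupted) != golden:
            failures += 1
    return failures / n_trials
```

Note that many injected flips are logically masked (e.g. a flip in a bit the computation never reads), which is exactly why campaign-style counting, rather than raw fault counts, is needed to estimate reliability.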

    Congestion Control of Ad Hoc Wireless LANs: A Control-theoretic paradigm to digital filter based solution

    No full text
    An ad hoc wireless LAN is a collection of wireless mobile nodes dynamically forming a temporary network without any pre-existing network infrastructure or centralized administration. Owing to its distributed nature, flexibility, robustness and ease of installation, the ad hoc wireless LAN has greatly increased the scope for research in wireless communications. Since there is no defined structure, congestion control for systems in which each ad hoc node can request a certain bandwidth poses the challenge of uncertain delay and instability, and thus remains an open research problem. An ideal congestion control scheme for a multi-hop ad hoc network would have to ensure that the bandwidth requests and the input and output rates are regulated at chosen bridges as well as at the source and destination controllers. In this thesis, a novel congestion control scheme for multi-hop wireless LANs based on a time-delay model is developed. The design of the proposed control model is derived from internal model control principles, with control exercised by a model reference controller and an error controller. Based on the congestion scenario, the reference controller sets a feasible reference value for the queue length, while the error controller feeds back rate-based compensation for the error between the reference and instantaneous queue lengths to counteract congestive disturbances. The proposed scheme uses a Smith Predictor in the error controller to compensate for the backward delay, often referred to as "dead time" in control-engineering terms, and thereby mitigate the stability problems that may otherwise occur. Building on the continuous-time model, a discretized and simplified digital-filter-based solution is devised to exploit the fast digital filters available today, without compromising the scalability of the rate-based scheme, and to enable a hardware-based implementation.
    The control objectives are set to ensure full link utilization and to achieve maximum rate recovery as soon as congestion has cleared, subject to system stability. Simulations are performed to illustrate the performance of the controller under different congestion scenarios.
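The Smith-predictor idea above can be illustrated with a toy discrete-time queue: the controller acts on the measured queue plus the modelled effect of rate commands still "in flight", so the dead time drops out of the feedback loop. This is a generic textbook sketch, not the thesis's controller; gains, delay and demand values are hypothetical:

```python
def simulate(ref=50.0, delay=5, steps=400, k=0.4, demand=10.0):
    """Discrete-time queue with an input dead time of `delay` steps,
    controlled through a Smith predictor. `ref` is the target queue length,
    `k` the proportional gain, `demand` the constant drain rate."""
    q = 0.0
    pending = [0.0] * delay               # rate corrections sent, not yet applied
    for _ in range(steps):
        q_pred = q + sum(pending)          # dead-time-compensated queue estimate
        u = k * (ref - q_pred) + demand    # rate command (demand feed-forward)
        pending.append(u - demand)         # net effect enters the delay line
        q = max(0.0, q + pending.pop(0))   # delayed correction finally arrives
    return q

final_queue = simulate()  # settles at the reference despite the dead time
```

Without the `sum(pending)` compensation the same gain oscillates for long delays, which is the stability problem the predictor is introduced to mitigate.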

    Personal multimedia communications: simulations and analyses

    No full text