14 research outputs found

    A Problem-Oriented Approach for Dynamic Verification of Heterogeneous Embedded Systems

    This work presents a virtual prototyping methodology for the design and verification of industrial devices at the field level of industrial automation systems. It demonstrates that virtual prototypes can increase confidence in the correctness of a design by providing a deeper understanding of the complex interactions between the hardware, software, and analog/mixed-signal components of embedded systems and the physical processes they interact with.

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be a periodic annual meeting for exposing and discussing gathered expertise as well as state-of-the-art research on SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    Blind adaptive equalization for QAM signals: New algorithms and FPGA implementation.

    Adaptive equalizers remove signal distortion attributed to intersymbol interference in band-limited channels. The tap coefficients of adaptive equalizers are time-varying and can be adapted using several methods. When adaptation does not rely on the transmission of a training sequence, it is referred to as blind equalization. The radius-adjusted approach is a method to achieve blind equalizer tap adaptation based on the equalizer output radius for quadrature amplitude modulation (QAM) signals. Static circular contours are defined around an estimated symbol in a QAM constellation, which create regions that correspond to fixed step sizes and weighting factors. The equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria that is scaled by a variable step size. This approach is the basis of two new algorithms: the radius-adjusted modified multimodulus algorithm (RMMA) and the radius-adjusted multimodulus decision-directed algorithm (RMDA). An extension of the radius-adjusted approach is the selective update method, a computationally efficient method for equalization. The selective update method employs a stop-and-go strategy based on the equalizer output radius to selectively update the equalizer tap coefficients, thereby reducing the number of computations in steady-state operation. A sketch of the update rule follows. (Abstract shortened by UMI.) Source: Masters Abstracts International, Volume: 45-01, page: 0401. Thesis (M.A.Sc.)--University of Windsor (Canada), 2006.
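
    The radius-adjusted update lends itself to a compact illustration. The following Python fragment is a minimal sketch of the idea only, not the thesis's implementation: the contour thresholds, step sizes, and weighting factors are invented placeholders, and the combined criterion loosely follows the RMDA description above.

        import numpy as np

        # 16-QAM constellation with the usual +/-1, +/-3 levels (unnormalized).
        QAM16 = np.array([re + 1j * im for re in (-3, -1, 1, 3)
                                       for im in (-3, -1, 1, 3)])
        # Multimodulus dispersion constant per rail: E[a^4] / E[a^2].
        R2 = np.mean(QAM16.real ** 4) / np.mean(QAM16.real ** 2)

        def nearest_symbol(y):
            # Hard decision: the constellation point closest to the output.
            return QAM16[np.argmin(np.abs(QAM16 - y))]

        def region_parameters(y):
            # Static circular contours around the estimated symbol map the
            # output radius to a fixed step size (mu) and weighting factor
            # (lam). Thresholds and constants are illustrative placeholders.
            r = abs(y - nearest_symbol(y))
            if r < 0.3:
                return 1e-4, 0.1   # near a symbol: small step, trust decision
            if r < 0.7:
                return 5e-4, 0.5   # intermediate contour: blend criteria
            return 1e-3, 0.9       # far from any symbol: favor blind criterion

        def rmda_step(w, x):
            # One tap update in the spirit of RMDA: a linearly weighted sum of
            # a multimodulus error and a decision-directed error, scaled by
            # the region-dependent step size.
            y = np.dot(w, x)                    # equalizer output
            mu, lam = region_parameters(y)
            e_mm = (y.real * (y.real ** 2 - R2)
                    + 1j * y.imag * (y.imag ** 2 - R2))
            e_dd = y - nearest_symbol(y)
            e = lam * e_mm + (1.0 - lam) * e_dd
            return w - mu * e * np.conj(x)      # stochastic-gradient step

    The selective update method described above would simply return w unchanged when the output falls inside the innermost contour in steady state, skipping the multiply-accumulate work entirely.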

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. The book introduces the most prominent reliability concerns from today's point of view and roughly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or the system level alone, this book addresses reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). It aims to demonstrate how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Removal of Unnecessary Context Switches from the SystemC Simulation Kernel for Fast VP Simulation

    Virtual prototypes (VPs) are widely employed in today's development of embedded hardware and software. To model and simulate VPs, SystemC has been adopted as a standard language. With SystemC, hardware modules and software code can be modeled as processes. To model concurrency, one process can be suspended while the SystemC scheduler selects the next process to resume; this is known as context switching. Context switching can consume a large portion of simulation time and heavily degrade simulation performance. In this paper, a method is proposed to avoid unnecessary context switches by taking concurrency into account when determining whether a context switch is necessary. The case study shows that, by applying the method, performance can be increased significantly without any loss of simulation accuracy.
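
    The cost being removed can be pictured with a toy cooperative scheduler. The Python sketch below is not SystemC and not the paper's kernel modification; it only illustrates, under invented simplifications, how a wait() normally suspends the caller and re-enters the scheduler, and how that round trip can be skipped when no other process is runnable in the meantime.

        import heapq

        class Sim:
            def __init__(self):
                self.now = 0.0
                self.queue = []        # (wake_time, seq, process generator)
                self.seq = 0

            def spawn(self, gen):
                self._schedule(self.now, gen)

            def _schedule(self, t, gen):
                heapq.heappush(self.queue, (t, self.seq, gen))
                self.seq += 1

            def run(self):
                while self.queue:
                    t, _, gen = heapq.heappop(self.queue)
                    self.now = t
                    try:
                        while True:
                            # Process runs until it calls wait(delay),
                            # modeled here as yielding the delay.
                            delay = next(gen)
                            wake = self.now + delay
                            if self.queue and self.queue[0][0] <= wake:
                                # Another process is runnable first: a
                                # genuine context switch is required.
                                self._schedule(wake, gen)
                                break
                            # No other process can run before 'wake', so the
                            # suspend/resume round trip is unnecessary: just
                            # advance time and keep executing this process.
                            self.now = wake
                    except StopIteration:
                        pass

        def producer():
            for i in range(3):
                print(f"produce {i}")
                yield 1.0              # wait(1.0) in SystemC terms

        sim = Sim()
        sim.spawn(producer())
        sim.run()

    In this single-process run, every wait() is fast-forwarded; with a second runnable process in the queue, the scheduler falls back to the ordinary switch.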

    Green Wave : A Semi Custom Hardware Architecture for Reverse Time Migration

    Over the course of the last few decades, the scientific community has greatly benefited from steady advances in compute performance. Until the early 2000s this performance improvement was achieved through rising clock rates, which enabled plug-and-play performance improvements for all codes. In 2005 the stagnation of CPU clock rates drove hardware manufacturers to attain further performance through explicit parallelism. Now the HPC community faces a new, even bigger challenge. So far, performance gains have been achieved through replication of general-purpose cores and nodes. Unfortunately, rising cluster sizes resulted in skyrocketing energy costs, and a paradigm change in HPC architecture design is inevitable. In combination with the increasing cost of data movement, the HPC community started exploring alternatives such as GPUs and large arrays of simple, low-power cores (e.g. BlueGene) that offer better performance per watt and greater scalability. Like science in general, the seismic community faces large-scale, complex computational challenges that can only partially be solved with available compute capabilities. Such challenges include the physically correct modeling of subsurface rock layers. This thesis analyzes the requirements and performance of isotropic (ISO), vertical transverse isotropic (VTI) and tilted transverse isotropic (TTI) wave-propagation kernels as they appear in the Reverse Time Migration (RTM) imaging method. It finds that even with leading-edge, commercial off-the-shelf hardware, large-scale surveys cannot be imaged within reasonable time and power constraints. This thesis uses a novel architecture design method that leverages a hardware/software co-design approach, adopted from the mobile and embedded markets, for HPC. The methodology tailors an architecture design to a class of applications without the loss of generality incurred by full-custom designs. This approach was first applied in the Green Flash project, which proved that the co-design approach has the potential for high energy-efficiency gains. This thesis presents the novel Green Wave architecture derived from the Green Flash project. Rather than focusing on climate codes, like Green Flash, Green Wave chooses RTM wave-propagation kernels as its target application. The goal of the application-driven, co-designed Green Wave approach is thus to enable full programmability while allowing greater computational efficiency than general-purpose processors or GPUs, by offering custom extensions to the processor's ISA, correctly sizing software-managed memories, and providing an efficient on-chip network interconnect. The lowest-level building blocks of the Green Wave design are pre-verified IP components. This minimizes the amount of custom logic in the design, which in turn reduces verification costs and design uncertainty. In this thesis, three Green Wave architecture designs derived from the ISO, VTI and TTI kernel analysis are introduced. Further, a programming model is proposed that is capable of hiding all communication latencies. Green Wave's performance is benchmarked with production-strength, cycle-accurate hardware simulators and compared to leading on-market systems from Intel, AMD and NVIDIA. Based on a large-scale example survey, the results show that Green Wave has the potential for an energy-efficiency improvement of 5x compared to x86-based and 1.4x-4x compared to GPU-based clusters for ISO, VTI and TTI kernels.
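
    The kernels in question are stencil updates of the acoustic wave equation. As a point of reference only, the Python fragment below sketches a minimal 2D isotropic, second-order time step; the thesis targets higher-order 3D ISO/VTI/TTI kernels, and all grid parameters here are illustrative.

        import numpy as np

        def step(p_prev, p_cur, vel, dt, dx):
            # Second-order-in-time, second-order-in-space update of
            # p_tt = v^2 * laplacian(p); np.roll gives periodic boundaries,
            # which suffice for a sketch.
            lap = (-4.0 * p_cur
                   + np.roll(p_cur, 1, axis=0) + np.roll(p_cur, -1, axis=0)
                   + np.roll(p_cur, 1, axis=1) + np.roll(p_cur, -1, axis=1)) / dx ** 2
            return 2.0 * p_cur - p_prev + (vel * dt) ** 2 * lap

        n, dx, dt = 200, 10.0, 1e-3      # grid points, spacing [m], step [s]
        vel = np.full((n, n), 2000.0)    # homogeneous 2000 m/s medium
        p_prev = np.zeros((n, n))
        p_cur = np.zeros((n, n))
        p_cur[n // 2, n // 2] = 1.0      # impulsive point source

        for _ in range(500):             # propagate the wavefield
            p_prev, p_cur = p_cur, step(p_prev, p_cur, vel, dt, dx)

    The chosen parameters satisfy the CFL stability bound (v*dt/dx = 0.2). The low arithmetic intensity of such stencils is exactly what motivates the custom ISA extensions and software-managed memories described above.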

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    El inglés de los ordenadores. Inglés técnico para informáticos, con apuntes de gramática y glosarios

    El inglés de los ordenadores is a technical-English manual aimed at university students and computing professionals who have an intermediate level of general English and need to study in depth the structures and vocabulary typical of technical texts in computer science. In this second edition, our main objective has been to update the topics, expand the vocabulary, and enrich the learning strategies in order to give the book greater flexibility and communicative capacity. In short, thanks to its organization, the wide variety of texts, topics and exercises it presents, and its splendid glossaries, the work in your hands is a true «manual of manuals»: El inglés de los ordenadores is at once a textbook, a grammar, a workbook and a technical dictionary, all designed to provide the reader with the information and resources needed to adapt easily yet rigorously to different teaching and learning situations.

    Computer Aided Verification

    This open access two-volume set, LNCS 10980 and 10981, constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full papers and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.