Anti-computing
We live in a moment of high anxiety around digital transformation. Computers are blamed for generating toxic forms of culture and ways of life. Once part of future imaginaries that were optimistic or even utopian, today there is a sense that things have turned out very differently. Anti-computing is widespread. This book seeks to understand its cultural and material logics, its forms, and its operations. Anti-Computing critically investigates forgotten histories of dissent: moments when the imposition of computational technologies, logics, techniques, imaginaries, and utopias has been questioned, disputed, or refused. It asks why dissent is forgotten and how, under what circumstances, it revives. Constituting an engagement with media archaeology/medium theory and working through a series of case studies, this book is compelling reading for scholars in digital media, literary and cultural history, digital humanities, and associated fields at all levels.
Anti-computing
Anti-computing explores forgotten histories and contemporary forms of dissent: moments when the imposition of computational technologies, logics, techniques, imaginaries, and utopias has been questioned, disputed, or refused. It also asks why these moments tend to be forgotten. What is it about computational capitalism that means we live so much in the present? What has this to do with computational logics and practices themselves? The book addresses these issues through a critical engagement with media archaeology and medium theory and by way of a series of original studies, exploring Hannah Arendt and early automation anxiety, witnessing and the database, the Two Cultures from the inside out, bot fear, and the singularity and/as science fiction. Finally, it returns to remap long-standing concerns against new forms of dissent, hostility, and automation anxiety, producing a distant reading of contemporary hostility. At once an acute response to urgent concerns around toxic digital cultures, an accounting with media archaeology as a mode of medium theory, and a series of original and methodologically fluid case studies, this book crosses an interdisciplinary research field including cultural studies, media studies, medium studies, critical theory, literary and science fiction studies, media archaeology, medium theory, cultural history, and the history of technology.
Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021
The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.
Optical Communication
Optical communication is widely used in telecommunication systems, data processing, and networking. An optical communication system consists of a transmitter that encodes a message into an optical signal, a channel that carries the signal to its destination, and a receiver that reproduces the message from the received optical signal. This book presents up-to-date results on communication systems, along with explanations of their relevance, from leading researchers in this field. The chapters cover general concepts of optical communication, components, systems, networks, signal processing, and MIMO systems. Optical devices, networking, and other enhanced signal processing functions developed in recent years are also considered in depth. The book is targeted at research, development, and design engineers in the manufacturing and telecommunication industries as well as academia.
Semi-automated Design of High-performance Digital Circuits with Xilinx FPGAs
This master's thesis deals with the design of sequential digital circuits with a focus on delay optimization. Two techniques commonly used for this optimization are described: retiming is covered briefly, while pipelining is treated in more depth. In the practical part of the thesis, an abstraction of sequential digital circuits as Directed Acyclic Graphs (DAGs) was developed; this abstraction represents the circuit in a form that is easier to transform. A tool for semi-automatic optimization of digital circuits using pipelining is also introduced; the tool is compatible with the Xilinx ISE Design Suite.
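To make the DAG abstraction concrete, here is a minimal Python sketch, not the thesis tool itself: it models a toy combinational circuit as a directed acyclic graph, computes the critical-path delay, and picks a register cut near half that delay, which is the essence of pipelining. The node names, delays, and the networkx dependency are illustrative assumptions.

```python
import networkx as nx  # assumed dependency for the DAG representation

# Toy combinational circuit as a DAG; node delays in nanoseconds.
g = nx.DiGraph()
for name, delay in [("in", 0.0), ("mul", 3.0), ("add", 3.0), ("out", 0.0)]:
    g.add_node(name, delay=delay)
g.add_edges_from([("in", "mul"), ("mul", "add"), ("add", "out")])

def arrival_times(dag):
    """Longest accumulated delay from the inputs to each node."""
    arrival = {}
    for node in nx.topological_sort(dag):
        preds = [arrival[p] for p in dag.predecessors(node)]
        arrival[node] = (max(preds) if preds else 0.0) + dag.nodes[node]["delay"]
    return arrival

arrival = arrival_times(g)
total = max(arrival.values())  # critical-path delay: 6.0 ns here

# A pipeline register cut: edges crossing the half-delay boundary. Registers
# on these edges roughly halve the clock period at the cost of one cycle of
# latency (two 3.0 ns stages instead of a 6.0 ns combinational path).
cut = [(u, v) for u, v in g.edges() if arrival[u] <= total / 2 < arrival[v]]
print(f"critical path {total} ns; insert registers on {cut}")
```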
Logical partitioning of parallel system simulations
Simulation has been a fundamental tool to prototype, hypothesize, and evaluate
new ideas to continue improving system performance. However, increasing levels
of processor parallelism and heterogeneity have introduced additional
constraints when evaluating new designs. The work embodied in this dissertation
explores how to leverage novel ideas in simulator partitioning to improve
simulator speed and flexibility for simulating these new types of systems.
The contributions of this work include the introduction of optimistic
partitioned simulation for improved parallelization and of warped
partitioned simulation for improved flexibility. These ideas are refined,
and their benefits over state-of-the-art approaches are demonstrated
through prototypes. By leveraging partitioning in a structured manner, it
is possible to design simulators that better address the open challenges
of parallel and heterogeneous systems design.
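As a concrete illustration of what "optimistic" means in this setting, the following Python sketch shows Time Warp-style optimistic synchronization for a single simulator partition: events are processed speculatively, state is checkpointed, and a straggler event with an earlier timestamp triggers a rollback. This is a generic textbook mechanism under assumed names, not the dissertation's actual design.

```python
import copy
import heapq

class Partition:
    """One logical partition of the simulation, synchronized optimistically."""

    def __init__(self, name):
        self.name = name
        self.state = {}        # simulated component state (toy: event counts)
        self.lvt = 0.0         # local virtual time
        self.pending = []      # min-heap of (timestamp, event)
        self.checkpoints = []  # (pre-event lvt, pre-event state, ts, event)

    def schedule(self, ts, event):
        if ts < self.lvt:
            self._rollback(ts)  # straggler: undo speculative progress
        heapq.heappush(self.pending, (ts, event))

    def step(self):
        """Optimistically process the earliest pending event."""
        if not self.pending:
            return
        ts, event = heapq.heappop(self.pending)
        # Checkpoint so this speculative step can be undone later.
        self.checkpoints.append((self.lvt, copy.deepcopy(self.state), ts, event))
        self.lvt = ts
        self.state[event] = self.state.get(event, 0) + 1  # toy state update

    def _rollback(self, ts):
        # Restore the latest checkpoint taken before virtual time ts and
        # re-enqueue the undone events so they are reprocessed in order.
        while self.checkpoints and self.lvt >= ts:
            self.lvt, self.state, ev_ts, ev = self.checkpoints.pop()
            heapq.heappush(self.pending, (ev_ts, ev))

p = Partition("core0")
p.schedule(1.0, "tick"); p.step()
p.schedule(3.0, "tick"); p.step()
p.schedule(2.0, "irq")        # straggler: rolls the t=3.0 step back
p.step(); p.step()
print(p.lvt, p.state)         # 3.0 {'tick': 2, 'irq': 1}
```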
Cross-Layer Pathfinding for Off-Chip Interconnects
Off-chip interconnects for integrated circuits (ICs) today induce a diverse design space, spanning many different applications that require transmission of data at various bandwidths, latencies, and link lengths. Off-chip interconnect design solutions are variously sensitive to system performance, power, and cost metrics, while also having a strong impact on these metrics. The costs associated with off-chip interconnects include die area, package (PKG) and printed circuit board (PCB) area, technology, and bill of materials (BOM). Choices made regarding off-chip interconnects are fundamental to product definition, architecture, design implementation, and technology enablement. Given their cross-layer impact, it is imperative that a cross-layer approach be employed to architect and analyze off-chip interconnects up front, so that a top-down design flow can comprehend the cross-layer impacts and correctly assess the system performance, power, and cost tradeoffs. Chip architects are not exposed to all the tradeoffs at the physical, circuit implementation, or technology layers, and often lack the tools to accurately assess off-chip interconnects. Furthermore, the collateral needed for a detailed analysis is often lacking when the chip is architected, including circuit design and layout, PKG and PCB layout, and the physical floorplan and implementation.
To address the need for a framework that enables architects to assess the system-level impact of off-chip interconnects, this thesis presents power-area-timing (PAT) models for off-chip interconnects; optimization and planning tools, at the appropriate abstraction, built on these PAT models; and die/PKG/PCB co-design methods that expose the off-chip interconnect cross-layer metrics to the die/PKG/PCB design flows. Together, these models, tools, and methods enable cross-layer optimization that allows a top-down definition and exploration of the design space and helps converge on the correct off-chip interconnect implementation and technology choice. The tools presented cover off-chip memory interfaces for mobile and server products, silicon photonic interfaces, 2.5D silicon interposers, and 3D through-silicon vias (TSVs). The goal of the cross-layer framework is to assess the key metrics of the interconnect (such as timing, latency, active/idle/sleep power, and area/cost) at an appropriate level of abstraction across the layers of the design flow.
In addition to signal interconnects, this thesis also explores the need for such cross-layer pathfinding for power distribution networks (PDNs), where the system-on-chip (SoC) floorplan and pinmap must be optimized before the collateral layouts for PDN analysis are ready. Altogether, the developed cross-layer pathfinding methodology for off-chip interconnects enables more rapid and thorough exploration of a vast design space of off-chip parallel and serial links, inter-die and inter-chiplet links, and silicon photonics. Such exploration will pave the way for off-chip interconnect technology enablement that is optimized for system needs. The basis of the framework can be extended to cover other interconnect technologies as well, since it fundamentally relates to system-level metrics that are common to all off-chip interconnects.
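As a loose illustration of how PAT models can drive early design-space exploration, the Python sketch below filters hypothetical interconnect candidates down to a Pareto-optimal set over power, area, and timing. All names and numbers are invented for illustration; the thesis's actual models and tools are far more detailed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PAT:
    name: str
    power_mw: float   # link power
    area_mm2: float   # die/PKG area cost
    timing_ns: float  # timing budget consumed

def dominates(a: PAT, b: PAT) -> bool:
    """True if a is no worse than b on every metric and better on at least one."""
    no_worse = (a.power_mw <= b.power_mw and a.area_mm2 <= b.area_mm2
                and a.timing_ns <= b.timing_ns)
    better = (a.power_mw < b.power_mw or a.area_mm2 < b.area_mm2
              or a.timing_ns < b.timing_ns)
    return no_worse and better

# Hypothetical candidates; all values are invented for illustration.
candidates = [
    PAT("wide-parallel",   power_mw=120.0, area_mm2=2.1, timing_ns=1.6),
    PAT("serdes",          power_mw=180.0, area_mm2=0.9, timing_ns=1.1),
    PAT("serdes-oldgen",   power_mw=200.0, area_mm2=1.0, timing_ns=1.3),
    PAT("2.5d-interposer", power_mw=90.0,  area_mm2=3.0, timing_ns=0.8),
]
pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
print([p.name for p in pareto])  # serdes-oldgen is dominated by serdes
```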
Modelling and performance analysis of multigigabit serial interconnects using real number based analog verification methods
The increasing importance of multigigabit transceiver circuits in modern chip design calls for new methods of analyzing and integrating these challenging building blocks. This work presents a design and analysis framework based on the SystemVerilog real number modeling approach. It further extends the simulation possibilities thus obtained by introducing additional higher-level numeric modelling and evaluation methods to support multigigabit statistical link budgeting procedures based on the Peak Distortion Algorithm.
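For readers unfamiliar with the Peak Distortion Algorithm, the simplified Python sketch below conveys its core idea: given a pulse response sampled once per unit interval, the worst-case eye opening is the main cursor minus the sum of the absolute ISI cursors. The sample values are invented, and real implementations handle pre/post-cursors, crosstalk, and probability distributions in far more detail.

```python
# Pulse response at the receiver, sampled once per unit interval (UI), in volts.
pulse = [0.02, 0.05, 0.80, 0.15, -0.06, 0.03]  # hypothetical samples

# The main cursor is the largest-magnitude sample of the pulse response.
main_idx = max(range(len(pulse)), key=lambda i: abs(pulse[i]))
main_cursor = pulse[main_idx]

# Worst-case inter-symbol interference: assume the data pattern that makes
# every other cursor close the eye as much as possible.
isi = sum(abs(v) for i, v in enumerate(pulse) if i != main_idx)

# Single-sided worst-case eye opening relative to the decision threshold.
worst_case_eye = main_cursor - isi
print(f"main cursor {main_cursor:.2f} V, worst-case eye {worst_case_eye:.2f} V")
```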
Design of Multi-Gigabit Network Interconnect Elements and Protocols for a Data Acquisition System in Radiation Environments
Modern High Energy Physics (HEP) experiments explore the fundamental nature
of matter in more depth than ever before and thereby benefit greatly from
advances in the field of communication technology. The huge data volumes
generated by increasingly precise detector setups pose severe problems for
the Data Acquisition Systems (DAQ) used to process and store this
information. In addition, detector setups and their read-out electronics need
to be synchronized precisely so that experiment events can later be
correlated accurately in time. Moreover, the substantial presence of charged
particles from accelerator-generated beams results in strong ionizing
radiation levels, which have a severe impact on the electronic systems.
This thesis proposes an architecture for unified network protocol IP cores
with custom-developed physical interfaces for reliable data acquisition
systems in strong radiation environments. Specially configured serial
bidirectional point-to-point interconnects are proposed to realize high-speed
data transmission, slow-control access, synchronization, and global clock
distribution on unified links, reducing costs and yielding compact and
efficient read-out setups. Special features include the radiation-hardened
functional units developed to protect against single and multiple bit upsets,
and a common interface for statistical error and diagnosis information, which
integrates well into the protocol capabilities and eases error handling in
large experiment setups. Many innovative designs for several custom FPGA and
ASIC platforms have been implemented and are described in detail. Special
focus is placed on the physical layers and network interface elements, from
high-speed serial LVDS interconnects up to 20 Gb/s SSTL links in
state-of-the-art process technology.
The developed IP cores are fully tested, both in an adapted verification
environment for electronic design automation tools and in live application.
They are available in a global repository, allowing broad usage within
further HEP experiments.
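One standard building block behind radiation-hardened functional units of this kind is triple modular redundancy (TMR) with majority voting; the Python sketch below illustrates the voting logic on a register value with a single bit upset. This is a generic illustration of the technique, not the thesis's FPGA/ASIC implementation.

```python
def tmr_vote(a: int, b: int, c: int, width: int = 32) -> int:
    """Bitwise majority of three redundant copies of a register.

    Each output bit is 1 iff at least two of the three input bits are 1,
    so any single-bit upset in one copy is outvoted by the other two.
    """
    mask = (1 << width) - 1
    return ((a & b) | (a & c) | (b & c)) & mask

value = 0xDEADBEEF
upset = value ^ (1 << 7)                       # single bit flip in one copy
assert tmr_vote(value, upset, value) == value  # the flip is voted out
```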
Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems
With the increasing demand for digital services, performance and
power-efficiency have become vital requirements for digital circuits and
systems. However, the enabling CMOS technology scaling has been facing
significant challenges from device uncertainties, such as process, voltage,
and temperature variations. To ensure system reliability, worst-case corner
assumptions are usually made at each design level. However, the
over-pessimistic worst-case margin leads to unnecessary power waste and
performance loss as high as 2.2x. Since optimizations are traditionally
confined to each specific level, those safety margins can hardly be properly
exploited.
To tackle this challenge, this Ph.D. thesis advocates cross-layer
optimization for digital signal processing circuits and systems, to achieve
a global balance between power consumption and output quality.
To conclude, the traditional over-pessimistic worst-case approach leads to
huge power waste. In contrast, the adaptive voltage scaling approach saves
power (25% for the CORDIC application) by providing a just-needed supply
voltage. The power saving is maximized (46% for CORDIC) when a more
aggressive voltage over-scaling scheme is applied. The sparse circuit errors
produced by aggressive voltage over-scaling are mitigated by error-resilient
designs at higher levels. For functions like FFT and CORDIC, smart error
mitigation schemes were proposed to enhance reliability against soft errors
and timing errors, respectively. Applications like Massive MIMO systems are
robust against lower-level errors thanks to their intrinsically redundant
antennas; this property makes them well suited to digital hardware that
trades quality for power savings.
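The power savings from voltage scaling follow from the roughly quadratic dependence of dynamic CMOS power on supply voltage; the toy Python sketch below illustrates the relation. The specific 25% and 46% figures above are the thesis's measured results, not outputs of this model.

```python
def dynamic_power(v, f, c=1.0, activity=1.0):
    """Relative dynamic CMOS power: activity * C * V^2 * f."""
    return activity * c * v**2 * f

nominal = dynamic_power(v=1.0, f=1.0)
scaled = dynamic_power(v=0.85, f=1.0)  # adaptive scaling to a just-needed V
print(f"saving: {(1 - scaled / nominal):.0%}")  # ~28% at 0.85 V, same frequency
```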