LSI/VLSI design for testability analysis and general approach
The incorporation of testability characteristics into large-scale digital design is necessary for effective device testing and for enhancing device reliability. There are at least three major design-for-testability (DFT) techniques, namely the self-checking, the LSSD, and the partitioning techniques, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. A detailed analysis of the design theory, implementation, fault coverage, hardware requirements, and application limitations of each of these techniques is also presented.
High level compilation for gate reconfigurable architectures
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 205-215). A continuing exponential increase in the number of programmable elements is turning the management of gate-reconfigurable architectures as "glue logic" into an intractable problem; it is past time to raise this abstraction level. The physical hardware in gate-reconfigurable architectures is all low level (individual wires, bit-level functions, and single-bit registers), hence one should look to the fetch-decode-execute machinery of traditional computers for higher-level abstractions. Ordinary computers have machine-level architectural mechanisms that interpret instructions, instructions that are generated by a high-level compiler. Efficiently moving up to the next abstraction level requires leveraging these mechanisms without introducing the overhead of machine-level interpretation. In this dissertation, I solve this fundamental problem by specializing architectural mechanisms with respect to input programs. This solution is the key to efficient compilation of high-level programs to gate-reconfigurable architectures. My approach to specialization includes several novel techniques. I develop, with others, extensive bitwidth analyses that apply to registers, pointers, and arrays. I use pointer analysis and memory disambiguation to target devices with blocks of embedded memory. My approach to memory parallelization generates a spatial hierarchy that enables easier-to-synthesize logic state machines with smaller circuits and no long wires. My space-time scheduling approach integrates the techniques of high-level synthesis with the static routing concepts developed for single-chip multiprocessors. Using DeepC, a prototype compiler demonstrating my thesis, I compile a new benchmark suite to Xilinx Virtex FPGAs.
Resulting performance is comparable to a custom MIPS processor, with smaller area (40 percent on average), higher evaluation speeds (2.4x), and lower energy (18x) and energy-delay (45x). Specialization of advanced mechanisms results in additional speedup, scaling with hardware area, at the expense of power. For comparison, I also target IBM's standard cell SA-27E process and the RAW microprocessor. Results include sensitivity analysis to the different mechanisms specialized and a grand comparison between alternate targets. By Jonathan William Babb, Ph.D.
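The bitwidth analyses mentioned above reduce, per register, to a range-to-width question: given the statically inferred value range of a variable, how many bits must its register have? A minimal sketch of that computation (a hypothetical helper for illustration, not DeepC's actual analysis), in Python:

```python
def bits_needed(lo, hi):
    """Minimum register width for an integer statically known to lie in [lo, hi].

    Non-negative ranges are sized as unsigned; ranges containing negative
    values are sized in two's complement.
    """
    if lo >= 0:
        # Unsigned: enough bits to represent hi (at least one bit).
        return max(hi.bit_length(), 1)
    # Signed: smallest w with -(2**(w-1)) <= lo and hi <= 2**(w-1) - 1.
    w = 1
    while lo < -(1 << (w - 1)) or hi > (1 << (w - 1)) - 1:
        w += 1
    return w


def add_ranges(a, b):
    """Interval addition: the result range of x + y given ranges for x and y.
    A compiler propagates ranges through operations like this, then sizes
    each register with bits_needed."""
    return (a[0] + b[0], a[1] + b[1])
```

For example, a loop counter known to stay in [0, 499] needs only `bits_needed(0, 499)` = 9 bits rather than a full 32-bit register, which is the kind of saving such an analysis makes available to logic synthesis.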
RTRLib: a high-level modeling tool for dynamically partially reconfigurable systems
Dissertação (mestrado)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2020.
Partial dynamic reconfiguration is considered an interesting technique to increase flexibility in
FPGA designs due to the dynamic replacement of hardware modules while the remainder of the
circuit remains in operation. It is used in systems with strict requirements such as adaptability,
robustness, power consumption, cost, and fault tolerance. However, the complexity of developing
dynamically partially reconfigurable systems is considerably higher than that of fully static
designs. Therefore, new design methodologies and tools are required to reduce the design
complexity of such systems.
In this context, this work presents RTRLib, a high-level modeling tool for the development
of dynamically reconfigurable systems on Xilinx Zynq devices through a simple system specification
and the parametrization of a few blocks. Under specific conditions, RTRLib automatically generates
the hardware and software scripts to implement the solution using Vivado and the SDK. These scripts
comprise the sequential design steps from hardware project creation to boot image
generation. Since RTRLib is composed of pre-characterized IP-Cores, the tool can also be used
to analyze the system during the design process through early estimation of essential
system characteristics such as resource consumption and latency.
The present work also includes new functionalities implemented in RTRLib in the context
of hardware and software design, such as: hardware script generalization, IO mapping,
floorplanning through a GUI, software script generation, a generator for a standalone template
application that uses the partial reconfiguration controller (PRC), and the implementation of a
FreeRTOS library for applications.
Finally, four case studies were implemented to demonstrate the tool's capabilities: a system
for terrain classification based on neural networks, a linear regressor system used to control a
myokinetic prosthetic hand, and a hypothetical application with real-time requirements.
Dynamically and partially reconfigurable hardware architectures for high performance microarray bioinformatics data analysis
Bioinformatics and Computational Biology (BCB) is a multidisciplinary field
that has emerged due to the computational demands of current state-of-the-art biotechnology.
BCB deals with the storage, organization, retrieval, and analysis of biological datasets,
which have grown in size and complexity in recent years especially after the completion of
the human genome project. The advent of Microarray technology in the 1990s introduced the
concept of the high-throughput experiment, a biotechnology that measures the
gene expression profiles of thousands of genes simultaneously. As such, Microarray analysis requires
high computational power to extract the biological relevance from its high-dimensional data.
Current general-purpose processors (GPPs) have been unable to keep up with the increasing
computational demands of Microarrays and have reached a limit in terms of clock speed.
Consequently, Field Programmable Gate Arrays (FPGAs) have been proposed as a viable
low-power solution to overcome the computational limitations of GPPs and other methods.
The research presented in this thesis harnesses current state-of-the-art FPGAs and tools to
accelerate some of the most widely used data mining methods used for the analysis of
Microarray data in an effort to investigate the viability of the technology as an efficient, low
power, and economical solution for the analysis of Microarray data. Three widely used
methods were selected for the FPGA implementations: one is the unsupervised K-means
clustering algorithm, while the other two are supervised classification methods,
namely the K-Nearest Neighbour (K-NN) and Support Vector Machine (SVM) classifiers. These
methods are thought to benefit from parallel implementation. This thesis presents detailed
designs and implementations of these three BCB applications on FPGA, captured in Verilog
HDL, whose performance is compared with equivalent implementations running on GPPs.
In addition to acceleration, the benefits of the dynamic partial reconfiguration (DPR)
capability of modern Xilinx FPGAs are investigated with reference to the aforementioned
data mining methods.
Implementing K-means clustering on FPGA using a non-DPR design flow outperformed
equivalent GPP and GPU implementations in terms of speed-up by two orders and one order of
magnitude, respectively, while being eight times more power efficient than the GPP
implementation and four times more than the GPU implementation. As for energy
efficiency, the FPGA implementation was 615 times more energy efficient than GPPs and 31 times more than GPUs. Moreover, the FPGA implementation's speed-up over the
GPP and GPU implementations increased with the dimensionality of the Microarray
data. Additionally, the DPR implementations of K-means clustering
showed speed-ups in partial reconfiguration time of ~5x and ~17x over full-chip reconfiguration
for the single-core and eight-core implementations, respectively.
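For reference, the computational core being accelerated here is the standard Lloyd iteration of K-means: assign each sample to its nearest centroid, then recompute each centroid as the mean of its members. A minimal software sketch (plain Python for illustration, not the thesis's Verilog implementation):

```python
import random


def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps.

    points: list of equal-length feature vectors (e.g. gene expression
    profiles). Returns (centroids, labels).
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        # (This step is where an FPGA parallelizes across points/dimensions.)
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: mean of the points assigned to each cluster.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(d) / len(members) for d in zip(*members)]
    return centroids, labels
```

The distance evaluations in the assignment step are independent of one another, which is what makes the algorithm amenable to the hardware parallelism exploited above.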
Two architectures of the K-NN classifier were implemented on FPGA, namely A1
and A2. The K-NN implementation based on the A1 architecture achieved a speed-up of ~76x
over an equivalent GPP implementation, whereas the A2 architecture achieved ~68x speed-up.
Furthermore, the FPGA implementation outperformed the equivalent GPP
implementation when the dimensionality of the data was increased. In addition, the DPR
implementations of the K-NN classifier achieved speed-ups in reconfiguration time
between ~4x and ~10x over full-chip reconfiguration when reconfiguring a portion of the
classifier or the complete classifier.
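For reference, the classification rule both architectures realize is the standard K-NN vote: compute the distance from the query to every training sample, then take a majority vote among the k nearest. A minimal software sketch (plain Python for illustration, not the A1/A2 datapaths):

```python
from collections import Counter


def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples.

    train: list of (feature_vector, label) pairs. The per-sample distance
    computations are independent, which is what makes the classifier
    amenable to parallel hardware implementation.
    """
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    neighbours = sorted(train, key=lambda s: dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```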
Similar to K-NN, two architectures of the SVM classifier were implemented on FPGA,
whereby the former outperformed an equivalent GPP implementation by ~61x and the latter
by ~49x. As for the DPR implementation of the SVM classifier, it showed a speed-up of
~8x in reconfiguration time when reconfiguring the complete core or when exchanging it
with a K-NN core to form a multi-classifier.
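The classification phase such architectures accelerate evaluates the trained SVM's decision function over its support vectors. A minimal linear-kernel sketch (plain Python for illustration; the support vectors, coefficients, and bias in the usage below are hypothetical trained parameters, not from the thesis):

```python
def svm_decision(support_vectors, coeffs, bias, x):
    """Evaluate an SVM decision: sign(sum_i c_i * K(sv_i, x) + bias),
    where c_i = alpha_i * y_i from training and K is a linear kernel.
    Each support-vector term is independent, which is the parallelism a
    hardware classifier exploits."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    score = bias + sum(c * dot(sv, x) for sv, c in zip(support_vectors, coeffs))
    return 1 if score >= 0 else -1
```

For example, with hypothetical support vectors `[[1, 0], [-1, 0]]` and coefficients `[1.0, -1.0]`, the decision reduces to the sign of the query's first feature.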
The aforementioned implementations clearly show FPGAs to be an efficacious, efficient,
and economical solution for bioinformatics Microarray data analysis.
Design and Implementation of a Scalable Hardware Platform for High Speed Optical Tracking
Optical tracking has been an important subject of research for several decades. Optical tracking systems are used in a wide range of areas, including
the military, medicine, industry, and entertainment.
In this thesis a complete hardware platform that targets high-speed optical tracking applications is presented. The implemented hardware system contains three main components: a high-speed camera equipped with a 1.3-megapixel image sensor capable of operating at 500 frames per second, a CameraLink grabber able to interface with three cameras, and an FPGA+Dual-DSP based image processing platform. The hardware system is designed using a modular approach. The flexible architecture makes it possible to construct a scalable optical tracking system that allows a large number of cameras to be used in the tracking environment.
One of the greatest challenges in a multi-camera optical tracking system is the huge amount of image data that must be processed in real time. In this thesis,
a study of FPGA-based high-speed image processing is performed. The FPGA implementation of a number of image processing operators is described, and how to exploit
different levels of parallelism in the algorithms to achieve high processing throughput is explained in detail. This thesis also presents a new single-pass blob analysis algorithm. With an optimized FPGA implementation, the geometrical features of a large number of blobs can be calculated in real time.
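The single-pass idea, independent of any particular FPGA datapath, can be sketched in software: one raster scan over the image, a union-find table resolving label equivalences, and per-blob feature accumulators that are merged on union, so blob features emerge without a second pass over the pixels. A minimal Python sketch (illustrative only, not the thesis's algorithm):

```python
def blob_features(image):
    """Single-pass blob analysis on a binary image (4-connectivity).

    One raster scan; label equivalences go through a union-find table,
    and per-blob accumulators (area, coordinate sums for the centroid)
    are merged whenever two labels are unified.
    Returns a list of (area, centroid_x, centroid_y) tuples.
    """
    parent = {}  # union-find parent table
    acc = {}     # root label -> [area, sum_x, sum_y]

    def find(l):
        while parent[l] != l:
            parent[l] = parent[parent[l]]  # path halving
            l = parent[l]
        return l

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
            for i in range(3):  # merge the accumulators of the two blobs
                acc[ra][i] += acc[rb][i]
            del acc[rb]
        return ra

    labels_prev = []
    next_label = 0
    for y, row in enumerate(image):
        labels_cur = []
        for x, pix in enumerate(row):
            if not pix:
                labels_cur.append(None)
                continue
            left = labels_cur[x - 1] if x > 0 else None
            up = labels_prev[x] if labels_prev else None
            if left is None and up is None:
                lab = next_label
                next_label += 1
                parent[lab] = lab
                acc[lab] = [0, 0, 0]
            elif left is not None and up is not None:
                lab = union(left, up)
            else:
                lab = find(left if left is not None else up)
            acc[lab][0] += 1   # area
            acc[lab][1] += x   # centroid numerators
            acc[lab][2] += y
            labels_cur.append(lab)
        labels_prev = labels_cur
    return [(a, sx / a, sy / a) for a, sx, sy in acc.values()]
```

In hardware the same structure maps naturally to a row buffer for the previous line's labels and a small equivalence/accumulator memory, which is what enables the one-pass, streaming operation.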
At the end of this thesis, a prototype design that integrates all the implemented hardware and software modules is demonstrated to prove the usability of the proposed
optical tracking system.
A Realist Argument for the Self: Emotions and Consciousness in Self-Making
The question of the self, of what the self is (or even whether there is a self), has been one that has grown alongside humanity, one that has haunted humanity, throughout our collective history. It is the purpose of this study to add to that questioning, and to attempt a small contribution to a field that has been as widely covered as it is perplexing. We will undertake this effort by first examining some common and representative accounts of the self and what they entail, and with that as background we will move into the interdisciplinary areas of psychological and neuroscientific concerns regarding the self. We will discover the central role that emotions and intuition play in self-formation and function. Applying those lessons philosophically, we will build on our (hopefully achieved) foundation and offer a unique definition of the self. Thereby finding phenomenological matters to be of importance, we will next examine two accounts of the self from those quarters as possible objections to our own, and also conduct a review of phenomenological methodology. Taking that as a guide, we will explore consciousness and its relation to the self in some depth before finally proposing a metaphysical manner in which the self, on our definition, might be judged to be realist. With all of the preceding as grounding, we will then analyze time for our self-view and suggest that if one's self is to be a personal work, a creation rather than an accident of happenstance, then it is out of one's perspective on time that it will come to fruition. Throughout these necessarily broad but deeply interrelated considerations we will strive to maintain a practical approach and limit ourselves only to the human case.
Faceting: Rereading Feminism and Postmodernism
This project offers a feminist reconsideration of the postmodern aesthetic across a set of American fictions since 1945. From our current perspective, postmodernism is both overdetermined and undervalued; we limit our readings by equating its sheen and sparkle with irony, paranoia, and superficiality. I present an alternative to two longstanding default modes of interpreting the postmodern: the excavatory, hermeneutic model inaugurated by Fredric Jameson’s Postmodernism, and the poststructuralist model, which celebrates a seemingly infinite profusion of references and surfaces. My project’s impact is threefold: I demonstrate how feminism refashions the postmodern aesthetic, I reanimate a quintessentially postmodern language of surface and depth in terms of our current crisis of reading, and I show how feminism is uniquely equipped to supersede, though not erase, that binary. Drawing together new debates in feminist, postcritical, and film theory, I present another approach to novels by Sylvia Plath, Christopher Isherwood, Thomas Pynchon, Vladimir Nabokov, Maxine Hong Kingston, and Leslie Marmon Silko, as well as several films and the television series Mad Men.
Feminist theory has a vexed relationship with postmodernism, both as an aesthetic category and in relation to its two major interpretive frameworks. For Jameson, the postmodern resists interpretation because of its baroque excesses, which he alternately compares to “heaps of fragments” and to “the distorting and fragmenting reflections of one enormous glass surface.” Jameson’s imagery emphasizes postmodernism’s illegibility, whether by profusion or impenetrability; while poststructuralist readings distinguish themselves by ennobling and elaborating upon these assumptions, they do not fundamentally unseat them. I argue that postmodernism’s aesthetic, supremely fragmented but also flatly reflective, actually invites the reader to make sense of the text in a pleasurable act of construction.
This calls for a method of reading I term faceting, from the Latin facere, “to make or do,” a word that connotes reflection, refraction, and repositioning. To constellate meanings in a postmodern text is to negotiate a plural but limited set of interrelations from its vast networks of data and its myriad surfaces. The reader fastens shifting, tessellated planes into a provisional, dimensional, if hollow, narrative whole. If, in Rita Felski’s terms, intersectional feminism is always a “reworking,” an essentially “purposeful and hopeful” project of improvement, its history brings much to bear on the recent disciplinary turn to the postcritical, which is rooted in feminist and queer theory and eudaimonic in its aims. The pleasures of postmodernism, I maintain, lie precisely at its jagged seams and shifting juxtapositions, which the reader herself is constantly in the process of remaking. Rather than a readerly pose of ironic detachment or paranoid suspicion, faceting entails attachment, effort, and desire.
Faceting seizes specifically on metonymy as an alternative, feminist form of figuration that is both prominent in and amenable to the aims of postmodernism. Unlike metaphor, which encourages a binary reading, whereby the reader searches for significance behind a surface, metonymy enables the reader to perceive the postmodern aesthetic as a severalty of surfaces – as, in a word, multifaceted. In each chapter, I analyze a seemingly binary mode of representation that faceting transforms into a limited plurality. I begin by using The Bell Jar and A Single Man to counter Jameson’s claims in Postmodernism, as figures that appear to be dual yield greater complexity when viewed via faceting. I go on to trace the implications of narrative eversion – the process by which a shape is pulled inside out – in The Crying of Lot 49 and Ada, or Ardor.
I consider how projections into the past and future in The Woman Warrior disrupt narrative teleology, building on those observations in an analysis of the later Almanac of the Dead and Mason & Dixon. The project is bracketed by analyses of film and television, which offer insight into the visual aspects of faceting, evident in its relationship to terms like face and façade. The introduction reviews the literature that contributed to faceting as a concept, as well as the hollow pleasures of two mid-century films, Gentlemen Prefer Blondes and Imitation of Life; the epilogue addresses the contemporary nostalgia for the postmodern in the tension between photographic and moving images in Mad Men.