
    New FPGA design tools and architectures


    Rapid SoC Design: On Architectures, Methodologies and Frameworks

    Modern applications like machine learning, autonomous vehicles, and 5G networking require an order of magnitude boost in processing capability. For several decades, chip designers have relied on Moore's Law, the doubling of transistor count every two years, to deliver improved performance, higher energy efficiency, and an increase in transistor density. With the end of Dennard scaling and a slowdown in Moore's Law, system architects have developed several techniques to deliver the traditional performance and power improvements we have come to expect. More recently, chip designers have turned towards heterogeneous systems composed of more specialized processing units to buttress the traditional processing units. These specialized units improve the overall performance, power, and area (PPA) metrics across a wide variety of workloads and applications. While the GPU serves as a classical example, accelerators for machine learning, approximate computing, graph processing, and database applications have become commonplace. This has led to an exponential growth in the variety (and count) of these compute units found in modern embedded and high-performance computing platforms. The various techniques adopted to combat the slowing of Moore's Law translate directly into an increase in complexity for modern system-on-chips (SoCs). This increase in complexity in turn leads to an increase in design effort and validation time for hardware and the accompanying software stacks. This is further aggravated by fabrication challenges (photolithography, tooling, and yield) faced at advanced technology nodes (below 28 nm). The inherent complexity in modern SoCs translates into increased costs and time-to-market delays. This holds true across the spectrum, from mobile/handheld processors to high-performance data-center appliances. This dissertation presents several techniques to address the challenges of rapidly creating complex SoCs. The first part of this dissertation focuses on foundations and architectures that aid in rapid SoC design. It presents a variety of architectural techniques that were developed and leveraged to rapidly construct complex SoCs at advanced process nodes. The next part of the dissertation focuses on the gap between a completed design model (in RTL form) and its physical manifestation (a GDS file that will be sent to the foundry for fabrication). It presents methodologies and a workflow for rapidly walking a design through to completion at arbitrary technology nodes. It also presents progress on a flow built entirely from open-source tools. The last part presents a framework that not only speeds up the integration of a hardware accelerator into an SoC ecosystem, but also emphasizes software adoption and usability.
PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168119/1/ajayi_1.pd

    Monitoring the Coastal Sand Wedge Outbreak from the Stony River, Taranaki

    The Taranaki headland protrudes into a high-energy wave climate with a potential for strong littoral transport. This shoreline typically comprises a narrow cobble-to-boulder reflective beach surmounting a rugged wave-cut shore platform carved into lahar deposits from the nearby andesitic composite cone of Mount Taranaki. Historically along this coast, pocket sandy beaches have endured within embayments and adjacent to headland features; otherwise the Taranaki littoral system is sand-starved. In 1998, persistent heavy rainfall caused the collapse of scoriaceous sand and gravel on the side of Mount Taranaki, injecting massive volumes of sand and gravel directly into the Stony River, from which the adjacent shoreline has experienced a continuous flux of dense 'black' titanomagnetite-rich volcanic sands. These sediments are rapidly transported to the north-east by the energetic wave climate, creating sandy beaches on what is normally a rocky boulder coast. Coastline changes between Cape Egmont and New Plymouth were analysed by comparing aerial photographs dated 1995, 2001, and 2007. These photos show a marked increase in sandy sediment in 2001 which had diminished rapidly by 2007. Sub-aerial beach profiles were surveyed over the duration of this study at 12 locations between Komene Road, just south of the Stony River mouth, and Back Beach in New Plymouth. The beaches including and south of Ahuahu Beach are characterised by dunes which reach greater elevations than those further north, where dune elevation decreases. This evidence, along with visual observations, suggests that the sediment derived from the Stony River sits on the upper beach, forming a berm and high dunes. Sediment textural analysis was conducted on samples collected at the 12 beach profiling locations. This analysis was undertaken every three months over the course of this study and showed (1) a decrease in mean grain size with distance north of the Stony River mouth; and (2) sorting that generally improves with distance north-east in the direction of the dominant littoral drift. Mineralogical analysis showed that the beach sediments are dominated by the opaque heavy mineral titanomagnetite, together with augite, hornblende, and plagioclase feldspar. The longshore sediment transport flux between Rahotu Road (south of Cape Egmont) and Back Beach in New Plymouth was examined. Wave climate parameters were generated at 55 locations between 1998 and 2007 using the SWAN third-generation numerical model. The CERC (1973) formula was used to calculate the potential longshore sediment transport flux at each of these locations, using the significant wave height and angle of incidence. Analysis of the wave data shows a wave energy gradient extending from the Cape toward New Plymouth, caused by: (1) the exposure of the coast near Cape Egmont to the dominant south-westerly swells; (2) the greater seabed gradient near Cape Egmont; and (3) the refraction shadowing that occurs with distance to the north-east of the Cape. The potential longshore sediment transport results indicate low potential fluxes south of Cape Egmont, which increase to a maximum slightly south of the Stony River mouth. These potential fluxes then decrease towards Back Beach as a result of refraction in the vicinity of the Sugar Loaf Islands.
The 'slug' of sand derived from the headwaters of the Stony River is likely to have diminished substantially in size by the time it has been transported as far north as Back Beach, and these results indicate that there will be insufficient energy for any substantial volumes to be transported around the Paritutu Headland. A sediment budget was derived from the volume of sediment recently eroded from the scarp on Little Pyramid on Mount Taranaki, and the volume of sediment that can be accounted for today on the beaches and in the Stony River channel as aggradation deposits. Up to 14.4 million m³ of sediment was estimated to have been deposited in the Stony River system directly from the scarp. Of this, ~3.5 million m³ has accumulated on the sub-aerial beach over a distance of 14 km to the east from just south of the river mouth, and ~3.3 million m³ has been deposited along the Stony River channel. The remaining ~7.6 million m³ of sediment is likely to have been deposited offshore, overlying the wave-swept boulder platform and within the interstitial spaces of the sediments that form these shore platforms. Significant quantities of this sediment are also likely to have been transported in the north-east-directed littoral transport.
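
To make the cited relation concrete, the following sketch implements a generic form of the CERC formula from the breaking significant wave height and breaker angle, as used in the study. This is a minimal sketch, assuming textbook default values for the coefficient K, the breaker index, the densities (a quartz value rather than the denser titanomagnetite sand), and the porosity, plus a shallow-water approximation for the group velocity; none of these constants are taken from this work.

import math

def cerc_longshore_flux(Hb, theta_b_deg, K=0.39, rho=1025.0, rho_s=2650.0,
                        n=0.4, gamma=0.78, g=9.81):
    """Potential longshore sediment transport rate (m^3/s) from a generic
    form of the CERC formula, assuming shallow-water breaking conditions.

    Hb          breaking significant wave height (m)
    theta_b_deg breaker angle relative to the shoreline (degrees)
    K           empirical coefficient (~0.39 for significant wave height)
    rho, rho_s  water and sediment densities (kg/m^3)
    n           in-place sediment porosity
    gamma       breaker index Hb/hb
    """
    theta = math.radians(theta_b_deg)
    hb = Hb / gamma                          # breaking depth from breaker index
    cgb = math.sqrt(g * hb)                  # shallow-water group velocity
    E = rho * g * Hb ** 2 / 8.0              # wave energy density at breaking
    Pls = E * cgb * math.sin(theta) * math.cos(theta)  # longshore wave power
    Il = K * Pls                             # immersed-weight transport rate
    return Il / ((rho_s - rho) * g * (1.0 - n))        # volumetric rate (m^3/s)

# e.g., a 2 m breaker arriving at 10 degrees to the shoreline:
Q = cerc_longshore_flux(2.0, 10.0)
print(f"{Q:.4f} m^3/s (~{Q * 3.156e7 / 1e6:.2f} M m^3/yr)")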

    FieldPlacer - A flexible, fast and unconstrained force-directed placement method for heterogeneous reconfigurable logic architectures

    The field of placement methods for components of integrated circuits, especially in the domain of reconfigurable chip architectures, is dominated by a handful of concepts. While some of these are easy to apply but difficult to adapt to new situations, others are more flexible but rather complex to realize. This work presents the FieldPlacer framework, a flexible, fast and unconstrained force-directed placement method for heterogeneous reconfigurable logic architectures, in particular for the ever-important heterogeneous FPGAs. In contrast to many other force-directed placers, this approach is called 'unconstrained' because it does not require a priori fixed logic elements in order to calculate a force equilibrium as the solution to a system of equations. Instead, it is based on a free spring-embedder simulation of a graph representation which includes all logic block types of a design simultaneously. The FieldPlacer framework offers considerable flexibility in applying different distance norms (e.g., the Manhattan distance) for the force-directed layout and aims at creating adapted layouts for various objective functions, e.g., highest performance or improved routability. Depending on the individual situation, a runtime-quality trade-off can be exploited to either produce a decent placement in a very short time or to generate an exceptionally good placement, which takes longer. An extensive comparison with the latest simulated annealing placement method from the well-known Versatile Place and Route (VPR) framework shows that the FieldPlacer approach can create placements of comparable quality much faster than VPR or, alternatively, generate better placements in the same time. The flexibility in defining arbitrary objective functions and the intuitive adaptability of the method, which draws on several concepts from the field of graph drawing, should facilitate further developments with this framework, e.g., for emerging optimization targets like the energy consumption of an implemented design.
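
As a rough illustration of the spring-embedder idea described above, the following sketch implements a generic force-directed iteration of the Fruchterman-Reingold kind: nets pull their blocks together while all blocks repel each other, and the layout relaxes toward a force equilibrium without any a priori fixed elements. This is a minimal sketch, not FieldPlacer's actual algorithm: the function name and the clique net model are assumptions, the Euclidean norm stands in for the selectable distance norms (e.g., Manhattan), and legalization onto the heterogeneous FPGA grid is omitted.

import numpy as np

def force_directed_place(pos, nets, iters=200, step=0.1, k=1.0):
    """Relax block positions with net attraction and pairwise repulsion.

    pos   (n, 2) array of initial block coordinates
    nets  list of block-index lists; each net pulls its blocks together
    k     ideal spring length / repulsion scale
    Returns relaxed coordinates; snapping to legal FPGA sites would follow
    as a separate step.
    """
    pos = pos.astype(float).copy()
    for _ in range(iters):
        disp = np.zeros_like(pos)
        # repulsive forces between all block pairs (O(n^2) for clarity)
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        np.fill_diagonal(dist, np.inf)
        disp += (diff / dist[..., None] * (k ** 2 / dist ** 2)[..., None]).sum(axis=1)
        # attractive spring forces along each net (clique model)
        for net in nets:
            for i in net:
                for j in net:
                    if i != j:
                        d = pos[j] - pos[i]
                        dl = np.linalg.norm(d) + 1e-9
                        disp[i] += d / dl * (dl ** 2 / k)
        # bounded move toward the force equilibrium
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, step)
    return pos

# toy example: four blocks and two 2-pin nets
print(force_directed_place(np.random.rand(4, 2) * 10, [[0, 1], [2, 3]]))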

    The Customizable Virtual FPGA: Generation, System Integration and Configuration of Application-Specific Heterogeneous FPGA Architectures

    Over the past three decades, the development of Field Programmable Gate Arrays (FPGAs) has been strongly influenced by Moore's Law, process technology (scaling), and commercial markets. State-of-the-art FPGAs are moving closer to general-purpose devices on the one hand; on the other hand, as FPGAs have increasingly taken over traditional domains of application-specific integrated circuits (ASICs), efficiency expectations are rising. With the end of Dennard scaling, efficiency gains can no longer rely on technology scaling alone. These facets, together with the trends towards reconfigurable system-on-chips (SoCs) and new low-power applications such as cyber-physical systems and the Internet of Things, demand better customization of the target FPGAs. Besides the trends towards mainstream use of FPGAs in everyday products and services, the recent moves to deploy FPGAs in data centers and cloud services in particular make it necessary to guarantee immediate portability of applications across current and future FPGA devices. In this context, hardware virtualization can be a seamless means of platform independence and portability. Admittedly, the goals of customization and virtualization are actually in conflict, since customization is intended to increase efficiency, whereas virtualization adds area overhead. However, virtualization not only benefits from customization but also adds more flexibility, since the architecture can be changed at any time. This peculiarity can be exploited for adaptive systems. Both the customization and the virtualization of FPGA architectures have so far barely been addressed in industry. Despite some existing academic works, these techniques can still be considered largely unexplored and are emerging research areas. The main goal of this work is the generation of FPGA architectures tailored for efficient adaptation to the application. In contrast to the usual approach with commercial FPGAs, where the FPGA architecture is considered given and the application is mapped onto the available resources, this work follows a new paradigm in which the application or application class is fixed and the target architecture is tailored for efficient adaptation to the application. This results in customized application-specific FPGAs. The three pillars of this work are the aspects of virtualization, customization, and the framework. The central element is a largely parameterizable virtual FPGA architecture called the V-FPGA, whose primary target is to be mapped onto any commercial FPGA while applications run on the virtual layer. This provides portability and migration even at the bitstream level, since the specification of the virtual layer remains fixed while the physical platform can be exchanged. Furthermore, this technique is used to enable dynamic and partial reconfiguration on platforms that do not natively support it. Besides virtualization, the V-FPGA architecture is also intended to be integrated as an embedded FPGA into an ASIC, offering efficient yet flexible system-on-chip solutions.
Therefore, target-technology mapping methods are addressed for both virtualization and physical implementation, and an example of a physical implementation in a 45 nm standard-cell approach is demonstrated. The highly flexible V-FPGA architecture can be customized with more than 20 parameters, including LUT size, clustering, 3D stacking, routing structure, and much more. The effects of the parameters on the area and performance of the architecture are examined, and an extensive analysis of over 1400 benchmark runs shows a high parameter sensitivity, with variations of up to ±95.9% in area and ±78.1% in performance, demonstrating the high importance of customization for efficiency. To adapt the parameters systematically to the needs of the application, a parametric design-space exploration method based on suitable area and timing models is proposed. One challenge of customized architectures is the design effort and the need for customized tools. Therefore, this work includes a framework for architecture generation, design-space exploration, application mapping, and evaluation. Above all, the V-FPGA is designed in fully synthesizable, generic Very High Speed Integrated Circuit Hardware Description Language (VHDL) code that is very flexible and eliminates the need for external code generators. System developers can benefit from various kinds of generic SoC architecture templates to reduce development time. All necessary design steps for application development and mapping onto the V-FPGA are supported by a design-automation tool flow that exploits a collection of existing commercial and academic tools, adapted through suitable models and complemented by a new tool called the V-FPGA-Explorer. This new tool not only acts as a back-end tool for application mapping onto the V-FPGA, but is also a graphical configuration and layout editor, a bitstream generator, an architecture-file generator for the place & route tools, a script generator, and a testbench generator. A special feature is the support of just-in-time compilation with fast algorithms for in-system application mapping. The work concludes with several use cases from the fields of industrial process automation, medical imaging, adaptive systems, and education in which the V-FPGA is employed.
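
To illustrate the proposed parametric design-space exploration, the following sketch enumerates a small slice of a hypothetical parameter space (only LUT size and cluster size out of the 20+ V-FPGA parameters) and keeps the configurations that are Pareto-optimal under an area and a delay model. The parameter ranges and the simple cost models are invented stand-ins for illustration, not the area and timing models developed in this work.

from itertools import product

LUT_SIZES = [3, 4, 5, 6]       # hypothetical candidate LUT input counts
CLUSTER_SIZES = [1, 2, 4, 8]   # hypothetical logic blocks per cluster

def area_model(k, n):
    # stand-in: LUT area grows exponentially with k, cluster overhead linearly
    return n * (2 ** k) + 50 * n

def delay_model(k, n):
    # stand-in: larger LUTs absorb logic levels, clustering shortens routing
    return 10.0 / k + 4.0 / n

def pareto_front(points):
    """Keep parameter sets not dominated in both area and delay."""
    return [p for p in points
            if not any(q["area"] <= p["area"] and q["delay"] <= p["delay"]
                       and q != p for q in points)]

candidates = [{"lut": k, "cluster": n,
               "area": area_model(k, n), "delay": delay_model(k, n)}
              for k, n in product(LUT_SIZES, CLUSTER_SIZES)]

for p in sorted(pareto_front(candidates), key=lambda p: p["area"]):
    print(p)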

    Ground water and surface water under stress

    Presented at Ground water and surface water under stress: competition, interaction, solutions: a USCID water management conference on October 25-28, 2006 in Boise, Idaho. Includes bibliographical references.
The A&B Irrigation District in south-central Idaho supplies water to irrigate over 76,000 acres. The district's 14,660-acre Unit A is supplied with water from the Snake River. Unit B comprises 62,140 acres of land irrigated by pumping groundwater from the Eastern Snake Plain Aquifer (ESPA) using 177 deep wells. Pumping depths range from 200 to 350 feet. Water from Unit B wells is distributed to irrigated lands via a system of short, unlined lateral canals averaging about 3/4 mile in length with capacities of 2 to 12 cfs. During the period from 1975 to 2005, the average level of the ESPA under the A&B Irrigation District dropped 25 ft, and as much as 40 ft in some locations. This has forced the district to deepen some existing wells and drill several new wells. To help mitigate the declining aquifer, the district and its farmers have implemented a variety of irrigation system and management improvements. Improvements have involved a concerted effort by the district, landowners, and local and federal resource agencies. The district has installed variable speed drives on some supply wells, installed a SCADA system to remotely monitor and control well pumps, and piped portions of the open distribution laterals. This has permitted farmers to connect farm pressure pumps directly to supply well outlets. Farmers have helped by converting many of their surface irrigation application systems to sprinklers, moving farm deliveries to central locations to reduce conveyance losses, and installing systems to reclaim irrigation spills and return flows.

    Ground water and surface water under stress

    Presented at Ground water and surface water under stress: competition, interaction, solutions: a USCID water management conference on October 25-28, 2006 in Boise, Idaho. Includes bibliographical references.
The METRIC evapotranspiration (ET) estimation model was applied using MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images in New Mexico to evaluate the applicability of MODIS images to ET estimation and water resources management. With its coarse resolution (approximately 1 km in the thermal band), MODIS was not found to be suitable for field-scale applications. In project- and regional-scale applications, MODIS has potential to contribute to ET estimation and water resources management. MODIS-based ET maps for New Mexico were compared with Landsat-based results for 12 dates. Average ET calculations from the MODIS and Landsat applications were similar, indicating that MODIS images can be useful as an ET estimation tool in project- and regional-scale applications.
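
One simple way to compare ET products at these two very different resolutions is to aggregate the fine-resolution map onto the coarse grid before differencing. The following sketch does this with synthetic arrays; the grid sizes, the block-averaging scheme, and the noisy stand-in for the MODIS map are all assumptions for illustration, not the comparison procedure used in the study.

import numpy as np

rng = np.random.default_rng(0)
landsat_et = rng.uniform(0.0, 8.0, size=(1024, 1024))  # synthetic ET (mm/day) at ~30 m
block = 32                                             # 32 x 30 m is roughly one 1 km MODIS cell

# average each 32x32 block of fine pixels onto the coarse grid
coarse = landsat_et.reshape(1024 // block, block,
                            1024 // block, block).mean(axis=(1, 3))

# noisy stand-in for a MODIS-derived ET map on the same coarse grid
modis_et = coarse + rng.normal(0.0, 0.5, size=coarse.shape)

bias = float(np.mean(modis_et - coarse))
rmse = float(np.sqrt(np.mean((modis_et - coarse) ** 2)))
print(f"mean bias: {bias:+.3f} mm/day, RMSE: {rmse:.3f} mm/day")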

    Preliminary Results on the Structure and Functioning of a Taiga Watershed

    Comprehensive research in ecosystem functioning may logically be undertaken in the conceptual and physical context of complete drainage basins (watersheds or catchments). The watershed forms a fundamental, cohesive landscape unit in terms of water movement following initial receipt of precipitation. Water itself is a fundamental agent in energy flux, nutrient transport, and in plant and animal life. The Caribou-Poker Creeks Research Watershed is an interagency endeavor aimed at understanding hydrologic and, ultimately, ecological functioning in the subarctic taiga, the discontinuous permafrost uplands of central Alaska. Initial work includes acquisition and analysis of data on soils, vegetation, local climate, hydrology, and stream quality. Information acquired in the research watershed is summarized here, and implications for future data acquisition and research are considered.