    The Habitable Exoplanet Observatory (HabEx) Mission Concept Study Final Report

    The Habitable Exoplanet Observatory, or HabEx, has been designed to be the Great Observatory of the 2030s. For the first time in human history, technologies have matured sufficiently to enable an affordable space-based telescope mission capable of discovering and characterizing Earthlike planets orbiting nearby bright sunlike stars in order to search for signs of habitability and biosignatures. Such a mission can also be equipped with instrumentation that will enable broad and exciting general astrophysics and planetary science not possible from current or planned facilities. HabEx is a space telescope with unique imaging and multi-object spectroscopic capabilities at wavelengths ranging from ultraviolet (UV) to near-IR. These capabilities allow for a broad suite of compelling science that cuts across the entire NASA astrophysics portfolio. HabEx has three primary science goals: (1) Seek out nearby worlds and explore their habitability; (2) Map out nearby planetary systems and understand the diversity of the worlds they contain; (3) Enable new explorations of astrophysical systems from our own solar system to external galaxies by extending our reach in the UV through near-IR. This Great Observatory science will be selected through a competed GO program, and will account for about 50% of the HabEx primary mission. The preferred HabEx architecture is a 4m, monolithic, off-axis telescope that is diffraction-limited at 0.4 microns and is in an L2 orbit. HabEx employs two starlight suppression systems: a coronagraph and a starshade, each with their own dedicated instrument.
    Comment: Full report: 498 pages. Executive Summary: 14 pages. More information about HabEx can be found here: https://www.jpl.nasa.gov/habex
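
    For a rough sense of the quoted numbers (a back-of-the-envelope illustration, not a figure taken from the report), the Rayleigh criterion θ ≈ 1.22 λ/D relates the 4m aperture and the 0.4 micron diffraction-limit wavelength to an angular resolution:

        # Back-of-the-envelope diffraction limit for a 4 m aperture at 0.4 microns.
        # Illustrative only; the observatory's delivered performance depends on the
        # full optical design and wavefront control, not just this textbook formula.
        import math

        wavelength_m = 0.4e-6     # 0.4 microns, where the telescope is diffraction-limited
        aperture_m = 4.0          # 4 m monolithic primary

        theta_rad = 1.22 * wavelength_m / aperture_m
        theta_mas = math.degrees(theta_rad) * 3600 * 1000   # radians -> milliarcseconds
        print(f"{theta_mas:.0f} mas")                        # ~25 mas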

    Remixing physical objects through tangible tools

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 147-164). In this document we present new tools for remixing physical objects. These tools allow users to copy, edit and manipulate the properties of one or more objects to create a new physical object. We already have these capabilities using digital media: we can easily mash up videos, music and text. However, it remains difficult to remix physical objects and we cannot access the advantages of digital media, which are nondestructive, scalable and scriptable. We can bridge this gap by both integrating 2D and 3D scanning technology into design tools and employing affordable rapid prototyping technology to materialize these remixed objects. In so doing, we hope to promote copying as a tool for creation. This document presents two tools, CopyCAD and KidCAD, the first designed for makers and crafters, the second for children. CopyCAD is an augmented Computer Numerically Controlled (CNC) milling machine which allows users to copy arbitrary real-world object geometry into 2D CAD designs at scale through the use of a camera-projector system. CopyCAD gathers properties from physical objects, sketches and touch interactions directly on a milling machine, allowing novice users to copy parts of real-world objects, modify them and create a new physical part. KidCAD is a sculpting interface built on top of a gel-based realtime 2.5D scanner. It allows children to stamp objects into the block of gel, which are scanned in realtime, as if they were stamped into clay. Children can use everyday objects, their hands and tangible tools to design new toys or objects that will be 3D printed. This work enables novice users to easily approach designing physical objects by copying from other objects and sketching new designs. With increased access to such tools we hope that a wide range of people will be empowered to create their own objects, toys, tools and parts. by Sean Follmer. S.M.
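
    To give a concrete flavour of the camera-to-machine mapping such a system needs (a simplified sketch under assumed values, not code or calibration data from the thesis), one can fit a 2D affine transform from a few marker correspondences between camera pixels and CNC-bed millimetres and then project a scanned contour into machine coordinates at true scale:

        # Simplified sketch: map camera pixels to CNC-bed millimetres with an affine fit.
        # The marker correspondences below are invented for illustration; CopyCAD's real
        # calibration procedure is not reproduced here.
        import numpy as np

        # Known markers: (pixel_x, pixel_y) -> (bed_x_mm, bed_y_mm)
        pixels = np.array([[100, 80], [620, 90], [610, 460], [110, 455]], dtype=float)
        bed_mm = np.array([[0, 0], [300, 0], [300, 220], [0, 220]], dtype=float)

        # Solve bed = [px, py, 1] @ A in a least-squares sense (A is 3x2).
        design = np.hstack([pixels, np.ones((len(pixels), 1))])
        A, *_ = np.linalg.lstsq(design, bed_mm, rcond=None)

        def to_bed(points_px):
            pts = np.asarray(points_px, dtype=float)
            return np.hstack([pts, np.ones((len(pts), 1))]) @ A

        # A contour traced in the camera image (pixels) lands on the bed at real scale (mm).
        contour_px = [[200, 150], [400, 150], [400, 300], [200, 300]]
        print(np.round(to_bed(contour_px), 1))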

    GPU Integration into a Software Defined Radio Framework

    Software Defined Radio (SDR) was brought about by moving processing done on specific hardware components to reconfigurable software. Hardware components like General Purpose Processors (GPPs), Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) are used to make the software and hardware processing of the radio more portable and as efficient as possible. Graphics Processing Units (GPUs), designed years ago for video rendering, are now finding new uses in research. The parallel architecture provided by the GPU gives developers the ability to speed up the performance of computationally intense programs. An open source tool for SDR, Open Source Software Communications Architecture (SCA) Implementation: Embedded (OSSIE), is a free waveform development environment for any developer who wants to experiment with SDR. In this work, OSSIE is integrated with a GPU computing framework to show how performance improvement can be gained from GPU parallelization. GPU research performed with SDR ranges from improving SDR simulations to implementing specific wireless protocols. In this thesis, we are aiming to show performance improvement within an SCA-architected SDR implementation. The software components within OSSIE gained significant performance increases with little software change due to the natural parallelism of the GPU, using Compute Unified Device Architecture (CUDA), Nvidia's GPU programming API. Using sample data sizes for the I and Q channel inputs, performance improvements were seen in as little as 512 samples when using the GPU-optimized version of OSSIE. As the sample size increased, the CUDA performance improved as well. Porting OSSIE components onto the CUDA architecture showed that improved performance can be seen in SDR-related software through the use of GPU technology.
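
    The kind of workload described here is element-wise processing over large blocks of I and Q samples, which is naturally parallel and therefore maps well onto CUDA. The sketch below is not OSSIE code; the kernel and variable names are illustrative, and it assumes numba plus a CUDA-capable GPU. It computes per-sample signal magnitude, one thread per sample:

        # Illustrative element-wise I/Q kernel of the sort that benefits from GPU offload.
        # Not part of OSSIE; requires numba and a CUDA-capable GPU.
        from math import sqrt

        import numpy as np
        from numba import cuda

        @cuda.jit
        def iq_magnitude(i_samples, q_samples, out):
            idx = cuda.grid(1)                      # absolute thread index
            if idx < out.shape[0]:
                out[idx] = sqrt(i_samples[idx] ** 2 + q_samples[idx] ** 2)

        n = 1 << 20                                 # ~1M samples; gains grow with block size
        i_ch = np.random.randn(n).astype(np.float32)
        q_ch = np.random.randn(n).astype(np.float32)
        out = np.zeros(n, dtype=np.float32)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        iq_magnitude[blocks, threads_per_block](i_ch, q_ch, out)   # implicit host<->device copies

        assert np.allclose(out, np.hypot(i_ch, q_ch), atol=1e-4)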

    Layout Automation in Analog IC Design with Formalized and Non-Formalized Expert Knowledge

    After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (coping with the quantitative difficulty of packing more and more components onto a single chip – a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and non-formally communicated pieces of expert knowledge, which entails an explicit and implicit consideration of design constraints, respectively. Existing automation approaches can be basically divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms, denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable autonomous solution finding, whereas a bottom-up automatism – controlled by parameters – merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms – called bottom-up meets top-down – has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone. Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate and deform itself, while a supervising control organ successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint-compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale. Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM's suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously respecting high-level constraints (such as the layout outline imposed top-down by the supervising control organ).
Experimental results show that SWARM can outperform optimization algorithms and procedural generators both in terms of layout quality and design productivity. From an academic point of view, SWARM's main achievement is to break new ground for future work on novel bottom-up meets top-down automatisms. These may one day be the key to close the automation gap in analog layout design.
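
    The bottom-up meets top-down interplay can be illustrated with a toy model (a deliberately simplified sketch, not SWARM itself): each square module selfishly nudges itself to reduce overlaps with its neighbours, while a supervising loop tightens the square layout zone whenever the current arrangement is overlap-free.

        # Toy illustration of "bottom-up meets top-down"; not SWARM, just the principle.
        import random

        class Module:
            """A selfish square module with side length `size` and centre (x, y)."""
            def __init__(self, x, y, size):
                self.x, self.y, self.size = x, y, size

            def overlaps(self, other):
                return (abs(self.x - other.x) < (self.size + other.size) / 2 and
                        abs(self.y - other.y) < (self.size + other.size) / 2)

            def step(self, others, zone):
                # Top-down constraint: stay inside the (possibly shrunken) square zone.
                lo, hi = self.size / 2, zone - self.size / 2
                ox, oy = min(max(self.x, lo), hi), min(max(self.y, lo), hi)
                self.x, self.y = ox, oy
                # Bottom-up behaviour: try random nudges, keep the one with fewest overlaps.
                best, best_cost = (ox, oy), sum(self.overlaps(o) for o in others)
                for _ in range(8):
                    self.x = min(max(ox + random.uniform(-1, 1), lo), hi)
                    self.y = min(max(oy + random.uniform(-1, 1), lo), hi)
                    cost = sum(self.overlaps(o) for o in others)
                    if cost < best_cost:
                        best, best_cost = (self.x, self.y), cost
                self.x, self.y = best

        random.seed(0)
        zone = 20.0
        modules = [Module(random.uniform(1, 19), random.uniform(1, 19), 2.0) for _ in range(8)]
        for _ in range(300):
            for m in modules:
                m.step([o for o in modules if o is not m], zone)
            if not any(a.overlaps(b) for i, a in enumerate(modules) for b in modules[i + 1:]):
                zone = max(zone * 0.98, 6.0)   # top-down: the supervisor tightens the zone
        print(f"final zone edge: {zone:.1f}")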

    The Problematic of Privacy in the Namespace

    In the twenty-first century, the issue of privacy--particularly the privacy of individuals with regard to their personal information and effects--has become highly contested terrain, producing a crisis that affects both national and global social formations. This crisis, or problematic, characterizes a particular historical conjuncture I term the namespace. Using cultural studies and the theory of articulation, I map the emergent ways that the namespace articulates economic, juridical, political, cultural, and technological forces, materials, practices and protocols. The cohesive articulation of the namespace requires that privacy be reframed in ways that make its diminution seem natural and inevitable. In the popular media, privacy is often depicted as the price we pay as citizens and consumers for security and convenience, respectively. This discursive ideological shift supports and underwrites the interests of state and corporate actors who leverage the ubiquitous network of digitally connected devices to engender a new regime of informational surveillance, or dataveillance. The widespread practice of dataveillance represents a strengthening of the hegemonic relations between these actors--each shares an interest in promoting an emerging surveillance society, a burgeoning security politics, and a growing information economy--that further empowers them to capture and store the personal information of citizens/consumers. In characterizing these shifts and the resulting crisis, I also identify points of articulation vulnerable to rearticulation and suggest strategies for transforming the namespace in ways that might empower stronger protections for privacy and related civil rights

    NASA Tech Briefs, April 1990

    Topics: New Product Ideas; NASA TU Services; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences

    Multi-View Stereo Reconstruction for Capturing Computer Graphics Content

    Rendering of photorealistic models is becoming increasingly feasible and important in computer graphics. Because creating such models by hand is laborious, owing to the high level of detail required in geometry, colors, and animation, it is desirable to automate model creation. This task is realised by recovering content from photographs that describe the modeled subject from multiple viewpoints. In this thesis, elementary theory on image acquisition and photograph-based reconstruction of geometry and colors is studied, and recent research and software for graphics content reconstruction are reviewed. Based on the studied background, a rig is built from nine off-the-shelf digital cameras arranged in a grid, a custom remote shutter trigger, and supporting software. The purpose of the rig is to capture high-quality photographs and video in a reliable and practical manner, to be processed in multi-view reconstruction programs. The camera rig was configured to shoot small static subjects for experiments. The resulting photos and video were then processed directly for the subject's geometry and color, with little or no image enhancement done. Two typical reconstruction software pipelines were described and tested. The rig was found to perform well for several subjects with little subject-specific tuning; issues in the setup were analyzed and further improvements suggested based on the captured test models. Image quality of the cameras was found to be excellent for the task, and most problems arose from uneven lighting and practical issues. The developed rig was found to produce sub-millimeter scale accuracy in the geometry and texture of subjects such as human faces. Future work was suggested on lighting, video synchronization, and the study of state-of-the-art image processing and reconstruction algorithms.
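
    The geometric core of the multi-view reconstruction pipelines mentioned above is triangulation: recovering a 3D point from its projections in two or more calibrated views. Below is a minimal sketch of the standard linear (DLT) method; the camera matrices and the 10 cm baseline are made-up example values, not parameters of the thesis rig.

        # Linear (DLT) triangulation of one 3D point from two calibrated views.
        # The cameras below are arbitrary examples, not values from the capture rig.
        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates (x, y)."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]              # homogeneous -> Euclidean

        # Two toy cameras: same intrinsics, second camera shifted by a 10 cm baseline.
        K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

        X_true = np.array([0.05, -0.02, 1.0])            # a point 1 m in front of the rig
        x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
        x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
        print(np.allclose(triangulate(P1, P2, x1, x2), X_true))   # True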

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports; 7551)

    ReCoSoC is intended to be an annual meeting to present and discuss gathered expertise as well as state-of-the-art research around SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability.

    Null convention logic circuits for asynchronous computer architecture

    For most of its history, computer architecture has been able to benefit from a rapid scaling in semiconductor technology, resulting in continuous improvements to CPU design. During that period, synchronous logic has dominated because of its inherent ease of design and abundant tools. However, with the scaling of semiconductor processes into deep sub-micron and then to nano-scale dimensions, computer architecture is hitting a number of roadblocks such as high power and increased process variability. Asynchronous techniques can potentially offer many advantages compared to conventional synchronous design, including average-case vs. worst-case performance, robustness in the face of process and operating-point variability, and the ready availability of high-performance, fine-grained pipeline architectures. Of the many alternative approaches to asynchronous design, Null Convention Logic (NCL) has the advantage that its quasi-delay-insensitive behavior makes it relatively easy to set up complex circuits without the need for exhaustive timing analysis. This thesis examines the characteristics of an NCL-based asynchronous RISC-V CPU and analyses the problems with applying NCL to CPU design. While a number of university and industry groups have previously developed small 8-bit microprocessor architectures using NCL techniques, it is still unclear whether these offer any real advantages over conventional synchronous design. A key objective of this work has been to analyse the impact of larger word widths and more complex architectures on NCL CPU implementations. The research commenced by re-evaluating existing techniques for implementing NCL on programmable devices such as FPGAs. The little work that has been undertaken previously on FPGA implementations of asynchronous logic has been inconclusive and seems to indicate that asynchronous systems cannot be easily implemented in these devices. However, most of this work related to an alternative technique called bundled data, which is not well suited to FPGA implementation because of the difficulty in controlling and matching delays in a 'bundle' of signals. On the other hand, this thesis clearly shows that such applications are not only possible with NCL, but there are some distinct advantages in being able to prototype complex asynchronous systems in a field-programmable technology such as the FPGA. A large part of the value of NCL derives from its architectural-level behavior, inherent pipelining, and optimization opportunities such as the merging of register and combinational logic functions. In this work, a number of NCL multiplier architectures have been analyzed to reveal the performance trade-offs between various non-pipelined, 1D and 2D organizations. Two-dimensional pipelining can easily be applied to regular architectures such as array multipliers in a way that is both high-performance and area-efficient. It was found that the performance of 2D pipelining for small networks such as multipliers is around 260% faster than the equivalent non-pipelined design. However, the design uses 265% more transistors, so the methodology is mainly of benefit where performance is strongly favored over area. A pipelined 32-bit × 32-bit signed Baugh-Wooley multiplier with Wallace-tree carry-save adders (CSA), which is representative of a real design used for CPUs and DSPs, was used to further explore this concept, as it is faster and has fewer pipeline stages compared to the normal array multiplier using ripple-carry adders (RCA).
It was found that 1D pipelining with ripple-carry chains is an efficient implementation option but becomes less so for larger multipliers, due to the completion logic, for which the delay time depends largely on the number of bits involved in the completion network. The average-case performance of ripple-carry adders was explored using random input vectors, and it was observed that it offers little advantage on the smaller multiplier blocks, but this particular timing characteristic of asynchronous design styles becomes increasingly important as word size grows. Finally, this research has resulted in the development of the first 32-bit asynchronous RISC-V CPU core. Called the Redback RISC, the architecture is a structure of pipeline rings composed of computational oscillations linked with flow-completeness relationships. It has been written using NELL, a commercial description/synthesis tool that outputs standard Verilog. The Redback has been analysed and compared to two approximately equivalent industry-standard 32-bit synchronous RISC-V cores (PicoRV32 and Rocket) that are already fabricated and used in industry. While the NCL implementation is larger than both commercial cores, it has similar performance and lower power compared to the PicoRV32. The implementation results were also compared against an existing NCL design tool flow (UNCLE), which showed how much the results of these implementation strategies differ. The Redback RISC achieved a similar level of throughput and 43% better power and 34% better energy compared to one of the synchronous cores under the same benchmark test and test conditions, such as input supply voltage. However, it was shown that area is the biggest drawback for NCL CPU design. The core is roughly 2.5× larger than the synchronous designs. On the other hand, its area is still 2.9× smaller than previous designs using UNCLE tools. The area penalty is largely due to the unavoidable translation into a dual-rail topology when using the standard NCL cell library.
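
    For readers unfamiliar with NCL, the following behavioural sketch (based on the standard NCL literature, not on the thesis code) shows the ingredients the abstract refers to: dual-rail encoding where exactly one rail is asserted for DATA and both rails are low for NULL, THmn threshold gates that switch high once m of their n inputs are high and switch low only when all inputs are low (hysteresis), and completion detection that ORs each bit's rails and combines them with an n-of-n threshold gate. It also makes clear where the dual-rail area penalty comes from: every logical bit needs two physical wires.

        # Behavioural sketch of NCL primitives (standard textbook behaviour, not thesis code):
        # a THmn threshold gate with hysteresis, dual-rail encoding, and completion detection.

        class THmn:
            """Output goes high when >= m inputs are high; returns low only when all
            inputs are low (hysteresis); otherwise it holds its previous value."""
            def __init__(self, m):
                self.m = m
                self.out = 0

            def eval(self, inputs):
                if sum(inputs) >= self.m:
                    self.out = 1
                elif sum(inputs) == 0:
                    self.out = 0
                return self.out

        NULL = (0, 0)                 # dual-rail NULL: neither rail asserted
        def data(bit):                # dual-rail DATA: (rail1, rail0)
            return (1, 0) if bit else (0, 1)

        def completion(word, detector):
            """A word is DATA-complete when every bit has one rail high, and
            NULL-complete when all rails are low. `detector` is an n-of-n gate."""
            per_bit = [r1 | r0 for (r1, r0) in word]
            return detector.eval(per_bit)

        det = THmn(m=4)                                           # TH44-style gate, 4-bit word
        print(completion([data(b) for b in (1, 0, 1, 1)], det))   # 1: DATA wavefront complete
        print(completion([NULL] * 4, det))                        # 0: NULL wavefront complete
        print(completion([data(1), NULL, data(0), data(1)], det)) # 0: partial, holds old value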