63 research outputs found

    CAREER: Automated software understanding for retargeting embedded image processing software for data parallel execution

    Get PDF
    Issued as final report. National Science Foundation (U.S.)

    Orbiting Rainbows: Optical Manipulation of Aerosols and the Beginnings of Future Space Construction

    Get PDF
    Our objective is to investigate the conditions needed to manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an ultra-lightweight surface with useful and adaptable electromagnetic characteristics, for instance in the optical, RF, or microwave bands. Inspired by the light scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology to enable a new vision of space system architecture, with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. See Figure 1 for an application scenario of this concept.
    The solution we propose is to construct an optical system in space in which the nonlinear optical properties of a cloud of micron-sized particles are shaped into a specific surface by light pressure, allowing it to form a very large, lightweight aperture of an optical system and hence reducing overall mass and cost. Other potential advantages of the cloud as an optical system include combined transmit/receive operation, variable focal length, combined refractive and reflective lens designs, and hyperspectral imaging. A cloud of highly reflective micron-sized particles acting coherently in a specific electromagnetic band, just like an aerosol suspended in the atmosphere, would reflect the Sun's light much like a rainbow; the only difference from an atmospheric or industrial aerosol is the absence of a supporting fluid medium. This new concept is based on recent advances in the physics of optical manipulation of small particles in the laboratory and on the engineering of distributed ensembles of spacecraft swarms to shape an orbiting cloud of micron-sized objects.
    In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor. While the feasibility of constructing a single aperture out of the cloud is the main topic of this work, multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space, enabling challenging goals such as exoplanet detection. Furthermore, this effort could establish the feasibility of key issues related to material properties, remote manipulation, and the autonomy characteristics of the cloud in orbit. Several types of endeavors (science missions) could be enabled by this approach: new astrophysical imaging systems, exoplanet searches, large apertures with unprecedented resolution to discern continents and other important features of other planets, hyperspectral imaging, adaptive systems, limb spectroscopy imaging, and stable optical systems at Lagrange points. Future micro-miniaturization might also extend our dust aperture concept to other smart dust concepts with additional capabilities.
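    For a rough sense of scale (our own order-of-magnitude estimate, not a figure from the abstract): the radiation-pressure force on a spherical grain of radius r and density ρ under solar irradiance I, with a pressure efficiency Q_pr of order unity, and the resulting acceleration, are

\[
F = Q_{\mathrm{pr}}\,\frac{I\,\pi r^{2}}{c},
\qquad
a = \frac{F}{\tfrac{4}{3}\pi r^{3}\rho} = \frac{3\,Q_{\mathrm{pr}}\,I}{4\,c\,\rho\,r}.
\]

    With I ≈ 1.36 kW/m² at 1 AU, r = 1 µm, and ρ = 2000 kg/m³, this gives a ≈ 1.7 × 10⁻³ m/s². Since a scales as 1/r, light pressure becomes an increasingly effective handle as the grains shrink, which is what makes optical manipulation of a micron-sized cloud plausible.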

    Automatic mapping of graphical programming applications to microelectronic technologies

    Get PDF
    Adaptive computing systems (ACSs) and application-specific integrated circuits (ASICs) can serve as flexible hardware accelerators for applications in domains such as image processing and digital signal processing. However, mapping applications onto ACSs and ASICs with traditional methods can take a hardware engineer months to develop and debug. In this dissertation, a new approach for automatically mapping software applications onto ACSs and ASICs has been developed, implemented, and validated. This dissertation presents the design flow of the software environment called CHAMPION, which is being developed at the University of Tennessee. This environment permits high-level design entry using the Cantata graphical programming software from KRI. Using Cantata as the design entry, CHAMPION hides from the user the low-level details of the hardware architecture and the finer issues of mapping the application onto the hardware. The CHAMPION environment was validated using multiple applications of moderate complexity. In one case, an application mapping that required six weeks to perform manually took only six minutes with CHAMPION, yet produced comparable results. Furthermore, the CHAMPION environment was constructed such that retargeting to a new adaptive computing system can be accomplished in just a few hours, as opposed to weeks using manual methods. Thus, CHAMPION permits both ACSs and ASICs to be used by a wider audience and application development to be accomplished in less time.
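    The core of such a flow, matching dataflow-graph nodes against a pre-characterized hardware cell library and partitioning the graph across the device's capacity, can be sketched in a few lines of Python. The graph, the CELL_AREA table, and the greedy partitioner below are illustrative assumptions, not CHAMPION's actual cell library or algorithm:

```python
# Illustrative sketch of the kind of mapping CHAMPION automates; the graph,
# cell library, and greedy partitioner are assumptions, not the tool's code.

# Dataflow graph in topological order: (node, operator, predecessors).
GRAPH = [
    ("n0", "read_image", []),
    ("n1", "gauss3x3",  ["n0"]),
    ("n2", "sobel_x",   ["n1"]),
    ("n3", "sobel_y",   ["n1"]),
    ("n4", "magnitude", ["n2", "n3"]),
    ("n5", "threshold", ["n4"]),
]

# Hypothetical pre-characterized cell library: operator -> area cost.
CELL_AREA = {"read_image": 5, "gauss3x3": 40, "sobel_x": 25,
             "sobel_y": 25, "magnitude": 30, "threshold": 10}

def partition(graph, capacity):
    """Greedily pack nodes, in topological order, into device-sized partitions."""
    partitions, current, used = [], [], 0
    for node, op, _preds in graph:
        area = CELL_AREA[op]
        if used + area > capacity:      # device full: start a new configuration
            partitions.append(current)
            current, used = [], 0
        current.append(node)
        used += area
    if current:
        partitions.append(current)
    return partitions

print(partition(GRAPH, capacity=100))   # [['n0', 'n1', 'n2', 'n3'], ['n4', 'n5']]
```

    Packing in topological order keeps every node's predecessors in the same or an earlier partition, so each configuration's inputs are available from the ones before it; a real mapper must also handle the data passed across partition boundaries.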

    Compilation techniques and language support to facilitate dependence-driven computation

    Get PDF
    As the demand for high performance and power efficiency in modern computer runtime systems and architectures increases, programmers are left with the daunting challenge of fully exploiting these systems for efficiency, high-level expressiveness, and portability across different computing architectures. Emerging programming models such as the task-based runtime StarPU and many-core architectures such as GPUs force programmers to choose between low-level programming languages and putting complete faith in the compiler. As has been studied previously in extensive detail, both development approaches have their respective trade-offs. The goal of this thesis is to help make parallel programming easier. It addresses these challenges by providing new compilation techniques for high-level programming languages that conform to commonly accepted paradigms, in order to leverage these emerging runtime systems and architectures. In particular, this dissertation makes several contributions by leveraging the high-level programming language Chapel to efficiently map computation and data onto both the task-based runtime system StarPU and GPU-based accelerators. Different loop-based parallel programs and experiments are evaluated to measure the effectiveness of the proposed compiler algorithms and their optimizations, while also providing programmability metrics for high-level languages. To exploit additional performance when mapping onto shared-memory systems, this thesis proposes a set of compiler- and runtime-based heuristics that determine profitable processor tile shapes and sizes when mapping multiply-nested parallel loops; a sketch of the idea appears below. Finally, a new benchmark suite named P-Ray is presented, which provides machine characteristics in a portable manner for use by a compiler, an auto-tuning framework, or the programmer when optimizing applications.
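    The flavor of the tiling heuristic can be conveyed with a simple cache-capacity argument. The sketch below is our own simplification, not the dissertation's actual heuristic: it chooses the largest square tile for a doubly-nested loop such that each thread's working set fits in its share of the cache:

```python
# Simplified tile-size heuristic for a doubly-nested parallel loop; the
# capacity model and square-tile restriction are our assumptions, not the
# thesis's algorithm.

def pick_tile(cache_bytes, elem_bytes, arrays_touched, n_threads):
    """Largest square tile t x t whose working set fits in the per-thread
    cache share: arrays_touched * t*t * elem_bytes <= cache_bytes / n_threads."""
    budget = cache_bytes // n_threads
    t = int((budget / (arrays_touched * elem_bytes)) ** 0.5)
    return max(t, 1)

# Example: 32 KiB L1 cache, 8-byte doubles, 3 arrays touched (C = A + B),
# 4 threads sharing the cache.
print(pick_tile(32 * 1024, 8, 3, 4))   # 18 -> an 18x18 tile per thread
```

    The actual heuristics also weigh tile shape, not just size; the square restriction here only keeps the sketch short.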

    NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems

    Get PDF
    Inspired by the light scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology to enable a new vision of space system architecture, with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. This new concept is based on recent advances in the physics of optical manipulation of small particles in the laboratory and on the engineering of distributed ensembles of spacecraft swarms to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor.
    While the feasibility of constructing a single aperture out of the cloud is the main topic of this work, multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space, enabling challenging goals such as exoplanet detection. Furthermore, this effort could establish the feasibility of key issues related to material properties, remote manipulation, and the autonomy characteristics of the cloud in orbit. Several types of endeavors (science missions) could be enabled by this approach: new astrophysical imaging systems, exoplanet searches, large apertures with unprecedented resolution to discern continents and other important features of other planets, hyperspectral imaging, adaptive systems, limb spectroscopy imaging, and stable optical systems at Lagrange points. Furthermore, future micro-miniaturization might extend our dust aperture concept to other smart dust concepts with additional capabilities.
    Our objective in Phase II was to experimentally and numerically investigate how to optically manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an adaptable, ultra-lightweight surface. Our solution is based on the aperture being an engineered granular medium instead of a conventional monolithic aperture. This allows apertures to be built at reduced cost, enables extremely fault-tolerant apertures that could not otherwise be made, and directly enables classes of missions for exoplanet detection based on Fourier spectroscopy with tight angular resolution, as well as innovative radar systems for remote sensing. In this task, we have examined the advanced feasibility of a crosscutting concept that contributes new technological approaches for space imaging systems, autonomous systems, and space applications of optical manipulation. The investigation has matured the concept begun in Phase I to TRL 3, identifying technology gaps and candidate system architectures for the space-borne cloud as an aperture.
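    The pull toward very large apertures follows from the standard diffraction limit (textbook optics, not a result of this report): a circular aperture of diameter D at wavelength λ resolves an angle of roughly

\[
\theta \approx 1.22\,\frac{\lambda}{D}.
\]

    At λ = 550 nm, a hypothetical 100 m granular aperture yields θ ≈ 6.7 nrad, about forty times finer than a 2.4 m monolithic mirror, which is the scale of gain that puts continent-sized features of exoplanets within reach.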

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Get PDF
    Although several researchers have used statistical methods to show that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could produce different recovery rates for patients. Therefore, this study adopts statistical methods and decision tree techniques together to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of the data as training data. Therefore, instead of using the resulting tree to generate rules directly, we first use the value of each node as a cut point to generate all possible rules from the tree. Then we verify these rules using the t-test to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
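    The two-stage procedure, enumerating every internal-node threshold of the fitted tree as a candidate cut point and then keeping only cuts whose two sides differ significantly under the t-test, can be sketched as follows. The data and features are synthetic placeholders, and the code reflects our reading of the method rather than the authors' implementation:

```python
# Sketch of the two-stage approach: fit a decision tree on the full (small)
# data set, take every internal-node threshold as a candidate cut point,
# then keep cuts where the two sides differ significantly by t-test.
# The data below are synthetic placeholders, not the clinical records.
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 4))                  # 212 records, as in the paper
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # placeholder outcome

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

t = tree.tree_
for node in range(t.node_count):
    if t.children_left[node] == -1:            # leaf node: no cut point here
        continue
    feat, thr = t.feature[node], t.threshold[node]
    left, right = y[X[:, feat] <= thr], y[X[:, feat] > thr]
    stat, p = stats.ttest_ind(left, right, equal_var=False)
    if p < 0.05:                               # keep statistically supported cuts
        print(f"feature[{feat}] <= {thr:.3f}  (p = {p:.4g})")
```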

    Optimising and evaluating designs for reconfigurable hardware

    No full text
    Growing demand for computational performance and the rising cost of chip design and manufacturing make reconfigurable hardware increasingly attractive for digital system implementation. Reconfigurable hardware, such as field-programmable gate arrays (FPGAs), can deliver performance through parallelism while also providing the flexibility that enables application builders to reconfigure it. However, reconfigurable systems, particularly those involving run-time reconfiguration, are often developed in an ad hoc manner. Such an approach usually results in low designer productivity and can lead to inefficient designs. This thesis covers three main achievements that address this situation. The first is a model that captures design parameters of reconfigurable hardware and performance parameters of a given application domain; this model supports optimisation for several design metrics such as performance, area, and power consumption. The second is a technique that enhances the relocatability of bitstreams for reconfigurable devices, taking heterogeneous resources into account; this method increases the flexibility of the modules represented by these bitstreams while reducing configuration storage size and design compilation time. The third is a technique to characterise the power consumption of FPGAs in different activity modes, including the evaluation of standby power and dedicated low-power modes, which are crucial in meeting the requirements of battery-powered mobile devices.
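    The heterogeneous-resource constraint on bitstream relocation can be illustrated with a simple column-footprint check; the device layout and module footprint below are invented examples rather than the thesis's technique:

```python
# Illustrative relocation check for a device with heterogeneous columns;
# the layout and footprint are invented examples. A module compiled against
# one span of columns can only be moved to offsets where resource types match.

DEVICE = list("CCBCCDCCBCCD")   # C = CLB, B = BRAM, D = DSP column

def relocation_offsets(device, footprint):
    """All column offsets where the module's resource footprint fits."""
    k = len(footprint)
    return [i for i in range(len(device) - k + 1)
            if device[i:i + k] == footprint]

module = list("CCB")            # needs two CLB columns followed by a BRAM
print(relocation_offsets(DEVICE, module))   # [0, 6] -> two legal placements
```

    Enhancing relocatability then amounts to increasing the number of legal offsets per module, so that one stored bitstream can stand in for many placements; this is how configuration storage and compilation time shrink.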

    Investigation of heterogeneous computing through novel parallel programming platforms

    Get PDF
    The computational landscape is dominated by the use of a very large number of CPU resources; in recent years, however, this has provided diminishing returns, pushing for a paradigm shift in the choice of computational systems. This work aimed to determine the maturity of heterogeneous computer systems in terms of computational performance, and their possible integration within High-Performance Computing resources, through the use of the OpenCL parallel programming platform. An introduction is given to the existing hardware architectures targeted by the OpenCL platform, to the existing literature regarding the integration of heterogeneous systems for computational applications, and to the OpenCL platform as a development environment. A number of applications are developed to benchmark the capabilities of the framework in multi-architecture environments; the results show up to a 160-fold performance gain when targeting GPU architectures rather than CPUs for matrix multiplication algorithms. Based on this, an extensive test-bench is designed targeting the HTCondor resource pool with a Fast Fourier Transform application. Results from these machines once again showed a significant performance increase over CPU systems, while also enabling the expansion of the HTCondor system and uncovering 30 teraflops of dormant computing power. The FPGA architecture is also investigated for its potential in OpenCL computational acceleration, with a focus on the platform's ease of use. It is determined that the framework is mature enough for FPGA application development.
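    A minimal OpenCL matrix-multiplication benchmark of the kind compared above can be put together with PyOpenCL. This is an illustrative harness, not the dissertation's test-bench, and the naive kernel leaves the usual GPU optimizations (tiling, local memory) unused:

```python
# Minimal PyOpenCL matrix-multiply benchmark, illustrative of the kind of
# CPU-vs-GPU comparison described above (not the thesis's actual test-bench).
import numpy as np
import pyopencl as cl

N = 512
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

KERNEL = """
__kernel void matmul(__global const float *a, __global const float *b,
                     __global float *c, const int n) {
    int row = get_global_id(0);
    int col = get_global_id(1);
    float acc = 0.0f;
    for (int k = 0; k < n; ++k)
        acc += a[row * n + k] * b[k * n + col];
    c[row * n + col] = acc;
}
"""

ctx = cl.create_some_context()              # picks an available CPU/GPU device
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL).build()

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.matmul(queue, (N, N), None, a_buf, b_buf, c_buf, np.int32(N))
c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)
queue.finish()

print(np.allclose(c, a @ b, atol=1e-2))     # verify against NumPy
```

    Timing this same kernel on each device the runtime exposes (CPU, GPU, or an FPGA with a vendor OpenCL SDK) gives exactly the sort of cross-architecture comparison described above.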

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    Get PDF
    ReCoSoC is intended to be a periodic annual meeting to expose and discuss gathered expertise as well as state-of-the-art research on SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability.

    3rd Many-core Applications Research Community (MARC) Symposium. (KIT Scientific Reports ; 7598)

    Get PDF
    This manuscript includes recent scientific work regarding the Intel Single-chip Cloud Computer and describes novel approaches to programming and run-time organization.
    • …