
    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    Feasibility study of an Integrated Program for Aerospace vehicle Design (IPAD). Volume 4: IPAD system design

    The computing system design of IPAD is described, and the requirements which form the basis for the system design are discussed. The system is presented in terms of a functional design description and technical design specifications. The functional design specifications give a detailed description of the system design using top-down structured programming methodology. Human behavioral characteristics that shape the system design at the user interface, security considerations, and standards for system design, implementation, and maintenance are also part of the technical design specifications. Detailed specifications are presented for the two most common computing system types in use by the major aerospace companies that could support the IPAD system design. Also included is the report of a study investigating the migration of IPAD software between the two candidate third-generation host computing systems and from these systems to a fourth-generation system.

    Algorithm Libraries for Multi-Core Processors

    By providing parallelized versions of established algorithm libraries, we make it easier for programmers to exploit the multiple cores of modern processors. The Multi-Core STL provides basic algorithms for internal memory, while the parallelized STXXL enables multi-core acceleration for algorithms on large data sets stored on disk. Some parallelized geometric algorithms are introduced into CGAL. Further, we design and implement sorting algorithms for huge data sets in distributed external memory.
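
    The drop-in style these libraries aim for can be sketched with the C++17 parallel execution policies; this is an illustrative stand-in, since the Multi-Core STL predates this standard interface and supplies its own parallelized replacements for STL algorithms.

        // Minimal sketch of multi-core sorting through a parallelized STL
        // algorithm. The C++17 execution policy stands in for the Multi-Core
        // STL's drop-in replacements; build with e.g. g++ -std=c++17 -ltbb.
        #include <algorithm>
        #include <execution>
        #include <random>
        #include <vector>

        int main() {
            std::vector<double> data(1u << 24);  // ~16M elements
            std::mt19937 gen(42);
            std::uniform_real_distribution<double> dist(0.0, 1.0);
            for (double& x : data) x = dist(gen);

            // The only change from sequential code is the policy argument,
            // which lets the library spread the comparison work across cores.
            std::sort(std::execution::par, data.begin(), data.end());
            return 0;
        }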

    A Cognitive Science Reasoning in Recognition of Emotions in Audio-Visual Speech

    In this report we summarize the state of the art of speech emotion recognition from the signal processing point of view. On the basis of multi-corpus experiments with machine-learning classifiers, we observe that existing approaches to supervised machine learning lead to database-dependent classifiers which cannot be applied to multi-language speech emotion recognition without additional training, because they discriminate the emotion classes according to the language used for training. As experimental results show that humans can perform language-independent categorisation, we draw a parallel between machine recognition and the cognitive process and try to discover the sources of these divergent results. The analysis suggests that the main difference is that speech perception allows the extraction of language-independent features, even though language-dependent features are incorporated at all levels of the speech signal and play a strong discriminative role in human perception. Based on several results in related domains, we suggest that, in addition, the cognitive process of emotion recognition relies on categorisation, assisted by a hierarchical structure of emotional categories existing in the cognitive space of all humans. We propose a strategy for developing language-independent machine emotion recognition based on the identification of language-independent speech features and the use of additional information from visual (expression) features.

    A Mini-History of Computing

    This book was produced by George K. Thiruvathukal for the American Institute of Physics to promote interest in the interdisciplinary publication, Computing in Science and Engineering. It accompanied a limited edition set of playing cards that is no longer available (except in PDF). The book features a set of 54 significant computers by era/category, including ancient calculating instruments, pre-electronic mechanical calculators and computers, electronic-era computers, and modern computing (minicomputers, mainframes, personal computers, devices, and gaming consoles).

    Doctor of Philosophy

    In the past few years, we have seen a tremendous increase in the digital data being generated. By 2011, storage vendors had shipped 905 PB of purpose-built backup appliances. By 2013, the number of objects stored in Amazon S3 had reached 2 trillion. Facebook had stored 20 PB of photos by 2010. All of these require an efficient storage solution. To improve space efficiency, compression and deduplication are widely used. Compression works by identifying repeated strings and replacing them with more compact encodings, while deduplication partitions data into fixed-size or variable-size chunks and removes duplicate blocks. While these two approaches have brought great improvements in space efficiency, some limitations remain. First, traditional compressors are limited in their ability to detect redundancy across a large range, since they search for redundant data at a fine-grained level (the string level). For deduplication, metadata embedded in an input file changes more frequently than the data itself, which introduces unnecessary unique chunks and leads to poor deduplication. In addition, cloud storage systems suffer from unpredictable and inefficient performance because of interference among different types of workloads. This dissertation proposes techniques to improve the effectiveness of traditional compressors and deduplication in improving space efficiency, and a new IO scheduling algorithm to improve performance predictability and efficiency for cloud storage systems. The common idea is to utilize similarity. To improve the effectiveness of compression and deduplication, similarity in content is used to transform an input file into a compression- or deduplication-friendly format. We propose Migratory Compression, a generic data transformation that identifies similar data at a coarse-grained level (the block level) and then groups similar blocks together; it can be used as a preprocessing stage for any traditional compressor (see the sketch after this paragraph). We find that metadata has a huge impact in reducing the benefit of deduplication. To isolate the impact of metadata, we propose to separate metadata from data, and present three approaches for use cases with different constraints. For the commonly used tar format, we propose Migratory Tar: a data transformation and a new tar format that deduplicates better. We also present a case study in which deduplication is used to reduce the storage consumed by disk images while at the same time achieving high performance in image deployment. Finally, we apply the same principle of utilizing similarity to IO scheduling to prevent interference between random and sequential workloads, leading to efficient, consistent, and predictable performance for sequential workloads and high disk utilization.
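
    As a rough illustration of that preprocessing idea, the sketch below computes a similarity feature per fixed-size block and sorts blocks by feature, so that similar blocks become neighbors before a conventional compressor runs over the reordered stream. The block size and the min-hash-style feature are illustrative assumptions, not the dissertation's actual similarity-feature scheme, and a real implementation must also record the block order so the transformation can be undone.

        // Sketch of coarse-grained (block-level) data reordering in the
        // spirit of Migratory Compression. The feature below (minimum
        // FNV-1a hash over 8-byte shingles) is a simplified stand-in for
        // the real similarity features.
        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <numeric>
        #include <vector>

        static constexpr std::size_t kBlockSize = 4096;  // assumed block size

        // Similarity feature: minimum hash over all 8-byte shingles, so
        // blocks sharing long substrings tend to share a feature value.
        std::uint64_t block_feature(const std::uint8_t* p, std::size_t n) {
            std::uint64_t best = UINT64_MAX;
            for (std::size_t i = 0; i + 8 <= n; ++i) {
                std::uint64_t h = 1469598103934665603ull;  // FNV offset basis
                for (std::size_t j = 0; j < 8; ++j) {
                    h = (h ^ p[i + j]) * 1099511628211ull;  // FNV prime
                }
                best = std::min(best, h);
            }
            return best;
        }

        // Returns a block order that co-locates similar blocks; compressing
        // the blocks in this order lets a compressor with a limited window
        // see redundancy that was originally far apart in the input.
        std::vector<std::size_t> migratory_order(const std::vector<std::uint8_t>& in) {
            const std::size_t nblocks = (in.size() + kBlockSize - 1) / kBlockSize;
            std::vector<std::uint64_t> feat(nblocks);
            for (std::size_t b = 0; b < nblocks; ++b) {
                const std::size_t off = b * kBlockSize;
                feat[b] = block_feature(in.data() + off,
                                        std::min(kBlockSize, in.size() - off));
            }
            std::vector<std::size_t> order(nblocks);
            std::iota(order.begin(), order.end(), 0);
            std::sort(order.begin(), order.end(),
                      [&](std::size_t a, std::size_t b) { return feat[a] < feat[b]; });
            return order;
        }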

    Experimental approaches for assessing time and temperature dependent performances of fractured laminated safety glass

    Laminated glass has for several decades been a well-known product in the construction industry, used to confer safety performance on glazing units. Besides safeguarding persons, laminated glass products contribute to a variety of other safety performances in case of accidental or attack situations that lead to breakage of, or crack propagation in, the glass panes of a laminated glass unit. The ultimate residual load-bearing capacity of a damaged element comes down to one critical load-transfer mechanism, in the form of interlayer ligaments bridging the glass fragments. Characterizing, for design purposes, the mechanical properties of the interlayer involved in this load-transfer mechanism through the ligament is, however, far from obvious. This results from the specific characteristics of adhesive polymer components on the one hand and of design and control processes in the building industry on the other. These specificities are mainly related to two aspects: firstly, the time- and temperature-dependent behaviour of interlayer materials and their possible sensitivity to ageing effects; secondly, initially vaguely defined intended fields of use, especially when non-conventional structural applications are within the considered application scope. The combination of these two aspects constrains the development of experimental methods, test configurations, and assessment strategies for laminated glass products. This research proposes analysis grids that give an overview of the constitutive elements of application scopes and of the possibilities and limitations of experimental assessment, with the purpose of distinguishing and estimating different types of border effects. These grids are used to evaluate the representativeness and robustness of different test methods and test configurations corresponding to different experimental scales. An incremental experimental approach has been developed for investigating the time- and temperature-dependent performance of damaged laminated glass elements, based on tests on small pre-cracked specimens. The assessment of the residual load-bearing capacity of damaged elements used in structural applications was the main focus of these investigations. This research highlights the need to adapt experimental assessment approaches, compared with those for other construction materials, in order to characterize the post-fracture performance of laminated glass products for design purposes. It also explains the specific difficulties in obtaining quantitatively meaningful results and the challenges of harmonizing experimental assessment strategies for different applications and products made with the same type of interlayer material.

    Designing Fractal Line Pied-de-poules: A Case Study in Algorithmic Design Mediating between Culture and Fractal Mathematics

    Millions of people own and wear pied-de-poule (houndstooth) garments. The pattern has an intriguing basic figure and a typical set of symmetries. The origin of the pattern lies in a specific type of weaving. In this article I apply computational techniques to modernize this ancient decorative pattern. In particular, I describe a way to enrich pied-de-poule with a fractal structure. Although a first fractal line pied-de-poule was shown at Bridges 2015, a number of fundamental questions remained. The following questions are addressed in this article: Does the original pied-de-poule appear as a limit case when the fractal structure is increasingly refined? Can we prove that the pattern is regular in the sense that one formula describes all patterns? What is special about pied-de-poule when it comes to making these fractals? Can the technique be generalized? The results and techniques in this article anticipate a fashion future in which decorative patterns, including pied-de-poule, will be part of our global culture, as they are now, but rendered in more refined ways and using new technologies. These new technologies include digital manufacturing technologies such as laser cutting and 3D printing, but also computational and mathematical tools such as Lindenmayer rules (originally devised to describe the algorithmic beauty of plants).
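
    The Lindenmayer rules mentioned above can be illustrated with a generic string-rewriting sketch; the production used here is the classic quadratic Koch-curve rule, a placeholder rather than the fractal pied-de-poule rules developed in the article.

        // Generic Lindenmayer-system rewriter: in each iteration, every
        // symbol is replaced by its production (symbols without a rule are
        // copied unchanged). The Koch rule below is only a placeholder; the
        // article's fractal pied-de-poule uses its own symbols and rules.
        #include <iostream>
        #include <string>
        #include <unordered_map>

        std::string rewrite(std::string s,
                            const std::unordered_map<char, std::string>& rules,
                            int iterations) {
            for (int i = 0; i < iterations; ++i) {
                std::string next;
                for (char c : s) {
                    auto it = rules.find(c);
                    next += (it != rules.end()) ? it->second : std::string(1, c);
                }
                s = std::move(next);
            }
            return s;
        }

        int main() {
            // Quadratic Koch curve: F -> F+F-F-F+F, with '+'/'-' read as
            // 90-degree turtle turns when the string is drawn.
            const std::unordered_map<char, std::string> rules{{'F', "F+F-F-F+F"}};
            std::cout << rewrite("F", rules, 3) << "\n";
            return 0;
        }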