1,876 research outputs found

    Advancing automation and robotics technology for the space station and for the US economy: Submitted to the United States Congress October 1, 1987

    Get PDF
    In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on the space station. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the Law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the fifth in a series of progress updates and covers the period between 16 May 1987 and 30 September 1987. NASA has accepted the basic recommendations of ATAC for its space station efforts. ATAC and NASA agree that the mandate of Congress is that advanced automation and robotics technology be built to support an evolutionary space station program and serve as a highly visible stimulus to the long-term U.S. economy.

    A Network-Based Design Synthesis of Distributed Ship Services Systems for a Non Nuclear Powered Submarine in Early Stage Design

    Get PDF
    Even though the early-stage design of a complex vessel is where the important decisions are made, the synthesis of the distributed ship service systems (DS3) often relies on "past practice" and simple weight algorithms based on vessel displacement. Such an approach inhibits the ability of the concept designer to consider the impact of different DS3 options. It also reduces the ability to undertake Requirements Elucidation, especially regarding the DS3. Given the vital role the many DS3 play in a submarine, this research considers whether there is a better way to synthesise DS3 without resorting to the detailed design of the distributed systems, which is usually inappropriate at the exploratory stages of design. The research proposes a new approach, termed the Network Block Approach (NBA), which combines the advantages of the 3D physically based UCL Design Building Block (DBB) synthesis approach with the Virginia Tech Architectural Flow Optimisation (AFO) method, applied to submarine DS3 design. Utilising a set of novel frameworks and the Paramarine CASD tool, the proposed approach also enabled the development of the submarine concept design at different levels of granularity, ranging from modelling individual spaces to various DS3 components and routings. The proposed approach also allowed the designer to balance the energy demands of the various distributed systems, perform a steady-state flow simulation, and visualise the complexity of the submarine DS3 in a 3D multiplex network configuration. Such 3D-based physical and network syntheses offer potential benefits in early-stage submarine DS3 design. The overall aim of proposing and demonstrating a novel integrated DS3 synthesis approach applicable to concept naval submarine design was achieved, although several issues and limitations emerged during both the development and the implementation of the approach. Through identification of the research limitations, areas for future work aimed at improving the proposal have been outlined.
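    The steady-state flow-balancing step described above lends itself to a compact illustration. The sketch below is hypothetical: the `FlowLayer` class, node names, and flow values are illustrative inventions, not drawn from the thesis, Paramarine, or the AFO method. It models one layer of a multiplex DS3 network as a directed graph and checks flow conservation (inflow plus local supply equals outflow plus local demand) at each node.

```python
# Hypothetical sketch of a steady-state flow balance check on one layer
# of a multiplex DS3 network. Names and values are illustrative only.
from collections import defaultdict

class FlowLayer:
    """One DS3 layer (e.g. electrical, chilled water) over shared nodes."""
    def __init__(self, name):
        self.name = name
        self.edges = []                    # (src, dst, flow) tuples
        self.supply = defaultdict(float)   # net source (+) or demand (-) per node

    def add_edge(self, src, dst, flow):
        self.edges.append((src, dst, flow))

    def imbalance(self):
        """Return the net flow imbalance per node; empty at steady state."""
        net = defaultdict(float)
        for src, dst, flow in self.edges:
            net[src] -= flow
            net[dst] += flow
        for node, s in self.supply.items():
            net[node] += s
        return {n: v for n, v in net.items() if abs(v) > 1e-9}

# Electrical layer: a generator feeds two compartments via a switchboard.
electrical = FlowLayer("electrical")
electrical.supply["generator"] = 100.0
electrical.supply["sonar_room"] = -60.0
electrical.supply["galley"] = -40.0
electrical.add_edge("generator", "switchboard", 100.0)
electrical.add_edge("switchboard", "sonar_room", 60.0)
electrical.add_edge("switchboard", "galley", 40.0)

print(electrical.imbalance())  # {} -> this layer is balanced
```

    Repeating the same check per layer, with the layers sharing one node set, gives the multiplex view: a node may be balanced in the electrical layer yet unbalanced in, say, the chilled-water layer.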

    Addressing Memory Bottlenecks for Emerging Applications

    Full text link
    There has been a recent emergence of applications from the domains of machine learning, data mining, numerical analysis and image processing. These applications are becoming the primary algorithms driving many important user-facing applications and are becoming pervasive in our daily lives. Due to their increasing usage in both mobile and datacenter workloads, it is necessary to understand the software and hardware demands of these applications, and to design techniques to match their growing needs. This dissertation studies the performance bottlenecks that arise when we try to improve the performance of these applications on current hardware systems. We observe that most of these applications are data-intensive, i.e., they operate on large amounts of data. Consequently, these applications put significant pressure on the memory. Interestingly, we notice that this pressure is not limited to one memory structure. Instead, different applications stress different levels of the memory hierarchy. For example, training Deep Neural Networks (DNNs), an emerging machine learning approach, is currently limited by the size of the GPU main memory. At the other end of the spectrum, improving DNN inference on CPUs is bottlenecked by Physical Register File (PRF) bandwidth. Concretely, this dissertation tackles four such memory bottlenecks for these emerging applications across the memory hierarchy (off-chip memory, on-chip memory and the physical register file), presenting hardware and software techniques to address these bottlenecks and improve the performance of the emerging applications. For on-chip memory, we present two scenarios where emerging applications perform sub-optimally. First, many applications have a large number of marginal bits that do not contribute to application accuracy, wasting storage space and incurring unnecessary transfer costs. We present ACME, an asymmetric compute-memory paradigm, which removes marginal bits from the memory hierarchy while performing the computation in full precision. Second, we tackle the contention in shared caches that arises for these emerging applications in datacenters, where multiple applications can share the same cache capacity. We present ShapeShifter, a runtime system that continuously monitors the runtime environment, detects changes in cache availability, and dynamically recompiles the application on the fly to utilize the cache capacity efficiently. For the physical register file, we observe that DNN inference on CPUs is primarily limited by PRF bandwidth. Increasing the number of compute units in a CPU requires increasing the number of read ports in the PRF, and the PRF quickly reaches a point where its latency targets can no longer be met. To solve this problem, we present LEDL, locality extensions for deep learning on CPUs, which entails a rearchitected FMA and PRF design tailored to the heavy data reuse inherent in DNN inference. Finally, a significant challenge facing both researchers and industry practitioners is that as DNNs grow deeper and larger, DNN training becomes limited by the size of the GPU main memory, restricting the size of the networks that GPUs can train. To tackle this challenge, we first identify the primary contributors to this heavy memory footprint, finding that the feature maps (intermediate layer outputs) are the heaviest contributors in training, as opposed to the weights in inference. Then, we present Gist, a runtime system that uses three efficient data encoding techniques to reduce the footprint of DNN training.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146016/1/anijain_1.pd
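    To make the feature-map observation concrete, here is a minimal sketch of one kind of encoding that can shrink stashed feature maps. It is an illustration in the spirit of the abstract, not Gist's actual implementation: it exploits the fact that the ReLU backward pass only needs to know which activations were positive, so a 1-bit mask can stand in for the full-precision feature map between the forward and backward passes.

```python
# Illustrative sketch (not Gist's actual implementation): the ReLU
# backward pass only needs the sign pattern of the forward output,
# so a 1-bit mask can replace the stashed 32-bit feature map.
import numpy as np

def relu_forward(x):
    y = np.maximum(x, 0.0)
    mask = np.packbits(y > 0)   # 1 bit per element instead of 32
    return y, mask

def relu_backward(grad_out, mask, shape):
    bits = np.unpackbits(mask, count=int(np.prod(shape))).reshape(shape)
    return grad_out * bits      # zero the gradient where the input was <= 0

x = np.random.randn(4, 8).astype(np.float32)
y, mask = relu_forward(x)
grad_in = relu_backward(np.ones_like(y), mask, x.shape)
print(mask.nbytes, "bytes stashed vs", y.nbytes, "for the full map")
```

    The same idea generalizes: any intermediate output whose backward computation needs only a coarse summary of the forward value can be stashed in encoded form and decoded on demand, trading a little compute for GPU memory.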

    Preserving the Quality of Architectural Tactics in Source Code

    Get PDF
    In any complex software system, strong interdependencies exist between requirements and software architecture. Requirements drive architectural choices while also being constrained by the existing architecture and by what is economically feasible. This makes it advisable to concurrently specify the requirements, to devise and compare alternative architectural design solutions, and ultimately to make a series of design decisions in order to satisfy each of the quality concerns. Unfortunately, anecdotal evidence has shown that architectural knowledge tends to be tacit in nature, stored in the heads of people, and lost over time. Therefore, developers often lack comprehensive knowledge of the underlying architectural design decisions and inadvertently degrade the quality of the architecture while performing maintenance activities. In practice, this problem can be addressed by preserving the relationships between the requirements, the architectural design decisions and their implementations in the source code, and then using this information to keep developers aware of critical architectural aspects of the code. This dissertation presents a novel approach that utilizes machine learning techniques to recover and preserve the relationships between architecturally significant requirements, architectural decisions and their realizations in the implemented code. Our approach for recovering architectural decisions includes two primary stages: training and classification. In the first stage, the classifier is trained using code snippets of different architectural decisions collected from various software systems. During this phase, the classifier learns the terms that developers typically use to implement each architectural decision. These "indicator terms" represent method names, variable names, comments, or the development APIs that developers inevitably use to implement various architectural decisions. A probabilistic weight is then computed for each potential indicator term with respect to each type of architectural decision. The weight estimates how strongly an indicator term represents a specific architectural tactic or decision. For example, a term such as "pulse" is highly representative of the heartbeat tactic but occurs infrequently in authentication. After learning the indicator terms, the classifier can compute the likelihood that any given source file implements a specific architectural decision. The classifier was evaluated through several different experiments, including classical cross-validation over code snippets of 50 open source projects and over the entire source code of a large-scale software system. The results showed that the classifier can reliably recognize a wide range of architectural decisions. The technique introduced in this dissertation is used to develop the Archie tool suite. Archie is a plug-in for Eclipse designed to detect a wide range of architectural design decisions in code and to protect them from potential degradation during maintenance activities. It has several features for performing change impact analysis of architectural concerns at both the code and design level and for proactively keeping developers informed of underlying architectural decisions during maintenance activities. Archie is at the technology-transfer stage at the US Department of Homeland Security, where it is used solely to detect and monitor security choices.
    Furthermore, this outcome is integrated into the Department of Homeland Security's Software Assurance Market Place (SWAMP) to advance research and development of secure software systems.
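    The indicator-term scheme described above is, in essence, a probabilistic text classifier over code tokens. The following sketch is hypothetical: the tokenizer, Laplace smoothing, tactic names, and training snippets are illustrative choices, not the dissertation's actual model. It estimates per-tactic term weights from labelled snippets and then scores a source file by its log-likelihood under each tactic.

```python
# Hypothetical sketch of indicator-term weighting and scoring,
# in the spirit of the training/classification stages described above.
# Tactic names, snippets, and smoothing choices are illustrative only.
import math
import re
from collections import Counter, defaultdict

def tokenize(code):
    """Split identifiers, comments, and API names into lowercase terms."""
    return [t.lower() for t in re.findall(r"[A-Za-z_]\w*", code)]

def train(labelled_snippets):
    """labelled_snippets: (tactic, code) pairs -> per-tactic term weights."""
    counts = defaultdict(Counter)
    for tactic, code in labelled_snippets:
        counts[tactic].update(tokenize(code))
    vocab = {t for c in counts.values() for t in c}
    weights = {}
    for tactic, c in counts.items():
        total = sum(c.values())
        # Laplace-smoothed probability that a term appears under this tactic
        weights[tactic] = {t: (c[t] + 1) / (total + len(vocab)) for t in vocab}
    return weights

def classify(code, weights):
    """Return the tactic with the highest log-likelihood for this file."""
    terms = tokenize(code)
    scores = {
        tactic: sum(math.log(w.get(t, 1e-9)) for t in terms)
        for tactic, w in weights.items()
    }
    return max(scores, key=scores.get)

snippets = [
    ("heartbeat", "void sendPulse() { monitor.pulse(interval); }"),
    ("authentication", "bool verifyPassword(User user) { return hash(user); }"),
]
weights = train(snippets)
print(classify("timer.pulse(); // emit heartbeat pulse", weights))  # heartbeat
```

    A production version would need camelCase splitting, stop-word removal, and far more training data per tactic, but the shape of the computation (weight estimation, then per-file likelihood scoring) follows the two stages the abstract describes.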

    CHIPS: Custom Hardware Instruction Processor Synthesis

    Full text link

    A framework to improve the architecture quality of software intensive systems

    Get PDF
    Over the past decade, the amount and complexity of software for almost any business sector has increased substantially. Unfortunately, the increased complexity of the software in the systems to be built has often led to a significant mismatch between the planned and the implemented products. One common problem is that system-wide quality attributes such as safety, reliability, performance, and modifiability are not sufficiently considered in software architecture design. Typically, they are addressed in an ad-hoc and unstructured fashion. Since rationales for architectural decisions are frequently missing, risks associated with those decisions can be neither identified nor mitigated in a systematic way. Consequently, there is a high probability that the resulting software architecture fails to meet business goals and does not allow the building of an adequate system. This work presents QUADRAD, a framework for Quality-Driven Architecture Development. QUADRAD is capable of improving architecture quality for software-intensive systems in a systematic way. It supports the development of architectures that are optimized according to their essential quality requirements. Such architectures permit the building of systems that are better aligned to the principal market needs and business goals. QUADRAD is complemented by the Architecture Exploration Tool (AET), which supports architecture evaluations and helps in documenting the fundamental design decisions of an architecture. QUADRAD has been validated in three industrial projects. For each of these projects the architecture quality could be significantly increased. The results confirm the hypothesis of this work and demonstrate how critical problems in the transition from requirements to architecture design can be mitigated.