
    Development of assembly and joint concepts for erectable space structures

    The technology associated with the on-orbit assembly of tetrahedral truss platforms erected from graphite-epoxy tapered columns is examined. Associated with the assembly process is the design and fabrication of nine-member node joints. Two such joints demonstrating somewhat different technology were designed and fabricated. Two methods of automatic assembly using the node designs were investigated, and the time of assembly of tetrahedral truss structures up to 1 square km in size was estimated. The effect of column and node-joint packaging on the Space Shuttle cargo bay is examined. A brief discussion is included of operating-cost considerations and the selection of energy sources. Consideration was given to the design of assembly machines from 5 m to 20 m in size. The smaller machines, mounted on the Space Shuttle, are deployable and restowable. They provide a means of demonstrating the capabilities of the concept and of erecting small specialized platforms on relatively short notice.

    Fault-tolerant computer study

    A set of building block circuits is described which can be used with commercially available microprocessors and memories to implement fault-tolerant distributed computer systems. Each building block circuit is intended for VLSI implementation as a single chip. Several building blocks and associated processor and memory chips form a self-checking computer module with self-contained input/output and interfaces to redundant communication buses. Fault tolerance is achieved by connecting self-checking computer modules into a redundant network in which backup buses and computer modules are provided to circumvent failures. The requirements and design methodology which led to the definition of the building block circuits are discussed.
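    The self-checking-and-backup pattern described above can be illustrated with a minimal sketch; the module names, the duplicate-and-compare self-check, and the failover loop below are illustrative assumptions, not the actual building-block chip interfaces.

```python
# Hypothetical sketch of the fault-tolerance scheme: each self-checking computer
# module (SCCM) runs a computation twice and compares the results; a module that
# disagrees with itself reports a fault, and a backup module on a redundant bus
# takes over. Names and structure are illustrative only.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class SelfCheckingModule:
    name: str
    faulty: bool = False

    def run(self, task: Callable[[], int]) -> Optional[int]:
        """Run the task twice and compare results (a simple self-checking stand-in)."""
        first, second = task(), task()
        if self.faulty or first != second:
            return None  # self-check failed: the module flags itself as bad
        return first


def run_with_backup(modules: list[SelfCheckingModule],
                    task: Callable[[], int]) -> int:
    """Try modules in order over redundant buses until one passes its self-check."""
    for module in modules:
        result = module.run(task)
        if result is not None:
            return result
    raise RuntimeError("all redundant modules failed their self-checks")


if __name__ == "__main__":
    primary = SelfCheckingModule("SCCM-0", faulty=True)   # simulated failure
    backup = SelfCheckingModule("SCCM-1")
    print(run_with_backup([primary, backup], task=lambda: 2 + 2))  # -> 4
```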

    Breeding without Breeding: Is a Complete Pedigree Necessary for Efficient Breeding?

    Complete pedigree information is a prerequisite for modern breeding and the ranking of parents and offspring for selection and deployment decisions. DNA fingerprinting and pedigree reconstruction can substitute for artificial matings by allowing parentage delineation of naturally produced offspring. Here, we report on the efficacy of a breeding concept called “Breeding without Breeding” (BwB) that circumvents artificial matings, focusing instead on a subset of randomly sampled, maternally known but paternally unknown offspring to delineate their paternal parentage. We then generate the information needed to rank those offspring and their paternal parents, using a combination of complete (full-sib: FS) and incomplete (half-sib: HS) analyses of the constructed pedigrees. Using a random sample of wind-pollinated offspring from 15 females (seed donors), growing in a 41-parent western larch population, BwB is evaluated and compared to two commonly used testing methods that rely on either incomplete (maternal half-sib, open-pollinated: OP) or complete (FS) pedigree designs. BwB produced results superior to those from the incomplete design and virtually identical to those from the complete pedigree methods. The combined use of complete and incomplete pedigree information permitted the evaluation of all parents, both maternal and paternal, as well as all offspring, a result that could not have been accomplished with either the OP or FS methods alone. We also discuss the optimum experimental setting, in terms of the proportion of fingerprinted offspring, the size of the assembled maternal and paternal half-sib families, the role of external gene flow and selfing, as well as the number of parents that could realistically be tested with BwB.

    Integrated control and health management. Orbit transfer rocket engine technology program

    To ensure controllability of the baseline design for a 7500-pound-thrust, 10:1 throttleable, dual expanded cycle, hydrogen-oxygen orbit transfer rocket engine, an Integrated Controls and Health Monitoring concept was developed. This included: (1) dynamic engine simulations using a TUTSIM-derived computer code; (2) analysis of various control methods; (3) failure modes analysis to identify critical sensors; (4) a survey of applicable sensor technology; and (5) a study of health monitoring philosophies. The engine design was found to be controllable over the full throttling range using 13 valves, including an oxygen turbine bypass valve to control mixture ratio and a hydrogen turbine bypass valve, used in conjunction with the oxygen bypass, to control thrust. Classic feedback control methods are proposed along with specific requirements for valves, sensors, and the controller. Expanding on the control system, a Health Monitoring system is proposed, including suggested computing methods and the following recommended sensors: (1) fiber-optic and silicon bearing deflectometers; (2) capacitive shaft displacement sensors; and (3) hot-spot thermocouple arrays. Further work is needed to refine and verify the dynamic simulations and control algorithms, to advance sensor capabilities, and to develop the Health Monitoring computational methods.
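    As a rough illustration of the classic feedback scheme proposed above, the sketch below closes two loops: one trims a hydrogen turbine bypass valve to hold thrust, the other trims an oxygen turbine bypass valve to hold mixture ratio. The static plant model, gains, and setpoints are invented for demonstration and are not taken from the study.

```python
# Toy two-loop PI control sketch (not the study's control law). Valve positions
# are normalized to [0, 1]; the controller output drives the valve rate.


class PIController:
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral


def toy_engine(ox_bypass: float, h2_bypass: float) -> tuple[float, float]:
    """Crude static stand-in for the engine: returns (thrust_lbf, mixture_ratio)."""
    thrust = 7500.0 * (1.0 - h2_bypass)            # more H2 bypass -> less thrust
    mixture_ratio = 6.6 * (1.0 - 0.2 * ox_bypass)  # more O2 bypass -> lower O/F
    return thrust, mixture_ratio


dt = 0.01
thrust_loop = PIController(kp=2e-4, ki=2e-5, dt=dt)   # drives the H2 bypass valve
mr_loop = PIController(kp=0.5, ki=0.05, dt=dt)        # drives the O2 bypass valve
ox_bypass = h2_bypass = 0.0

for _ in range(6000):  # 60 s of simulated closed-loop operation
    thrust, mr = toy_engine(ox_bypass, h2_bypass)
    # Command the 10:1 throttled-down thrust level and a mixture ratio of 6.0.
    h2_bypass = min(max(h2_bypass - thrust_loop.update(750.0, thrust) * dt, 0.0), 1.0)
    ox_bypass = min(max(ox_bypass - mr_loop.update(6.0, mr) * dt, 0.0), 1.0)

print(f"thrust ~ {thrust:.0f} lbf, mixture ratio ~ {mr:.2f}")
```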

    Statistical Physics of Design

    Modern life increasingly relies on complex products that perform a variety of functions. The key difficulty of creating such products lies not in the manufacturing process, but in the design process. However, design problems are typically driven by multiple contradictory objectives and different stakeholders, have no obvious stopping criteria, and frequently prevent construction of prototypes or experiments. Such ill-defined, or "wicked," problems cannot be "solved" in the traditional sense with optimization methods. Instead, modern design techniques are focused on generating knowledge about the alternative solutions in the design space. In order to facilitate such knowledge generation, in this dissertation I develop the "Systems Physics" framework that treats the emergent structures within the design space as physical objects that interact via quantifiable forces. Mathematically, Systems Physics is based on maximal entropy statistical mechanics, which allows both drawing conceptual analogies between design problems and collective phenomena and performing numerical calculations to gain quantitative understanding. Systems Physics operates via a Model-Compute-Learn loop, with each step refining our thinking of design problems. I demonstrate the capabilities of Systems Physics in two very distinct case studies: Naval Engineering and self-assembly. For the Naval Engineering case, I focus on an established problem of arranging shipboard systems within the available hull space. I demonstrate the essential trade-off between minimizing the routing cost and maximizing the design flexibility, which can lead to abrupt phase transitions. I show how the design space can break into several locally optimal architecture classes that have very different robustness to external couplings. I illustrate how the topology of the shipboard functional network enters a tight interplay with the spatial constraints on placement. For the self-assembly problem, I show that the topology of self-assembled structures can be reliably encoded in the properties of the building blocks so that the structure and the blocks can be jointly designed. The work presented here provides both conceptual and quantitative advancements. In order to properly port the language and the formalism of statistical mechanics to the design domain, I critically re-examine such foundational ideas as system-bath coupling, coarse graining, particle distinguishability, and direct and emergent interactions. I show that the design space can be packed into a special information structure, a tensor network, which allows seamless transition from graphical visualization to sophisticated numerical calculations. This dissertation provides the first quantitative treatment of the design problem that is not reduced to the narrow goals of mathematical optimization. Using a statistical mechanics perspective allows me to move beyond the dichotomy of "forward" and "inverse" design and frame design as a knowledge generation process instead. Such framing opens the way to further studies of the design space structures and the time- and path-dependent phenomena in design. The present work also benefits from, and contributes to, the philosophical interpretations of statistical mechanics developed by the soft matter community in the past 20 years.
The discussion goes far beyond physics and engages with literature from materials science, naval engineering, optimization problems, design theory, network theory, and economic complexity.
    PhD, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163133/1/aklishin_1.pd
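    The maximum-entropy foundation the dissertation builds on can be summarized in standard statistical-mechanics notation; the cost function C(x), multiplier β, and partition function Z below are generic textbook symbols, not necessarily the dissertation's own notation.

```latex
% Generic maximum-entropy derivation sketch: maximizing Shannon entropy over
% design-space configurations x, subject to normalization and a fixed mean
% cost, yields a Boltzmann-like distribution.
\begin{aligned}
  &\max_{p}\; S[p] = -\sum_{x} p(x)\,\ln p(x)
  \quad \text{subject to} \quad
  \sum_{x} p(x) = 1, \qquad \sum_{x} p(x)\, C(x) = \langle C \rangle \\
  &\;\Longrightarrow\; p(x) = \frac{e^{-\beta C(x)}}{Z},
  \qquad Z = \sum_{x} e^{-\beta C(x)}
\end{aligned}
```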

    Crew systems and flight station concepts for a 1995 transport aircraft

    Aircraft functional systems and crew systems were defined for a 1995 transport aircraft through a process of mission analysis, preliminary design, and evaluation in a soft mockup. This resulted in a revolutionary pilot's desk flight station design featuring an all-electric aircraft, fly-by-wire/light flight and thrust control systems, large electronic color head-down displays, head-up displays, touch panel controls for aircraft functional systems, voice command and response systems, and air traffic control systems projected for the 1990s. The conceptual aircraft, for which crew systems were designed, is a generic twin-engine wide-body, low-wing transport, capable of worldwide operation. The flight control system consists of conventional surfaces (some employed in unique ways) and new surfaces not used on current transports. The design will be incorporated into flight simulation facilities at NASA-Langley, NASA-Ames, and the Lockheed-Georgia Company. When interfaced with advanced air traffic control system models, the facilities will provide full-mission capability for researching issues affecting transport aircraft flight stations and crews of the 1990s

    A general architecture for robotic swarms

    Swarms are large groups of simplistic individuals that collectively solve disproportionately complex tasks. Individual swarm agents are limited in perception, mechanically simple, have no global knowledge and are cheap, disposable and fallible. They rely exclusively on local observations and local communications. A swarm has no centralised control. These features are typified by eusocial insects such as ants and termites, which construct nests, forage and build complex societies comprised of primitive agents. This project created the basis of a general swarm architecture for the control of insect-like robots. The Swarm Architecture is inspired by threshold models of insect behaviour and attempts to capture the salient features of the hive in a closely defined computer program that is hardware agnostic, swarm size indifferent and intended to be applicable to a wide range of swarm tasks. This was achieved by exploiting the inherent limitations of swarm agents. Individual insects were modelled as machines capable only of perception, locomotion and manipulation. This approximation reduced behaviour primitives to a fixed tractable number and abstracted sensor interpretation. Cooperation was achieved through stigmergy and decisions were made via a behaviour threshold model. The Architecture represents an advance on previous robotic swarms in its generality: swarm control software has often been tied to one task and robot configuration. The Architecture's exclusive focus on swarms sets it apart from existing general cooperative systems, which are not usually explicitly swarm orientated. The Architecture was implemented successfully on both simulated and real-world swarms.
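    A behaviour threshold rule of the kind the Architecture is inspired by can be sketched as follows; this is a common response-threshold formulation from the insect-behaviour literature, and the thesis's exact rule, task names, and parameters may differ.

```python
# Minimal response-threshold task selection sketch: each agent engages a task
# with probability that grows as the task's stimulus (deposited via stigmergy)
# exceeds the agent's own threshold. Illustrative only.

import random


def engagement_probability(stimulus: float, threshold: float, n: int = 2) -> float:
    """P(engage) = s^n / (s^n + theta^n): low-threshold agents respond first."""
    return stimulus**n / (stimulus**n + threshold**n)


def step(stimuli: dict[str, float], thresholds: list[float]) -> dict[str, int]:
    """One decision round: each agent independently commits to at most one task."""
    workers = {task: 0 for task in stimuli}
    for theta in thresholds:
        for task, s in stimuli.items():
            if random.random() < engagement_probability(s, theta):
                workers[task] += 1
                break  # an agent takes a single task per round
    return workers


if __name__ == "__main__":
    random.seed(0)
    # Stigmergic stimuli: unattended tasks accumulate a stronger signal.
    stimuli = {"forage": 0.8, "build": 0.3}
    thresholds = [random.uniform(0.2, 1.0) for _ in range(20)]  # 20 simple agents
    print(step(stimuli, thresholds))
```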

    Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks

    In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms become increasingly inapplicable due to irregular topologies, which are either irregular by design or, most often, a result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach becomes increasingly impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, both in terms of hardware and software management, are necessary to mitigate negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables. Using the fail-in-place strategy, a well-established method for storage systems in which only critical component failures are repaired, is a feasible solution for current and future HPC interconnects as well as for other large-scale installations such as data center networks. However, an appropriate combination of topology and routing algorithm is required to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while the system is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs. The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. Therefore, this thesis further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, which are a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
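    The underlying deadlock criterion, that a set of routes is deadlock-free when the channel dependency graph it induces is acyclic (Dally and Seitz), can be sketched with a small checker; the route representation and names below are hypothetical and do not reflect the thesis's routing engine.

```python
# Illustrative deadlock check: routes are sequences of channels (links), and
# consecutive channels within a route create dependency edges in the channel
# dependency graph (CDG). A cycle in the CDG means deadlock is possible.

from collections import defaultdict


def build_cdg(routes: list[list[tuple[str, str]]]) -> dict[tuple, set]:
    """Consecutive channels along each route form a dependency edge in the CDG."""
    cdg = defaultdict(set)
    for route in routes:
        for ch_a, ch_b in zip(route, route[1:]):
            cdg[ch_a].add(ch_b)
    return cdg


def has_cycle(cdg: dict[tuple, set]) -> bool:
    """Detect a cycle in the CDG with a recursive DFS using three-colour marking."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def visit(node) -> bool:
        colour[node] = GREY
        for nxt in cdg.get(node, ()):
            if colour[nxt] == GREY or (colour[nxt] == WHITE and visit(nxt)):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in list(cdg))


if __name__ == "__main__":
    # Channels named by their endpoints; the second route closes a dependency cycle.
    routes = [[("A", "B"), ("B", "C"), ("C", "A")],
              [("C", "A"), ("A", "B")]]
    print("deadlock possible:", has_cycle(build_cdg(routes)))  # -> True
```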

    Development of Comparative Test Protocols for the Assessment of Autonomous Tractor Performance

    As autonomous and semi-autonomous tractors (ASATs) become more prevalent and affordable within the agricultural industry, various standards that outline both the safety and design principles for ASATs have been developed. Current features on late-model tractors inform most of the major components that would be required for an autonomous tractor, with these technologies ultimately providing a pathway from basic automation to full autonomy. While such standards ensure certain levels of safety and/or performance are achieved, there are currently no universally accepted documents or testing protocols that assess the field-readiness or the level of performance/maturity of (semi-)autonomous tractors. Therefore, this project aims to develop ASAT testing protocols and to assess the performance of current tractor technology relative to the suggested requirements. Building upon existing research and standards of the mining, transport and agricultural sectors, a list of operations that an ASAT should be expected to perform was compiled, prior to developing test procedures to exploit certain operations and/or protocols. A John Deere 6120R case study was then implemented to assess the appropriateness of the test procedures and the performance of a market-ready ASAT. The project presents recommendations for the introduction of universally accepted, independent testing procedures to ensure ASATs meet accepted levels of performance and field-readiness, pertaining to awareness and perception, automated tractor guidance, headland management and operational safety. The project also developed a scoring method to assess the maturity and performance of existing tractor technologies relevant for autonomous and semi-autonomous farming operations. Applying this scoring method, the 6120R case study performed well across a number of elements, obtaining an overall mark of 8.3/10. The tractor benefitted from advanced headland management and operational safety protocols, while lacking in-depth perception and awareness practices, ultimately limiting its driverless capabilities. While procedures were outlined for obstacle detection and avoidance systems, these protocols could not be tested due to limitations of available machinery. Therefore, further work should involve assessing the practical implementation of perception systems, prior to presenting these recommended tests to tractor manufacturers for feedback and refinement, thereby accelerating acceptance and uptake of these tests.